Linux Deployment Playbook

WELCOME to my Linux Deployment Playbook

This little project started out years ago when I was trying to learn how to get better at IT. I wanted a place to store important information that I would be able to recall later. From this, the Linux Deployment Playbook was born.

These are NOT just copy-and-pasted instructions. I HAVE deployed everything in this guide at some point in my network, whether in a live or test environment.

  • Linux Deployment Playbook – LDP.v1
    The first version started very humbly in a little Microsoft Word document. It wasn’t very much outside of literally Linux commands and helpful things to add that would assist me with quick and easy deployments.
  • Linux Deployment Playbook – LDP.v2
    At this point in my career I started taking technical writing very seriously. I had started to learn how to use Sphinx and Read the Docs in order to write simple documentation that could be very presentable. It took all the tedious work out of making everything look pretty in MS Word and allowed me to focus more on the content. With this, my knowledge base grew ever faster. At this point, I started adding more things involving networking as well as coding and scripting while still retaining a lot of its Linux-based foundation. I also started to introduce topics like running Docker on Linux and deploying services like Apache, WordPress, Nextcloud, and Plex.
  • Internal Deployment and Configuration Guides
    This is where things took a dive. You see, originally I had developed my Linux Deployment Playbook (version 1) alongside another set of documentation for my network. One guide was focused heavily on Linux, the other on networking. As the internal documentation grew, and the projects I began to take on started to increase, I started dividing my internal documentation up. One of them was an Engineering Aid. While this was primarily designed as a guide to give an outside user full knowledge of my internal systems, it started to become my new knowledge dumping point. That’s right, LDPv2 started to become neglected, and just like that I reverted back to writing all my documentation in rich text documents again.
  • Linux Deployment Playbook – LDP.v3
    Up until now, most of my knowledge base had been kept mostly to myself, except for the few friends that I would share it with. Not to mention reverting back to MS Word felt like a massive downgrade, and I needed to revive the once great Linux Deployment Playbook again! At the time I was already hosting a website (this one) and I figured that this was the perfect platform to host my new playbook. The advantage of this was my ability to showcase my experience and knowledge in one place. After all, the original purpose of this website was to serve as a personal and professional representation of myself. Here, the Linux Deployment Playbook has extended FAR past its original intention, Linux. It now hosts a large (and random) knowledge base of projects, but let’s be real, it’s mostly all Docker.
  • Linux Deployment Playbook – LDPv4 [TBD]
    This will be the next upcoming release of my LDP. Currently I am still working on version 3 (here) and I have not moved onto plans on what a 4th version would look like. I am still working on adding more things here and getting this cleaned up while adding more content. Keep your eyes peeled for updates.

If you took the time to read this, I want to thank you. Seriously, I appreciate it. I have put a lot of hard work into this, and I hope that by doing this I can save someone months (or even years) of just splashing around in the shallow section of the pool and afford them the opportunity to dive headfirst into the deep end.

Enjoy and please while you are here check out the rest of the website!

This is a list of items that I am currently testing, learning, or have already worked on but I am just in the phase of adding them to the Playbook.

This is my deployment script (I have nowhere to put this for now)

bash -c "$(curl -fsSL <script_url>)"

Change the shell from zsh to bash; when prompted for the new shell, enter: /usr/bin/bash

stdin / stdout / stderr



Fail2ban (Docker)


VIM tutorial


To-Do List with Vikunja (Docker)

Dashboard with Homer (Docker)

Using Low Code to Develop Internal Tools with Appsmith (Docker)

Scanning Webapps with Hawkscan (Docker)

Reverse Proxy with Traefik (Docker)

System Monitoring with Prometheus (Docker)

So you wanna learn Linux? But what exactly is Linux? You may have heard of it as an alternative to Windows or Mac OS. Maybe you are interested in it for its free or open source software. Maybe you want to be different, and who doesn’t?

So let’s start out with what Linux actually is. Most people refer to Linux as the operating system itself, however Linux is just the system kernel. The kernel is the part of the system that allows software to interact with hardware through the use of device drivers; it is one of the first programs loaded during boot and is always present in memory. What is commonly referred to as Linux is actually GNU/Linux, where GNU (which stands for “GNU’s Not Unix”) is a collection of free and open-source software that runs on the Linux kernel. Linux is considered a “Unix-like” system as it shares many similarities with Unix, the main difference being the licensing of the software and the kernel. The Linux kernel is essentially the same no matter which version of Linux you are using, and it is not wrong to refer to the entire operating system as Linux, much like you would refer to Windows. If you really want to learn more about Linux and its history, please click here.

Linux Distributions

Linux operating systems are classified into “Linux distributions” (also known as distros). A Linux distro consists of the Linux kernel, the GNU library of software, and a package manager for managing and downloading additional software called “packages”. One of the most common questions among Linux newbies is “which distro should I start out on?”. If you really want to know my opinion, just download the latest version of Ubuntu Desktop. The real answer is that it doesn’t really matter, although there are some Linux distros that are considered to be more geared towards advanced Linux users such as Arch Linux.

When you look at a map of Linux operating systems, you will see a lot of different paths branching off of a main Linux system. These distros are all branches of the base distro from which they are derived. For example, Manjaro is a branch of Arch Linux. There are thousands of different distros to choose from, and picking one can seem daunting. There are very few differences between the functionality of different distros, as most of them are just preloaded with different software packages or a different desktop environment. For example, Ubuntu MATE is just a branch of Ubuntu but with the MATE desktop environment.

Some of the most common base distros include Slackware, Debian, RedHat (RHEL), Arch Linux, Gentoo and SuSE Linux. For a timeline of Linux distros please click here.

Package Managers

Another major difference between distros is the package manager utilized. The package manager will probably be one of the biggest deciding factors in which distro to download, as it is one of the single biggest variables between Linux distros and can result in different experiences and compatibility issues. Different package managers include Dpkg, used with Debian; Pacman, used with Arch Linux; and RPM, used with RHEL. I will go into more detail about package managers in another section, but when selecting a distro, this will likely be the deciding factor that helps narrow down your choice.

Desktop Environment

Linux holds about 50% of the market share on web-based servers but only about 4% of the desktop market share. For this reason, the desktop environment is probably not going to be a big factor in choosing a Linux distro. Even so, a lot of Linux distros, such as Garuda and Mint, come with different desktop environments to choose from during setup.

A desktop environment is entirely personal and while it might have a major impact in experience for some users, for most it will not. Some desktop environments include:

  • Gnome
  • KDE Plasma
  • MATE
  • Cinnamon
  • XFCE
  • Unity

At the end of the day, choosing a Linux desktop environment is purely by choice and if you don’t like one, you can always install a different one.

Distro Watch

Distro Watch is the one-stop shop for information about Linux distros. They maintain a ranked list of Linux distros based on hits, a section with information about the latest updates to different distros, as well as general information about most distros.

If you want to see what everyone else is using and find a different distro to change it up, I highly recommend checking out Distro Watch.

One of the most annoying questions asked is “which Linux distro should I use to start off?“. If this is your question and the only reason you are here, then I will just say to download Ubuntu Desktop and start from there. Another question would be “which Linux distro is the best for [insert goal]?”. The answer to all of these is that it really doesn’t matter, although some Linux distros are more suited for beginners and some are probably reserved for people with more… I wouldn’t say experience, but more exposure.

Debian Based Distros

Let’s start out with Debian. This is what most people start with when they are introduced to Linux.

Ubuntu

Ubuntu is arguably the most popular branch from Debian, spawning multiple branches itself and ranking higher on Distro Watch than Debian itself. Ubuntu is designed by Canonical and comes in 3 different editions: Desktop, Server, and Core. Canonical provides support for a number of different CPU architectures and configurations, including support for Raspberry Pi and cloud infrastructure.

Zorin OS

Zorin was actually one of the first Linux distros that I started to daily drive to give my old laptop some life (though I had been a very familiar Ubuntu and Debian user by then). Zorin is lightweight, simple, and aimed at users who are familiar with Windows systems but want to make the transition into using Linux, thanks to its similar graphical interface.

Linux Mint (Ubuntu)

Linux Mint is an Ubuntu-based Linux distro with a custom desktop interface, menu, configuration tools and web-based package installation. It comes in Cinnamon, MATE and XFCE editions.

Ubuntu MATE

Ubuntu MATE is a popular alternative to Ubuntu that uses the MATE desktop environment (which was the original desktop environment of Ubuntu). This project was born as a means to restore the old look of Ubuntu, though you will find very little else functionally different. I used Ubuntu MATE extensively when I was originally deploying Raspberry Pis in my network, though I have since switched to Ubuntu Server due to the low overhead of running a headless OS.


Pop!_OS

Pop!_OS is a minimalist-designed distro based on Ubuntu that focuses on removing clutter from the desktop environment for users.


Kubuntu

Kubuntu is a Linux distro based on Ubuntu with the KDE Plasma desktop interface and known for being highly customizable.


Lubuntu

Lubuntu is a lightweight Linux distro based on Ubuntu that gives users a quick experience and is excellent for working on older systems or systems with limited resources.


Xubuntu

Xubuntu is a Linux distro based on Ubuntu that is focused on stability. Using the XFCE desktop, this distro is ideal for running lightweight on systems with limited resources.

elementary OS

elementary OS is a Linux distro based on Ubuntu that uses a unique desktop environment called Pantheon. The user desktop experience closely mimics that of macOS devices.

MX Linux

A very popular version of Debian that is focused on stability and performance.

Kali Linux

Formerly known as BackTrack, Kali Linux is based on Debian. Kali is focused on information security (specifically red teams) and serves as the standard pentesting platform for many in the community. Kali comes with a vast number of tools to accomplish this and comes in many different configurations for different deployments.

Parrot OS

Parrot OS is another information security Debian based Linux distro. This distro also comes in many different configurations for a variety of different deployments.

Raspberry Pi OS

Originally called Raspbian, Raspberry Pi OS is a Debian-based Linux distro that is designed specifically for the Raspberry Pi computer board (though it can really be installed like any normal distro).

Arch Linux Based Distros



EndeavourOS

An Arch Linux-based distro, EndeavourOS aims to provide an easy installation and a preconfigured desktop environment for users to enjoy using Arch Linux.


Red Hat Enterprise for Linux (RHEL) Based Distros



Rocky Linux



Alpine Linux



RedStar OS


A package manager is a tool used by different Linux distros to install, remove, upgrade and configure software on the system, distributed in units called “packages”. Package managers allow software to be managed automatically rather than manually. The distro you select will largely determine the package manager used, as most package managers are not cross-compatible.


Dpkg

Debian package (dpkg) is used by Debian and all Debian-based systems such as Ubuntu, MX Linux, and Kali. Dpkg is used with “deb” package files.

Install a package

sudo dpkg -i <pkg_name>.deb

Install multiple packages in a given directory

sudo dpkg -R --install /path/to/pkgs

Remove a package

sudo dpkg -r <pkg_name>

List all installed packages

sudo dpkg -l

Verify if a package is installed or not

sudo dpkg -s <pkg_name>

Find the location of an installed package

sudo dpkg -L <pkg_name>

Help command

dpkg --help


APT

APT (Advanced Package Tool) is a package manager popularly used on Debian and Debian-based systems. APT makes the process simpler by retrieving, configuring and installing packages. APT is essentially a front end manager for dpkg and installs the packages in a way that makes it much easier than with dpkg alone.

Update package index
NOTE: You should ALWAYS update before upgrading

sudo apt update

Upgrade packages

Upgrade all packages
sudo apt upgrade

Upgrade a single package
sudo apt upgrade <pkg_name>

Perform an upgrade with conflict resolution, upgrading priority packages first
sudo apt dist-upgrade

Full upgrade
sudo apt full-upgrade

Install a package

Install a single package
sudo apt install <pkg_name>

Install multiple packages
sudo apt install <pkg1> <pkg2> <pkg3> ...

Install local deb package
sudo apt install /path/to/<pkg_name>.deb

Remove a package

Remove a single package
sudo apt remove <pkg_name>

Remove multiple packages
sudo apt remove <pkg1> <pkg2> <pkg3> ...

Remove unused packages
When a package dependency is no longer needed, it stays on the system until it is manually removed. Autoremove automatically removes all package dependencies that are no longer needed on the system.

sudo apt autoremove

Purge a package
Sometimes removing a package may not be enough. There still may be traces of the package such as config files. Purging ensures that all traces are removed.

sudo apt purge <pkg_name>

List packages

List all available packages
sudo apt list

List a specific package
sudo apt list | grep <pkg_name>

List only installed packages
sudo apt list --installed

List upgradable packages
sudo apt list --upgradable


Pacman

All Arch Linux-based distros use Pacman to install, upgrade and remove packages. Arch Linux has 2 repositories: the official repository and the Arch User Repository (known as the AUR). Pacman is only used to install packages from the official repository; AUR packages must be built manually or installed with an AUR helper.

Install a package

Install a single package
sudo pacman -S <pkg_name>

Install multiple packages
sudo pacman -S <pkg1> <pkg2> <pkg3> ...

Remove a package

Remove a single package
sudo pacman -R <pkg_name>

Remove multiple packages
sudo pacman -R <pkg1> <pkg2> <pkg3>

Remove a package and its dependencies
sudo pacman -Rs <pkg_name>

Remove package dependencies no longer needed

pacman -Qdtq | sudo pacman -Rs -

Upgrade packages

sudo pacman -Syu

Search Packages

Search packages (either by name or description)
sudo pacman -Ss <string>

Search installed packages
sudo pacman -Qs <string>

Search for package file names in remote packages (sync the files database first)
sudo pacman -Fy
pacman -F <string>



Systemd is a Linux software suite that provides an array of services for Linux operating systems as a replacement for init.

Systemd Units

  • SERVICE (start, stop, restart, enable a service or application on Linux)
  • SOCKET (Network or IPC socket, or a FIFO buffer used for socket-based activation)
  • DEVICE (A device that is targeted for management by a systemd service)
  • MOUNT (A mount point on the system managed by systemd)
  • AUTOMOUNT (A mountpoint that will be mounted automatically)
  • SWAP (System swap space)
  • TARGET (A unit used to provide synchronized points for other units on boot or state change)
  • PATH (A location used for path-based activation)
  • TIMER (A timer unit managed by systemd similar to cron for scheduled or delayed activation)
  • SNAPSHOT (Allows you to reconstruct the state of the system after changes are made)
  • SLICE (Allows resources to be restricted or assigned to any process associated with a slice)
  • SCOPE (An externally created process registered with systemd through bus interfaces)


/lib/systemd/system # standard systemd location (distro maintainers)
/usr/lib/systemd/system # units installed by the distribution package manager (apt, pacman, dpkg)
/usr/local/lib/systemd/system # unit files installed by the local admin
/etc/systemd/system # unit files created by the local admin

/run/systemd/system # runtime (transient) unit files

Creating Unit File (Service) with Systemd

Creating a service with systemd will allow you to offload tasks to automatically start on system boot and restart in the event of a failure. You also get useful logging information in the event that the service is unable to start.

Create a Service File

Make a file with the name of the service and place it in the directory ‘/usr/local/lib/systemd/system’

sudo vim /usr/local/lib/systemd/system/<name>.service

The service file will depend heavily on what the desired aim of the service is. I recommend looking at the supporting documentation in the reference links provided below but an example of what the file may look like is provided below.

[Unit]
Description=<Unit Title>
Documentation=https://<URL>.com #optional (you may use http:// https:// file: info: man:)

[Service]
ExecStart=/bin/bash /location/of/<script>.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target


You can start/stop/enable the service like you normally would with any other systemd service.

Changes won’t take effect until the system is restarted. Alternatively, you can reload the systemd manager configuration without rebooting:

sudo systemctl daemon-reload
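For example (the service name is a placeholder), to start the service, enable it at boot, and check on it:

```shell
# Start the service now and enable it at boot
sudo systemctl enable --now <name>.service

# Verify it is running and inspect its logs
sudo systemctl status <name>.service
journalctl -u <name>.service
```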



Linux Shell

The Linux shell is a user interface (called a command line interface, or CLI for short), a command interpreter, and a powerful programming language.

Bourne Again Shell (BASH)


Create a file and make it executable

touch <filename>
chmod +x <filename>

To add the shebang, we need to determine the interpreter:

which bash

Add that to the top of the file
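For example, if ‘which bash’ returns /usr/bin/bash, a minimal script would look like this:

```shell
#!/usr/bin/bash
# The shebang above tells the kernel which interpreter to run this file with
echo "Hello from bash"
```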




Variables

A variable is a name assigned to a memory location used to store data.

Environment/Global Variable

Environment (global) variables are exported so that child processes can read them too:

export VAR="global string"
echo $VAR

global string

Local Variable

Local variables exist only in the shell in which they are created (note: the ‘local’ keyword itself is only valid inside functions; at the prompt, a plain assignment creates a shell-local variable):

VAR="local variable"
echo $VAR

local variable

To use local variables in a child process, you need to export the variable

export VAR=variable

Additionally, you can import variables by sourcing a reference file that contains them.



Aliases

Aliases are command shortcuts defined like variables.

You can create an alias with the ‘alias’ command:

alias <name>='<command>'

You can also remove an alias with the ‘unalias’ command:

unalias <name>

To persist aliases across sessions, store them in ~/.bash_aliases


Subshells

When one bash script is executed from within another bash script, it runs in a subshell. Variables set in the parent shell are not available in the subshell, and variables set in the subshell are not retained in the parent shell.

There are two solutions for this.

If you want to just use variables in a subshell, you can export those variables in the host script like so

export VAR="var"

This variable will be carried over into the subshell.

If you want to use variables throughout without exporting them, you can create a separate file with the declared variables and source that file from your scripts.

Example variable file:

VAR1="value1"
VAR2="value2"

Source the file:

source <filename>

User Input
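This section is still a stub, but as a minimal sketch, the bash ‘read’ builtin prompts for and captures user input (the function name here is just for illustration):

```shell
# Prompt for a name and greet the user (read -p displays the prompt
# only when input comes from a terminal)
greet() {
  read -p "What is your name? " NAME
  echo "Hello, ${NAME}!"
}

# Non-interactive demonstration: feed input with a herestring
greet <<< "Tux"   # prints: Hello, Tux!
```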







If / Elif / Else / Fi Statement

Adding this for now (a yes/no query in BASH):

while true; do
  read -p "statement" YN
  case ${YN:0:1} in
    [Yy]* )
      # yes branch: do something, then leave the loop
      break;;
    [Nn]* )
      # no branch
      exit;;
    * )
      echo "Please answer either Y/y or N/n";;
  esac
done
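Since this section is titled if / elif / else / fi, here is a minimal sketch of that construct (the number and messages are arbitrary):

```shell
# Classify a number with if / elif / else / fi
NUM=5
if [ "$NUM" -gt 10 ]; then
  echo "greater than 10"
elif [ "$NUM" -gt 0 ]; then
  echo "between 1 and 10"
else
  echo "zero or negative"
fi
```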

Executing Bash Scripts

There are a few ways to execute a shell script
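A quick sketch of the common ways (the /tmp path is just for demonstration):

```shell
# Create a small demo script
echo 'echo "ran"' > /tmp/demo.sh

# 1. Pass it to the interpreter (no execute permission required)
bash /tmp/demo.sh

# 2. Make it executable and run it by path
chmod +x /tmp/demo.sh
/tmp/demo.sh

# 3. Source it in the current shell (no subshell, so any variables
#    it sets persist in your session)
source /tmp/demo.sh
```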

Change the default editor to Vim in Ubuntu:

sudo update-alternatives --config editor

Linux Network Management



Other Linux Network Commands


ifconfig

It’s deprecated. Don’t use it. If you see someone say “use this command… ifconfig”, don’t. D.A.R.E. told you not to use drugs when you were a kid, and I am telling you: don’t use ifconfig.

IP command

Get IP information:

ip a

Add/remove an IP to an interface (this is not permanent and will reset when the host is restarted):

sudo ip a add <ipaddr> dev <iface>
sudo ip a del <ipaddr> dev <iface>

To permanently add a static IP to an interface, do the following:
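On Ubuntu this is typically done through netplan; a minimal sketch, assuming interface eth0 and example addresses (the file name is arbitrary):

```shell
# Write a netplan config (interface name and addresses are examples)
sudo tee /etc/netplan/01-static.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.1.50/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF

# Apply the configuration
sudo netplan apply
```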

Bring an interface up or down

sudo ip link set <iface> up
sudo ip link set <iface> down

Show routing tables:

ip r

Add a route:

sudo ip r add <ipaddr> via <gateway_ip>


Network Manager

Installing Network manager:

# Debian (Ubuntu)
sudo apt install network-manager

# Arch
sudo pacman -S networkmanager

Connection Management

Check the status of Network Manager:

sudo systemctl status NetworkManager

List connection profiles:

# List connection profiles
nmcli connection show [option] {argument}

# Activate a connection
nmcli connection up [option] {argument}

# Deactivate a connection
nmcli connection down [option] {argument}

# Modify a connection
nmcli connection edit [option] {argument}

Device Management

Show device status:

nmcli device status

Show device details:

nmcli device show

Enable Network Manager in Debian (Ubuntu)

sudo mv /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf  /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf_orig

sudo touch /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf

sudo sed -i 's/managed\=false/managed\=true/g' /etc/NetworkManager/NetworkManager.conf

sudo systemctl restart NetworkManager

# You need to add the following to the yaml file in '/etc/netplan':
# renderer: NetworkManager

# For Raspberry Pi
sudo sed -i '/^network:.*/a \ \ renderer: NetworkManager' /etc/netplan/50-cloud-init.yaml

# Alternatively, you can purge netplan from the system:

sudo apt update && sudo apt install ifupdown ; \
sudo apt --purge remove nplan -y ; \
sudo systemctl stop systemd-networkd ; \
sudo systemctl stop systemd-networkd.socket ; \
sudo systemctl disable systemd-networkd ; \
sudo systemctl disable systemd-networkd.socket ; \
sudo systemctl mask systemd-networkd ; \
sudo systemctl mask systemd-networkd.socket

To view the status of a device use the following:

nmcli device status

Connect a device:

sudo nmcli device connect <iface>

Linux Filesystem and Storage

Here are some tips and information about how Linux filesystems and storage work.

Disk Partitioning

sudo fdisk /dev/<device>

Press "g" to create a new empty GPT partition table
Press "n" to add a new partition
Press "ENTER" to select the default partition number
Press "ENTER" to set the first sector
Enter the last sector and press "ENTER" to continue, or press "ENTER" to select the default last sector and fill the disk
Press "t" if you need to change the partition type
Press "p" or "i" to verify the information about the new partition
Press "w" to write the information or "q" if you need to start over
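After writing the table, the new partition still needs a filesystem before it can be mounted; a sketch assuming the new partition came up as /dev/sdb1 (verify the device name with lsblk first):

```shell
# Create an ext4 filesystem on the new partition
sudo mkfs.ext4 /dev/sdb1

# Verify the result
lsblk -f /dev/sdb
```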

Mounting Disks

Mount a disk manually

sudo mount /dev/<device> /mount/location

Unmount a disk manually

sudo umount /mount/location

Mount a disk automatically with fstab
To do this, you need to make an entry in ‘/etc/fstab’. You can do this with a command line editor like Vim

sudo vim /etc/fstab

Get the UUID for the disk:

sudo blkid /dev/<device>

Add the following information about the disk and mount location

UUID=<UUID> /mount/location auto defaults 0 0

The disk will automatically mount on next boot, but you can also automatically mount the drives listed in ‘/etc/fstab’ with:

sudo mount -a

Local Storage

List information about block devices:

lsblk -Mf

# alternative (lsblk is preferred)
sudo blkid

List information about mounted block devices:

findmnt
List information for files in a directory:

ls -lah

# look recursively
ls -lahR
ls -lah *

List information for storage space on a file system:

df -h

List information for storage space for a directory

sudo du -hd1

You can alternatively use NCDU which is a better tool for quickly identifying and searching for disk space usage:

# Debian (Ubuntu)
sudo apt install ncdu

# Arch Linux
sudo pacman -S ncdu

Just type “ncdu” in the terminal and you’re done.

Mounting exFAT

Windows filesystem “exFAT” is not natively recognized by Linux. Install the following utility to use exFAT filesystems:

sudo apt-get install exfat-fuse exfat-utils

Mounting Filesystems with NFS

Network File System allows the mounting of remote filesystems on a client from a remote server.

NFS Server

# Debian (Ubuntu)
sudo apt update && sudo apt install nfs-kernel-server -y

# Arch Linux
sudo pacman -S nfs-utils
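To actually share a directory, the server needs an export entry; a minimal sketch (the directory and client subnet are examples):

```shell
# Add an export entry (directory and subnet are examples)
echo "/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports

# Reload the export table and make sure the server is running
sudo exportfs -ra
sudo systemctl enable --now nfs-kernel-server
```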

NFS Client

# Debian (Ubuntu)
sudo apt update && sudo apt install nfs-common -y

# Arch Linux
sudo pacman -S nfs-utils
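Once the client tools are installed, the remote export mounts like any other filesystem; a sketch assuming an export at <host>:/srv/share:

```shell
# Mount the remote export manually
sudo mount -t nfs <host>:/srv/share /mount/location

# Or persist it across reboots with an fstab entry:
# <host>:/srv/share  /mount/location  nfs  defaults  0  0
```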

Remote Storage with SAMBA (SMB)


SAMBA Server

Verify if SAMBA is installed:

ls -l /etc/samba

To install SAMBA:

sudo apt-get update && sudo apt-get install samba -y

Set a password for smb user:

sudo smbpasswd -a <user_name>

echo "What is the name of the Samba user? " && read -p ">> " USR && sudo smbpasswd -a $USR

Edit the smb conf file (if the smb.conf file does not exist, you can find it HERE):

sudo vim /etc/samba/smb.conf

Enter the following information based on your needs:

[<share_name>]
path = /<location of the shared folder>
valid users = <username1>, <username2>, etc.
read only = no
guest ok = yes
guest only = yes
writable = yes
force user = <username>
force group = <groupname>
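After saving smb.conf, it is worth validating the file and restarting the service so the new share takes effect:

```shell
# Check smb.conf for syntax errors
testparm -s

# Restart the Samba daemon to load the new share
sudo systemctl restart smbd
```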

Mount SAMBA Shares to Client

List all available shares on a server:

smbclient -L //<host>

Install cifs-utils

sudo apt-get install cifs-utils

Manually mount the SAMBA share:

sudo mount -t cifs //<host>/<share> /mount/location

To automatically mount shares on boot, edit the fstab file:

sudo vim /etc/fstab

Enter the following information (you will need a credentials file, described below):

//<host>/<share> /mount/location cifs credentials=/<filename>,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0

You will need to make a credentials file on the client with the following information:

username=<user>
password=<password>
The file will mount on boot, but you can automatically mount with the following:

sudo mount -a

Mount Filesystems with SSHFS (SSH Filesystem)

Install SSHFS:

# Debian (Ubuntu)
sudo apt install sshfs

# Arch
sudo pacman -S sshfs

Mount a remote share with SSHFS:

sudo sshfs <user>@<host>:/remote/dir /mount/location <options>

To unmount:

sudo umount /mount/location

Mount automatically by adding to fstab:

sudo vim /etc/fstab

Add the following to fstab:

<user>@<host>:/remote/dir /mount/location fuse.sshfs defaults 0 0

To log in without a password, make sure the SSH key is stored on the remote server. You can do this with “ssh-copy-id”. The private SSH key also needs to be in the root directory “/root/.ssh”.

With the default entry, you will have to use the root user in order to cd into that directory or access the files. To allow other users to access the directories and files, use the -o option with “allow_other”.

sshfs -o allow_other <user>@<host>:/remote/dir /mount/location

#or mount to /etc/fstab

<user>@<host>:/remote/dir /mount/location fuse.sshfs defaults,allow_other 0 0

Mount Filesystems with S3FS (S3 Buckets)

S3FS allows you to mount remote S3 buckets to a local system.

Download S3FS:

# Debian (Ubuntu)
sudo apt update && sudo apt install s3fs -y

# Arch Linux
sudo pacman -Syu && sudo pacman -S s3fs-fuse
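A sketch of mounting a bucket (the bucket name is a placeholder; s3fs reads credentials from a passwd file):

```shell
# Store the access key and secret, and lock the file down
echo "<ACCESS_KEY_ID>:<SECRET_ACCESS_KEY>" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket to a local directory
s3fs <bucket_name> /mount/location -o passwd_file=~/.passwd-s3fs
```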

Mount Filesystems with VIRTIOFS (From Host to VM)

You can mount directories from the host OS into a VM as fileshares with VIRTIOFS.

First you need to create a shared directory with a “source path” and a “mount tag”.

Manually mount with Virtiofs:

sudo mount -t virtiofs <share> /mount/location

You can mount automatically by editing the fstab file:

sudo vim /etc/fstab

Add the following information:

<share> /mount/location virtiofs rw,_netdev 0 0

The file will mount on boot, but you can automatically mount with the following:

sudo mount -a

SSH (Secure Shell)

SSH is a powerful networking tool for connecting to a host remotely over the command line interface. Here are some tips on how to use SSH to your advantage.

SSH usage

ssh <user>@<host>

Alternatively you can SSH over a specified port:

ssh -p <port> <user>@<host>

To specify a private key to use:

ssh -i </file/path> <user>@<host>

Generate an SSH key:

ssh-keygen
Copy SSH keys to remote server for passwordless login:

ssh-copy-id <user>@<host>

Remove keys belonging to a host from the known_hosts file:

ssh-keygen -R <host>

This is very useful when you encounter the following message while trying to log in to a remote server (it is due to the host key being different from the one in the known_hosts file):

Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.

SSH Tunneling (Port Forwarding)

Local Port Forwarding

Forward a port on a local SSH client to a remote SSH server that is forwarded to the destination:

ssh -L <lport>:<dest_host>:<dest_port> <user>@<rhost>

This is used for gaining access to a remote service.

You can forward multiple ports by repeating the -L option:

ssh -L <lport1>:<dest_host>:<dest_port1> -L <lport2>:<dest_host>:<dest_port2> <user>@<rhost>

Remote Port Forwarding

Forward a port on a remote SSH server to a local SSH client that is forwarded to the destination:

ssh -R <rport>:<dest_host>:<dest_port> <user>@<rhost>

This is used for granting access to a remote user for a local service.

Dynamic Port Forwarding

Create a socket on the local SSH client

ssh -D <port> <user>@<host>

This is mostly used to tunnel web traffic (an alternative to using a VPN)

The -N option instructs not to execute a remote command and the -v is for verbosity:

ssh -NvD <port> <user>@<rhost>

Running Commands and Scripts on a Remote Host with SSH

This will allow you to execute one command or a series of commands using SSH. The command will login to the remote host, execute the command and terminate:

ssh -t <user>@<host> "<command> && <command>"

Here is a script that can be executed to run commands on multiple machines:

for s in <host1> <host2> <host3>; do
   ssh <user>@${s} <command>
done

You can even run a local bash script to a remote host:

ssh <user>@<host> 'bash -s' < /path/<script>.sh

The script for executing bash scripts on multiple hosts is similar to before:

for s in <host1> <host2> <host3>; do
   ssh <user>@${s} 'bash -s' < /path/<script>.sh
done

SSH Config File

The SSH config file should be located at ‘~/.ssh/config’. The file may not exist, in which case it needs to be created:

touch ~/.ssh/config

chmod 600 ~/.ssh/config

SSH config file format

Host <hostname>
    <SSH_OPTION> <value>

Use the following as an example:

Host sshserver
    HostName <host_ip>
    Port 2222
    Compression yes
    IdentityFile ~/.ssh/keys/id_rsa

With this config file, you can automatically log into the server by using the following:

ssh sshserver

You can view the manpage for more info

man ssh_config
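A slightly fuller example config (the host name, user, and address are made up); a "Host *" block applies defaults to every connection not matched above it:

```
Host sshserver
    HostName 192.0.2.10
    User admin
    Port 2222
    IdentityFile ~/.ssh/keys/id_rsa

# Defaults for all other hosts
Host *
    Compression yes
    ServerAliveInterval 60
```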


Transfer Files with SFTP (SSH File Transfer Protocol)

Use SFTP to transfer files to/from a remote server:

sftp <user>@<host>

These are common commands to use within an SFTP session:

ls    (list files and directories on remote host)
lls   (list files and directories on local host)
cd    (change directories on remote host)
lcd   (change directories on local host)
pwd   (list present working directory on remote host)
lpwd  (list present working directory on local host)
get   (retrieve file from remote host)
mget  (retrieve multiple files from remote host)
put   (place file on remote host)
mput  (place multiple files on remote host)

Transfer Files with SCP (Secure Copy Protocol)

Copy a local file to a remote system:

scp <file> <user>@<host>:/remote/directory

Copy a remote file to a local system:

scp <user>@<host>:<file> /local/directory

Copy files between two remote hosts:

scp <user>@<host>:/remote/directory <user>@<host>:/remote/directory

Use these options:

-P   (Specify SSH port)
-r   (Recursive)
-C   (Compress data)
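Combined, a hypothetical recursive copy over a nonstandard port with compression would look like:

```
scp -P 2222 -r -C /local/project <user>@<host>:/remote/backup
```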

Mount Filesystems with SSHFS (SSH Filesystem)

Install SSHFS:

# Debian (Ubuntu)
sudo apt install sshfs

# Arch
sudo pacman -S sshfs

Mount a remote share with SSHFS:

sudo sshfs <user>@<host>:/remote/dir /mount/location <options>

To unmount:

sudo umount /mount/location

Mount automatically by adding to fstab:

sudo vim /etc/fstab

Add the following to fstab:

<user>@<host>:/remote/dir /mount/location fuse.sshfs defaults 0 0

To log in without a password, make sure your public SSH key is stored on the remote server. You can do this with “ssh-copy-id”. Since fstab mounts are performed as root, the private SSH key also needs to be in root’s SSH directory, “/root/.ssh”.

With the default entry, you will have to use the root user in order to cd into that directory or access the files. To allow other users to access the directories and files, use the -o option with “allow_other”:

sshfs -o allow_other <user>@<host>:/remote/dir /mount/location

#or add the allow_other option in /etc/fstab:

<user>@<host>:/remote/dir /mount/location fuse.sshfs defaults,allow_other 0 0

VPN over SSH with sshuttle

Sshuttle is a VPN-like solution built on SSH, without the complexity of per-port forwarding.

#Install Debian (Ubuntu)
sudo apt update && sudo apt install sshuttle

#Install Arch
sudo pacman -S sshuttle

If you choose, you can install from source with Git:

git clone https://github.com/sshuttle/sshuttle.git
cd sshuttle
sudo ./setup.py install


So you got a Raspberry Pi and are looking for some awesome projects to use with it. The Raspberry Pi is a small, credit-card-sized computer that is praised for its tiny form factor and its power (for its size). It is used to teach programming, robotics, network engineering, and security, as well as a list of other things.

So what are some cool projects that you can do RIGHT NOW with your Raspberry Pi?

Install Docker on Raspberry Pi

Download the official install script:

curl -fsSL https://get.docker.com -o get-docker.sh

Execute the script:

sudo sh get-docker.sh

Install a Network Adblocker with Pi-hole (Docker)

Pi-hole – Network-wide protection

The Pi-hole is a POWERFUL system that can be used as a DNS server, DHCP server, and adblocker all in one.
Use Pi-hole to block advertisements on a network level.
Get useful network stats in an easy-to-understand graphical layout using the Pi-hole web interface.
Deploy instantly and use now!

Install Pi-hole with Docker (Recommended)

Pull the image:

docker pull pihole/pihole

Docker run:

docker run -d \
    --name pihole \
    -p 53:53/tcp -p 53:53/udp \
    -p 8080:80 \
    -v "${PIHOLE_BASE}/etc-pihole:/etc/pihole" \
    -v "${PIHOLE_BASE}/etc-dnsmasq.d:/etc/dnsmasq.d" \
    --dns=<dns1> --dns=<dns2> \
    --restart=unless-stopped \
    --hostname pi.hole \
    -e VIRTUAL_HOST="pi.hole" \
    -e PROXY_LOCATION="pi.hole" \
    -e ServerIP="<ipaddr>" \
    pihole/pihole:latest

Docker Compose

version: "3.3"

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
      - "80:80/tcp"
    environment:
      TZ: 'America/Chicago'
      PUID: 1000
      PGID: 1000
      # WEBPASSWORD: 'set a secure password here or it will be random'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    cap_add:
      - NET_ADMIN # Recommended but not required (DHCP needs NET_ADMIN)
    labels:
      com.centurylinklabs.watchtower.enable: "false"
    restart: unless-stopped

To access the web interface for Pi-hole, browse to the IP address and the assigned port, followed by /admin (with the run command above, <ipaddr>:8080/admin).

NOTE: I added the label com.centurylinklabs.watchtower.enable="false" for Watchtower because the Pi-hole Docker container needs to be updated on its own.

Install Pi-hole Bare Metal

Run the official install script:

curl -sSL https://install.pi-hole.net | bash

To access the web interface for Pi-hole, simply type <ipaddr>/admin.

Once Pi-hole is installed, you need to configure the router or individual devices to use the Raspberry Pi as a DNS server.

Post Installation Steps (Ubuntu)

An issue I have with Ubuntu is that systemd-resolved also listens on port 53.

Disable the DNS stub listener in /etc/systemd/resolved.conf:

sudo sed -r -i.orig 's/#?DNSStubListener=yes/DNSStubListener=no/g' /etc/systemd/resolved.conf
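To preview what the sed above does before touching the real file, you can run it against a throwaway copy (the sample line is an assumption about a stock resolved.conf, which ships with the setting commented out):

```shell
# Create a scratch file that mimics the stock setting
conf=$(mktemp)
printf '#DNSStubListener=yes\n' > "$conf"

# Same substitution as above: -r enables extended regex, #? matches an
# optional comment marker, -i.orig edits in place and keeps a backup
sed -r -i.orig 's/#?DNSStubListener=yes/DNSStubListener=no/g' "$conf"

cat "$conf"        # DNSStubListener=no
cat "$conf.orig"   # #DNSStubListener=yes (the untouched backup)
```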

Change the symlink for /etc/resolv.conf

sudo sh -c 'rm /etc/resolv.conf && ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf'

Restart the service

sudo systemctl restart systemd-resolved.service

Using Pi-hole as a DHCP Server

Pi-hole can be used as a DHCP server to assign static IP addresses and control dynamic IP address leases.

In order to do this, you must first DISABLE the DHCP server on your network router. This step is crucial.

Purge DHCP IP Addresses

Reset Password

#Bare metal
sudo pihole -a -p

#Docker
sudo docker exec -it pihole pihole -a -p


Install a Private VPN Server with PiVPN

For VPN access, it is recommended to install WireGuard (via Docker) on the Pi server.
PiVPN uses OpenVPN and/or WireGuard with custom commands for simple deployment and management.


curl -L https://install.pivpn.io | bash


Install a Honeypot with HoneyPi


Download the zip file




Navigate into the directory

cd HoneyPi-master

Make the shell scripts executable

chmod +x *.sh

Execute the script

sudo ./

From here, just follow the prompts in the terminal to set up the rest of the honeypot.


Install a Game Server with RetroPie

WiFi/LAN Intrusion Detection with Pi.Alert


Job Automation using CRON

Cron is a useful Linux utility for automating scheduled jobs to run at specified times or intervals. These jobs (known as cron jobs) can be programs or scripts.

The file ‘/etc/crontab’ documents the schedule format:

# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │                                   7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * * <command to execute>

The crontab command is used to edit the crontab of the currently logged-in user.

#List user's crontab
crontab -l

#Edit user's crontab
crontab -e

#Delete user's crontab
crontab -r
crontab -ri #prompt before deleting

Cron is super easy to use: edit the crontab, set the time schedule, and add the command or script that you want cron to run.

Run hourly

0 * * * *   root    ~/<script> #This script will run at the top of every hour

Run daily

0 6 * * *   root    ~/<script> #This script will run every day at 6:00 AM

Run weekly

0 6 * * 7   root    ~/<script> #This script will run on the 7th day of each week (Sunday)

Run monthly

0 6 1 * *   root    ~/<script> #This script will run on the first day of every month
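Note that per-user crontabs (the ones edited with crontab -e) omit the user column. A hypothetical per-user entry:

```
# 2:30 AM every Sunday (no user field in a per-user crontab)
30 2 * * 0 /usr/local/bin/backup.sh
```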


Searching for a file or directory name (use the “find” command):

sudo find . -name sample.txt

#search in a specific directory
sudo find /home -name sample.txt

#use an asterisk as a wildcard character (quoted so the shell does not expand it)
sudo find . -name "*sample*"

#ignore case sensitivity using "iname"
sudo find . -iname sample.txt

#search for a directory name with "-type d"
sudo find . -type d -name sample
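A quick sandbox run of the commands above (the directory and file names are made up):

```shell
# Build a small throwaway tree to search
tmp=$(mktemp -d)
mkdir -p "$tmp/docs"
touch "$tmp/docs/sample.txt" "$tmp/docs/SAMPLE.txt" "$tmp/notes.txt"

find "$tmp" -name "sample.txt"    # exact, case-sensitive: only docs/sample.txt
find "$tmp" -iname "sample.txt"   # case-insensitive: sample.txt and SAMPLE.txt
find "$tmp" -type d -name docs    # matches the docs directory itself
```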


Speeding up grep searches

Using the plain C locale avoids locale-aware character handling and can make grep noticeably faster on large files:

LC_ALL=C grep example file.txt

Use fgrep (equivalent to grep -F) if you are searching for a fixed string instead of a regular expression:

fgrep example file.txt

Use GNU parallel to split a large search across multiple CPU cores.

Search for IPv4 addresses:

grep -oE "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)" file.txt
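For example, run against a made-up log line, the pattern extracts only well-formed IPv4 addresses:

```shell
echo "accepted 192.168.1.10, ok 10.0.0.1" |
  grep -oE "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)"
# prints:
# 192.168.1.10
# 10.0.0.1
```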
Python

Install Pip

Install Python3 Pip for Debian (Ubuntu):

sudo apt update && sudo apt install python3-pip -y

Verify version number:

pip3 --version

Webtop - Running a Linux Desktop inside a Container in the Browser

A webtop is a container that runs a virtual desktop environment in a web browser. Webtops are quick and easy to deploy and can be super useful for testing environments. Unlike a traditional Docker container, you update a webtop the way you would normally update any other Linux server.

One more consideration: webtops should NOT be exposed to the public internet and should instead sit safely behind a firewall.

docker pull linuxserver/webtop

Refer to the table below for different architectures:


Refer to the table below for the different installs:

XFCE Alpine       linuxserver/webtop:latest
XFCE Ubuntu       linuxserver/webtop:ubuntu-xfce
XFCE Fedora       linuxserver/webtop:fedora-xfce
XFCE Arch         linuxserver/webtop:arch-xfce
KDE Alpine        linuxserver/webtop:alpine-kde
KDE Ubuntu        linuxserver/webtop:ubuntu-kde
KDE Fedora        linuxserver/webtop:fedora-kde
KDE Arch          linuxserver/webtop:arch-kde
MATE Alpine       linuxserver/webtop:alpine-mate
MATE Ubuntu       linuxserver/webtop:ubuntu-mate
MATE Fedora       linuxserver/webtop:fedora-mate
MATE Arch         linuxserver/webtop:arch-mate
i3 Alpine         linuxserver/webtop:alpine-i3
i3 Ubuntu         linuxserver/webtop:ubuntu-i3
i3 Fedora         linuxserver/webtop:fedora-i3
i3 Arch           linuxserver/webtop:arch-i3
Openbox Alpine    linuxserver/webtop:alpine-openbox
Openbox Ubuntu    linuxserver/webtop:ubuntu-openbox
Openbox Fedora    linuxserver/webtop:fedora-openbox
Openbox Arch      linuxserver/webtop:arch-openbox
IceWM Alpine      linuxserver/webtop:alpine-icewm
IceWM Ubuntu      linuxserver/webtop:ubuntu-icewm
IceWM Fedora      linuxserver/webtop:fedora-icewm
IceWM Arch        linuxserver/webtop:arch-icewm

Use Docker run:

docker run -d \
  --name=webtop \
  --security-opt seccomp=unconfined `#optional` \
  -e PUID=1000 \
  -e PGID=1000 \
  -e SUBFOLDER=/ `#optional` \
  -e KEYBOARD=en-us-qwerty `#optional` \
  -p 3000:3000 \
  -v /docker/images/staging/webtop/config:/config \
  -v /var/run/docker.sock:/var/run/docker.sock `#optional` \
  --device /dev/dri:/dev/dri `#optional` \
  --shm-size="1gb" `#optional` \
  --restart unless-stopped \
  linuxserver/webtop



Building a Website

Building a Webserver with NGINX (Docker)

NGINX can act as the following:

  • webserver
  • reverse proxy
  • load balancer
  • mail proxy
  • http cache

Reverse Proxy Manager with NGINX (Docker)

Nginx Proxy Manager

A reverse proxy manager will let you route incoming requests to multiple backend services and manage SSL certificates from a simple web interface.

docker run -d \
--name=nginx-proxy-manager \
-p 8080:80 \
-p 81:81 \
-p 4433:443 \
-v /mnt/docker/staging/nginx_proxy_manager/data:/data \
-v /mnt/docker/staging/nginx_proxy_manager/etc/letsencrypt:/etc/letsencrypt \
--restart unless-stopped \
jc21/nginx-proxy-manager:latest

After creating the container, log in to the web interface by browsing to the IP address on port 81.

The default login credentials are:
USER: admin@example.com
PASSWD: changeme

Once you successfully log in, you will be asked to change these.


Reference: docker pull linuxserver/nginx

Building a Webserver with Apache (Docker)

Building a Website with WordPress (Docker)

Traefik (Docker)

This is useful for taking items separated by commas and turning them into a list, sorted with duplicate entries removed:

tr ' ' '\n' | tr -d , | sort -V | uniq
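For example, feeding a made-up comma-separated string through the pipeline:

```shell
# Split on spaces, strip commas, version-sort, drop duplicates
echo "delta, alpha, charlie, alpha" | tr ' ' '\n' | tr -d , | sort -V | uniq
# prints:
# alpha
# charlie
# delta
```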

I used this to get a list of filenames filtered by date:

ls -lat | grep -i <name> | grep -i <date> | cut -f2- -d: | cut -c 4-
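A sketch of an alternative that avoids parsing ls output: GNU find can filter by modification time directly (the file names and dates here are made up):

```shell
# Build two throwaway files with explicit mtimes (GNU touch -d assumed)
tmp=$(mktemp -d)
touch -d "2024-01-01" "$tmp/old.log"
touch -d "2024-06-15" "$tmp/new.log"

# List only files modified after March 1st, 2024
find "$tmp" -type f -newermt "2024-03-01"   # prints only new.log
```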

Replace a word in a file with a matching string:

sed -i 's/word1/word2/g' input.file

For shell variables, use double quotes around the sed expression so that the variables expand.
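A minimal sketch with a throwaway file (GNU sed assumed; on BSD/macOS the in-place flag is written sed -i ''):

```shell
file=$(mktemp)
echo "hello world" > "$file"

old=world
new=linux
# Double quotes let the shell expand $old and $new inside the expression
sed -i "s/$old/$new/g" "$file"

cat "$file"   # hello linux
```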

Replace a line in a file matching a string

Cockpit (Web User Interface)

cockpit-project · GitHub

Cockpit is a web user interface for managing a remote server.

Install Cockpit web interface (Ubuntu):

sudo apt update && sudo apt install cockpit

Install Cockpit web interface (Arch Linux):

sudo pacman -S cockpit && sudo systemctl enable --now cockpit.socket

Cockpit web interface is accessed over port 9090.

To access virtual machines in the Cockpit web UI, you need to install the “cockpit-machines” package.