This is a collection of projects that I currently have deployed, or have deployed in the past, in my home environment.



Docker is a Platform as a Service (PaaS) product that leverages OS-level virtualization to deliver software packages referred to as “containers”. These containers run isolated from the host OS and bundle software, services, libraries, and config files. Unlike VMs, containers share the host kernel rather than virtualizing hardware, which keeps them lightweight. The cool thing about Docker is that containers are easy to set up and remove.

Containers share a few concepts with virtual machines and traditional bare-metal computers:

  • CONTAINER: A runtime instance of a Docker image (similar to a VM)
  • IMAGE: A preconfigured, read-only template that containers are created from
  • VOLUME: A place to store persistent data, since images are static
  • STACK: A cluster of containers that are managed together
  • ENV: Environment variables used to set the environment for commands, daemons, and processes

Containerization with Docker

Install Docker on Debian (Ubuntu)

Install the prerequisite packages:

sudo apt-get update && sudo apt-get install ca-certificates curl gnupg lsb-release -y

Add Docker’s GPG key:

curl -fsSL | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Set up the stable repository:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker:

sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli -y

Docker Commands

Pull a docker image

docker pull <image_name>

Create a docker container from an image

docker run -it -d <image_name>

List running containers

docker ps

Show all containers (including stopped)

docker ps -a

Starting a container

docker start <container>

Stopping a container gracefully

docker stop <container>

Forcefully killing a container

docker kill <container>

Accessing docker container CLI

Execute commands from the host
docker exec -it <container> <command>

Access the container CLI from the host
docker exec -it <container> /bin/bash

Using Docker Compose

Docker Compose is an easy and efficient way to deploy containers.

First, create a YAML file named “docker-compose.yml”:

vim docker-compose.yml

An example docker-compose.yml file looks something like this:

version: '3'

services:
  service1:
    image: <image_name>
    command: --cmd=value --cmd=value
    environment:
      VAR1: <value>
      VAR2: <value>
    volumes:
      - ./host:/container
    restart: unless-stopped

  service2:
    image: <image_name>
    environment:
      VAR1: value
      VAR2: value
    volumes:
      - ./host:/container
    networks:
      - name
    restart: unless-stopped

  service3:
    image: <image_name>
    restart: unless-stopped

  service4:
    image: <image>
    networks:
      - frontend
      - backend
    restart: unless-stopped

networks:
  name:
  frontend:
  backend:

Once the docker-compose file is created, run the following command:

docker-compose up -d

If you get an “Interpolation Error” with Docker (I have run into this before), it might be because you have a character not being interpreted literally. I ran into this issue with the dollar sign “$” being mistaken for a variable. To escape the dollar sign, you just need to add another (like so: “$$”) and it will be treated as one literal dollar sign.
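For example, a compose file that needs a literal dollar sign in an environment value might look like this (the service and variable names are just placeholders):

```yaml
services:
  app:
    image: <image_name>
    environment:
      # "$$" reaches the container as a single literal "$"
      PRICE_LABEL: "$$5.00"
```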


Docker Python Module

pip install docker

Docker Container CLI

Access the container CLI using the Bourne Again Shell (bash)
docker exec -it <container> /bin/bash

Access the container CLI using the Almquist Shell (ash)
docker exec -it <container> /bin/ash

Access the container CLI using the Bourne Shell (sh)
docker exec -it <container> /bin/sh

Executing Shell Commands on Container from Host
docker exec -it <container> <command>

Building a Docker Registry

version: '3'

services:
  registry:
    image: registry:2
    container_name: registry
    ports:
      - 5000:5000
    volumes:
      - /docker/registry:/var/lib/registry
    restart: unless-stopped

Creating a Docker Image

Creating a Docker image from Scratch

Install Docker package, engine and server, as well as the Docker registry

sudo apt install docker docker-registry

Docker Registry can be downloaded and configured from a docker image as a container itself (which I find more complex but also a lot more useful)

docker run -d -p 5000:5000 --restart=always --name registry registry:2

Config file


Use debootstrap to create a Debian image

sudo apt install debootstrap
sudo debootstrap buster buster

Creating a Container from an Existing Image

Using the command ‘docker commit’ we are able to save the current container (and all of its contents) as a new image.

Pull the ubuntu image (you can pull anything like alpine, debian, or arch, I just used ubuntu)

docker pull ubuntu

Run up the image

docker run -it -d --name ubuntu ubuntu

Once the container is running, enter a shell on it

docker exec -it ubuntu /bin/bash

Make any configurations, download packages or whatnot that you need to do in the CLI, then exit.

docker stop ubuntu

Save the changes with docker commit:

docker commit ubuntu <image_name>

Tag the new image for the local registry:

docker image tag <image_name> localhost:5000/<name>

Pushing a Docker Image to a Repository

Push the tagged image to the registry:

docker push localhost:5000/<name>

Portainer Web UI (Docker)

Portainer is a Web User Interface for Docker.

Create a volume for Portainer data:

docker volume create portainer_data

Use Docker run to deploy the container:

docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /mnt/docker/images/staging/portainer/data:/data portainer/portainer-ce

You can access the Portainer web interface over port 9000. On first access you will be prompted to set up a username and password.


Watchtower (Update Containers Automatically)


Watchtower is a containerized application that runs inside Docker, monitors your running containers, and watches for updates to the images those containers were originally started from. When Watchtower detects a new image on Docker Hub, it pulls that image and automatically restarts the container with it.

Deploy Watchtower Container


docker run -d --name=watchtower --restart=always -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower

ENABLE MODE allows Watchtower to update only specifically flagged containers carrying the following label:

LABEL com.centurylinklabs.watchtower.enable="true"

MONITOR MODE prompts Watchtower to notify of container updates but will not update them automatically

To FULLY EXCLUDE a container, add the following label to any container that you do not want to be updated by Watchtower:

LABEL com.centurylinklabs.watchtower.enable="false"
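In a compose file, the same label goes under the service’s labels key; a sketch (the service name is a placeholder):

```yaml
services:
  myapp:
    image: <image_name>
    labels:
      # exclude this container from Watchtower updates
      - "com.centurylinklabs.watchtower.enable=false"
```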

Run Watchtower Once (then Exit)

To start a Watchtower container that runs a single time and exits afterwards:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once

You can append container names to the end of the command to run against only specific containers.


Converting VMDK to Docker Image

Convert the QEMU/VMDK image into a raw image:

sudo qemu-img convert -f vmdk -O raw diskimage.vmdk image.img

After the raw image is created, open it with guestfish:

guestfish -a image.img --ro
><fs> run

><fs> list-filesystems
/dev/sda1: ext4
/dev/VolGroup/lv_root: ext4
/dev/VolGroup/lv_swap: swap
><fs> mount /dev/VolGroup/lv_root /
><fs> tar-out / - | xz --best >>  myimage.xz
><fs> exit

Import the image into Docker:

cat myimage.xz | docker import - mydockerimage

Run the container:

docker run -it mydockerimage bash


Add a Docker Network with NO Internet Access

docker network create --subnet <subnet> no-internet

sudo iptables --insert DOCKER-USER -s <subnet> -j REJECT --reject-with icmp-port-unreachable

sudo iptables --insert DOCKER-USER -s <subnet> -m state --state RELATED,ESTABLISHED -j RETURN

Connect Docker Container to Host Network

docker run -d --network host <image_name>

Install Packages in Docker Container with APK

This is useful for Alpine-based containers that don’t use APT:

apk update && apk add vim

Kubernetes

I’ll add this eventually….. chill.

Docker Swarm

I’ll get to this too… someday.

Building a Webserver

Having a website is a pretty good way to both market yourself and stand out amongst your peers. Building a website might seem daunting, and you may be under the impression that you have to learn complex languages like HTML, CSS, and JavaScript. While these are incredibly useful and I encourage understanding at least the basics, they are not really needed. There are web services that allow you to build websites with graphical tools, often referred to as WYSIWYG (What You See Is What You Get) editors. Sites like GoDaddy, SquareSpace, and Wix will host your website for you, but at a cost; the idea of hosting your own website is to keep the cost minimal. I use WordPress because of its flexibility and long history in the webspace. This works perfectly for blogs and personal websites (like the one you are on now), but once you start getting into things like ecommerce, it can be more beneficial to spend the extra money and host your website with a hosting provider, since you will get an SLA (Service Level Agreement), fast speeds, good security, and other benefits that are hard to come by when self-hosting.

That being said, this guide does not discuss buying a domain name, setting up DNS, and so on; it is strictly about deploying your webserver and configuring your network to allow public access to it. You MUST ENSURE that your network is properly secured. I will not be covering that here, I will not be going into great detail about port forwarding, and I will not be talking about network-specific details, since a lot of this depends GREATLY on the hardware you have.

Install Portainer Web UI for Docker

Portainer is a Web User Interface for Docker.

Create a volume for Portainer data:

docker volume create portainer_data

Use Docker run to deploy the container:

docker run -d \
-p 8000:8000 \
-p 9000:9000 \
--name=portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /mnt/docker/staging/portainer/data:/data \
portainer/portainer-ce

You can access the Portainer web interface over port 9000. On first access you will be prompted to set up a username and password.


Install NGINX Reverse Proxy Manager

Nginx Proxy Manager

A reverse proxy manager routes incoming web traffic to the correct backend service and makes it easy to manage SSL certificates from a single interface.

docker run -d \
-p 8080:80 \
-p 81:81 \
-p 4433:443 \
-v /mnt/docker/staging/nginx_proxy_manager/data:/data \
-v /mnt/docker/staging/nginx_proxy_manager/etc/letsencrypt:/etc/letsencrypt \
--restart unless-stopped \
jc21/nginx-proxy-manager:latest

After creating the container, log in to the web interface by entering the IP address with port 81.

The default login credentials are:
USER: [email protected]
PASSWD: changeme

Once you successfully login, you will be asked to change these.


Build a Website with WordPress (Docker)

WordPress is the most popular CMS (Content Management System) in the world. It’s easy to deploy a WordPress container and start building your website today, and using Docker is much simpler than setting up a traditional LAMP server.

Deploy WordPress Stack (Docker)

Log in to the Portainer web interface and navigate to “App Templates”.


Name the stack and pick a password for the database.


Determine which external port the WordPress container is mapped to (for example, you might see 48312:80). In this case, you would navigate to the IP address with port number 48312.


Complete the login
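If you would rather skip the Portainer template, an equivalent WordPress stack can be sketched in docker-compose (the volume names, host port, and password are assumptions, not values taken from the template):

```yaml
version: '3'

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: <db_password>
      MYSQL_DATABASE: wordpress
    volumes:
      - db_data:/var/lib/mysql
    restart: unless-stopped

  wordpress:
    image: wordpress:latest
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: <db_password>
    volumes:
      - wp_data:/var/www/html
    depends_on:
      - db
    restart: unless-stopped

volumes:
  db_data:
  wp_data:
```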

Increase WordPress MAX File Upload Size

Edit .htaccess:

php_value upload_max_filesize 64M
php_value post_max_size 64M
php_value max_execution_time 300
php_value max_input_time 300

Change the user permissions on the file, otherwise it may revert:

sudo chmod 600 .htaccess

Install OpenSSH Server (Docker)

The OpenSSH server in a container allows you to give a client SSH access to a resource without granting full access to the entire server.

Pull the “OpenSSH Server” container image from Docker Hub

docker pull linuxserver/openssh-server

Use Docker run:

docker run -d \
--name=openssh-server \
-e PUID=1000 \
-e PGID=1000 \
-e SUDO_ACCESS=false \
-e USER_PASSWORD_FILE=/path/to/file `#optional` \
-p 2222:2222 \
-v /path/to/config:/config \
--restart unless-stopped \
linuxserver/openssh-server


Install Endlessh (Docker)

Endlessh is an SSH tarpit that very slowly sends an endless, random SSH banner. It keeps SSH clients locked up for hours or even days at a time. The purpose is to put your real SSH server on another port and then let the script kiddies get stuck in this tarpit instead of bothering a real server.

Before setting up Endlessh, make sure you change the default SSH port on the server from port 22 to a different port.

Pull the “Endlessh” container image from Docker Hub:

docker pull linuxserver/endlessh

Use Docker run:

docker run -d \
  --name=endlessh \
  -e PUID=1000 \
  -e PGID=1000 \
  -p 22:2222 \
  -e MSDELAY=7000 \
  -e MAXLINES=32 \
  -e MAXCLIENTS=4096 \
  -e LOGFILE=true \
  -v /mnt/docker/images/staging/endlessh/config:/config \
  --restart unless-stopped \
  linuxserver/endlessh


Surfshark (With Gluetun Docker Container)

Surfshark is an awesome public VPN that can be used to anonymize your traffic and surf the web without exposing a lot of your personal information. Using a public VPN service provider is not a magic one-stop solution for internet security, but it is an important layer.

Most VPN providers limit the number of devices you can have connected under your account, but Surfshark does not. This is important because I like to configure Surfshark to run on a lot of my systems (and I have a lot) and removing that cap that most other providers implement gives me a lot of freedom.

I deploy Surfshark in a Docker container called “Gluetun”.

sudo docker run -d \
--name <name> \
--cap-add=NET_ADMIN \
-e VPNSP="surfshark" \
-e OPENVPN_USER=<uname> \
-e OPENVPN_PASSWORD=<passwd> \
-v /mnt/docker/staging/gluetun:/gluetun \
qmcgaw/gluetun

There are a few variables that you are going to want to change here and I will list them below:

Environmental Variables

COUNTRY: Refer to “Server Locations” below
CITY: Refer to “Server Locations” below
OPENVPN_USER: Surfshark username
OPENVPN_PASSWORD: Surfshark password

WireGuard (Docker Container)


WireGuard Server

WireGuard is installed using a Docker container image provided by LinuxServer.io:

sudo docker run -d \
  --name=wireguard \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_MODULE \
  -e PUID=1000 \
  -e PGID=1000 \
  -e SERVERURL=host \
  -e SERVERPORT=port \
  -e PEERS=1 \
  -e PEERDNS=auto \
  -p port:port/udp \
  -v /mnt/docker/staging/wireguard/config:/config \
  -v /lib/modules:/lib/modules \
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
  --restart unless-stopped \
  linuxserver/wireguard

Environmental Variables

SERVERURL: IP address or URL of the VPN server
SERVERPORT: External port
PEERS: Number of simultaneously connected peers



WireGuard Client

VPN over SSH with sshuttle

Sshuttle is a VPN-like solution that tunnels traffic over SSH without the complexities of port forwarding.

#Install Debian (Ubuntu)
sudo apt-get update && sudo apt-get install sshuttle

#Install Arch
sudo pacman -S sshuttle

If you choose, you can use Git to install.

git clone
cd sshuttle
sudo ./ install

Once installed, forward all traffic through any host you can SSH into:

sshuttle -r <user>@<host> 0.0.0.0/0




Ansible is Simple IT Automation

Ansible is a set of tools developed by Red Hat that enables IaC (Infrastructure as Code) and is used for provisioning, configuration management, and deployment on Windows and Unix-like machines. It is an excellent place to get started with learning automation, IaC, and CI/CD in DevOps.

Installing Ansible

Install Ansible (RHEL)

sudo yum update -y && \
sudo yum install epel-release -y && \
sudo yum install ansible -y

Install Ansible (Debian)

sudo apt update && \
sudo apt install ansible -y

Install Ansible (Arch Linux)

sudo pacman -Syu && \
sudo pacman -S ansible-core && \
sudo pacman -S ansible

Getting Started

If you want to know more about Ansible and learn how to use it, I recommend checking out the documentation or watching the video I linked below.

Ansible Command Line Interface

Ansible “ad hoc” commands using the Ansible CLI are great for executing commands that are infrequent or do not require playbooks.

The command structure for ad hoc commands:

ansible [pattern] -m [module] -a "[module options]"


All hosts: all (or *)
One host: <host>
Multiple hosts: <host1>:<host2> or <host1>,<host2>
One group: <group>
Multiple groups: webservers:dbservers (all hosts in webservers plus all hosts in dbservers)
Excluding groups: webservers:!atlanta (all hosts in webservers except those in atlanta)
Intersection of groups: webservers:&staging (all hosts in webservers that are also in staging)

You can also combine different patterns
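These patterns assume an inventory file that defines the groups; a minimal INI-style inventory sketch (the hostnames are hypothetical):

```ini
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com

[staging]
web2.example.com

[atlanta]
web1.example.com
```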


Semaphore Web Interface (Ansible)

This container is installed using a YAML file with docker-compose:

version: '3'

services:
  mysql:
    image: mysql
    hostname: mysql
    ports:
      - 3306:3306
    volumes:
      - /var/lib/mysql
    environment:
      MYSQL_DATABASE: semaphore
      MYSQL_USER: semaphore
      MYSQL_PASSWORD: semaphore
    restart: unless-stopped

  semaphore:
    image: ansiblesemaphore/semaphore:latest
    ports:
      - 3000:3000
    environment:
      SEMAPHORE_DB_USER: semaphore
      SEMAPHORE_DB_PASS: semaphore
      SEMAPHORE_DB_HOST: mysql
      SEMAPHORE_DB: semaphore
      SEMAPHORE_PLAYBOOK_PATH: /tmp/semaphore/
      SEMAPHORE_ADMIN_EMAIL: admin@localhost
      SEMAPHORE_ADMIN: admin
    depends_on:
      - mysql
    restart: unless-stopped

Generate Access Key Encryption with the following command:

head -c32 /dev/urandom | base64
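As a sanity check, 32 random bytes always encode to 44 base64 characters, and decoding the key yields exactly 32 bytes again:

```shell
# generate the access key and verify its shape
key=$(head -c32 /dev/urandom | base64)
echo "$key"
echo "${#key} characters"          # 44
echo "$key" | base64 -d | wc -c    # 32
```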

You can use docker-compose but I prefer to use Portainer web UI.

You can access the web interface over port 3000.

The default credentials are:

  • user: admin@localhost
  • passwd: user

It’s fairly obvious that you are going to want to change these ASAP.

To do so, open the user profile at the bottom left of the screen and click “Edit User”.

New Project


The first thing you are prompted to do when you log in to the web console is to create a new project. At a minimum, the project name is required.

To create a new project later, hit the arrow next to the project name at the top left corner of the screen and select “New project”.

Key Creation (Key Store)

You first need to create a key. The key is going to allow us to authenticate with our hosts so this is an important step.

In the Key Store tab, select “NEW KEY” in the upper right hand corner of the screen.

Enter the Key Name (mandatory).

You have a couple of choices here with keys, and it’s going to depend on which one you want to use. Personally, I use SSH keys since they are incredibly secure, as long as you aren’t sharing them or storing them insecurely.

Whatever option you select, you are going to need to fill out the required information (i.e. username, password, key, access token….etc.)

Once this is done hit CREATE.

You can always edit this later by selecting the edit icon on the right side of the key.


An inventory is a file that stores the list of hosts for Ansible to run its plays against. Inventories are flexible and can be stored in a number of different file types. To create a new inventory, select the “NEW INVENTORY” option in the top right corner of the screen. Ansible Semaphore can read inventory information from two sources: a static file already present on the system, or inventory contents stored via the Semaphore web UI.

AWX Web Interface (Ansible)







Installing DShield Honeypot

Update the system and install Git

sudo apt update && sudo apt install -y git

Clone the DShield Git repository

git clone

Run the install script

sudo ./dshield/bin/

Select YES


You need to create a DShield account first.

Get the API key and enter the information

Select OK

Select the default interface

Enter the network information

Confirm by clicking OK

Enter IPs to ignore

Confirm by clicking OK

Next enter IPs and ports to disable

Confirm by selecting OK

Enter information to create the SSL certificate

Confirm by clicking YES

Reboot the machine and log in using the new SSH port (12222)

sudo reboot



Git is a distributed version control tool used for collaboration within software development teams.

Git commands

#check the installed version
git --version

#initialize a repository
git init <directory>

#show the state of the git repository
git status

#add changes from the working directory to the staging area
git add <file>

#commit files to the commit history
git commit

#list all commits in the git repository
git log

#create a new branch
git branch <branch_name>

#switch to a different branch
git checkout <branch_name>
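The commands above chain together into a typical first session; a quick sketch (the repo name and identity values are placeholders):

```shell
# create a repo, commit a file, and branch off that commit
git init demo && cd demo
git config user.email "demo@example.com"   # identity for this repo only
git config user.name "Demo User"
echo "hello" > README.md
git add README.md                  # stage the new file
git commit -m "initial commit"     # record it in history
git branch feature                 # create a branch at this commit
git checkout feature               # switch to it
git log --oneline                  # shows the single commit
```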

GitLab is an open-source software package that allows DevOps teams to develop, secure, and operate software within a single application. It can be deployed as a self-hosted Docker container, giving you ultimate control over your DevOps environment.

Install GitLab with Docker (Community Edition)

You can install GitLab with one of the following methods:

  • Docker Engine
  • Docker-Compose
  • Docker Swarm

Install GitLab CE with Docker Engine

docker run -d \
--name gitlab \
-p 22:22 \
-p 80:80 \
-p 443:443 \
-v /docker/gitlab/config:/etc/gitlab \
-v /docker/gitlab/logs:/var/log/gitlab \
-v /docker/gitlab/data:/var/opt/gitlab \
-e PUID=1000 \
-e PGID=1000 \
--shm-size 256m \
--hostname <hostname> \
--restart always \
gitlab/gitlab-ce:latest

Install GitLab CE with Docker-Compose

version: '3.8'

services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    restart: unless-stopped
    hostname: ''
    environment:
      GITLAB_OMNIBUS_CONFIG: external_url ''
      PUID: 1000
      PGID: 1000
    ports:
      - 22:22
      - 80:80
      - 443:443
    volumes:
      - config:/etc/gitlab
      - logs:/var/log/gitlab
      - data:/var/opt/gitlab
    shm_size: '256m'

volumes:
  config:
  logs:
  data:

You can optionally expose port 4000 which is where the GitLab documentation is hosted locally.

You can then run ‘docker-compose up -d’, though I prefer to deploy it as a stack in Portainer.

Install GitLab CE with Docker Swarm

I’ll get to this later… when I get to it.


Initial Login and Account Setup

GitLab is going to take a while to set up initially (often 10-15 minutes or more).

After everything is set up and the container appears to be running normally, navigate to the localhost HTTPS port you assigned. By default this might be 443, but I would recommend changing that.

The default login username is “root”

To get the password, you can look in the ‘initial_root_password’ file under the container’s mapped ‘config’ directory, or execute the following command on the host machine:

sudo docker exec -it gitlab grep 'Password' /etc/gitlab/initial_root_password

IMMEDIATELY after login there are some very obvious security configurations we are going to want to make.

Change the Root Username

On the VERY top right portion of the screen you should see an icon with a dropdown menu, click ‘Edit Profile’.

On the left side menu, select ‘Account’.

Replace the username ‘root’ with a non-obvious admin username and click ‘Update username’.

Change the Root Password

In the same left menu, click ‘Password’.

Enter the current password and then type in a new complex password or passphrase.

Disable Account Registration

When you immediately login, you will probably notice the following message banner.

This means that anyone who can access the frontend login interface is able to register for an account. It does not mean they will be able to access their account immediately (the root user still needs to confirm the account), but you do not want anonymous users to have the ability to create accounts at all.

This can easily be disabled by clicking ‘Turn Off’, then when you are navigated to the new page, uncheck the box that says “Sign-up enabled”.

Install Gitlab with Container Repository


Linode Price Calculator

Resize Linode (Smaller)

Run the following command to determine your disk usage:

df -h

You need to make sure your combined “Used” disk size is less than the maximum disk size of the plan you want to move to.

Make sure to have a backup of the server BEFORE doing this since there are risks involved with shrinking volumes.

Power off the linode

In the dashboard, click “Storage” and “Disks”, then resize to the desired size.

Mount Object Storage with S3FS

Make sure you have S3FS installed on the system you want to mount the object storage to.

#Debian (Ubuntu)
sudo apt install s3fs

#Arch Linux
sudo pacman -S s3fs-fuse

#RHEL (CentOS)
sudo yum install epel-release
sudo yum install s3fs-fuse

Add the following to /etc/fstab

<bucket_name> /<mount>/<point> fuse.s3fs _netdev,allow_other,use_path_request_style,url=https://<geolocation> 0 0

Then create a file “/etc/passwd-s3fs” containing your object storage credentials in the form <access_key>:<secret_key>, then restrict its permissions:

sudo chmod 600 /etc/passwd-s3fs
Mount the object storage automatically

sudo mount -a

Dashboard with Grafana

Grafana is an open-source analytics and interactive visualization web application. With Grafana, you are able to take raw data and present it in a variety of visual formats such as graphs, charts, and alerts in the web console. The interface is highly expandable and readily customizable.

Using Grafana as a Container (Docker)

Grafana produces a preconfigured container image that can be downloaded and deployed easily with Docker

Deploy Grafana with Docker Run

docker run -d --name=grafana -p 3000:3000 grafana/grafana

Deploy Grafana with Docker Compose

services:
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - 3000:3000

Default login credentials:

  • USERNAME: admin
  • PASSWORD: admin


Prometheus is an open-source application for event monitoring and alerting, used for recording real-time metrics.

Deploying Grafana + Prometheus with Docker Compose

Grafana is an open-source analytics and monitoring software while Prometheus is open-source software for event monitoring, logging, alerting. These tools are useful on their own but when combined together, they provide a powerful software stack that allows admins the ability to monitor events in real time as well as reviewing logs for historical events.

I use Docker Compose for its simplicity and ease of use, especially when dealing with 2 containers that need to work together.

version: '3.8'

networks:
  monitoring:
    driver: bridge

services:
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /docker/dashboard/node-exporter/proc:/host/proc:ro
      - /docker/dashboard/node-exporter/sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - 39100:9100
    networks:
      - monitoring

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    volumes:
      - /docker/dashboard/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - /docker/dashboard/prometheus/_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'
    ports:
      - 39090:9090
    networks:
      - monitoring

  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - 33000:3000
    networks:
      - monitoring
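The stack mounts a prometheus.yml that is not shown here; a minimal scrape config matching the node-exporter service might look like this (the job name and interval are assumptions):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
      # the service name resolves over the shared "monitoring" network
      - targets: ['node-exporter:9100']
```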


Uptime Kuma is a self-hosted monitoring tool, run in a Docker container, that tracks the availability of services over HTTP(S), TCP, ping, and more.

Deploy Uptime-Kuma with Docker Run

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1

Deploy Uptime-Kuma with Docker Compose

version: '3.8'

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    volumes:
      - /docker/uptime-kuma/_data:/app/data
    ports:
      - 3001:3001
    restart: unless-stopped



Vikunja is a simple, clean, and easy-to-use project/task management tool.

version: '3.8'

services:
  db:
    image: mariadb:10
    container_name: vikunja_db
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    environment:
      MYSQL_ROOT_PASSWORD: vikunja
      MYSQL_USER: vikunja
      MYSQL_PASSWORD: vikunja
      MYSQL_DATABASE: vikunja
      PUID: 1000
      PGID: 1000
    volumes:
      - ./db:/var/lib/mysql
    restart: unless-stopped

  api:
    image: vikunja/api
    container_name: vikunja_api
    environment:
      VIKUNJA_SERVICE_FRONTENDURL: http://<ipaddr>/
      PUID: 1000
      PGID: 1000
    ports:
      - 3456:3456
    volumes:
      - ./files:/app/vikunja/files
    depends_on:
      - db
    restart: unless-stopped

  frontend:
    image: vikunja/frontend
    container_name: vikunja_frontend
    ports:
      - 8080:80
    environment:
      VIKUNJA_API_URL: http://<url>.com:3456/api/v1
    restart: unless-stopped


Sphinx is a powerful documentation tool for technical writers that presents web-based documentation as HTML files hosted by a webserver (like NGINX or Apache).

Sphinx documents are written in reStructuredText, a lightweight markup language used for document creation.

The .rst files are then converted into HTML files that the webserver can use to present the documents.

Installing Sphinx

Install the Sphinx package:

#Debian (Ubuntu)
sudo apt install python3-sphinx

#RHEL (CentOS)
sudo yum install python-sphinx

Creating a project

Execute the following command:

sphinx-quickstart
You have two options for placing the build directory for Sphinx output.
Either, you use a directory "_build" within the root path, or you separate
"source" and "build" directories within the root path.
> Separate source and build directories (y/n) [n]: <PRESS ENTER>

The project name will occur in several places in the built documentation.
> Project name: <project_name>
> Author name(s): <name>
> Project release []: <release_info>

If the documents are to be written in a language other than English,
you can select a language here by its language code. Sphinx will then
translate text that it generates into that language.

For a list of supported codes, see
> Project language [en]: <PRESS ENTER>

Finished: An initial directory structure has been created.

You should now populate your master file /docs/index.rst and create other documentation
source files. Use the Makefile to build the docs, like so:
   make builder
where "builder" is one of the supported builders, e.g. html, latex or linkcheck.
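A freshly populated index.rst might look like this (the toctree entries are hypothetical document names):

```rst
Welcome to My Documentation
===========================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   installation
   usage
```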

Deploying Sphinx with Docker

There are a number of ways to install and deploy Sphinx. Traditionally, I have always installed the package on my Linux servers as a base installation. With tools such as Docker available, it is much easier to setup and configure this to get started writing documentation as quickly as possible.

Creating a Project

Run the following command to start initial Sphinx documentation base

docker run -it --rm -v /path/to/document:/docs sphinxdoc/sphinx sphinx-quickstart

Follow the prompt

You have two options for placing the build directory for Sphinx output.
Either, you use a directory "_build" within the root path, or you separate
"source" and "build" directories within the root path.
> Separate source and build directories (y/n) [n]: <PRESS ENTER>

The project name will occur in several places in the built documentation.
> Project name: <project_name>
> Author name(s): <name>
> Project release []: <release_info>

If the documents are to be written in a language other than English,
you can select a language here by its language code. Sphinx will then
translate text that it generates into that language.

For a list of supported codes, see
> Project language [en]: <PRESS ENTER>

Finished: An initial directory structure has been created.

You should now populate your master file /docs/index.rst and create other documentation
source files. Use the Makefile to build the docs, like so:
   make builder
where "builder" is one of the supported builders, e.g. html, latex or linkcheck.
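As the quickstart output notes, the master index.rst drives the table of contents. A minimal sketch might look like the following (the page names under the toctree are placeholders for your own source files):

```rst
Project Documentation
=====================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   installation
   usage
```

Each toctree entry refers to another .rst file (without the extension) in the same source directory.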

Building Documentation

The Official Sphinx Docker image recommends using the following commands from the host in order to build documentation

Build HTML

docker run --rm -v /path/to/document:/docs sphinxdoc/sphinx make html

Build EPUB

docker run --rm -v /path/to/document:/docs sphinxdoc/sphinx make epub

Build PDF

docker run --rm -v /path/to/document:/docs sphinxdoc/sphinx-latexpdf make latexpdf

Automatically Rebuild Documentation with Sphinx-Autobuild

Sphinx-Autobuild is a package that automatically builds your packages when any changes in the source files are detected.

Install Sphinx-Autobuild

pip install sphinx-autobuild

Initiating Sphinx-Autobuild

Initiating sphinx-autobuild starts a server on localhost over port 8000. You will need to provide two arguments: the source directory where your project is stored and the output directory where you want Sphinx to build your docs (the HTML folder)

sphinx-autobuild <sourcedir> <outdir>

You can use the --help argument to bring up the help message with more information about sphinx-autobuild

sphinx-autobuild --help

usage: sphinx-autobuild [-h] [--port PORT] [--host HOST] [--re-ignore RE_IGNORE] [--ignore IGNORE] [--no-initial] [--open-browser]
                        [--delay DELAY] [--watch DIR] [--pre-build COMMAND] [--version]
                        sourcedir outdir [filenames [filenames ...]]

positional arguments:
  sourcedir             source directory
  outdir                output directory for built documentation
  filenames             specific files to rebuild on each run (default: None)

optional arguments:
  -h, --help            show this help message and exit
  --port PORT           port to serve documentation on. 0 means find and use a free port (default: 8000)
  --host HOST           hostname to serve documentation on (default:
  --re-ignore RE_IGNORE
                        regular expression for files to ignore, when watching for changes (default: [])
  --ignore IGNORE       glob expression for files to ignore, when watching for changes (default: [])
  --no-initial          skip the initial build (default: False)
  --open-browser        open the browser after building documentation (default: False)
  --delay DELAY         how long to wait before opening the browser (default: 5)
  --watch DIR           additional directories to watch (default: [])
  --pre-build COMMAND   additional command(s) to run prior to building the documentation (default: [])
  --version             show program's version number and exit

sphinx's arguments:
  The following arguments are forwarded as-is to Sphinx. Please look at `sphinx --help` for more information.
    -b=builder, -a, -E, -d=path, -j=N, -c=path, -C, -D=setting=value, -t=tag, -A=name=value, -n, -v, -q, -Q, -w=file, -W, -T, -N, -P

My Usage

My preferred way to run sphinx-autobuild is as a systemd service, so that it starts on boot and can be stopped or restarted easily.
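A systemd unit along those lines might look like the sketch below (the user, binary path, and documentation directories are placeholders for your own setup; save it as e.g. /etc/systemd/system/sphinx-autobuild.service):

```ini
[Unit]
Description=Sphinx Autobuild documentation server
After=network.target

[Service]
; Run as an unprivileged user; adjust to your environment
User=docs
; Source directory and build output directory are placeholders
ExecStart=/usr/local/bin/sphinx-autobuild /path/to/docs /path/to/docs/_build/html
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable and start it with sudo systemctl enable --now sphinx-autobuild.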

reStructured Text (Markup)

Inline Markup

*one asterisk* (italics)
**double asterisk** (bold)


* bullet
* bullet
* bullet

  * nested list
  * nested list

1. number
2. number
3. number

#. number
#. number
#. number
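Literal blocks are also handy in reStructuredText: ending a paragraph with a double colon renders the following indented block verbatim, for example:

```rst
The following is rendered literally::

   some code or preformatted text,
   with whitespace preserved
```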


Sphinx Theme (Read the Docs)

Install Read the Docs

pip install sphinx-rtd-theme

In conf.py in your project, edit the html_theme line to read the following

html_theme = "sphinx_rtd_theme"


Cloud9 is an IDE (integrated development environment) for developers that supports multiple languages such as C, C++, Go, JavaScript, Perl, PHP, Python, Ruby and more.

This image is DEPRECATED

Installing Cloud9 as a Container

Pull image

docker pull linuxserver/cloud9

Docker Compose

version: "3.8"
services:
  cloud9:
    image: linuxserver/cloud9
    container_name: cloud9
    environment:
      - PUID=1000
      - PGID=1000
      - GITURL= #optional
      - USERNAME= #optional
      - PASSWORD= #optional
    volumes:
      - /path/to/your/code:/code
      - /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 8000:8000
    restart: unless-stopped

VS Code with Code-Server

docker pull linuxserver/code-server
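By analogy with the Cloud9 example above, a Docker Compose sketch for code-server could look like this (the PASSWORD value and the /path/to/config volume are placeholders; 8443 is the port the linuxserver image serves on):

```yaml
version: "3.8"
services:
  code-server:
    image: linuxserver/code-server
    container_name: code-server
    environment:
      - PUID=1000
      - PGID=1000
      - PASSWORD= #optional
    volumes:
      - /path/to/config:/config
    ports:
      - 8443:8443
    restart: unless-stopped
```

Once running, the editor is reachable in a browser at https://<host>:8443.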

Linux Distros

Ubuntu

docker pull ubuntu
docker run -it -d --name=ubuntu ubuntu

Arch Linux

docker pull archlinux
docker run -it -d --name=arch-linux archlinux


Alpine Linux

docker pull alpine
docker run -it -d --name=alpine alpine

Amazon Linux

docker pull amazonlinux
docker run -it -d --name=amazon-linux amazonlinux





GitLab is an awesome open-source DevOps software package that allows teams to develop, secure, and operate software in a single application. It can be self-hosted (for free using the Community Edition, or with a paid Enterprise Edition) to give you ultimate control over your DevOps process.

Deploying with Docker (Community Edition)

There are a few methods for installing GitLab as a Docker container:

  • Using Docker Engine
  • Using Docker-Compose
  • Using Docker swarm mode

Installing GitLab with Docker Engine

sudo docker run -d \
  --name gitlab \
  -p 22:22 \
  -p 80:80 \
  -p 443:443 \