Website and Deployment

31/05/25

Deployment Script

Since I wanted to host this website on my Debian machine, I had to come up with a deployment workflow. My idea was to have a bash script that built the Docker image on my main PC, pushed it to Docker Hub, SSH'd into my server, pulled the image, and then started the container. That way the build process would be offloaded to my beefier PC and save the laptop some potential misery.
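The finished script ended up looking roughly like this (a simplified sketch: the image name, container name, port mapping, and exact variable names are placeholders rather than my real values):

    #!/usr/bin/env bash
    set -euo pipefail

    # Pull in DOCKERHUB_USERNAME, DOCKERHUB_PASSWORD, SERVER_IP, etc.
    source .env

    IMAGE="$DOCKERHUB_USERNAME/homelab-blog:latest"  # hypothetical image name

    # Build and push from the main PC
    docker build -t "$IMAGE" .
    docker push "$IMAGE"

    # On the server: log in with the access token, pull, restart the container,
    # then log out so the credentials don't linger in config.json
    ssh -i ~/.ssh/debian-box-ci "ci@$SERVER_IP" bash -s <<EOF
      echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
      docker pull $IMAGE
      docker rm -f homelab-blog 2>/dev/null || true
      # Port mapping assumes the app listens on 80 inside the container
      docker run -d --name homelab-blog -p 8080:80 --restart unless-stopped $IMAGE
      docker logout
    EOF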

Some learning outcomes whilst I was creating the script:

After finishing the script, I made it executable with chmod +x deploy.sh. I also made a dev.sh script to start my development container more easily. The dev.sh script worked, but I had yet to test deploy.sh.

Before I could test the deployment script, I had to prepare the server and my client for deployment.

Client & Server Setup

The first and easy step was installing Docker on the server, which went fine. Then I had to create a new user for my deployment script to SSH in as. I ran sudo adduser ci and added them to the docker group with sudo usermod -aG docker ci so they could run Docker commands without sudo. I verified this by switching to the user with su - ci and running docker run hello-world, which went fine.
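For reference, the whole user setup was just:

    # On the server: create the deploy user and let it use Docker without sudo
    sudo adduser ci
    sudo usermod -aG docker ci

    # Sanity check as the new user
    su - ci
    docker run hello-world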

Next I had to generate some SSH keys on my main machine, which would be the SSH client. Out of curiosity, I ended up looking into the differences between RSA keys and ED25519 keys. Essentially, I learnt that ED25519 keys are smaller and faster than RSA keys and use modern elliptic-curve cryptography, so they're generally preferred. I ran ssh-keygen -t ed25519 -C "debian-box ci user (continuous integration)" and didn't include a passphrase since I would be SSH'ing with an automated script. I aptly named the key "debian-box-ci".
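The same command with everything given up front (the -f and -N flags here are just the non-interactive way of answering what ssh-keygen otherwise prompts for):

    ssh-keygen -t ed25519 -C "debian-box ci user (continuous integration)" \
        -f ~/.ssh/debian-box-ci -N ""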

I then tried to copy the SSH public key to the server with ssh-copy-id -i ~/.ssh/debian-box-ci.pub [email protected], but that didn't work since I had disabled password authentication on the server.

Instead, I had to manually switch to the ci user, create the .ssh directory with 700 permissions, create the authorized_keys file, add the contents of the public key to it, and give it 600 permissions. A more manual process, but it taught me what ssh-copy-id actually does under the hood. I then tested that I could SSH in with ssh -i ~/.ssh/debian-box-ci [email protected], and it worked.
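Spelled out, the manual steps on the server were roughly:

    # On the server, as the ci user
    mkdir -p ~/.ssh
    chmod 700 ~/.ssh

    # Append the public key (placeholder shown; paste the real contents of debian-box-ci.pub)
    echo "ssh-ed25519 AAAA... debian-box ci user (continuous integration)" >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys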

Out of curiosity, I tried to SSH without specifying the private key file, and it also worked. That made me wonder: does the ssh command try all possible keys when authenticating? Essentially yes: ssh will try any keys loaded into ssh-agent, and since I had already SSH'd once with the key specified, the private key was loaded into the agent. If nothing is loaded into the agent, ssh falls back to common file names like ~/.ssh/id_ed25519 and ~/.ssh/id_rsa.

Some useful commands I learnt for working with the SSH agent:
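The main ones revolve around ssh-add (these are the standard invocations, listed here for reference):

    # Start the agent in the current shell, if it isn't already running
    eval "$(ssh-agent -s)"

    # Add a specific private key to the agent
    ssh-add ~/.ssh/debian-box-ci

    # List fingerprints of the keys the agent currently holds
    ssh-add -l

    # Remove a single key, or wipe all of them
    ssh-add -d ~/.ssh/debian-box-ci
    ssh-add -D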

Getting back on track, the next thing I had to do was generate an access token to act as a password so the server could pull images from my Docker Hub account. I did that and added it to my .env file along with the other necessary values.

When using the docker login command, there are apparently a few ways to go about it. Initially I was going to use docker login -u $DOCKERHUB_USERNAME -p $DOCKERHUB_PASSWORD. However, when you do that, the password is exposed in your shell history and can be found with ps aux. So the best practice is to read the password from standard input with echo "$DOCKERHUB_PASSWORD" | docker login -u $DOCKERHUB_USERNAME --password-stdin, which is what I included in the script to prevent the password from being exposed.
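Side by side:

    # Risky: visible in ps aux while running (and in shell history if typed interactively)
    docker login -u "$DOCKERHUB_USERNAME" -p "$DOCKERHUB_PASSWORD"

    # Better: the password is read from stdin and never appears in the argument list
    echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin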

Now everything was ready in order for me to run the script. I had SSH and Docker sorted out.

Running the Script

Then I tried running the script, and things were mostly going fine, though I got the following warning when using docker login:

WARNING! Your credentials are stored unencrypted in '/home/ci/.docker/config.json'. Configure a credential helper to remove this warning. See https://docs.docker.com/go/credential-store/

I'd seen that warning before, but I learnt that docker logout removes the credentials from that file, so since I added a logout step to the end of my script, the warning is safe to ignore. It does leave a small attack window whilst the script is running, but that's not a huge concern.
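One refinement I could make (not something the script currently does) is to register the logout with a trap, so the credentials get removed even if a later step fails partway:

    # Run docker logout whenever the remote snippet exits,
    # whether it succeeds or fails along the way
    trap 'docker logout' EXIT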

I hit a few hiccups and bugs along the way, so I had to spend some time debugging, but eventually I got the script to run from start to finish.

Reverse Proxy

After successfully running the script, my website's container was running internally on port 8080. The next goal was to make it accessible on my local network over HTTP, which is where the web server I'd installed a few days prior would come in. I was going to use Nginx as a reverse proxy to direct requests to my container.

I made a new site configuration with sudo nano /etc/nginx/sites-available/homelab-blog and added the following:

    server {
        listen 80;
        server_name _;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

This proxies incoming requests on port 80 of the host machine to my container running internally at 127.0.0.1:8080, and forwards the necessary headers along with them. Then I enabled the site by creating a symlink with sudo ln -s /etc/nginx/sites-available/homelab-blog /etc/nginx/sites-enabled/homelab-blog. I ran sudo nginx -t to test the file for syntax errors, and all was good. After disabling the default site and reloading Nginx with sudo systemctl reload nginx, the site was accessible on my local network at the machine's IP.
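All together (I'm showing the default site being disabled by removing its symlink, which is the usual approach):

    # Enable the new site and disable the default one
    sudo ln -s /etc/nginx/sites-available/homelab-blog /etc/nginx/sites-enabled/homelab-blog
    sudo rm /etc/nginx/sites-enabled/default

    # Check the config for syntax errors, then apply it
    sudo nginx -t
    sudo systemctl reload nginx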

So now I had my site accessible on my local network as well as a deployment script—mission accomplished.

Also, one post-deployment consideration I had is that I may want to set up a pipeline so the site auto-deploys when I push to Git. I'm thinking of running a local Git server on my Debian machine so that I can self-host the repository and then build the deployment pipeline on top of it.
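If I go down that route, the simplest version would probably be a post-receive hook on a bare repository that calls the existing deploy script (the paths and branch name here are hypothetical):

    #!/usr/bin/env bash
    # hooks/post-receive in the bare repo on the Git server (hypothetical layout)
    set -euo pipefail

    # Check the pushed code out into a working tree, then deploy it
    git --work-tree=/home/ci/homelab-blog --git-dir=/home/ci/homelab-blog.git checkout -f main
    cd /home/ci/homelab-blog
    ./deploy.sh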