How To Host An Application On A Server (VPS) Using Docker?

Modified: 26.09.2023

Looking to host your side project on a VPS using Docker? Look no further! In this post, we’ll learn everything you need to know to containerize and host your first application using Docker and Docker Compose. As a bonus, we will also automate the image building and deployment of your application. So let’s dive in and get your side project up and running!

Don’t want to read? Watch the video instead!


In the previous post of this series, we set up a virtual private server (VPS). Now that we have our server up and running, it’s time to host a simple application on the server (VPS). In this post, we will use Docker and Docker Compose to accomplish this task. By the end of this tutorial, you’ll have the skills to create a Docker image for your application, configure a Docker Compose file, and run the container on your VPS. Once the container is running, you’ll be able to access your application through the IP address of your server. As a little bonus, we will also automate the image building and the deployment of the application! So, let’s get started by installing Docker on our system and creating a Dockerfile for our application.



Installing Docker

Docker is a tool that allows you to containerize your applications and run them on your system. To begin using Docker, you’ll first need to install it. For this, follow the link to the installation guide provided by Docker. This guide contains step-by-step instructions for installing Docker on Ubuntu. Simply copy and paste the commands in the guide into your terminal to complete the installation.

After you complete the installation process, there are a few additional steps you’ll need to take. First, add your user to the Docker group using the following command: sudo usermod -aG docker $USER. Next, run newgrp docker to re-evaluate your group membership. Once you’ve completed these steps, your user should be able to run Docker commands without using sudo.

Lastly, before jumping into Docker, let’s uninstall Apache on the server (this step is only necessary on a Hostinger server, and we will not need Apache after this guide):

sudo apt-get remove apache2 && sudo rm -rf /etc/apache2/

Application and Dockerfile

Now that Docker is installed on our system, we can move on to containerizing our simple counter application, which is built with Svelte and hosted using Nginx. The application code can be found in this repository (the application itself was just created from a Vite template, but the repository contains all the other files created in this post).


To host our application using Docker, we need to create a Dockerfile inside the application directory. The Dockerfile is a template for an image and specifies the commands that will be run when we want to create an image.

Create a file called Dockerfile in the directory of the application and add the following:



# build stage
FROM node:lts-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage
FROM nginx:stable-alpine AS production
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

In this Dockerfile, we use an Alpine image of Node.js for the build steps. We create a working directory and copy the package.json file into that directory. Once the package.json file is in the directory, we can install the necessary node modules by running npm install. Next, we copy the rest of the directory into the container. Once this is done, we build the application by running npm run build.

After the application is built, we create the final image using Nginx Alpine and copy the dist directory of the previous image into our new one. Finally, we expose port 80 and start nginx using the CMD command.

By creating this Dockerfile and building an image with it, we can easily containerize our application and run it on any system with Docker installed.

Building the image of our application

To store our Docker image, we will use the GitHub Container Registry (GHCR), which allows us to easily access it from anywhere, especially from our server. To store an image in the GHCR, we first need to create a GitHub Personal Access Token (PAT). For more details on the required scopes and other information, you can check out the GitHub Packages Registry guide. If you want to get started quickly, you can use this link, which includes all the required scopes (write:packages, read:packages, delete:packages).

After you click on the link, you need to specify the duration and give the PAT a name, then save the value somewhere secure.

Now, open your terminal and follow these steps from the directory containing the Dockerfile:

  1. Store your PAT inside an environment variable: export CR_PAT=<PAT>
  2. Sign in to the container registry: echo $CR_PAT | docker login ghcr.io -u <username> --password-stdin
  3. Build and upload the container image: docker build . -t ghcr.io/<username>/<image-name>:latest && docker push ghcr.io/<username>/<image-name>:latest
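The image tag used in steps 2 and 3 is simply the registry host ghcr.io followed by your username and image name. A minimal sketch of how it is composed (the username alice and the image name counter-app are placeholder values):

```shell
# Compose a GHCR image tag from its parts (alice/counter-app are placeholders)
USERNAME=alice
IMAGE=counter-app
TAG="ghcr.io/${USERNAME}/${IMAGE}:latest"
echo "$TAG"   # ghcr.io/alice/counter-app:latest

# The actual build and push would then be:
# docker build . -t "$TAG" && docker push "$TAG"
```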

By following these steps, your Docker image will be built and uploaded to the GHCR, and you will be able to access it from your server using the specified image tag. Note that you need to run steps 1 and 2 on your server as well, so that the server can pull the image.

Automating the image build of our application

Optionally, to speed up the deployment process, you can automate the image build and upload by setting up a GitHub Action. Before creating the action, we need to set up a secret. Go to your repository’s Settings > Secrets and Variables > Actions and create a secret named PAT containing your GitHub PAT.

Next, create the following directory and file in your repository: .github/workflows/docker-publish.yml. The content of the action should be:

name: publish

on:
  push:
    branches: [ "main" ]

env:
  REGISTRY: ghcr.io
  # Use docker.io for Docker Hub if empty
  IMAGE_NAME: ${{ github.actor }}/<image-name>:latest

jobs:
  publish:
    name: publish image
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3
    - name: Login
      run: |
        echo ${{ secrets.PAT }} | docker login ${{ env.REGISTRY }} -u ${{ github.actor }} --password-stdin
    - name: Build and Publish Backend
      run: |
        docker build . --tag ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
        docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

This action listens for any push to the main branch and uses your GitHub PAT to log in to the container registry. It then builds the Docker image and uploads it to the registry under the name ghcr.io/<username>/<image-name>:latest.

Running the container to host the application on the server (VPS)

The next step is to create a docker-compose.yml file on our server (inside of /home/<username>/<project>), which is a template for our Docker container. This way, we can easily move it to another machine or rebuild the containers. Inside the file, we specify the name of the service, image, and the ports to use:

services:
  frontend:
    image: ghcr.io/<username>/<image-name>:latest
    container_name: frontend
    ports:
      - 80:80

After creating the docker-compose.yml file, we can start the container by running docker compose up -d. This command starts the container in detached mode, meaning it runs in the background. Now we should be able to access the application in the web browser under the IP address of our server. To make this work, we created the port mapping so that port 80 of the container is also exposed on the server itself.
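One optional addition that is not part of the original file (an assumption on my part, not something this setup requires): a restart policy, so that the container comes back up automatically after a crash or a server reboot:

```yaml
services:
  frontend:
    image: ghcr.io/<username>/<image-name>:latest
    container_name: frontend
    # restart the container automatically unless it was stopped manually
    restart: unless-stopped
    ports:
      - 80:80
```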

Note that currently, the application does not have a specific domain or SSL, so we will cover these in the next post.

Automate the deployment of our application

Automating the deployment of our application is another optional step, but it can significantly reduce manual work and speed up the deployment process. Before we create the secrets and the GitHub Action, we need to set up a new SSH key for it. To do so, log in to your server as the user you want to run the action as and follow these steps:

  1. Check that the user has access to the directory containing the repository and is able to run Docker
  2. Create an SSH key: ssh-keygen -t rsa -b 4096
  3. Copy the content of the private key file (you will need it for a secret): cat <path/to/private/key>
  4. Add the public key to the authorized_keys file: cat <path/to/public/key> >> ~/.ssh/authorized_keys
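Steps 2 to 4 can be sketched as a short shell session. Here the key pair is written to a temporary directory purely for illustration; on a real server you would keep the key under ~/.ssh and append the public key to ~/.ssh/authorized_keys:

```shell
# Sketch of steps 2-4 (the key location is an example, not the real path)
KEYDIR=$(mktemp -d)

# step 2: create an RSA key pair without a passphrase
ssh-keygen -t rsa -b 4096 -f "$KEYDIR/deploy_key" -N "" -q

# step 3: print the private key; its content goes into the SSH_PRIVATE_KEY secret
cat "$KEYDIR/deploy_key"

# step 4: authorize the public key (on a real server: >> ~/.ssh/authorized_keys)
cat "$KEYDIR/deploy_key.pub" >> "$KEYDIR/authorized_keys"
```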

After creating the SSH key, we need to create the following secrets in the repository by going to Settings > Secrets and Variables > Actions: SSH_PRIVATE_KEY (the content of the private key file), SSH_HOST (the IP address of the server), SSH_USER (the user you created the key for), and WORK_DIR (the directory on the server containing the docker-compose.yml file).

Once the secrets are set up, we can append the following job to the workflow (on the same level as the publish job) in the file .github/workflows/docker-publish.yml:

  deploy:
    needs: publish
    name: deploy image
    runs-on: ubuntu-latest

    steps:
    - name: install ssh keys
      # check this thread to understand why it's needed:
      # <>
      run: |
        install -m 600 -D /dev/null ~/.ssh/id_rsa
        echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
        ssh-keyscan -H ${{ secrets.SSH_HOST }} > ~/.ssh/known_hosts
    - name: connect and pull
      run: ssh ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }} "cd ${{ secrets.WORK_DIR }} && docker compose pull && docker compose up -d && exit"
    - name: cleanup
      run: rm -rf ~/.ssh

This job logs in to the server via SSH, pulls the new version of the image, and then recreates the containers.

Important: I had a problem running this action and got connection reset errors. After restarting the server, these were resolved.


In this post, we learned how to host your side project (application) on the server (VPS) we set up in the last post. We used Docker and Docker Compose to do so, and we also created GitHub Actions to automate the whole process!

I hope this post was helpful to you. If so, share it with your friends, and let me know if you have questions!

If you liked this post, consider subscribing to my newsletter and joining my discord community!
