Automating Spring Boot Application Deployment with a CI/CD Pipeline Using Coolify

Step-by-Step Guide to Deploying a Spring Boot Application with Coolify on a Self-Hosted Server
Manually connecting to the server to rebuild and redeploy a Spring Boot application after every code change is time-consuming and error-prone. Even minor modifications require the same repetitive steps, which gradually drive up the cost of maintenance.
By using Coolify, it becomes possible to configure Git-based automated deployments alongside domain management, HTTPS certificate issuance, and server monitoring—all within a single platform. Since it requires no complex infrastructure, Coolify is particularly well-suited for small-scale services and personal projects.
This article presents a step-by-step walkthrough of how to install Coolify on a Vultr virtual server and set up a CI/CD pipeline that automatically deploys a Spring Boot application using Docker Compose.
1. Introduction to Coolify for Deployment Automation
One of the most tedious aspects of managing a personal project or a small-scale service is handling deployments. After making and testing changes locally, developers typically need to connect to the server via SSH, pull the latest code, rebuild the application if necessary, and restart the container—repeating this process every time. This manual approach is not only time-consuming but also prone to human error. As deployments become more frequent or multiple services are maintained simultaneously, the operational overhead increases significantly.

Coolify addresses these issues with a lightweight PaaS (Platform as a Service) solution that can be installed on a self-hosted server. It offers features similar to Heroku or Render, providing Git-based automatic deployments triggered by push events. Without requiring complex configuration, developers can also set up custom domains, enable HTTPS via Let’s Encrypt, and monitor resource usage—all through a single dashboard.
While tools like ArgoCD or other GitOps-based CI/CD platforms are widely used in Kubernetes environments, they can be unnecessarily complex for small-scale applications. In contrast, Coolify runs on Docker Compose and offers a streamlined setup process with an intuitive user interface, making it a low-barrier option even for developers with minimal DevOps experience.
2. Installing Coolify
While Coolify can be installed in various environments, this article demonstrates the installation on the Vultr cloud computing platform. Compared to AWS, Vultr offers more affordable pricing while still providing a stable set of computing resources, and its intuitive user interface makes it approachable for those setting up a deployment environment for the first time.
Coolify is also available on the AWS Marketplace as a prebuilt AMI. If you prefer AWS, you can follow a similar installation process on EC2. Since the steps are largely equivalent, choose the platform that best fits your infrastructure preferences.

2-1. Creating a Server Instance
After creating a Vultr account, click the Deploy + button on the dashboard to start creating a new instance.
For the purpose of this guide, we chose the most basic configuration: Shared CPU with 1 vCPU and 1 GB of memory. Coolify itself is lightweight and runs smoothly on this setup for small-scale projects. However, if your service will be running multiple containers concurrently in a production environment, securing at least 4 GB of memory is recommended.

2-2. Installing Coolify from Marketplace
Next, navigate to the Software & Deploy Instance section and switch to the Marketplace Apps tab. Search for Coolify, and select the pre-configured image when it appears. Then, click Deploy Now to launch the server and begin the installation process.

The installation process is extremely straightforward. Without any manual Docker setup or CLI commands, the instance initializes automatically and Coolify is installed during provisioning. Once the server is ready, you can access the Coolify interface by visiting the assigned public IP address displayed in the Vultr dashboard.
3. Initial Configuration of Coolify
Once Coolify is installed, the server automatically performs its initialization process and installs the necessary packages. Depending on the instance type, the initial setup time may vary. In this test, the full initialization took approximately 5 to 10 minutes.
3-1. Accessing the Coolify Dashboard
After the setup is complete, you can access Coolify’s web-based GUI by visiting the instance’s public IP address with port :8000 appended.

3-2. Completing the Onboarding Process
On your first visit to Coolify, you will be prompted to create an admin account. Enter your email address and a password to complete the registration. All future administrative tasks will be performed using this account.

Once the account is created, the onboarding process begins automatically. This step requires minimal input from the user—just a few clicks to proceed. The Coolify interface is designed to help developers quickly move into configuring an actual deployment environment.

After onboarding, you’ll be directed to the project setup page. The next step is to configure a GitHub App integration to enable CI/CD with your Git repository. Coolify uses GitHub App–based access control to securely connect to your repositories and trigger automated workflows.
3-3. Connecting GitHub With a GitHub App
Start by clicking the Private Repository (with GitHub App) button on the project creation screen.

Then click the + Add GitHub App button at the top of the screen to open the GitHub App creation modal.

Each GitHub App must have a globally unique name. Using your GitHub username or organization name as part of the app name can help avoid conflicts. Once the name is set and the repository permissions are configured, proceed to the next step.

You’ll then be redirected to GitHub to complete the app creation and grant the required permissions. Coolify requests only the necessary scopes, and the integration process is designed to be secure and transparent. For convenience, all repositories can be granted access at once if desired.

3-4. Checking Repository Integration Status
Once the GitHub App integration is complete, Coolify will automatically display a list of accessible repositories within the dashboard.

At this point, you’re ready to prepare the source code and Docker deployment setup for the application you plan to deploy.
4. Preparing the Spring Boot Application and Docker Deployment Configuration
Next, prepare the application code to be deployed. In this example, we use a minimal Spring Boot project to verify the setup. The project includes a single HelloController class that returns a simple API response, along with the required Dockerfile and docker-compose.yml files to enable Docker-based deployment.
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RequestMapping(value = "/v1")
@RestController
public class HelloController {

    @GetMapping("/hello")
    public ResponseEntity<?> helloWorld() {
        return ResponseEntity.ok("Hello World");
    }
}
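Before moving on to the Docker setup, it can help to verify the endpoint locally. The following is a minimal sketch of a controller slice test, assuming spring-boot-starter-test is on the test classpath; the class and method names are illustrative and not part of the original project.

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

// Controller slice test: spins up only the web layer and exercises GET /v1/hello.
@WebMvcTest(HelloController.class)
class HelloControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void helloReturnsHelloWorld() throws Exception {
        mockMvc.perform(get("/v1/hello"))
                .andExpect(status().isOk())
                .andExpect(content().string("Hello World"));
    }
}

Note that the Dockerfile below skips tests during the image build (-x test), so this check runs only locally or in a separate CI step.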
The Spring Boot project is built with Gradle and uses a multi-stage Docker build to optimize the final image. The application is first compiled in a Gradle container, then packaged into a lightweight OpenJDK image.
# Build stage
FROM gradle:jdk21 AS build
WORKDIR /app
COPY . .
RUN gradle clean build -x test --no-daemon

# Package stage
FROM openjdk:21-slim
WORKDIR /app
COPY --from=build /app/build/libs/*.jar /app/application.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "application.jar"]
CMD ["--spring.profiles.active=test"]
While Coolify supports various deployment approaches, we will use the Docker Compose–based method. This approach allows us to define the application entirely through the docker-compose.yml file, making deployment and maintenance more straightforward.
Coolify registers services based on the docker-compose.yml file, so the file name and path must exactly match what you later configure in the Coolify project settings. Below is the sample configuration used for testing:
version: '3.8'

services:
  coolify-playground:
    build:
      context: .
      dockerfile: Dockerfile
    image: catsriding/coolify-playground:latest
    container_name: coolify-playground
    restart: always
    ports:
      - "8070:8080"
    networks:
      - coolify
    environment:
      TZ: Asia/Seoul
    command: [ "--spring.profiles.active=prod" ]

networks:
  coolify:
    external: true
- build: Specifies the build context and the Dockerfile to use.
- image: Sets the name and tag for the image that will be built and reused.
- ports: Maps port 8080 inside the container to port 8070 on the host machine, making the service accessible externally.
- networks: Refers to the coolify Docker network, which is automatically created by Coolify during installation. By declaring it as external: true, Docker Compose uses the existing network instead of creating a new one. This enables seamless communication with internal services managed by Coolify, such as reverse proxies, load balancers, or SSL termination services.
- environment: Defines environment variables required by the container, such as the system timezone (TZ).
- command: Overrides the default container command to specify a particular Spring profile (prod in this case); a quick way to verify these settings at runtime is sketched below.
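To confirm that the environment and command settings are actually applied at runtime, an optional startup logger along the lines of the following sketch can be added to the application. This is purely a convenience and not part of the deployment itself; the class name is illustrative.

import java.util.TimeZone;

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

// Optional startup check: prints the active Spring profiles and the JVM timezone once at boot.
@Configuration
public class StartupInfoConfig {

    @Bean
    CommandLineRunner logRuntimeInfo(Environment environment) {
        return args -> {
            System.out.println("Active profiles: " + String.join(", ", environment.getActiveProfiles()));
            System.out.println("JVM timezone: " + TimeZone.getDefault().getID());
        };
    }
}

With the Compose file above, the container log should report prod as the active profile and Asia/Seoul as the JVM timezone.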
Once the application code and configuration files are ready, push them to a GitHub repository. This completes the setup required for Coolify to detect changes, build the Docker image, and trigger automatic deployment.
5. Connecting Your GitHub Repository and Verifying Automated Deployment
Return to the Coolify dashboard and load the repository list via the previously linked GitHub App. If your repository does not appear, make sure the GitHub App has the necessary permissions to access it from the GitHub settings.
5-1. Creating a Deployment Project
Once a repository is selected, you’ll be directed to the project creation screen. Here, configure the branch to monitor for changes, set build options, and specify the path to the Compose file. By default, Coolify assumes a .yaml extension, but in this case, it was manually changed to .yml for consistency.

5-2. Triggering an Initial Deployment
After completing the setup, you will enter the project configuration screen. This interface allows you to define environment variables, connect domains, and toggle automatic deployment. For now, keep the default settings and click the Deploy button at the top right to trigger a manual deployment.

5-3. Monitoring Build and Container Status
Coolify uses the Docker Compose configuration to build the image and run the container automatically. The live log viewer shows the build and deployment progress. Enabling debug logs reveals additional details that can be helpful during troubleshooting. If the project is large, consider running builds on a dedicated server to reduce overhead.

5-4. Verifying the Application Endpoint
Once the build succeeds and the container is running, access the application directly via a browser or API testing tool.
Since a custom domain has not yet been configured, use the public IP of your Vultr instance combined with the mapped port defined in your Docker Compose file. For example, the test endpoint can be reached at http://<your-ip>:8070/v1/hello.
If a GET request to /v1/hello returns the expected "Hello World" response, the deployment is working as intended.
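If you prefer a scripted check over a browser, a small client like the sketch below, using the HttpClient built into Java 11 and later, works just as well; <your-ip> remains a placeholder for the instance's public IP.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal smoke test against the deployed endpoint; replace <your-ip> before running.
public class DeploymentSmokeTest {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://<your-ip>:8070/v1/hello"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}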

5-5. Reviewing Deployment and Application Logs
While it’s possible to SSH into the Vultr instance and check logs manually, Coolify provides a built-in log viewer. The Logs tab displays live output for each build, container status, and application logs, improving observability and easing operational maintenance.

5-6. Testing Automated Deployment
To confirm that CI/CD is functioning correctly, make a small change to the code and push it to GitHub. This could be as simple as updating a log statement or modifying the API response.
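For instance, one possible change is to tweak the response body of the existing endpoint, roughly as sketched below; any visible modification will do.

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RequestMapping(value = "/v1")
@RestController
public class HelloController {

    @GetMapping("/hello")
    public ResponseEntity<?> helloWorld() {
        // Updated response body so the redeploy is easy to spot
        return ResponseEntity.ok("Hello World - updated");
    }
}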

After the push, Coolify will receive a webhook event from GitHub and automatically initiate a new build and deployment.
By checking the Logs tab again, you can verify that the new commit triggers the expected build and container restart.
Revisit the API endpoint to ensure the changes have been applied, confirming that the CI/CD flow—from code modification to automated deployment—is functioning seamlessly.
Coolify also makes it easy to configure domains and SSL. By connecting a custom domain and enabling Let’s Encrypt, you can provision SSL certificates automatically and serve your application over HTTPS with minimal setup.
6. Enhancing Operations With Coolify’s Built-In Features

In addition to domain and SSL setup, Coolify offers several operational features that enhance deployment reliability—such as status-based notifications, automatic retries on build failures, and conditional rollbacks. These capabilities make it easy to establish a stable deployment environment, even for small-scale projects.
7. Conclusion
So far, we have walked through the process of setting up Coolify on a Vultr-based virtual server and deploying a Spring Boot application using Docker Compose with a fully automated CI/CD pipeline. Coolify stands out for its ability to quickly establish a minimal yet powerful deployment setup—complete with GitHub integration, domain mapping, and HTTPS provisioning.
Without relying on complex infrastructure like Kubernetes, Coolify makes it easy to build automated deployment workflows. Its intuitive web interface, real-time log viewer, and built-in SSL certificate management significantly reduce operational overhead, making it a great fit for small-scale services and personal projects.
Of course, there are limitations. For high-availability systems or more granular deployment strategies, a more advanced solution may be necessary. It’s important to evaluate whether Coolify fits the operational complexity of your use case.
In my case, it has proven to be an efficient choice for learning projects, MVPs, and lightweight backend services—and I’ve even adopted it in production for a few small services.
If you’re looking to experience automated deployment without diving into the complexities of modern DevOps tooling, Coolify offers a clean and capable entry point—especially for developers who want to build and operate Git-based deployment workflows with minimal friction.