Dockerising a MongoDB Microservice in Kotlin/Maven/Spring Boot

As part of my training, I'm working on a small user microservice for a fake property website. After crafting the basic controller/service/repository layers and the tests alongside them, we were tasked with dockerising both the application and the database, and ensuring the two can communicate effectively.

The general process is as follows (detailed steps below):

  1. Create a new active profile and the settings associated with it
  2. Create a Dockerfile and image of the application
  3. Create a Docker network for the application and database to live on
  4. Run the containers and add them to that network
  5. Profit?

This is how the final structure should look:

A diagram showing the client connecting to the application via localhost:8080, with the user-container (application) and the mongo-container (database) both living inside a Docker network

For the sake of this article, I'm going to assume you already have a basic functional Spring Boot app with integration tests. If you don't have tests, then you can skip the part about MongoConfig.kt.

Step 1.1: Create a new active profile and the settings associated with it

If you're only ever going to use this app dockerised, then you can directly change the settings in your application.properties file rather than in a new file as below. However, if you're looking to have multiple profile options available to you in future, then it can be a good idea to get on top of active profiles, which allow you to change your settings based on which "mode" (profile) you're in.

Create a new application.properties file for your Docker profile

Spring Boot will automatically recognise your new properties file as belonging to your profile if you name it correctly. After application, simply add a hyphen followed by your profile name:

application-DOCKER.properties filename screenshot

Luckily, the only thing you need to change in here is the host name.

A note on host names

A very cool thing about Docker is that when two containers are joined to the same network (as we'll do later), each one can reach the other using its container name as the hostname.

Put another way, rather than trying to connect to localhost:27017, we'll now connect to containername:27017 as we're on the same Docker network. This might throw a few problems our way later, but for now we can just marvel at how cool that is and change our settings accordingly:

A side-by-side screenshot of application.properties and application-DOCKER.properties: the only change is that the spring.data.mongodb.host is now set to mongo-container instead of localhost
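For reference, here's a minimal sketch of the relevant lines in each file, assuming the default Mongo port and the container name mongo-container (which we'll give the database container later):

# application.properties (default profile)
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017

# application-DOCKER.properties (DOCKER profile)
spring.data.mongodb.host=mongo-container
spring.data.mongodb.port=27017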

Step 1.2: Update MongoConfig so our tests don't bug out

The problem with these changes is that our MongoConfig currently hardcodes its connection string to localhost:27017, ignoring the profile settings entirely: the tests that run during the build will expect a database there, and once the application is running inside the Docker network, localhost:27017 is no longer where the database lives.

To overcome this, we can use @ConfigurationProperties to tie in some variables with our application.properties file.

  • Firstly, let's add our mongodb URL to our application.properties file for both our default and docker profiles: Our application.properties files now both have a mongo.url= with the full url for the Mongo database (e.g. mongodb://localhost:27017/) listed

  • Next, let's refer to that property using the @ConfigurationProperties annotation and a lateinit variable: the line @ConfigurationProperties("mongo") has been added to the class's annotations, along with a lateinit var url: String which is now the source of the connectionString value (a sketch of the resulting class follows this list)

  • Note that to make this work, you will have to add a dependency to pom.xml: Importing org.springframework.boot artifact spring-boot-configuration-processor
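Putting those three changes together, here's a minimal sketch of what MongoConfig.kt might end up looking like. It assumes your config class extends AbstractMongoClientConfiguration, and the database name below is just a placeholder; keep whatever your project already uses:

import com.mongodb.ConnectionString
import com.mongodb.client.MongoClient
import com.mongodb.client.MongoClients
import org.springframework.boot.context.properties.ConfigurationProperties
import org.springframework.context.annotation.Configuration
import org.springframework.data.mongodb.config.AbstractMongoClientConfiguration

@Configuration
@ConfigurationProperties("mongo")
class MongoConfig : AbstractMongoClientConfiguration() {

    // Bound from mongo.url in whichever application.properties file the active profile picks up
    lateinit var url: String

    // Placeholder database name for this sketch
    override fun getDatabaseName(): String = "user"

    // The connection string now comes from the property rather than a hardcoded localhost:27017
    override fun mongoClient(): MongoClient = MongoClients.create(ConnectionString(url))
}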

That's a bingo! Now our tests won't bug out - yay!

Step 2.1: Create a Dockerfile and an image of the application

Now that our application is looking lovely and is configured properly, it's time to bring some actual Docker functionality into the mix. If you haven't installed Docker on your machine yet, go ahead and do that and then come back. It's OK, I'll wait.

...Welcome back! Now we want to create a file in the base directory of our entire application, the same level as where our .gitignore and pom.xml files live. This file is going to be called Dockerfile. Yes, it has to be called that precisely. No, it doesn't have any sort of extension: just Dockerfile.

For those who aren't familiar with it yet, a Dockerfile is where you tell Docker how to run your application.

This is what we're going to enter:

  • FROM openjdk:11 this sets the base image: a Java 11 runtime that will be used to run the programme
  • COPY ./target/user-0.0.1-SNAPSHOT.jar ./app.jar this indicates where the base file of your application is right now (in our case, the project is called user and it is a .jar snapshot file) and the second path indicates where it will be copied to within Docker.
  • EXPOSE 8080 this documents which port the application listens on inside the container (actually making it reachable from your local machine happens later, when we publish the port with -p)
  • ENTRYPOINT ["java", "-Dspring.profiles.active=DOCKER", "-jar", "app.jar"] this tells Docker how to start the programme: run "app.jar" and use the DOCKER active profile.

Note that if, like me, you accidentally write docker (lowercase) in the profile name here, it won't match application-DOCKER.properties and it won't work. Seriously. I lost hours on that one.

This is what the final product looks like: Dockerfile with the properties as listed above
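Written out in plain text (with the jar name from our user project), the whole file is just those four lines:

FROM openjdk:11
COPY ./target/user-0.0.1-SNAPSHOT.jar ./app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-Dspring.profiles.active=DOCKER", "-jar", "app.jar"]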

Step 2.2: Maven Clean Install

Since we're working with Maven, and we need our SNAPSHOT.jar file to include all our latest changes, we need to run Maven Clean Install by typing:

mvn clean install

This will also flag if any of our tests are failing, which could indicate a problem with the config we added.
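One caveat, assuming your integration tests talk to a real MongoDB on localhost:27017: a database needs to be running locally while the build runs, or those tests will fail. One quick way to spin one up temporarily is the same official image we'll pull later:

$ sudo docker run -p 27017:27017 -d mongo:latest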

Step 2.3: Build an image from the Dockerfile

Dockerfiles and SNAPSHOT files do nothing just sitting there: we have to tell Docker that our project snapshot is ready to be turned into a handy little image to share with our friends, colleagues, family members, household pets, and random strangers we meet at the supermarket.

To do that, we're going to use the build command with a few handy add-ons.

In your Terminal, navigate to your project directory (see here for my guide to using the command line to navigate around) and type the following command:

$ sudo docker build -t user:latest .

Depending on your computer's settings, you may not need the sudo.

Let's break this command apart:

  • docker: tells your computer to run this command with Docker
  • build: tells Docker to build a new image based on the Dockerfile in the current directory
  • -t: tells Docker that what comes next is the name of the image ('t' stands for 'tag'). In this case, we're telling it to build an image called "user" tagged as latest.
  • .: sets the build context to the current directory, so Docker looks for the Dockerfile here and resolves paths within it (e.g. the COPY source) relative to this directory
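Once the build finishes, you can double-check that the image actually exists by listing your local images:

$ sudo docker image ls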

With any luck, this all goes smoothly and you don't have to go crawling through Stack Overflow to solve any quibbles Docker may have.

If so, let's move along to the next step...

Step 3: Create a Docker network for the application and database to live on

As I mentioned before when looking at config, we need both our application and our database to be running on the same Docker network in order for them to 'see' each other. It stands to reason, therefore, that creating such a network might be an important step.

Luckily, this is relatively simple!

$ sudo docker network create usernet

Of course, you replace "usernet" with whatever you want the network name to be. That's it! Whew! Let's move on.
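If you'd like to confirm the network was created, you can list all your Docker networks:

$ sudo docker network ls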

Step 4: Run the containers and add them to that network

There are several ways to add containers to an existing Docker network. The easiest is to set them up as part of that network in the first place, so that whenever you start the containers in future, they're already where they should be. To do this, we're going to run a container from each image, using the --net flag to indicate the network we want it to join. We'll do this for both the application and the database.

$ sudo docker run --name user-container --net usernet -p 8080:8080 user:latest
$ sudo docker run --name mongo-container --net usernet -p 27017:27017 -d mongo:latest

What do these commands do?

  • run tells Docker to create a container based on an image
  • --name {name} tells Docker to give that container a name which you then specify
  • --net {networkname} tells Docker to put that container within the network with the name you specify
  • -p {host port}:{container port} tells Docker to publish a container port to a host port. The 'host port' is the one accessible from outside (e.g. via localhost) and the 'container port' is the one in your settings and/or application.properties file.
  • -d tells Docker to run in 'detached' mode, meaning you won't see what's going on in the terminal; it will just silently tick along in the background. In this case, we'll let MongoDB do that, but we'd like to see how our application is going, so we won't use -d for that container.
  • user:latest and mongo:latest tell Docker what images to use to construct these containers.

A note on the last point there: we built the user:latest image ourselves, and so it exists on our local system and is easily accessible. But what about mongo:latest? Don't we need to download it, or build it, or something?

Well no, because Docker has a handy built-in behaviour: if an image isn't available locally, it will look for it on Docker Hub and download it for you. In this case, the good folk at MongoDB have already prepared an official image, so you will likely see a short message along the lines of "Unable to find image 'mongo:latest' locally" before Docker pulls the latest version from Docker Hub and then continues to run the container.
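Once both containers are up, a quick sanity check: docker ps should list user-container and mongo-container as running, and inspecting the network should show both attached to it:

$ sudo docker ps
$ sudo docker network inspect usernet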

Step 5: Profit?

TADA!! You should now have your application and a database running within one Docker network! Since we're not yet using Docker Compose or similar, anyone else who wants to run this still has a few steps to go through.

If you commit your changes to a Git repository, for instance, and someone else clones it, the image doesn't travel with the code; they'll have to build it themselves.

Here are the commands in one heap for someone cloning into the remote repository:

$ mvn clean install
$ sudo docker build -t user:latest .
$ sudo docker network create usernet
$ sudo docker run --name user-container --net usernet -p 8080:8080 user:latest
$ sudo docker run --name mongo-container --net usernet -p 27017:27017 -d mongo:latest

Once you're totally happy with your image, you can also push it to Docker Hub so that it's easy for others to download and run without having to build their own image at all.
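As a rough sketch (your-dockerhub-username is a placeholder for your own Docker Hub account and repository), publishing the image looks like this:

$ sudo docker login
$ sudo docker tag user:latest your-dockerhub-username/user:latest
$ sudo docker push your-dockerhub-username/user:latest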