
Securing Docker Containers: Lessons Learned from a Data Loss Incident


Introduction

We had set up MongoDB as a Docker container for a project and hosted it on our infrastructure server. We chose to use a Docker container for ease of deployment and management. However, one morning, we discovered that the MongoDB data was gone, prompting us to investigate the cause.

Investigation

Upon further investigation, we found that the ports exposed from the Docker container to the host were accessible from the entire internet, even though the server was secured with UFW and configured to deny all incoming connections unless a rule explicitly allowed them. This was a critical security issue, as it meant that anyone could potentially access our MongoDB instance and the sensitive data stored within it.
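For context, the firewall was set up in the usual deny-by-default way; the commands below are a rough sketch of that kind of UFW configuration rather than our exact rule set:

# Deny all incoming traffic unless a rule explicitly allows it
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow only the services we intend to expose, e.g. SSH
sudo ufw allow ssh

# Turn the firewall on and confirm the active rules
sudo ufw enable
sudo ufw status verbose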

We discovered this by attempting to connect to the MongoDB instance from a remote machine and were surprised to find that we could connect successfully. We had also run nmap scans multiple times before, but they never revealed this particular port, which made the issue even more alarming. This is because nmap by default scans only the 1,000 most common ports, and MongoDB's port is not among them; explicitly targeting the port (shown after the snippet below) would have caught it. Here is the faulty configuration, with no host address specified, which caused the issue:

# Other config
...
ports:
  # <Port exposed>:<DB port running inside container>
  - 27017:27017
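In hindsight, scanning the MongoDB port directly (or the full port range) from an external machine would have revealed the exposure much earlier. A sketch of such a check, with <server-ip> standing in for the server's public address:

# Check the MongoDB port specifically
nmap -p 27017 <server-ip>

# Or scan the full port range instead of nmap's default top 1000 ports
nmap -p- <server-ip>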

Solution

To immediately mitigate the problem, we changed the docker-compose configuration to expose the MongoDB port only to the host and not to the public. We did this by prefixing the exposed port with the loopback address, which means that only applications running on the host machine can access the MongoDB instance.

# Other config
...
ports:
  # <Port exposed>:<DB port running inside container>
  - 127.0.0.1:27017:27017
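After redeploying with this change, it is easy to confirm on the host that the port is now bound only to the loopback interface; a quick check (assuming the ss utility is available) looks like this:

# The MongoDB port should now be listening on 127.0.0.1 only,
# not on 0.0.0.0 or the public interface
sudo ss -tlnp | grep 27017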

We also recommend taking frequent backups of your volumes and storing them offsite. This is an important practice for any production environment, as it can help prevent data loss in the event of hardware failures, software bugs, or security breaches. You can follow this article written by my colleague for guidance on how to implement a backup strategy for Docker volumes.
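As a minimal sketch of such a backup (assuming the data lives in a named volume called mongo_data, which is a placeholder for your actual volume name):

# Archive the contents of the MongoDB volume into a dated tarball
# using a throwaway Alpine container, then ship the result offsite
docker run --rm \
  -v mongo_data:/data:ro \
  -v "$(pwd)/backups":/backup \
  alpine tar czf "/backup/mongo_data_$(date +%F).tar.gz" -C /data .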


Comments (2)

  1. Still curious, how come the port was exposed to the internet when UFW didn't expose that port?
    Do you mean your Docker config overrode the UFW config? (highly unlikely)

    You didn't talk about the data loss. Did someone from the internet delete the data? Was the DB protected with a user/pass?

    1. Hello Manas! Nice to hear from you

      Yes, Docker's settings overrode UFW. UFW is just a utility that eases the use of `iptables`, which is a bit complex to work with directly. When Docker exposed a port through port mapping, it added an entry directly to the iptables rules. UFW couldn't comprehend the changes Docker made to iptables, so we couldn't see any difference in UFW's list of rules.
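      You can see those extra rules by inspecting the chains Docker manages directly (the exact output varies by Docker version, so treat this as a rough sketch):

      # List the filter-table rules Docker manages for published ports
      sudo iptables -L DOCKER -n -v

      # The port mapping itself shows up as a DNAT rule in the nat table
      sudo iptables -t nat -L DOCKER -n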

      Regarding MongoDB in Docker: we used the default settings, though it's generally good practice to review and harden them. The data loss wasn't a major issue since the project was in a pilot phase back then.
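      As a rough illustration (not what we ran at the time), the official mongo image lets you enable authentication through environment variables, for example:

      # Hypothetical example: start MongoDB with a root user so the DB
      # is not left unauthenticated (the credentials here are placeholders)
      docker run -d \
        -p 127.0.0.1:27017:27017 \
        -e MONGO_INITDB_ROOT_USERNAME=admin \
        -e MONGO_INITDB_ROOT_PASSWORD=change-me \
        mongo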
