Use Docker as a sandbox playground
Always test dangerous commands inside a sandbox
4 min read
Often you find that you have some new program or software to test out, but you are afraid that bugs or unintended behaviour may break your main machine. In the past, I would usually spin up a new virtual machine, install the same OS on it, and then test the software there. The virtual machine functions as a sandbox environment: anything bad that happens stays there, preserving the safety of my main machine.
Most of the time you don't need a virtual machine
However, loading up a virtual machine can be far too heavy; it often slows the whole machine down, and it is not worth it just to test a tiny piece of new software. I then found a better way: create a new instance in the cloud, SSH into it, and test there. That approach is much more lightweight and still serves as a sandbox. More recently, I discovered an even better way, using Docker containers.
Docker containers are incredibly lightweight
Each Docker container can serve as a very lightweight sandbox environment. It runs directly in your terminal, so if you are testing command-line programs, Docker is often more than enough to replace the virtual machine.
docker run -it --name <container-name> <image-name> <command-to-run-in-container>
Every time you run the above command, a new container is created, even if the containers are spawned from the same Docker image. If you have learned object-oriented programming, you may have heard about the difference between a class and an object. Roughly speaking, a Docker image is like a class, and a Docker container is like an object. The same class can create many objects, and each object can be shaped differently. So from the same image, you can create 10 new containers and test your new software in each with different parameters. This probably can't be done with virtual machines, because you may run out of system resources if you open up 10 virtual machines at the same time.
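As a concrete sketch of the image-vs-container idea (the container names and the alpine image here are just examples, and a running Docker daemon is assumed):

```shell
# Pull a small base image once
docker pull alpine

# Spawn two independent containers from the same image;
# changes made inside one are invisible to the other
docker run -it --name sandbox-1 alpine sh
docker run -it --name sandbox-2 alpine sh

# List all containers, including stopped ones
docker ps -a
```

Each `docker run` gives you a fresh filesystem copied from the image, so the two sandboxes cannot interfere with each other.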
Test dangerous commands safely with a disposable container
Normally, when you create a container and exit from the terminal, the container is put into a stopped state, which still occupies some system resources. I often found I needed to delete them one by one manually, which was very annoying, until I discovered the --rm flag:
docker run -it --rm <image-name> <command-to-run-in-container>
The above command creates a one-off anonymous container directly inside your current shell. After you have finished testing your dangerous command, you can simply quit the container, and Docker will delete it automatically. When you type docker ps -a, you will see a clean list. For example, curious about what the evil command rm -rf / will do? Just try it inside a container this way. After running it, play around, e.g. type ls and see if it still works, and afterwards just quit the container.
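A minimal session might look like this (using alpine as an example image; note that on GNU coreutils the destructive command may additionally require --no-preserve-root):

```shell
# Start a disposable container; --rm deletes it as soon as you exit
docker run -it --rm alpine sh

# Inside the container's shell, try the destructive command:
#   rm -rf /
#   ls        # the filesystem is wrecked, but only inside the container
#   exit      # leaving the shell destroys the container

# Back on the host, no stopped container is left behind
docker ps -a
```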
I use Docker containers to test changes to system configs
One of my most common use cases is to test system config changes and see how they behave before really messing with my main operating system. For example, if I want to install some shell plugins or vim plugins, I test them out first: I create a Docker container, clone my configs into it to recreate a similar environment, and try the new configs there. If I feel comfortable enough, I then apply them to my main system. All of this is very convenient and can be done using only the terminal.
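That workflow can be sketched as follows (the image, the dotfiles URL, and the file paths are placeholders for your own setup):

```shell
# Start a throwaway container from a distro close to your host
docker run -it --rm ubuntu bash

# Inside the container, recreate a minimal version of your environment:
#   apt-get update && apt-get install -y git vim
#   git clone https://example.com/your/dotfiles.git ~/dotfiles   # placeholder URL
#   cp ~/dotfiles/.vimrc ~/.vimrc
#   vim            # exercise the new plugins/configs here first
```

Because the container was started with --rm, quitting the shell throws the whole experiment away, leaving your real configs untouched.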
Use a base image that contains essential applications
Docker images are normally extremely lightweight; for example, the alpine image is only 5.35 MB in size. These lightweight images often contain only the absolutely necessary software. While they are suitable for hosting containerised applications, they are not always suitable for use as a testing sandbox, because some very essential software, for example git, may be missing and would need to be installed every time. One solution is to run a lightweight container first, install the essential software, and then use docker commit to save its state to a new image. From then on, you can always spawn a new container from this newly created image, and the resulting container will already contain all the essential software.
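The commit workflow might look like this (the image name, container name, and package list are illustrative):

```shell
# 1. Start from a minimal image and install the essentials
docker run -it --name base-setup alpine sh
#    inside the container: apk add git curl vim
#    then: exit

# 2. Save the container's current state as a reusable image
docker commit base-setup my-sandbox:latest

# 3. The one-time setup container is no longer needed
docker rm base-setup

# 4. Future sandboxes start with git/curl/vim already installed
docker run -it --rm my-sandbox:latest sh
```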