Do you feel that containerization leads to organizations patching infrastructure less often than before? What are the best approaches for keeping containers secure?
Definitely! We're seeing more teams move to immutable infrastructure and build security into their development pipelines à la DevSecOps. As this happens, people are questioning whether they need full-blown config management or whether Bash and PowerShell are sufficient.
I think the lack of automation in container-aware infrastructure patching can lead to that, but at the same time it's easier than ever to do live patching in a container environment. Integrating container security scanning tools also forces updates to the base images.
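As a concrete illustration of a scanner forcing base image updates, here is a minimal CI-step sketch assuming the Trivy scanner; the image name and severity gate are illustrative, not from the thread:

```shell
# Fail the pipeline when the image carries known vulnerabilities, which in
# practice forces a rebuild against a refreshed base image.
set -euo pipefail

IMAGE="registry.example.com/myapp:latest"   # hypothetical image name

# Exit non-zero if HIGH or CRITICAL CVEs are found, breaking the build
trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"

# On failure, rebuilding with --pull picks up the updated base image:
#   docker build --pull -t "$IMAGE" . && docker push "$IMAGE"
```

The key point is the non-zero exit code: the scanner becomes a gate in the pipeline rather than an advisory report.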
How do you do live patching of container-aware infrastructure? Would you ever run a config mgmt. agent in the container and depend on it to pull updates from package repos or is that an anti-pattern?
I agree, but many organizations still trust public Docker images, which is a big concern. We do need more automation, and security automation in particular, since, as you said, it's lacking. So patching is easy, but who's doing the looking?
Live patching would mean: identify the host, drain it (don't allow new containers to start on it), gracefully terminate the containers it runs, then patch, reboot, test, and add it back, all automated. But the automation tool also needs to "talk" to the container orchestrator.
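The drain → patch → reboot → rejoin cycle described above can be sketched as follows, assuming a Kubernetes cluster (the thread later mentions DC/OS, which has its own equivalent); the node name and SSH details are illustrative:

```shell
# Drain a node, patch the host OS, and return it to service.
set -euo pipefail
NODE="worker-01"   # hypothetical node name

# Stop scheduling new pods, then evict running pods gracefully
kubectl cordon "$NODE"
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data --grace-period=60

# Patch and reboot the host itself (the SSH step will drop the connection)
ssh "$NODE" 'sudo apt-get update && sudo apt-get -y upgrade && sudo reboot' || true

# Wait until the node reports Ready again, then allow scheduling
kubectl wait --for=condition=Ready "node/$NODE" --timeout=10m
kubectl uncordon "$NODE"
```

This is exactly where the automation has to "talk" to the orchestrator: cordon/drain and uncordon are orchestrator API calls, while the patch and reboot happen on the host.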
This is a task for configuration management tools, and it suits push-type tools (e.g. Ansible) better than others.
Primarily my concern was directed to getting patches onto the container image itself, but this is also an interesting point.
DC/OS currently does support this automation, and in my experience it works pretty well. There are some caveats, for instance if the images need persistent storage, but for stateless containers it handles live failovers well.
To rely on the immutable infrastructure principle there would be no patching: you would simply start a new image with the new software versions on it, join it to the cluster, and then drain and destroy the old instances.
Here the config management tools would be used only in the instance launch/configuration phase, if needed, or in the image build phase.
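As a sketch of that immutable replacement flow, assuming Packer for image builds and an AWS Auto Scaling group for the rollout (both are my assumptions, not named in the thread; all names are illustrative):

```shell
# Bake a fresh machine image, then replace instances instead of patching them.
set -euo pipefail

# Image build phase: updated packages go into the image here, and this is
# the only place config management would run, if it runs at all
packer build web-server.pkr.hcl

# Roll the Auto Scaling group onto the new image; old instances are drained
# and terminated as healthy replacements come up
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name web-asg \
  --preferences '{"MinHealthyPercentage": 90, "InstanceWarmup": 120}'
```

The design choice is that the running fleet is never modified: every change, including a security patch, ships as a new image and a rolling replacement.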
If you were building a brand-new pipeline, would you select Puppet, Chef, Salt, etc. for this? What if you already had Puppet/Chef/Salt deployed and working? Would it be worth changing your pipeline?