DockerCon Day 2 - Part 4


Scott Johnston, Docker's Chief Product Officer, came out next and continued the discussion of choice.


He talked about how developers want choice: different languages like Java, Python, or Node.js; different technologies for handling their workloads; and third-party tools. They want to use the best tool for the job.


The same is true for IT Operations. They want to innovate better ways of delivering applications, optimize costs, and avoid solutions that lock them into a particular vendor.


A lot of companies are starting to address these needs by turning to cloud providers like AWS, Google Cloud, or Azure. If you are a company that has tons of Java apps but also tons of .NET apps, you might find yourself managing applications in multiple clouds. Some apps may require different OS base containers, and you might even use a combination of Linux and Windows containers.


Wouldn't it be great if you could manage your apps across different cloud providers from a single place? Trying to do that with the separate stacks from each cloud provider creates a lot of complexity and friction, because each one handles deployment, automation, security, and infrastructure in slightly different ways.


Thankfully, Kubernetes is starting to change that by giving developers and IT Ops a common language to describe what they are actually trying to build. A deployment configuration in Kubernetes is a declarative description of the state of the world you want, and it is up to Kubernetes to make that state real in the physical environment.
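To make that concrete, here is a minimal sketch of what such a declarative description looks like. The names (`web`, the `nginx:1.25` image) are purely illustrative, not anything from the session; the point is that the file states the desired end state (three running replicas) rather than the steps to get there.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web          # illustrative app name
spec:
  replicas: 3        # desired state: three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # illustrative container image
        ports:
        - containerPort: 80
```

Because the file describes *what* you want rather than *how* to achieve it, the same definition works on any conformant Kubernetes cluster, regardless of which cloud is underneath.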

Today, Docker announced that their Enterprise Edition offering will now support integrating with Kubernetes clusters in different cloud providers. You can create a Kubernetes cluster in one of the providers and then, with a single command, register that cluster with your Enterprise Edition install. You can then take applications that you have developed for Kubernetes and migrate them over to the cloud provider with one command. You could even take a Kubernetes application running in AWS and move it over to Azure. You aren't changing anything about the application; you are simply moving it based on the definitions that already exist in Kubernetes.
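The session didn't show the exact Enterprise Edition command, but the underlying idea can be sketched with plain `kubectl`. The context names `aws-cluster` and `azure-cluster` and the deployment name `web` are hypothetical placeholders for clusters you would have already registered:

```shell
# Point kubectl at the (hypothetical) AWS cluster
kubectl config use-context aws-cluster

# Export the application's existing declarative definition
kubectl get deployment web -o yaml > app.yaml

# Point kubectl at the (hypothetical) Azure cluster
kubectl config use-context azure-cluster

# Recreate the same application there, unchanged
kubectl apply -f app.yaml
```

Nothing in `app.yaml` is cloud-specific, which is exactly why this kind of migration can work without modifying the application itself.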

Now, we didn't talk about migrating data, such as data in a database, so you may still have to deal with that. Fortunately, there are plenty of existing solutions for moving large amounts of data from one database to another. Docker doesn't need to solve that problem; it's already something IT Ops folks know how to do. Now they can migrate their infrastructure the same way.

That was the end of the general session. While there weren't earth-shaking announcements on the level of the multi-stage Dockerfile builds announced in 2017, these additions to the Docker offerings are genuinely useful tools that will help companies, whether just starting out with containers or already somewhat established, make the move from apps hosted in VMs to apps hosted in containers.

Continue on to Part 5
