The Road Ahead for Docker

Container orchestration tools are simplifying the management and operation of compute workloads and improving their resiliency. These tools are also becoming increasingly data-centre aware. The idea of a data-centre aware operating system has been developing gradually in the industry since Google released Kubernetes in July 2014. Around the same time Mesosphere released a data-centre operating system, Joyent released SmartDataCenter and, more recently, Microsoft announced Azure Service Fabric. Docker's future success relies heavily on enabling organisations to move workloads seamlessly onto cloud infrastructure. A data-centre aware operating system will make migrating compute workloads between data-centres and cloud resources an automatic process requiring minimal cloud-specific expertise.

Cloud Adoption in Financial Services

Adoption of cloud services in the financial sector has been driven primarily by cost savings and the need to increase utilisation of existing hardware. With shrinking IT budgets, on-premise infrastructure and application estates are being consolidated and moved onto the cloud. Companies are moving to the cloud in two ways: complete migration or a hybrid cloud approach. A hybrid cloud is the integration of private data-centre infrastructure with public cloud services. This can be achieved by establishing a virtual private network between the company's data-centre and a public cloud, using interconnection services from companies such as Equinix. Some organisations favour building a private cloud with the capability to burst out into the public cloud for larger workloads.

The natural entry point for the financial sector is to farm out derivative pricing and risk calculations to the cloud. A common example is risk departments using Microsoft HPC Pack to burst on-premise compute workloads out into Azure. Burst-out is straightforward for this use case because grid computations are stateless. However, this model cannot be applied to data-centric applications or microservices. Applications with strong data coupling would require the storage layer to be data-centre aware and to move data transparently to where it is needed. This is not a trivial architectural requirement; a new range of tools and APIs would be required to satisfy this use case.
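To illustrate why stateless grid work bursts out so easily, here is a minimal sketch (not drawn from any particular grid product) of an embarrassingly parallel Monte Carlo pricing job in Python; each task receives all of its inputs and returns a single number, so it can run on any node, on-premise or in the cloud, with no shared state:

    import math
    import random
    from multiprocessing import Pool

    def price_european_call(args):
        # Stateless work unit: everything the task needs arrives in
        # `args`, so it can be scheduled on any node, local or cloud.
        spot, strike, rate, vol, expiry, n_paths, seed = args
        rng = random.Random(seed)
        payoff_sum = 0.0
        for _ in range(n_paths):
            # Terminal spot price under geometric Brownian motion.
            z = rng.gauss(0.0, 1.0)
            st = spot * math.exp((rate - 0.5 * vol ** 2) * expiry
                                 + vol * math.sqrt(expiry) * z)
            payoff_sum += max(st - strike, 0.0)
        # Discount the average payoff back to today.
        return math.exp(-rate * expiry) * payoff_sum / n_paths

    if __name__ == "__main__":
        # Each tuple is an independent work unit; a grid scheduler can
        # dispatch them to on-premise or cloud nodes interchangeably.
        tasks = [(100.0, 105.0, 0.02, 0.2, 1.0, 100000, seed)
                 for seed in range(8)]
        with Pool() as pool:
            estimates = pool.map(price_european_call, tasks)
        print("Estimated price:", sum(estimates) / len(estimates))

Because every task is self-contained, moving it from an on-premise grid node to a cloud VM changes nothing about the computation, which is precisely what makes this workload the natural entry point for burst-out.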

The challenges of managing stateful applications include providing resiliency, scalability, parallelism and replication for workloads. The biggest problem with managing large pools of compute resources is handling failure. Failure is expected and needs to be handled without disrupting service availability. Strategies for ensuring resiliency include replication, load balancing and monitoring of service health. These capabilities need to be provided by the management layer that moves workloads and data between cloud and on-site infrastructure. The next generation of cloud and container-based tools will provide this level of orchestration.
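At the heart of these strategies sits a simple supervision loop: probe each replica's health and replace the ones that fail. The sketch below is a hypothetical illustration of that reconciliation idea, not the API of any real orchestrator; the start_replica and is_healthy hooks are placeholders for launching instances and running health probes:

    DESIRED_REPLICAS = 3

    def start_replica(replica_id):
        # Placeholder: launch a container or service instance
        # somewhere in the cluster and return a handle to it.
        print("starting replica", replica_id)
        return {"id": replica_id, "healthy": True}

    def is_healthy(replica):
        # Placeholder: in practice an HTTP or TCP health probe.
        return replica["healthy"]

    def reconcile(replicas):
        # Drive observed state towards desired state: drop failed
        # replicas and start replacements to keep the count stable.
        alive = [r for r in replicas if is_healthy(r)]
        while len(alive) < DESIRED_REPLICAS:
            alive.append(start_replica(len(alive)))
        return alive

    if __name__ == "__main__":
        replicas = reconcile([])
        # Simulate a failure; the next reconcile pass repairs it
        # without any operator intervention.
        replicas[1]["healthy"] = False
        replicas = reconcile(replicas)
        print("replicas running:", len(replicas))

Real orchestrators implement the same loop with replicated cluster state and richer probes, but the principle is identical: failure is routine input to the loop, not an exceptional event.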

The management tools emerging from the container ecosystem may completely replace hybrid-cloud strategies as they mature. For example, Kubernetes will eventually allow workloads to be pushed from privately owned data-centres onto cloud infrastructure. Although it lacks the network and storage provisioning features of hybrid-cloud tools, it has the compelling advantage that it will soon provide cloud federation. Kubernetes would then be positioned as the control plane for managing on-premise workloads and bursting out into cloud environments. This is a strong incentive for organisations because it avoids cloud lock-in. The biggest challenge these technologies face is providing a location-aware storage layer that works seamlessly across data-centres and cloud environments.
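As a rough illustration of what such a control plane does, the following hypothetical sketch places work on-premise while capacity remains and bursts the overflow into a cloud cluster; the cluster names and capacity figures are invented for the example and do not correspond to any real federation API:

    # Hypothetical burst-out placement: fill on-premise capacity
    # first, then overflow into the cloud.
    clusters = [
        {"name": "on-prem-ldn", "capacity": 40, "used": 0, "cloud": False},
        {"name": "cloud-eu-west", "capacity": 200, "used": 0, "cloud": True},
    ]

    def place(workload_cpus):
        # Prefer non-cloud clusters; burst out when they are full.
        for cluster in sorted(clusters, key=lambda c: c["cloud"]):
            if cluster["used"] + workload_cpus <= cluster["capacity"]:
                cluster["used"] += workload_cpus
                return cluster["name"]
        raise RuntimeError("no cluster has spare capacity")

    if __name__ == "__main__":
        # Ten 8-CPU jobs: the first five fit on-premise, the rest
        # burst out into the cloud cluster automatically.
        for job in range(10):
            print("job", job, "->", place(8))

The point of federation is that the application never sees this decision; workloads land wherever capacity exists, which is what removes the cloud-specific expertise from the burst-out path.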

Warehouse Computing (The Future)

The scheduling and orchestration tools for containers form what is called the cluster-level infrastructure. This service layer forms the foundation of the Data Center Operating System (DCOS) [2]. The DCOS encapsulates the idea of making thousands of servers in a data centre appear as a single machine. Cluster-level infrastructure provides data replication, distributed file systems, sharding, load balancing and service health monitoring. Kubernetes provides this service layer for containers and microservices, and so will Azure Service Fabric when it is released in early 2016.
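Sharding is a good example of a service this layer hides from applications. The following sketch, assuming nothing beyond the Python standard library, shows the core idea with a consistent hash ring: keys map to nodes, and when a node joins or fails only a fraction of the keys move, which is what lets the cluster layer rebalance data without a full reshuffle:

    import bisect
    import hashlib

    class HashRing:
        # Minimal consistent-hash ring for sharding keys across nodes.

        def __init__(self, nodes, vnodes=100):
            # Several virtual points per node smooth the distribution.
            self.ring = sorted(
                (self._hash("%s#%d" % (node, i)), node)
                for node in nodes for i in range(vnodes)
            )
            self.hashes = [h for h, _ in self.ring]

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def node_for(self, key):
            # Walk clockwise from the key's hash to the next node point.
            idx = bisect.bisect(self.hashes, self._hash(key)) % len(self.ring)
            return self.ring[idx][1]

    if __name__ == "__main__":
        ring = HashRing(["node-a", "node-b", "node-c"])
        for key in ("trade:1001", "trade:1002", "risk:fx-delta"):
            print(key, "->", ring.node_for(key))

Cluster-level infrastructure wraps primitives like this behind an API so that applications address data by key rather than by machine.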

The internal software that runs the Azure cloud will now be available to run on-premise. Azure Service Fabric will provide the API for orchestrating microservices and containers within the Microsoft ecosystem. This follows Google's strategy with Kubernetes: releasing internal cloud software for use in private data-centres. Azure Service Fabric will eventually allow cloud-agnostic burst-out for microservices using these APIs. A preview version of the technology will be available at the end of summer 2015. As these tools mature, on-premise compute resources, data-centre and cloud infrastructure will all be managed with the same set of tools.

Google's Kubernetes and Microsoft's Azure Service Fabric mark a change in how compute resources in data centres and cloud infrastructure are utilised. Simplifying software APIs by presenting a unified, homogeneous view of compute resources across different data-centres promises to improve developer productivity, drive up infrastructure utilisation and improve operational efficiency.

Google's strategy has been to gain traction in the market by open-sourcing their technology. Microsoft already has a foothold in most organisations through the desktop and server versions of their operating system, and also has a growing open-source portfolio. Surprisingly, Amazon's voice has been absent from the recent announcements. Amazon's strength comes from its numerous cloud services (500+), many of which have no accessible on-premise equivalent. Now that Google and Microsoft are providing elastic scheduling, cloud-like storage and other services for private use, Amazon's value proposition may begin to diminish. The move towards cloud federation management layers means that cloud infrastructure is becoming commoditised. Without a technology offering in this space, Amazon may find itself lagging behind in the next generation of cloud technologies.


Companies that have already invested heavily in a hybrid cloud strategy will need to re-evaluate as storage and network provisioning tools mature around Docker and other container technologies. The container technology stack could drive down the total cost of ownership (TCO) of hybrid cloud installations. 2015 is going to be an interesting year for cloud and virtualisation technologies. Expect a series of product announcements at the end of Q2 2015, when Docker and startups such as Kismatic and CoreOS will announce production-ready versions of their cluster infrastructure tools. Red Hat and Microsoft will also be releasing updates or announcing container-aware versions of their server operating systems. Even though Docker has captured the imagination of developers, the next 12 months will be critical: it will need to reach enterprise maturity or risk being surpassed by the efforts of the community and the strong product offerings from the Google-backed technologies and Microsoft.
