In this post, we will summarise some of the most interesting insights.
The Software Defined Data Center
The main theme of the conference as a whole was the “Software Defined Data Center”. What this means is that the challenges of scalable compute capacity, availability, resiliency, utilization and cost management in data centers are now met in software. Gone are the days when data centers were designed and built in-house by individual enterprises with a long-term strategic vision for their usage. Nowadays they need to provide much higher levels of flexibility and scalability than ever before, which requires dynamic cost and capacity management and smart power and cooling provision. All of this has to be provided in software.
Topics and Highlights
The current challenges facing modern data center design call for innovative solutions in four main areas:
- Data Center Infrastructure Management (DCIM)
- Energy Efficiency and Sustainability
- People Management
- Outsourcing, Colocation and Cloud
Data Center Infrastructure Management (DCIM)
DCIM was a buzzword at the conference, but there seemed to be a shared sentiment among the speakers that it is not a very well defined concept. The way we think of DCIM is as the collection of all types of solutions that enable the dynamic management of data center operations, focusing on intelligent resource usage, utilization and efficiency. Thus, all products which target the problems of capacity, utilization, cost and power efficiency, resiliency and availability fall within the domain of DCIM.
Unsurprisingly, given the overall theme of the conference, many talks focused on software DCIM solutions. These are based on the idea of running sophisticated analytics on the large sets of data gathered as the data center runs, in order to intelligently manage capacity, power usage and performance, either manually or automatically. The idea is simple and not necessarily new; indeed, a multitude of such tools was presented at the conference, though the actual implementations and their efficacy vary.
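As a rough illustration of the analytics idea, here is a minimal sketch in Python. The data, host names and thresholds are made up for illustration; this is not any vendor's DCIM product. It scans per-host utilization telemetry and flags hosts that are candidates for consolidation or approaching capacity:

```python
# Hypothetical DCIM-style analytics sketch: scan per-host CPU utilization
# samples, flag consolidation candidates (persistently idle) and capacity
# risks (persistently hot). Thresholds are illustrative, not industry norms.
from statistics import mean

def flag_hosts(samples, idle_below=0.15, hot_above=0.85):
    """samples: {host: [utilization readings in 0..1]}"""
    report = {"consolidate": [], "at_risk": []}
    for host, readings in samples.items():
        avg = mean(readings)
        if avg < idle_below:
            report["consolidate"].append(host)  # barely used: pack its load elsewhere
        elif avg > hot_above:
            report["at_risk"].append(host)      # persistently hot: migrate or add capacity
    return report

telemetry = {
    "rack1-host1": [0.05, 0.08, 0.10],
    "rack1-host2": [0.90, 0.95, 0.88],
    "rack2-host1": [0.50, 0.55, 0.60],
}
print(flag_hosts(telemetry))
# {'consolidate': ['rack1-host1'], 'at_risk': ['rack1-host2']}
```

A real DCIM product would of course work over far richer telemetry (power draw, temperature, network load) and feed an automated actuator rather than a printed report, but the shape of the computation is the same.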
Some particular points from the presentations were:
- Failure is a part of life in very-large-scale computing, and automated software tools can be used to abstract away and handle failures transparently, thus providing software resiliency (Christian Belady, Microsoft)
- Similarly, Matt Pumfrey from the Data Centre Alliance discussed how collecting, storing and intelligently processing all the minute operational and management data can reduce running costs by a few percent within quite a short span of time, reaching positive ROI in around six months. It seems trivial, but it is rarely implemented.
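Belady's point about handling failure in software can be sketched as a simple retry-with-failover wrapper. This is a toy illustration of the general technique, not Microsoft's actual tooling; the node functions are hypothetical stand-ins. The caller sees one logical service while the wrapper absorbs individual node failures:

```python
# Toy sketch of software resiliency: try each replica in turn, so that
# individual node failures are absorbed and never reach the caller.
def resilient_call(replicas, request):
    """replicas: list of callables; each may raise on failure."""
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except Exception as err:   # a real system would be more selective
            last_error = err       # remember the failure, try the next node
    raise RuntimeError("all replicas failed") from last_error

def dead_node(req):
    raise ConnectionError("node down")

def healthy_node(req):
    return f"handled {req}"

print(resilient_call([dead_node, healthy_node], "job-42"))
# handled job-42
```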
Energy Efficiency and Sustainability
Data center power efficiency and sustainability is by no means a new issue, but it is a very widely discussed one. This is not a surprise, as the biggest chunk of data center costs goes towards the vast amounts of power they consume. What is more, environmental considerations and policies now carry much more weight than before.
In general, there are two ways to look at power sustainability. The first approach, traditionally taken, is to ensure the full utilization and efficient usage of the power supplied. The second approach, which sees slower adoption in data centers, is to use renewable energy sources on site. The slow adoption is due to the high cost of renewable energy and its dependency on uncontrollable factors, such as the weather. Nonetheless, investment in onsite renewable sources is being made using novel technologies, with Microsoft, Google and Apple at the forefront.
An interesting example of innovation in the area is the EU-funded RenewIT project, presented by Andrew Donoghue from 451 Research. Their ongoing study is on scheduling workloads according to peaks in power production: the idea is to schedule workloads so that they execute when power is plentiful, which can prove a good solution in data centers dealing with less time-critical computations. One of the outcomes of this project is a modeling tool that allows data center designers, owners and operators to make intelligent decisions about the energy sources used at all times.
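The scheduling idea can be sketched in a few lines. This is a deliberate simplification of what RenewIT studies, with made-up forecast numbers: given an hourly forecast of renewable supply, deferrable jobs are placed into the greenest hours first:

```python
# Sketch of renewables-aware scheduling: place deferrable jobs into the
# hours with the highest forecast renewable supply. Forecast values are
# invented; a real scheduler would also respect deadlines and capacity.
def schedule_deferrable(jobs, renewable_forecast):
    """jobs: list of job names; renewable_forecast: {hour: forecast kW}."""
    green_hours = sorted(renewable_forecast, key=renewable_forecast.get, reverse=True)
    # round-robin the jobs over the greenest hours
    return {job: green_hours[i % len(green_hours)] for i, job in enumerate(jobs)}

forecast = {9: 120, 12: 480, 15: 350, 21: 40}  # e.g. solar peaking at midday
plan = schedule_deferrable(["batch-a", "batch-b", "batch-c"], forecast)
print(plan)
# {'batch-a': 12, 'batch-b': 15, 'batch-c': 9}
```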
A remarkable fact we learnt at the conference is that the Banking & Financial Services sector is the biggest power consumer in the UK data center market, with around 0.5 GW out of the total 3.1 GW, ahead of IT Services, Media & Telecom and Healthcare.
People Management
A critical factor in successful data center operations is the people involved at all stages. People management was addressed in several talks, which made some strong points.
Firstly, the flexibility required by changing and growing demands extends to people’s competencies, skills and problem-solving approaches. A growing need for skilled and open-minded people, who are quick learners and who thrive with change and ambiguity, was pointed out more than once.
Secondly, particular consideration of the whole data center lifecycle needs to be made when it comes to people management. Robert Giles from Norland emphasized that the biggest costs are incurred after data center delivery, during the operation phase, which makes proper facility management critically important. He focused on the soft landings approach to people management, which involves bringing in the right people to run and operate the data center early in the design stages. This should provide easy and efficient transitioning between project stages and lower operational costs. It correlates with the points made by Christian Belady from Microsoft, implying the need for a wide skill set and a high degree of flexibility in individuals.
Outsourcing, Colocation and Cloud
Moving data center design from the self-contained, all-in-one enterprise model to a flexible one, in which resources are managed dynamically based on demand, involves outsourcing compute capacity to colocation server farms and/or readily available cloud services. The shift towards outsourcing is driven by factors such as:
- reduced budget
- rapid technology changes
- increased confidence in providers and reduced fear of risk
- global scope of both clients’ operations and providers’ coverage
- skills shortages for DIY solutions
Perhaps because these trends have been around for a long time, or perhaps because most of the participants in the conference seemed to be engineering companies, there wasn’t much focus on cloud and colocation per se at the conference. However, there were some interesting takeaways:
- Globally, outsourcing grew from 8% to 23% in six years, with emerging economies such as Mexico and China leading at 29%. The UK stands at a respectable 23%.
- It is said that nearly 100% of companies are interested in outsourcing in principle, but fewer than 30% are actively pursuing it, for various reasons. This is clearly an area open to new business, given more encouragement for potential customers.
Rapidly growing datasets, increasing demand for data-intensive analytics and computation, and the resulting growing demand for computing power mean that data centers will continue to be built, although at a slower pace. Yet the way they are used and managed will continue to change rapidly and force innovation. Some of these innovative trends, such as cloud computing and data center outsourcing, are already ubiquitous and their presence continues to grow. Another trend is that using software to intelligently meet some of the challenges will prevail over the traditional simple increase in hardware resources and power. Finally, the need for innovation and the rapid changes we are observing make creative thinking, open-mindedness and constant learning the critical qualities for technologists.
By Vanya Yaneva and Andrey Kaliazin.