M8 Recruitment

The future now | Future state data centres need to meet two meta design imperatives: Adaptability and Scalability.

Press release   •   Apr 22, 2013 22:13 BST

Designing for Adaptability and Scalability can produce up to 35% savings in data centre build costs, as well as a further 35% savings in operating expenses.

The overarching purpose of physical infrastructure is to ensure that microprocessors are kept within their operating parameters with robust and reliable power and cooling. Quoted in Multiprocessors, Luiz Andre Barroso, a principal engineer leading the Platforms Engineering Group at Google, explains why data centres are experiencing heat densities far beyond their capacity specifications:

"[Microprocessor] performance per watt has remained roughly flat over time, even after significant efforts to design for power efficiency. In other words, every gain in performance has been accompanied by a proportional inflation in overall platform power consumption." He continues: "If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin."

This is a major concern, as increased power also demands increased cooling: up to 45% of data centre energy is used dealing with thermal emissions. Rising energy consumption has serious implications for the financial wellbeing of corporations, as well as that of planet Earth.
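To put that share in perspective, a back-of-envelope calculation (all figures below are hypothetical illustrations, not drawn from the release) shows what the cooling fraction alone can cost annually:

```python
# Back-of-envelope sketch with hypothetical figures: the annual cost of
# cooling when 45% of total facility energy goes to thermal management.

facility_load_kw = 500    # assumed average total facility draw
cooling_share = 0.45      # fraction of energy spent on cooling (from the text)
price_per_kwh = 0.12      # assumed electricity tariff

annual_kwh = facility_load_kw * 24 * 365          # 4,380,000 kWh/year
cooling_cost = annual_kwh * cooling_share * price_per_kwh

print(f"Annual cooling energy cost: {cooling_cost:,.0f}")
```

Even at a modest tariff, the cooling fraction of a mid-sized facility runs to hundreds of thousands per year, which is why oversized or inefficient cooling is so costly.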

The cost of implementing new facilities means that such an enterprise should only be undertaken with a clear strategy for maximising ROI by controlling both capex and opex. This becomes a major objective, as most facilities are designed for an operational life of twenty years, while new generations of IT equipment appear almost monthly.

The way forward

Since it is hard to anticipate a time when business will be less reliant upon IT, the facilities and infrastructure which form the NCPI (network-critical physical infrastructure) layer must be constructed in such a way that allows businesses to grow financially, organisationally and technologically. American Power Conversion (APC), a leading company in the provision of NCPI solutions, insists that companies need to evaluate each and every component within the data centre in the context of Adaptability and Scalability. At the heart of APC's paradigm is a modular solution set called InfraStruXure, which integrates the power, cooling, racks, cabling and security components of NCPI to meet these meta imperatives for an adaptable and scalable infrastructure.

The Benefits of the Adaptability & Scalability Paradigm for Data Centre Design

Modular data centre electrical/mechanical infrastructures allow corporations to realise savings of up to 35% on upfront build-out costs, and a like amount on data centre operating expenditures. "In addition to these tangible benefits, an intangible benefit of this approach is providing IT and facilities executives the agility to quickly respond to changes in technology or business requirements," says Carl Greiner, senior vice president, technology research service.

Realising the Benefits of Adaptability & Scalability

1. Ensure that all stakeholders in the data centre are involved in the project. In addition to external consultants, include domain experts such as facilities (who have historic knowledge of problems provisioning the data centre), IT (who understand IT equipment and what will be deployed in the short and medium term) and the CIO, who owns the strategy for IT and how it supports business objectives.

2. Ensure that there is a common language with which all stakeholders can communicate their requirements. Establish a uniform measure for establishing the capacity and operational requirements of the data centre. As a general principle, work from the rack space upwards to calculate the overall room capacity, because real data centres do not exhibit uniform power density: patch panels may draw zero power, for example, while blade server racks may draw over 20 kW.

The varying power densities of racks in heterogeneous environments are compounded by the fact that IT equipment is constantly being refreshed, so power consumption is subject to change. Conventional density specifications do not fully account for these variations, and they become less relevant over time. APC, for example, believes that a watts/rack measure assures compatibility with high density loads, provides unambiguous instruction for the design and installation of power and cooling equipment, prevents oversizing and maximises electrical efficiency.
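The rack-up approach can be expressed as a simple calculation. The rack counts and per-rack loads below are hypothetical illustrations, not APC figures, but they show why a uniform room-level density figure misleads:

```python
# Sketch of calculating room capacity from the rack upwards rather than
# assuming a uniform power density. All figures are hypothetical.

rack_loads_watts = {
    "patch_panel_racks":     (4, 0),        # (count, watts per rack)
    "standard_server_racks": (20, 4_000),
    "blade_server_racks":    (6, 20_000),
}

total_it_load = sum(count * watts for count, watts in rack_loads_watts.values())
print(f"Rack-up IT load: {total_it_load / 1000:.1f} kW")

# A uniform density sized for the densest rack oversizes the room badly.
rack_count = sum(count for count, _ in rack_loads_watts.values())
uniform_high = rack_count * 20_000
print(f"Uniform worst-case estimate: {uniform_high / 1000:.1f} kW")
```

Here the rack-up total is a third of the worst-case uniform estimate, which is exactly the oversizing (and wasted capex) that a watts/rack specification avoids.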

3. Having identified its electrical capacity, the autonomy of the facility should be decided upon: from providing sufficient UPS runtime to ensure an elegant shutdown in the event of a blackout, to providing external standby power generation to ensure 24x7 operation in all situations.

By using a modular, rack-mounted UPS, the facility may be scaled with the addition of power and battery modules. This method of rightsizing helps to ensure the electrical efficiency of the room and keeps energy costs to a minimum. An inefficient (or oversized) UPS will create heat, which adds to the required cooling capacity. With the advent of high density equipment such as blade servers, oversizing the electrical infrastructure has become an expensive and inefficient practice which can easily be avoided.
Additionally, some rack-mounted UPSs can be deployed without special engineering (subject to floor load), and their modularity means that should the IT equipment outgrow the space available, the UPS can be shipped to a new site. Users may also consider that standardised, modular key components such as batteries and power modules will help to reduce mean time to repair (MTTR), as spares may be kept on site.
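A minimal sketch of the pay-as-you-grow idea, assuming a hypothetical 10 kW power module (not an APC specification): modules are added only as the load grows, with one spare for N+1 redundancy rather than oversizing the whole system up front:

```python
import math

# Pay-as-you-grow UPS sizing sketch. Module size and load figures are
# hypothetical, not APC product specifications.

MODULE_KW = 10  # assumed capacity of one power module

def modules_required(load_kw: float, redundancy: int = 1) -> int:
    """Modules needed to carry load_kw, plus spare modules for N+x redundancy."""
    return math.ceil(load_kw / MODULE_KW) + redundancy

# As the IT load grows over time, modules are added incrementally.
for load in (18, 35, 62):
    print(f"{load} kW -> {modules_required(load)} modules (N+1)")
```

The UPS capacity therefore tracks the actual load closely at every stage, which keeps the electrical plant near its efficient operating point instead of idling at a fraction of a fully built-out rating.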

4. Cooling hot spots and high density equipment racks is a problem if the capacity of the facility has been calculated on a room basis. Now that most infrastructure providers have adopted an integrated solutions approach, best practice has become getting the cooling equipment as close to the load as possible: in other words, into the rack.
This means that high density equipment does not have to be spread through the data centre, and sprawl can be avoided. Additionally, since airflow does not behave in a predictable fashion, in-rack cooling helps avoid failures in low-power racks whose cool air would otherwise be scavenged.

By bringing the high density equipment together alongside cooling equipment, users benefit from reduced energy costs, since hot air is conducted directly to chiller units, which therefore run at higher efficiency. Again, best practice should be adopted for rack layout, and a hot aisle/cold aisle approach is considered ideal. Within the racks, empty positions should be blanked off with blanking panels to ensure cool airflow to hosted equipment.

With the introduction of efficient in-row cooling equipment, data centre professionals may have the option not to deploy a raised floor plenum, reducing the engineering costs for the room.


It pays to be rack-focused when designing data centre facilities in order to maximise adaptability and scalability. Using watts/rack to establish overall room capacity ensures that high density loads can be accommodated, and is energy efficient. Using rack-mounted equipment such as UPSs and PDUs reduces upfront engineering costs and ensures that power and autonomy can be scaled to meet facility demands. Likewise, bringing cooling equipment in-row ensures that high density equipment can be cooled without the emergence of hot spots and consequent failures.

By adopting the adaptable & scalable paradigm, upfront engineering and capital equipment costs can be kept in check, as data centre owners take a pay-as-you-grow approach. This reaps benefits, as rightsized equipment is more electrically efficient and therefore less wasteful of energy and cost. Scalable, of course, means that owners can scale down as well as up: who knows when we might see the emergence of ultra-compact, low energy servers?

M8 Recruitment: providing facilities management, critical engineering, risk management and built environment professionals to sustain the world we live in.