When it comes to building IT power infrastructure in a land of power scarcity, the more you need, the less seems to be available. This makes it imperative to get the IT power infrastructure design right at the drawing board. The business case is drawn up painstakingly, and is invariably bolstered by the promise of attractive monetary and power savings as well as gains in efficiency. The following data center checklist will help you leverage your organization’s IT power infrastructure design for high efficiency and productivity.
1. Allocate an adequate level of backup power – Power backup is a critical component needed to ensure 100% availability of the data center. The IT power infrastructure should be designed to the following specifications:
Tier-2: A tier-2 data center setup has two UPS units (uninterruptible power supplies) that run in parallel to ensure redundancy. If one fails, the other takes over through a bypass.
Tier-3: This setup has three UPS units to help the organization ensure redundancy and concurrent maintainability. It requires at least N+1 redundancy; that is, while one power path is active, the other stands by as a redundant path.
Tier-4: This setup has four UPS units to assure concurrent maintainability and fault tolerance. Redundancy extends to the supply side, right from the first upstream point: you need two sets of input power, i.e., two diesel generators (DGs) and four UPS units. (2+1 provides fault tolerance, while N+1 provides the redundant UPS configuration that is concurrently maintainable.)
In tier-3 and tier-4 setups, there needs to be zero potential difference (PD) between earth and neutral. The transformer built into the UPS should be located within 75 feet of the load; beyond that it creates a harmonic PD in the voltage, resulting in noise. Voltage fluctuation may have disastrous effects on the servers and, collaterally, on the IT power infrastructure.
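The value of the redundancy levels described above can be illustrated with a simple independence model. The function name, the binomial approach, and the 99% per-unit figure below are illustrative assumptions for this sketch, not figures from the article or from any tier standard:

```python
from math import comb

def parallel_availability(total_units: int, required_units: int,
                          unit_availability: float) -> float:
    """Probability that at least `required_units` of `total_units`
    independent UPS units are up at any moment (binomial model)."""
    p = unit_availability
    return sum(
        comb(total_units, k) * p**k * (1 - p)**(total_units - k)
        for k in range(required_units, total_units + 1)
    )

# Illustrative assumption: each UPS unit is up 99% of the time.
single = parallel_availability(1, 1, 0.99)     # no redundancy
n_plus_1 = parallel_availability(2, 1, 0.99)   # tier-2 style: 1 needed, 2 installed

print(f"single UPS:    {single:.4%}")
print(f"N+1 (2 for 1): {n_plus_1:.4%}")
```

Even under this toy model, a second parallel unit turns two nines of availability into four, which is why each tier adds paths rather than simply bigger units.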
2. Assess your technology requirements –
The IT power infrastructure setup depends on the IT workload, that is, the power demand of the IT servers, storage, and networking equipment. Server arrangements help to distribute power and load: the load layout can be reconfigured to work in tandem with power and cooling requirements.
Ideally, opt for energy optimizers in the UPS. This creates an intelligent integrated infrastructure (III) that senses the load and adapts dynamically to improve overall efficiency. The lower the load on a UPS, the lower its efficiency. For example, if the load is shared equally among four UPS units, efficiency may drop to around 92%. Energy optimizers built into the UPS will instead stack 80% of the load on a single UPS and keep the other three idle, increasing efficiency to about 96%.
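The load-stacking effect can be sketched numerically. The efficiency curve below is a hypothetical toy model (fixed losses plus proportional losses), not a real UPS datasheet, so the exact percentages differ from the article's 92%/96% figures; the point is the direction of the effect:

```python
def ups_efficiency(load_fraction: float) -> float:
    """Hypothetical efficiency curve: fixed losses dominate at light load,
    so efficiency improves as the UPS approaches its rated load.
    Illustrative assumption only, not a vendor curve."""
    if load_fraction <= 0:
        return 0.0
    fixed_loss, proportional_loss = 0.02, 0.02
    input_power = load_fraction * (1 + proportional_loss) + fixed_loss
    return load_fraction / input_power

total_load = 0.8  # total IT load equal to 80% of one unit's rating

# Equal sharing: each of four UPS units carries 20% of its rating.
shared = ups_efficiency(total_load / 4)

# Energy optimizer: one UPS carries the full 80%, the other three idle.
stacked = ups_efficiency(total_load)

print(f"equal sharing across 4 units: {shared:.1%}")
print(f"stacked on 1 unit:            {stacked:.1%}")
```

Under this toy curve, stacking the load on one unit is several percentage points more efficient than spreading it thinly, which mirrors the behavior the article attributes to built-in energy optimizers.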
An absolutely dust- and water-free environment is needed to maintain the IT power infrastructure in the data center, including under the raised floor. Cleaning should be done with vacuum cleaners fitted with high-efficiency particulate air (HEPA) filters, as blowers are harmful. Make certain there is zero water leakage in the data center, as water is a PCB spoiler.
3. Create an adaptive architecture-
While designing the IT power infrastructure, look at it holistically. The stronger the base, the more robust the setup is for the ecosystem.
Here is where adaptive architecture comes into play. Adaptive architecture has three layers: business, services, and IT infrastructure. The topmost business layer has the applications or the ERP running on it, followed by the service layer, which includes mail, print, and RDBMS services. The last layer is the IT infrastructure, which comprises the servers, storage, networking, and power and cooling equipment.
The vendors providing the facilities need to consider these questions in order to enhance the IT power infrastructure: What is the current status of the power infrastructure? Is it a greenfield venture or a legacy system build-out? Are you looking to migrate, or graduate, from a small setup into a larger one?
An adaptive architecture design depends on flexibility and scalability without incurring incremental losses from legacy power systems. The systems should be reliable and available 99.999% of the time.
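It helps to translate that "five nines" target into concrete downtime. A quick sketch (the function name is mine, the arithmetic is standard):

```python
def annual_downtime_minutes(availability: float) -> float:
    """Expected downtime per year for a given availability fraction."""
    minutes_per_year = 365.25 * 24 * 60  # average year, incl. leap days
    return (1 - availability) * minutes_per_year

# 99.999% availability allows roughly 5.3 minutes of downtime per year.
print(f"{annual_downtime_minutes(0.99999):.2f} minutes/year")
```

By contrast, 99.9% availability allows nearly nine hours of downtime per year, which shows how much each extra nine demands of the power design.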
Improving the total cost of ownership is critical. CAPEX covers the equipment you install in your data center, leaving room for flexibility if future power needs scale up, while OPEX is directly linked to the operational efficiency of the data center.
Lastly, a Power Quality Audit is vital to ensure quality of the incoming power.
4. Arrange the data center equipment appropriately-
Place UPS systems away from the server room to protect the servers from electromagnetic fields. The power distribution unit (PDU) should be kept close to the IT load, preferably attached to the rack, to reduce its physical footprint. The basic norm is zero PD (potential difference) between earth and neutral at the load; failing this, noise may arise that can reboot the system, as mentioned earlier.
Deploying blade architecture reduces the footprint. In a 7U space (1U = 4.445 cm, or 1.75 inches), one can have a blade chassis with a huge compute capability of 14 servers. The heat density will go up, as you are packing maximum compute capability into the smallest available form factor. Therefore, while designing the IT power infrastructure, don’t ignore the cooling requirement.
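The density gain, and the cooling caveat that comes with it, can be made concrete with a quick comparison. The 42U rack height is a common assumption of mine; the 7U chassis and 14-blade figures come from the text:

```python
RACK_HEIGHT_U = 42        # common full-height rack (assumption)
CHASSIS_HEIGHT_U = 7      # blade chassis size from the text
BLADES_PER_CHASSIS = 14   # blade count from the text

# Blades: six 7U chassis fit in a 42U rack, 14 servers each.
blade_servers = (RACK_HEIGHT_U // CHASSIS_HEIGHT_U) * BLADES_PER_CHASSIS

# Conventional 1U "pizza box" servers: one per rack unit.
pizza_box_servers = RACK_HEIGHT_U

print(f"blade servers per rack: {blade_servers}")
print(f"1U servers per rack:    {pizza_box_servers}")
```

Twice the servers in the same rack means roughly twice the heat in the same volume (assuming comparable per-server draw), which is exactly why the cooling requirement cannot be an afterthought.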
Finally, without maintenance, even the best data center may become defunct. Where the power and data cables run below the raised floor, ensure they aren’t running in parallel, as this may create an electromagnetic field that hampers the data center’s functioning. They should ideally have a minimum gap of 60 cm between them.
5. Ensure energy efficiency-
This is the last lap of the data center checklist. The servers in the data center need cooling, as they are designed to run at up to 24 degrees Celsius. Adopt the free-cooling methodology, which is very cost-effective. Defining and arranging hot and cold aisles can improve energy efficiency: the cold aisle, with perforated tiles in the raised floor, releases cool air into the server racks. Another method is to distribute the load uniformly across the racks and the cubic feet of space available in the data center for cooling.
With organizations switching to high-density servers to optimize technologies such as virtualization, extreme-density cooling and high sensible cooling are other approaches to obtaining an efficient IT power infrastructure.
Another popular school of thought invokes Moore’s Law, which observes that the compute capability of a processor doubles roughly every 18 months. Correspondingly, a data center’s power requirements tend to double every 18 months. So, if servers aren’t refreshed every two years, power consumption and heat density can rise, increasing operational expenses. Hence, the IT power infrastructure should be equipped to handle this refresh cycle.
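The article's 18-month doubling rule of thumb compounds quickly, which is what makes headroom in the power design so important. A minimal projection sketch (function name and the 100 kW starting figure are illustrative assumptions):

```python
def projected_power(initial_kw: float, months: int,
                    doubling_period_months: int = 18) -> float:
    """Power draw after `months`, assuming it doubles every
    `doubling_period_months` (the article's 18-month rule of thumb)."""
    return initial_kw * 2 ** (months / doubling_period_months)

# A 100 kW facility under this rule quadruples its draw in three years.
print(f"{projected_power(100, 36):.0f} kW after 36 months")
```

Whether or not the doubling rate holds exactly, the exponential shape of the curve argues for provisioning power and cooling with the refresh cycle in mind rather than for day-one load alone.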
This data center checklist shows that attending to minor details can lead to dramatic savings in data center power usage.
About the author: Pratik Chube is the Country General Manager, Product Management & Marketing, at Emerson Network Power (India) Pvt. Ltd. He is an energy logic expert who conceptualizes and designs solutions for vertical specific applications, as well as develops cutting-edge technologies and products. Pratik has an MBA from Mumbai University, and an electronics engineering degree from REC Jaipur. He has over 11 years of experience in the channel business.
(As told to Joyoti Mahanta)
This was first published in January 2012