The days of monitoring data centers by looking at the thermostat on the wall ended years ago. Data center infrastructure management (DCIM) tools that watch over infrastructure such as power, heating and cooling, and server, storage and network utilization are nearly universal.
As private clouds — filled with virtual servers and high-performance storage area networks — move into data centers, DCIM tools become critical to ensuring uptime and continuity of operations. When an enterprise puts all its eggs in one basket, it has to watch the basket very carefully.
As organizations move from dedicated systems with local storage to highly virtualized environments, they should consider these DCIM strategies:
Tip 1: Double up on temperature and airflow sensing.
Organizations that have followed best practices and laid out systems in a hot aisle/cold aisle configuration want to make sure that the cold side is cold enough.
Data center managers should start with airflow sensors at the output of each cooling unit to confirm the center is getting the cold air it needs, then add sensors on the cold side of each aisle. A zigzag configuration, measuring at the top of one rack and the bottom of the next, gives managers a good picture of the flow along each aisle. Focus on the cold side and watch it carefully. The air on the hot side of the aisles may be alarmingly hot, but as long as it is not being drawn into any cabinets, it isn't really a concern; what matters is the airflow into the racks. A few strategically placed sensors can provide that picture, but organizations need to allocate a budget for them.
Many devices have internal temperature sensors that can be monitored using desktop management tools. These can be helpful for understanding whether the data center can increase room temperature safely. Every degree of increased room temperature can save the agency thousands of dollars in power each year. Most data center managers are overly conservative and keep their centers too cold, wasting power and increasing cooling costs. Good DCIM can put the data center in the right envelope by keeping devices within manufacturers’ specifications.
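As a rough sketch of that reasoning, the readings from devices' internal sensors can be compared against a manufacturer's inlet-temperature limit to see how much room there is to raise the setpoint. The function name, the spec limit and the safety margin below are illustrative assumptions, not values from any particular DCIM product or vendor datasheet:

```python
# Hypothetical sketch: estimate how far the room setpoint could rise
# while keeping the hottest device inlet within spec. The 27 C limit
# and 2 C margin are illustrative, not from a real datasheet.

def setpoint_headroom(inlet_temps_c, spec_max_c=27.0, safety_margin_c=2.0):
    """Degrees C the setpoint could rise while the hottest reported
    inlet stays below spec_max_c minus safety_margin_c."""
    hottest = max(inlet_temps_c)
    return (spec_max_c - safety_margin_c) - hottest

# Example: inlet temperatures reported by four devices
readings = [18.5, 19.2, 21.0, 20.4]
headroom = setpoint_headroom(readings)
if headroom > 0:
    print(f"Setpoint could likely rise by about {headroom:.1f} degrees C")
```

The key design point is that the decision tracks the hottest inlet, not the average: one starved rack is enough to keep the setpoint where it is.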
Tip 2: Deploy extended virtual switch and virtual tap technology.
As the data center moves from dedicated servers on dedicated switch ports to a virtual environment, requirements for monitoring traffic and gathering statistics don't change — but the technology is decidedly unfriendly. Many out-of-the-box virtualization systems can't maintain monitoring when virtual machines migrate unpredictably between racks and virtual hosts. Software virtual tools, such as the Cisco Nexus 1000V and the NetOptics Phantom Virtual Tap, provide the monitoring and statistics-gathering capabilities needed as an organization moves from physical servers to a private cloud.
Tip 3: Use power-saving technologies inherent in virtualization.
Although many data center managers aren’t directly charged for their power usage, wasting power is a problem regardless. The flexibility that virtualization gives a private cloud allows the data center to power assets on and off to meet demand — sometimes even automatically. IT personnel shouldn’t be afraid to use these features. More efficient use of power will reduce the load on both servers and infrastructure, which can lead to longer operating lifetime and greater reliability. DCIM tools that can monitor and document savings will help data center managers prove they’re doing the right thing.
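The "power assets on and off to meet demand" logic can be sketched as a simple sizing calculation: given aggregate VM load, how many hosts are actually needed, with a spare held back for failover? The function names, the 80 percent target utilization and the N+1 spare are illustrative assumptions, not a specific hypervisor's API or policy:

```python
# Illustrative sketch (not a real hypervisor API): decide how many
# virtualization hosts current demand requires, keeping an N+1 spare,
# so the remainder can be powered down.

import math

def hosts_needed(total_vm_load_pct, host_target_pct=80, spare_hosts=1):
    """Hosts required to carry the aggregate VM load (expressed as a
    percentage of one host at 100%), targeting host_target_pct
    utilization per active host, plus failover spares."""
    active = math.ceil(total_vm_load_pct / host_target_pct)
    return active + spare_hosts

def hosts_to_power_off(current_hosts, total_vm_load_pct):
    return max(0, current_hosts - hosts_needed(total_vm_load_pct))

# Example: 10 hosts running, aggregate VM load equal to 3.5 full hosts
print(hosts_to_power_off(10, 350))
```

In this illustration, four of the ten hosts could be shut down overnight and restarted as demand returns, exactly the kind of savings a DCIM tool can then document.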
Tip 4: Start to micro-monitor.
Often, DCIM is used merely for crisis monitoring and alerting, and thus only captures the minimum required to detect serious problems. But the more elements the data center monitors, the better it can predict growth requirements for all its resources. Digging deep and gathering information, even information that isn’t needed every day, makes data center managers better prepared to answer what-if scenarios and head off capacity problems.
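One concrete payoff of micro-monitoring is trend forecasting: with a history of utilization samples, even a simple least-squares fit can answer the what-if question of when a resource runs out. This is a minimal sketch with made-up sample data; real capacity planning would use more history and account for seasonality:

```python
# Minimal sketch: fit a linear trend to monthly utilization samples
# and estimate months until capacity is exhausted. Sample figures
# are illustrative.

def months_until_full(samples_pct, capacity_pct=100.0):
    """Least-squares slope over evenly spaced monthly samples; months
    until the trend crosses capacity_pct (None if flat/declining)."""
    n = len(samples_pct)
    mean_x = (n - 1) / 2
    mean_y = sum(samples_pct) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in enumerate(samples_pct))
    slope /= sum((x - mean_x) ** 2 for x in range(n))
    if slope <= 0:
        return None
    return (capacity_pct - samples_pct[-1]) / slope

# Example: storage utilization grew from 40% to 64% over six months
print(months_until_full([40, 45, 50, 54, 59, 64]))
```

Even a crude projection like this turns "we should probably buy more storage" into a dated line item, which is the difference between heading off a capacity problem and reacting to one.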
Tip 5: Use DCIM to tune cooling and power support systems properly.
Infrastructure components, such as compressors and uninterruptible power supplies, operate most efficiently near their design capacity. For example, a typical 20-kilowatt UPS wastes 15 percent of input power if run at 30 percent load, but wastes only 5 percent of power if run at 75 percent load. The same is true of cooling compressors: They operate best at a high duty cycle, typically in the 70 percent to 80 percent range. At the same time, most data centers are dramatically overprovisioned: Seventy percent use less than 50 percent of their power capacity. Many data center managers have simply thrown up their hands and accepted inefficient utilization. DCIM monitoring and control systems can increase efficiency and reliability, while preserving additional capacity and making sure the center has adequate headroom.
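The UPS figures above are worth working through, since the result is counterintuitive: the lightly loaded unit wastes more absolute power than the heavily loaded one. This sketch uses the article's percentages and assumes "wastes X percent of input power" means the waste is X percent of what the UPS draws from the wall; the function is illustrative, not a specific model's efficiency curve:

```python
# Worked example of the article's UPS figures. Assumption: waste is
# loss_fraction of input power, so input = output / (1 - loss_fraction).

def ups_waste_kw(capacity_kw, load_fraction, loss_fraction):
    """Kilowatts of input power lost as overhead at the given load."""
    output_kw = capacity_kw * load_fraction
    input_kw = output_kw / (1 - loss_fraction)
    return input_kw - output_kw

low  = ups_waste_kw(20, 0.30, 0.15)  # ~1.06 kW lost at 30% load
high = ups_waste_kw(20, 0.75, 0.05)  # ~0.79 kW lost at 75% load
print(f"30% load wastes {low:.2f} kW; 75% load wastes {high:.2f} kW")
```

Under these assumptions the UPS carrying two and a half times the load throws away about a quarter less power, which is exactly why DCIM-driven consolidation onto fewer, better-loaded units pays off.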