The Smart Grid will bring about many changes, and data centers will not be exempt, as they consume and waste an enormous amount of electricity. In the U.S., data centers consumed 61 billion kilowatt-hours, or 1.5 percent of the nation’s total electricity, in 2006, according to an August 2007 study by the Environmental Protection Agency (EPA). That is twice what was consumed just six years earlier, and the EPA forecast that data center power consumption would double again from 2006 to 2012.
With generating capacity remaining largely flat, the doubling of data center power consumption, combined with broader demand growth as the economy recovers, is leaving the grid increasingly capacity constrained. As a result, energy prices are rising and fluctuating with grid conditions. Some utilities already charge for electricity based on its time of use (TOU), with rates considerably higher during periods of peak demand, usually in the late afternoon and early evening local time.
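To see why TOU pricing matters to a facility that runs around the clock, consider a minimal sketch in Python. The rates, peak window, and constant 1 MW load below are illustrative assumptions, not any utility’s actual tariff; the point is simply that a flat load pays a premium once peak-hour rates apply.

# Illustrative comparison of flat-rate vs. time-of-use (TOU) billing for a
# data center drawing a constant 1 MW. Rates and the peak window are made-up
# example values, not any utility's actual tariff.

FLAT_RATE = 0.10            # $/kWh, assumed flat tariff
OFF_PEAK_RATE = 0.08        # $/kWh, assumed TOU off-peak rate
PEAK_RATE = 0.25            # $/kWh, assumed TOU peak rate
PEAK_HOURS = range(16, 21)  # assumed peak window: 4 p.m. to 9 p.m. local time
LOAD_KW = 1_000             # constant 1 MW combined IT and facility load

def daily_cost_flat() -> float:
    return LOAD_KW * FLAT_RATE * 24

def daily_cost_tou() -> float:
    return sum(
        LOAD_KW * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
        for hour in range(24)
    )

print(f"Flat-rate daily cost: ${daily_cost_flat():,.2f}")   # $2,400.00
print(f"TOU daily cost:       ${daily_cost_tou():,.2f}")    # $2,770.00

Under these assumed rates, the same load costs roughly 15 percent more on the TOU tariff, which is exactly the premium that trimming peak-hour consumption is meant to avoid.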
As the demand for electricity continues to grow, utilities are exploring alternatives to building new generation facilities. They are asking commercial and industrial customers who can manage their consumption to cut back on power usage during these peak periods. Data centers are an example of large customers that both contribute to peak system demand and have the ability to manage consumption. Utilities call these Smart Grid programs demand response (DR) or demand-side management, and Independent System Operators (ISOs) are also establishing markets for demand response services based on customers’ ability to provide capacity, regulation, or spinning reserve. Without such action, curtailment events during capacity-constrained periods are certain to become both more frequent and more severe. Should these programs fall short of balancing supply with demand, utilities will be forced to resort to brownouts and rolling blackouts.
There is, fortunately, plenty that data center IT and facility managers can do.
The first step is to reduce the amount of energy being wasted in most data centers, which are designed for capacity, performance and reliability, usually at the expense of energy efficiency. A comprehensive Data Center Infrastructure Management (DCIM) system is the best tool for finding and fixing inefficiencies in cooling systems, hot/cold aisle configurations, power distribution (including AC-DC conversions), and especially underutilized servers—the primary cause of waste in most data centers.
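As a rough illustration of the kind of analysis a DCIM system automates, the Python sketch below computes PUE from metered power readings and flags lightly loaded servers as consolidation candidates. The field names and the 10 percent utilization threshold are assumptions for the example, not any particular product’s data model.

# A minimal sketch of two checks a DCIM system automates: computing PUE from
# metered power and flagging underutilized servers. Field names and thresholds
# are illustrative assumptions, not a specific DCIM product's data model.

from dataclasses import dataclass

@dataclass
class ServerReading:
    name: str
    power_watts: float      # measured draw at the rack PDU
    cpu_utilization: float  # average utilization over the sample window, 0-100

def compute_pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power; lower is better, 1.0 is ideal."""
    return total_facility_kw / it_equipment_kw

def consolidation_candidates(servers, cpu_threshold=10.0):
    """Servers drawing power while doing little work are the primary source of waste."""
    return [s for s in servers if s.cpu_utilization < cpu_threshold]

readings = [
    ServerReading("web-01", 220.0, 4.2),
    ServerReading("db-01", 410.0, 63.0),
]
print(f"PUE: {compute_pue(total_facility_kw=1800, it_equipment_kw=1000):.2f}")  # 1.80
print("Consolidation candidates:", [s.name for s in consolidation_candidates(readings)])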
Once the data center is operating efficiently at a Power Usage Effectiveness (PUE) rating between 1.1 and 1.4, the target range established by the EPA, the next step is to use the DCIM system to temporarily power-cap less critical servers and/or raise thermostat setpoints during DR events. Even greater reductions can be achieved by continuously matching server capacity to application load in real time, again using the DCIM system with runbooks to automate the steps involved in deactivating and reactivating servers. Organizations with multiple data centers can also shift loads to wherever power is currently the most stable and least expensive. Such a “follow the moon” strategy can dramatically reduce energy expenditures.
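A DR-event runbook can be reduced to a few simple decisions, sketched below in Python. The server tiers, the 150-watt cap, and the per-site prices are hypothetical; in practice a DCIM system would take these values from live telemetry and the utility’s price or event signals.

# A simplified DR-event runbook: cap power on less critical servers and route
# shiftable workload to the cheapest site with a stable grid ("follow the moon").
# Tiers, cap values, and prices are hypothetical placeholders for live DCIM data.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    price_per_kwh: float  # current local energy price
    grid_stable: bool     # whether the local grid is currently unconstrained

def dr_power_caps(servers: dict[str, str], cap_watts: int = 150) -> dict[str, int]:
    """Return a power cap, in watts, for every server not tagged 'critical'."""
    return {name: cap_watts for name, tier in servers.items() if tier != "critical"}

def pick_site(sites: list[Site]) -> Site:
    """Choose the lowest-priced site whose grid is currently stable."""
    return min((s for s in sites if s.grid_stable), key=lambda s: s.price_per_kwh)

caps = dr_power_caps({"web-01": "noncritical", "db-01": "critical", "batch-01": "noncritical"})
print("Apply caps during the event:", caps)  # {'web-01': 150, 'batch-01': 150}

target = pick_site([
    Site("us-east", 0.22, True),
    Site("eu-west", 0.09, True),
    Site("ap-south", 0.07, False),
])
print("Shift batch workloads to:", target.name)  # eu-west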
The “greenest” organizations (both environmentally and financially) may even want to “monetize” the data center by participating in the local utility’s ancillary services market and/or generating their own electricity from renewable resources like wind and solar, or fuel cells.
The Smart Grid will indeed change—and for the better with the right approach—how data centers use, conserve and even produce energy in the future.
About the Author
Clemens Pfeiffer is the CTO of Power Assure and is a 22-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies.