Title
Analysis of an energy proportional data center
Date Issued
01 February 2015
Access level
Metadata-only access
Resource Type
journal article
Author(s)
University of Houston
Publisher(s)
Elsevier B.V.
Abstract
Energy proportionality is a desirable property of an energy-efficient data center that can be achieved by making servers available on demand, dynamically enabling only enough computing capacity to handle the workload. However, reducing the number of running servers can impact job performance and may lead to breaches of the service level agreement. We analyze the optimal (minimum) energy requirement of servers in an energy proportional data center needed to maintain a selected performance service level objective from the following possibilities: (i) running servers at or below a maximum utilization level; (ii) keeping the average job response time below a given limit; and (iii) limiting the probability of job response times exceeding a turnaround deadline. Performance and power measurements from a real server allow us to define realistic parameters for the theoretical and simulated models and to obtain realistic results.
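The three service level objectives map naturally onto a multi-server queueing view of the data center. The sketch below is only an illustration under assumed conditions: it models the cluster as an M/M/c queue with hypothetical arrival and service rates (the paper's actual models and measured parameters are not reproduced here), and it uses the tail of the queueing delay as a stand-in for the turnaround deadline in SLO (iii).

```python
import math

def erlang_c(c, lam, mu):
    """Probability that an arriving job must wait in an M/M/c queue."""
    a = lam / mu                      # offered load, in server-equivalents
    rho = a / c                       # per-server utilization
    if rho >= 1.0:
        return 1.0                    # unstable regime: every job waits
    s = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / (math.factorial(c) * (1.0 - rho))
    return tail / (s + tail)

def min_servers(lam, mu, u_max, w_max, deadline, p_max, c_limit=1000):
    """Smallest number of active servers satisfying all three SLOs."""
    for c in range(1, c_limit + 1):
        rho = lam / (c * mu)
        if rho >= min(u_max, 1.0):
            continue                                  # SLO (i): utilization cap
        pw = erlang_c(c, lam, mu)
        mean_response = pw / (c * mu - lam) + 1.0 / mu
        if mean_response > w_max:
            continue                                  # SLO (ii): mean response bound
        # SLO (iii): probability the queueing delay exceeds the deadline
        p_exceed = pw * math.exp(-(c * mu - lam) * deadline)
        if p_exceed > p_max:
            continue
        return c
    raise ValueError("no feasible server count within c_limit")

# Hypothetical example: 120 jobs/s, 10 jobs/s per server, 70% utilization cap,
# 0.5 s mean response limit, P(delay > 1 s) <= 1%.
print(min_servers(lam=120, mu=10, u_max=0.7, w_max=0.5, deadline=1.0, p_max=0.01))
```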
Start page
554
End page
564
Volume
25
Issue
PB (Part B)
Language
English
OECD Knowledge area
Computer science; Electrical engineering; Electronic engineering
Scopus EID
2-s2.0-84915822535
Source
Ad Hoc Networks
ISSN of the container
1570-8705
Summary
In an energy proportional data center, a global power manager controls the operational status of servers to supply sufficient computing capacity for the current demand, cutting energy usage by hibernating redundant servers. However, a reduction in computing capacity can degrade service quality. We analytically derived the optimal (lower bound) energy consumption of servers implementing energy proportionality under workload and specific performance constraints. We formulated these constraints as three service level objectives: (i) limiting the utilization level of individual servers; (ii) bounding the average response time of jobs; and (iii) regulating the probability of job response times exceeding a maximum turnaround. The results of the analysis provide insight into the optimal combination of running and hibernating servers that can serve a given workload while fulfilling the selected SLO at the minimum energy expense. Furthermore, the results revealed the maximum user demand t ...
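As a companion to that summary, the following minimal sketch (with hypothetical power figures, not values measured in the paper) shows how the aggregate power of a mix of running and hibernating servers could be computed under a simple linear server power model, i.e., idle power plus a utilization-proportional share of the dynamic range.

```python
def cluster_power(c_active, n_total, utilization,
                  p_idle=100.0, p_peak=250.0, p_hibernate=10.0):
    """Aggregate power (watts): c_active running servers, the rest hibernating."""
    p_server = p_idle + (p_peak - p_idle) * utilization   # linear power model
    return c_active * p_server + (n_total - c_active) * p_hibernate

# Hypothetical example: 18 active servers out of 50, each at 67% utilization.
print(cluster_power(c_active=18, n_total=50, utilization=0.67))
```

Minimizing this quantity over the number of active servers, subject to the three SLOs above, is one way to read the lower-bound analysis described in the record.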
Sources of information: Directorio de Producción Científica, Scopus