Power to the People: Reducing datacenter carbon footprints
Communications of the ACM, August 2020, Vol. 63 No. 8, Pages 41-45
By Jessie Frazelle
When you upload photos to Instagram, back up your phone to the cloud, send email through Gmail, or save a document in a storage application like Dropbox or Google Drive, your data is being saved in a datacenter. These datacenters are airplane-hangar-sized warehouses, packed to the brim with racks of servers and cooling mechanisms. Depending on the application you are using, you are likely hitting one of the datacenters operated by Facebook, Google, Amazon, or Microsoft. Aside from those major players, which I refer to as hyperscalers, many other companies run their own datacenters or rent space from a colocation center to house their server racks.
Carbon footprints. Most of the hyperscalers have made massive strides toward achieving carbon-neutral footprints for their datacenters. Google, Amazon, and Microsoft have pledged to decarbonize completely; however, none has yet succeeded in that quest.
If a company claims to be carbon neutral, this usually means it is offsetting its use of fossil fuels with renewable energy credits (RECs). A REC represents one MWh (megawatt-hour) of electricity that is generated and delivered to the electrical grid from a renewable energy resource such as solar or wind power. By purchasing RECs, carbon-neutral companies are essentially giving back clean energy to prevent someone else from emitting carbon. Most companies become carbon neutral by investing in offsets that primarily avoid emissions, such as paying people not to cut down trees or buying RECs. These offsets do not actually remove the carbon the companies are emitting.
A net-zero company must actually remove as much carbon as it emits. Though the company is still creating carbon emissions, those emissions are equal to the amount of carbon the company removes.
If a company calls itself carbon negative, it is removing more carbon than it emits each year. This should be the gold standard for how companies operate. None of the FAANG companies (Facebook, Apple, Amazon, Netflix, and Google) claims to be carbon negative today, but Microsoft has issued a press release stating it will be carbon negative by 2030.
Power usage effectiveness, or PUE, is defined as the total energy required to power a datacenter (including lights and cooling) divided by the energy used by its servers. A perfect PUE would be 1.0, since 100% of electricity consumption would go to computation. Conventional datacenters have a PUE of about 2.0, while hyperscalers have driven theirs down to about 1.2. According to a 2019 study from the Uptime Institute, which surveyed 1,600 datacenters, the average PUE was 1.67.
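The ratio described above can be sketched in a few lines of Python. This is a minimal illustration of the definition only; the kWh figures are made up to match the conventional (~2.0) and hyperscaler (~1.2) numbers cited in the text.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy.

    A value of 1.0 would mean every kWh drawn goes to computation.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A conventional datacenter: for every 2 kWh drawn, only 1 kWh reaches servers.
print(pue(2000, 1000))  # 2.0
# A hyperscaler-class facility: only 20% overhead for cooling, lights, etc.
print(pue(1200, 1000))  # 1.2
```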
PUE as a method of measurement is a point of contention. PUE does not account for location, which means a datacenter sited in a part of the world that can benefit from free cooling with outside air will have a lower PUE than one in a very hot climate. PUE should also be measured as an annual average, since changing seasons affect a datacenter's cooling needs over the course of a year. According to a study from the University of Leeds, "comparing a PUE value of datacenters is somewhat meaningless unless it is known whether it is operating at full capacity or not."
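The point about annual averaging matters in practice: the annual PUE should be computed from the year's total energy, not by averaging per-season ratios, because both cooling overhead and IT load vary with the weather. A minimal sketch, using entirely hypothetical seasonal meter readings:

```python
# Hypothetical seasonal meter readings: (total facility kWh, IT equipment kWh).
readings = [
    (308_000, 280_000),  # winter: outside air provides free cooling
    (360_000, 300_000),  # spring
    (416_000, 320_000),  # summer: chillers work hardest
    (360_000, 300_000),  # autumn
]

# Annual PUE: total facility energy over total IT energy for the whole year.
# (Averaging the four per-season ratios would weight each season equally,
# ignoring that more energy flows in some seasons than others.)
annual_pue = sum(f for f, _ in readings) / sum(it for _, it in readings)
print(round(annual_pue, 3))  # 1.203
```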
Google claims an average yearly PUE of 1.1 across all its datacenters, and individually some are as low as 1.08. One of the actions Google has taken to lower its PUE is using machine learning to cool its datacenters, with inputs from local weather and other factors; for example, if the weather outside is cool enough, the datacenter can use that outside air directly as free cooling. Google can also predict wind-farm output up to 36 hours in advance. Google took all the data from sensors in its facilities monitoring temperature, power, pressure, and other resources and trained neural networks to predict future PUE, temperature, and pressure in its datacenters. From these predictions, Google can automate and recommend actions for keeping its datacenters operating efficiently. Google also sets the temperature of its datacenters to 80°F, rather than the usual 68°F–70°F, saving a lot of cooling power. Weather local to the datacenter is a huge factor: Google's Singapore datacenter, for example, has the highest PUE and is the least efficient of its sites because Singapore is hot and humid year-round.
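The shape of the prediction problem can be illustrated with a toy model. Google uses neural networks over many sensor streams; the sketch below substitutes a plain least-squares fit on synthetic data (the temperature/humidity features, coefficients, and noise are all invented for illustration), just to show how weather inputs map to a predicted PUE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sensor log: outside temperature (°C) and relative humidity (%).
temp = rng.uniform(5, 35, size=200)
humidity = rng.uniform(20, 90, size=200)
# Invented ground truth: PUE rises with heat and humidity, plus measurement noise.
pue_obs = 1.05 + 0.004 * temp + 0.001 * humidity + rng.normal(0, 0.01, 200)

# Fit a linear model by least squares (a toy stand-in for a neural network).
X = np.column_stack([np.ones_like(temp), temp, humidity])
coef, *_ = np.linalg.lstsq(X, pue_obs, rcond=None)

# Predict PUE for a cool, dry day versus a hot, humid one (Singapore-like).
cool_day = float(coef @ [1.0, 10.0, 30.0])
hot_day = float(coef @ [1.0, 34.0, 85.0])
print(round(cool_day, 2), round(hot_day, 2))
```

A controller could compare such predictions against live readings and flag (or automatically correct) cooling settings that drift from the expected operating point.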
About the Author:
Jessie Frazelle is the cofounder and chief product officer of the Oxide Computer Company. Before that, she worked on various parts of Linux, including containers, and the Go programming language.