Part 2: Zero Energy vs. Net Zero Case Study

Note: This series was originally posted in January 2016. Due to continued interest in this topic, this series has been updated with new information to maintain relevance as of March 2020.

In the first post of this three-part series, What’s the Difference between Net Zero and Zero Energy? – Part 1, we discussed the introduction of a new designation in the energy world: the “zero energy” project. This is in contrast to the “net zero” goal that has been more widely pursued within the industry.

This post will explore what it would mean for a single building to achieve zero energy status versus net zero status. To simplify the explanation, we will present all energy use and production in a common unit of measurement, here kilo-British thermal units (kBTU). Electricity consumption is commonly measured in kilowatt-hours (kWh; 1 kWh = 3.413 kBTU), and natural gas consumption in therms (1 therm = 100 kBTU).
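For readers who like to check the math, here is a quick sketch of those conversions in Python; the constants are simply the ones quoted above.

```python
# Unit conversions used throughout this post (constants from the text above).
KBTU_PER_KWH = 3.413    # 1 kWh of electricity = 3.413 kBTU
KBTU_PER_THERM = 100.0  # 1 therm of natural gas = 100 kBTU

def kwh_to_kbtu(kwh: float) -> float:
    """Convert electricity use in kWh to kBTU."""
    return kwh * KBTU_PER_KWH

def therms_to_kbtu(therms: float) -> float:
    """Convert natural gas use in therms to kBTU."""
    return therms * KBTU_PER_THERM

# Example: 58,600 kWh of electricity is roughly 200,000 kBTU,
# and 1,000 therms of gas is exactly 100,000 kBTU.
print(kwh_to_kbtu(58_600))    # ~200,002 kBTU
print(therms_to_kbtu(1_000))  # 100,000 kBTU
```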

While multiple definitions of the term are used in the industry, “net zero” usually means that a building produces as much renewable energy on-site as it consumes on-site on an annual basis. For example, if a project uses 200,000 kBTU of electricity and 100,000 kBTU of natural gas, it would have to produce 300,000 kBTU of on-site renewable energy, typically as photovoltaic (PV) electricity or solar thermal heat, to be considered net zero. The common “net zero” designation does not require any on-site storage of energy, nor does it preclude being connected to standard utilities.
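Expressed as a quick calculation, the site-energy balance behind this common definition looks like the following sketch (a simplified illustration, not an official compliance method); the figures are the ones from the example above.

```python
def is_net_zero(electricity_kbtu: float, gas_kbtu: float,
                onsite_renewable_kbtu: float) -> bool:
    """Site-energy 'net zero' test: annual on-site renewable production
    must meet or exceed annual on-site consumption, all in kBTU."""
    consumption = electricity_kbtu + gas_kbtu
    return onsite_renewable_kbtu >= consumption

# The example from the text: 200,000 kBTU of electricity plus 100,000 kBTU
# of natural gas requires 300,000 kBTU of on-site renewable production.
print(is_net_zero(200_000, 100_000, 300_000))  # True
print(is_net_zero(200_000, 100_000, 250_000))  # False
```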

In contrast, “zero energy,” as defined by the Department of Energy (DOE), means that a building produces as much renewable energy on-site as it consumes in source energy (accounting for the different amounts of energy needed to obtain each type of fuel and deliver it to the project site) on an annual basis.
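To make the site-versus-source distinction concrete, here is a minimal sketch following a simplified reading of the definition above. The source-site ratios are assumed placeholders for illustration only; the actual factors vary by fuel and region, and the official values come from the DOE and programs such as ENERGY STAR Portfolio Manager.

```python
# Illustrative source-site ratios (ASSUMED for this sketch; real factors
# vary by fuel, region, and program -- consult DOE/ENERGY STAR values).
SOURCE_FACTORS = {
    "electricity": 3.0,   # losses in generation and transmission
    "natural_gas": 1.05,  # extraction and pipeline delivery losses
}

def source_energy_kbtu(site_use_kbtu: dict) -> float:
    """Weight each fuel's site energy by its source-site ratio."""
    return sum(kbtu * SOURCE_FACTORS[fuel]
               for fuel, kbtu in site_use_kbtu.items())

def is_zero_energy(site_use_kbtu: dict, onsite_renewable_kbtu: float) -> bool:
    """Simplified 'zero energy' test: on-site renewable production must
    meet or exceed SOURCE energy consumption on an annual basis."""
    return onsite_renewable_kbtu >= source_energy_kbtu(site_use_kbtu)

# Same building as before: 200,000 kBTU of electricity and 100,000 kBTU
# of gas comes to 705,000 kBTU of source energy under these assumed factors.
use = {"electricity": 200_000, "natural_gas": 100_000}
print(source_energy_kbtu(use))       # 705000.0
print(is_zero_energy(use, 300_000))  # False
```

Under these assumed factors, the same 300,000 kBTU of PV production that achieved net zero would cover less than half of the building’s source energy, which is the crux of the difference between the two designations.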