Friday, July 09, 2021

Study: How a large cat deity helps people to share space with leopards in India

The story of the Warli and the Waghoba

WILDLIFE CONSERVATION SOCIETY

Research News

Image: The researchers have identified over 150 shrines dedicated to worshipping Waghoba. (Credit: Ramya Nair)

BENGALURU, India (July 8, 2021) - A new study led by WCS-India documents how a big cat deity worshipped by Indigenous Peoples facilitates coexistence between humans and leopards.

The study, published in "Understanding Coexistence with Wildlife", a special issue of the journal Frontiers in Conservation Science (section Human-Wildlife Dynamics), documents how the Indigenous Warli people of Maharashtra, India, worship Waghoba, a leopard/tiger deity, to gain protection from leopards, and how they have lived side by side with the big cats (formerly tigers, too) for centuries. The researchers have identified over 150 shrines dedicated to worshipping Waghoba. They note that negative interactions with leopards, such as livestock depredation, still occur, but are more likely to be accepted under the institution of Waghoba.

Warlis believe in a reciprocal relationship, where Waghoba will protect them from the negative impacts of sharing spaces with big cats if the people worship the deity and conduct the required rituals, especially at the annual festival of Waghbaras.

Researchers suggest that such relationships facilitate the sharing of space between humans and the leopards that live in the landscape. The study also addresses the ways in which the range of institutions and stakeholders present there shape the institution of Waghoba, and thereby contribute to the human-leopard relationship.

Said the study's lead author Ramya Nair of WCS-India: "The main aim of the study is to diversify the way we understand and approach human-wildlife interactions. It does so by shedding light on how local institutions that contribute to co-existence are not devoid of conflict, but have a role in negotiating the conflicts that arise."

Locally produced systems that address issues surrounding human-wildlife interactions may exist in several other cultures and landscapes. The authors note that while conservation interventions have moved toward the inclusion and participation of local communities, we have to recognize that landscapes have a history before our own point of entry into them. This is relevant for present-day wildlife conservation because such traditional institutions are likely to act as tolerance-building mechanisms embedded within the local belief system. Further, it is vital that the dominant stakeholders outside of the Warli community (such as the Forest Department, conservation biologists, and other non-Warli residents who interact with leopards) are informed about and sensitive to these cultural representations, because what the Warli deal with is not just the biological animal.

The study was conducted by researchers from WCS-India, the Norwegian Institute for Nature Research (NINA), and Inland Norway University of Applied Sciences, and was supported by the Wildlife Conservation Trust. Fieldwork was conducted across the Mumbai Suburban, Palghar and Thane districts of Maharashtra in 2018-19. An ethnographic approach was taken to collect data: researchers conducted semi-structured interviews and participant observation (particularly attending worship ceremonies) while documenting Waghoba shrines. The questions explored narratives on the role of Waghoba in the lives of the Warli, the history of Waghoba worship, associated festivals, rituals and traditions, and the ties between Waghoba and human-leopard interactions.

###

WCS (Wildlife Conservation Society)

MISSION: WCS saves wildlife and wild places worldwide through science, conservation action, education, and inspiring people to value nature. To achieve our mission, WCS, based at the Bronx Zoo, harnesses the power of its Global Conservation Program in nearly 60 nations and in all the world's oceans and its five wildlife parks in New York City, visited by 4 million people annually. WCS combines its expertise in the field, zoos, and aquarium to achieve its conservation mission. Visit: newsroom.wcs.org Follow: @WCSNewsroom. For more information: 347-840-1242.

When resistance is futile, new paper advises RAD range of conservation options

ECOLOGICAL SOCIETY OF AMERICA

Research News

Major ecosystem changes like sea-level rise, desertification and lake warming are fueling uncertainty about the future. Many initiatives - such as those fighting to fully eradicate non-native species, or to combat wildfires - focus on actively resisting change to preserve a slice of the past.

However, resisting ecosystem transformation is not always a feasible approach. According to a new paper published today in the Ecological Society of America's journal Frontiers in Ecology and the Environment, accepting and directing ecosystem change are also viable responses, and should not necessarily be viewed as fallback options or last resorts. The paper presents a set of guiding principles for applying a "RAD" strategy - a framework that involves resisting, accepting, or directing ecosystem changes.

"We are facing the harsh reality that, in some locations, ecosystems are transforming at such a pace that we won't be able to restore or rehabilitate them to what they once were," said Abigail Lynch, the paper's lead author and a research fish biologist at the United States Geological Survey (USGS) National Climate Adaptation Science Center. "The RAD framework provides a common language for starting productive conversations about what comes next - when we need to consider options to accept and direct change in addition to just trying to resist it."

The paper was a collaborative effort by 20 federal, state and academic researchers from across the United States. It zeroes in on three National Wildlife Refuges (NWRs) along the East Coast, where sea level is rising at three to four times the global average rate, transforming ecosystems and local communities. Managers of the three NWRs have applied all three of the responses outlined in the paper:

  • John H. Chafee NWR (Rhode Island): Managers are resisting the effects of sea-level rise by depositing dredged sediment on waterlogged salt marshes and securing the sediment with bags of recycled oyster shells.
  • Chincoteague NWR (Virginia): After years of resisting dune overwash, managers are now allowing storm-induced waves to fill in waterfowl impoundments, accepting the landward transport of sand and moving National Park Service visitor infrastructure.
  • Blackwater NWR (Maryland): Managers are directing the effects of sea-level rise by facilitating upslope marsh migration. Assisted marsh migration is ten times cheaper than trying to restore marsh in situ.

According to Erik Beever, a research ecologist at the USGS Northern Rocky Mountain Science Center, research affiliate faculty at Montana State University, and a coauthor of the paper, weighing costs and benefits is paramount when selecting a course of action within the RAD framework.

"A 'resist' approach may involve less cost in the immediate term or may allow the persistence of a culturally treasured species, but it may involve substantially higher costs over the course of a period as short as 10-15 years," said Beever. "For example, if that treasured species' bioclimatic niche no longer occurs within the management area, facilitating its persistence will require more intensive and more costly efforts."

Accepting ecosystem change can involve a fundamental shift in the way of life for communities that rely on an ecosystem's goods and services. However, solutions that focus on resisting change are becoming increasingly impractical as ecological changes occur more frequently and more dramatically. The paper contends that three broad feasibility criteria - ecological, societal, and financial - must be considered when deciding which RAD strategy is most suitable.

Natural resource managers are using options from within the RAD framework to tackle a variety of problems across many different systems, including:

  • Loss of corals in the Mexican state of Quintana Roo
  • Spruce bark beetle epidemic and wildfires on Alaska's Kenai peninsula, where white spruce forests are transforming into grasslands
  • Projected decline of cisco populations under warming conditions in Minnesota lakes

In the RAD framework, accepting change is not a passive approach; rather, it is a deliberate course of action geared toward a defined set of objectives. While the framework still needs to be tested and fine-tuned, the authors ultimately view it as a strategy of empowerment.

"It might be tempting to throw one's hands up in the air when faced with drastic and transformative environmental change, but there are options available," said Laura Thompson, a coauthor who is a research ecologist at the USGS National Climate Adaptation Science Center and adjunct faculty member at the University of Tennessee, Knoxville. "This RAD framework provides the full range of strategies."

###

Journal article:
Lynch AJ, Thompson LM, Beever EA, et al. 2021. Managing for RADical ecosystem change: applying the Resist-Accept-Direct (RAD) framework. Frontiers in Ecology and the Environment. doi.org/10.1002/fee.2377

Authors:
Abigail J Lynch1, Laura M Thompson1,2, Erik A Beever3,4, David N Cole5, Augustin C Engman2,6, Cat Hawkins Hoffman7,

Stephen T Jackson8,9, Trevor J Krabbenhoft10, David J Lawrence7, Douglas Limpinsel11, Robert T Magill12, Tracy A Melvin13, John M Morton14, Robert A Newman15, Jay O Peterson16, Mark T Porath17, Frank J Rahel18, Gregor W Schuurman7, Suresh A Sethi19, and Jennifer L Wilkening20

1National Climate Adaptation Science Center, US Geological Survey (USGS), Reston, VA; 2Department of Forestry, Wildlife and Fisheries, University of Tennessee, Knoxville, TN; 3Northern Rocky Mountain Science Center, USGS, Bozeman, MT; 4Department of Ecology, Montana State University, Bozeman, MT; 5Retired, Aldo Leopold Wilderness Research Institute, Missoula, MT; 6North Carolina Cooperative Fish and Wildlife Research Unit, North Carolina State University, Raleigh, NC; 7Climate Change Response Program, National Park Service, Fort Collins, CO; 8Southwest and South Central Climate Adaptation Science Centers, USGS, Tucson, AZ; 9School of Natural Resources and Environment and Department of Geosciences, University of Arizona, Tucson, AZ; 10Department of Biological Sciences and RENEW Institute, University at Buffalo, Buffalo, NY; 11National Oceanic and Atmospheric Administration (NOAA) Fisheries, Alaska Region, Habitat Conservation Division, Anchorage, AK; 12Coconino County Parks and Recreation, Arizona Game and Fish Department, Flagstaff, AZ; 13Department of Fisheries and Wildlife, Michigan State University, East Lansing, MI; 14Kenai National Wildlife Refuge, US Fish and Wildlife Service (FWS), Soldotna, AK; 15Department of Biology, University of North Dakota, Grand Forks, ND; 16NOAA Fisheries, Office of Science and Technology, Silver Spring, MD; 17Nebraska Game and Parks Commission, Lincoln, NE; 18Department of Zoology and Physiology, Program in Ecology, University of Wyoming, Laramie, WY; 19USGS, New York Cooperative Fish and Wildlife Research Unit, Cornell University, Ithaca, NY; 20Southern Nevada Fish and Wildlife Office, FWS, Las Vegas, NV

Author contact:
Abigail Lynch (ajlynch@usgs.gov)

RAD resources:
https://usgs.gov/casc/rad

###

The Ecological Society of America, founded in 1915, is the world's largest community of professional ecologists and a trusted source of ecological knowledge, committed to advancing the understanding of life on Earth. The 9,000-member Society publishes five journals and a membership bulletin and broadly shares ecological information through policy, media outreach, and education initiatives. The Society's Annual Meeting attracts 4,000 attendees and features the most recent advances in ecological science. Visit the ESA website at https://www.esa.org.

ESA is offering complimentary registration at the 106th Annual Meeting of the Ecological Society of America for press and institutional public information officers (see credential policy). The meeting will feature live plenaries, panels and Q&A sessions from August 2-6, 2021. To apply for press registration, please contact ESA Public Information Manager Heidi Swanson at heidi@esa.org.


 

Remotely-piloted sailboats monitor 'cold pools' in tropical environments

UNIVERSITY OF WASHINGTON

Research News

Image: Saildrone uncrewed surface vehicles (USVs), like the one pictured here, made measurements of atmospheric cold pools in remote regions of the tropical Pacific. (Credit: Saildrone, Inc.)

Conditions in the tropical ocean affect weather patterns worldwide. The most well-known examples are El Niño or La Niña events, but scientists believe other key elements of the tropical climate remain undiscovered.

In a study recently published in Geophysical Research Letters, scientists from the University of Washington and NOAA's Pacific Marine Environmental Laboratory use remotely-piloted sailboats to gather data on cold air pools, or pockets of cooler air that form below tropical storm clouds.

"Atmospheric cold pools are cold air masses that flow outward beneath intense thunderstorms and alter the surrounding environment," said lead author Samantha Wills, a postdoctoral researcher at the Cooperative Institute for Climate, Ocean and Ecosystem Studies. "They are a key source of variability in surface temperature, wind and moisture over the ocean."

The paper is one of the first tropical Pacific studies to rely on data from Saildrones, wind-propelled sailing drones with a tall, hard wing and solar-powered scientific instruments. Co-authors on the NOAA-funded study are Dongxiao Zhang at CICOES and Meghan Cronin at NOAA.

Atmospheric cold pools produce dramatic changes in air temperature and wind speed near the surface of the tropical ocean. The pockets of cooler air form when rain evaporates below thunderstorm clouds. The resulting dense air sinks in downdrafts that, upon hitting the ocean surface, spread out into cold pools ranging from 6 to 125 miles (10 to 200 kilometers) across, producing temperature fronts and strong winds that affect their surroundings. How this affects the larger atmospheric circulation is unclear.

"Results from previous studies suggest that cold pools are important for triggering and organizing storm activity over tropical ocean regions," Wills said.

To understand the possible role of cold pools in larger tropical climate cycles, scientists need detailed measurements of these events, but it is hard to witness an event as it happens. The new study used uncrewed surface vehicles, or USVs, to observe the phenomena.

Over three multi-month missions between 2017 and 2019, 10 USVs covered over 85,000 miles (137,000 kilometers) and made measurements of more than 300 cold pool events, defined as temperature drops of at least 1.5 degrees Celsius in 10 minutes. In one case, a fleet of four vehicles separated by several miles captured the minute-by-minute evolution of an event and revealed how the cold pool propagated across the region.
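
The event definition above (an air-temperature drop of at least 1.5 degrees Celsius within 10 minutes) is simple enough to apply directly to a USV time series. The sketch below is illustrative only - it is not the authors' processing code, and the one-minute sampling interval and variable names are assumptions:

    # Illustrative sketch (not the study's code): flag cold-pool onsets in a
    # 1-minute air-temperature record as drops of >= 1.5 C within 10 minutes.
    import pandas as pd

    def detect_cold_pools(series: pd.Series, window: str = "10min",
                          threshold: float = 1.5) -> pd.DatetimeIndex:
        """Return timestamps where temperature fell by >= `threshold` (deg C)
        relative to the warmest value in the preceding `window`.
        Assumes a DatetimeIndex at roughly 1-minute resolution."""
        rolling_max = series.rolling(window).max()   # warmest reading in the trailing window
        drop = rolling_max - series                  # cooling relative to that maximum
        # In practice, consecutive flagged minutes would be grouped into one event.
        return series.index[drop >= threshold]

    # Tiny synthetic example: a 3 C drop at minute 60 is detected.
    idx = pd.date_range("2019-01-01", periods=120, freq="1min")
    temps = pd.Series([28.0] * 60 + [25.0] * 60, index=idx)
    print(detect_cold_pools(temps)[:3])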

"This technology is exciting as it allows us to collect observations over hard-to-reach, under-sampled ocean regions for extended periods of time," Wills said.

The paper includes observations of air temperature, wind speed, humidity, air pressure, sea surface temperature and ocean salinity during cold pool events. The authors use the data to better describe these phenomena, including how much and how quickly the air temperature drops, how long it takes the wind to reach peak speeds, and how nearby sea surface temperatures change. Results can be used to evaluate mathematical models of tropical convection and to explore further questions, such as how the gusts created by the temperature difference affect the transfer of heat between the air and ocean.

###

For more information, contact Wills at smwills@uw.edu

 

Air pollution contributes to COVID-19 severity, suggests study in one of America's most polluted cities

EUROPEAN SOCIETY OF CLINICAL MICROBIOLOGY AND INFECTIOUS DISEASES

Research News

Long-term exposure to high levels of air pollutants, especially fine particulate matter (PM2.5), appears to have a significant influence on outcomes for people hospitalised with COVID-19, according to a large, multicentre observational study being presented at the European Congress of Clinical Microbiology & Infectious Diseases (ECCMID) held online this year.

The greater the exposure, the greater the risk, researchers found. Each small increase (1 µg/m³) in long-term PM2.5 exposure was associated with more than three times the odds of being mechanically ventilated and twice the odds of a stay in the ICU.

"Our study calls attention to the systemic inequalities that may have led to the stark differences in COVID-19 outcomes along racial and ethnic lines", says Dr Anita Shallal from the Henry Ford Hospital in Detroit, USA. "Communities of colour are more likely to be located in areas closer to industrial pollution, and to work in businesses that expose them to air pollution".

According to the American Lung Association, Detroit is the 12th most polluted city in the USA measured by year-round fine particle pollution (PM2.5) [1]. Ambient air pollution--including potentially harmful pollutants such as PM2.5 and toxic gases emitted by industries, households, and vehicles--can enhance inflammation and oxidative stress in the respiratory system, exacerbating pre-existing lung disease. Air pollution has been linked with worse health outcomes, including increased risk of dying, from respiratory viruses like influenza.

To examine the association between air pollution and the severity of COVID-19 outcomes, researchers retrospectively analysed data from 2,038 adults with COVID-19 admitted to four large hospitals within the Henry Ford Health System between March 12 and April 24, 2020. Patients were followed until May 27, 2020.

The researchers collected data on where participants lived as well as data from the US Environmental Protection Agency and other sources on local levels of pollutants including PM2.5, ozone, and lead paint (percentage of houses built before 1960). They explored the association between COVID-19 outcomes and exposure to PM2.5, ozone, lead paint, traffic, hazardous waste, and waste water discharge.

They found that patients who were male, black, obese, or had more severe long-term health conditions were much more likely to be mechanically ventilated and admitted to the ICU. So too were patients living in areas with higher levels of PM2.5 and lead paint.

Even after accounting for potentially influential factors including age, BMI, and underlying health conditions, the analysis found that being male, obese, and having more severe long-term health conditions were good predictors of dying following admission. Similarly, higher PM2.5 was an independent predictor of mechanical ventilation and ICU stay, but not of a greater risk of dying from COVID-19.
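
To make the phrase "independent predictor" concrete: odds ratios of this kind typically come from a multivariable logistic regression, where the exponentiated coefficient on PM2.5 is the change in odds per unit of exposure after adjustment. The sketch below is a generic, hypothetical illustration with invented variable names and random data - it is not the study's model, dataset, or result:

    # Hypothetical illustration of an adjusted odds ratio per unit PM2.5
    # (invented data; not the study's model, variables, or results).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "pm25": rng.normal(12, 2, n),          # long-term PM2.5 exposure (ug/m3), invented
        "age": rng.normal(60, 15, n),
        "bmi": rng.normal(30, 6, n),
        "ventilated": rng.integers(0, 2, n),   # placeholder 0/1 outcome
    })

    # Logistic regression adjusting for age and BMI; exp(coef) on pm25 is the
    # odds ratio for mechanical ventilation per 1 ug/m3 increase in exposure.
    model = smf.logit("ventilated ~ pm25 + age + bmi", data=df).fit(disp=0)
    print(np.exp(model.params["pm25"]))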

"The key takeaway is that living in a more polluted neighbourhood is an independent risk factor for severity of COVID-19 disease", says Dr Shallal. "Although it is not clear how air pollutants contribute to more severe disease, it's possible that long-term exposure to air pollution may impair the immune system, leading both to increased susceptibility to viruses and to more severe viral infections. In a double hit, fine particles in air pollution may also act as a carrier for the virus, increasing its spread. Urgent further research is needed to guide policy and environmental protection, to minimise the impact of COVID-19 in highly industrialised communities that are home to our most vulnerable residents."

The authors point out that their study was observational, so can't establish cause. They add that while they adjusted for several influential factors, it is still possible that other factors which could not be fully controlled for, including severity of disease at time of presentation, may contribute to the outcomes observed.

###

 

How air pollution changed during COVID-19 in Park City, Utah

UNIVERSITY OF UTAH

Research News

As luck would have it, the air quality sensors that University of Utah researcher Daniel Mendoza and his colleagues installed in Park City, Utah in September 2019, hoping to observe how pollution rose and fell through the ski season and the Sundance Film Festival, captured a far more impactful natural experiment: the COVID-19 pandemic.

Throughout the pandemic, the sensors watched as air pollution fell in residential and commercial areas during lockdowns, and then rose again with reopenings. The changing levels, which behaved differently in residential and commercial parts of the city, show where pollution is coming from and how it might change in the future under different policies, the researchers found.

"The lockdown period demonstrated how low pollution levels can be and showed what the background pollution is in the area," says Mendoza, a research assistant professor in the Department of Atmospheric Sciences and visiting assistant professor in the Department of City & Metropolitan Planning. "The very low levels of PM2.5 [fine particulate matter] can be considered an aspirational target and could spur increases in renewable and low-polluting energy sources."

The study, supported by the Sustainability Office of Park City, is published in Environmental Research.

Good timing

Before this study, neither Park City nor Summit County, Utah had a long-term record from regulatory air quality sensors. Although the population of Park City is much smaller than that of the Salt Lake Valley, its geography still creates temperature inversions that can trap and concentrate emissions from cars, businesses and other sources. Mendoza, who also holds appointments as an adjunct assistant professor in the Pulmonary Division at the School of Medicine and as a senior scientist at the NEXUS Institute, and his colleagues set up sensors at two locations: one atop the KPCW radio station building in Park City's "Old Town" district, representing a bustling commercial area, and the other at the Park City Municipal Athletic & Recreation Center, in an affluent residential area.

"We are looking to study other areas, including the Salt Lake Valley, but we wanted to focus on Park City because of the novelty of having sensors installed there," Mendoza says. In contrast to the Salt Lake Valley's diverse set of industrial and residential emissions, Park City's emissions are primarily related to heating and on-road traffic. It was already set to be a fascinating study.

"However, as we all know, COVID-19 happened and we had a natural experiment," he says. As restrictions and precautions went into effect, the research team tracked how emissions changed.

Lockdown

Emissions declined during the lockdown period across the city but decreased more in commercial areas. Many residents stayed at home and many offices shifted to remote work. But the emissions, Mendoza says, shifted to the residential areas.

"Due to exposure concerns, many people ordered food, groceries, etc. to be delivered to their homes," he says. "Furthermore, many companies have been allowing people to work from home, at least for part of the week, so car trips moved to residential areas instead of commercial areas."

Studying two clearly different locations in the same city is an important feature of the research, Mendoza says. "The intra-city variability is something that has not been studied in detail and can help us understand potential future emission and pollution patterns, particularly as teleworking is becoming a more viable and accepted option."

The findings can't be directly extrapolated to larger cities, but it stands to reason, Mendoza says, that air pollution emissions may have similarly shifted in many cities from a central city signal to a more dispersed residential pattern. "While traditionally residential areas have had cleaner air, this was not necessarily the case during and following the lockdown periods," he says.

Rebound

The sensors kept watch as activity largely returned to a form of normal in May and June 2020. By the end of the study period in late July 2020, commercial emissions hadn't yet returned to pre-pandemic levels, while residential emissions had made a full rebound. The researchers noted that the emissions rose over the course of two months.

"I think it's comparatively easy to lock down a place - businesses and activities shut down," Mendoza says. "However, reopening takes much more time and thought."

The researchers carefully checked their data and ruled out the possibility that the changes in emissions were due to changing seasons or meteorology. They concluded that changes in human activity produced a measurable change in air quality--a finding with broad implications. Pandemic-level emissions could serve as a baseline, for example, for air pollution reduction goals. The study also showed that residential heating and cooling are significant components of the air quality equation--something for policymakers to consider in the transition to a low-carbon energy economy.

Air pollution has improved following other events in the past, such as the Great Recession of 2008, says Tabitha Benney, associate professor of political science and a co-author on the paper. But those prior events weren't monitored with an intra-city perspective. So the observed trends in Park City, with residential emissions rebounding faster than commercial emissions, came as a surprise.

"However, at the county level, it appears that pollution remains low over the entire study period," she says. "It is only when we use the inter-city perspective that such patterns become apparent. This has important implications for other urban areas as well."

###


 NATURAL CAPITALISM TOO

Dealing with global carbon debt

INTERNATIONAL INSTITUTE FOR APPLIED SYSTEMS ANALYSIS

Research News

As atmospheric concentrations of CO2 continue to rise, we are putting future generations at risk of having to deal with a massive carbon debt. IIASA researchers and international colleagues are calling for immediate action to establish responsibility for carbon debt by implementing carbon removal obligations, for example, during the upcoming revision of the EU Emissions Trading Scheme.

Over the last several decades, governments have collectively pledged to slow global warming through accords such as the Kyoto Protocol and the Paris Agreement. Despite the ratification of these agreements by a large number of countries, the atmospheric concentration of CO2 continues to rise. At the current rate, the remaining quantity of CO2 we can emit while still limiting temperature rise to 1.5°C will be used up within approximately the next ten years. If this so-called 'carbon budget' is depleted before net-zero emissions are achieved globally, we will have to remove one tonne of CO2 from the atmosphere later in the century for every additional tonne of CO2 that we emit after that point. In other words, if we continue on our current trajectory - which is very likely - we will be building up a carbon debt.

The authors of a new study just published in Nature point out that the net-zero pledges made by a growing number of countries in fact already assume that a substantial amount of carbon debt will need to be compensated for by net negative emissions in the long term. Idealized global scenarios from the Intergovernmental Panel on Climate Change (IPCC) Special Report on Global Warming of 1.5°C, for instance, suggest that carbon debt could amount to the equivalent of 2 to 18 years of pre-COVID emissions. This amount is bound to increase if we do not manage to cut CO2 emissions by roughly 50% by 2030, or if significant Earth system feedbacks, such as additional emissions from permafrost thaw, occur.

"With its recently adopted Climate Law, the European Union not only decided to reach net-zero greenhouse gas emissions by 2050, but already for going net negative thereafter, potentially helping to bring down global carbon budget overshoot. However, so far this is not more than a declaration, since any serious discussion on instruments to establish long-term responsibility for large-scale carbon dioxide removal is missing, both from the political and the academic debate," explains IIASA researcher and PhD student at Oxford University, Johannes Bednar, the lead author of the study.

Despite existing ambitious agendas to achieve net-zero emissions, there is generally a lack of strategy for repaying potentially costly carbon debt. By implication, we risk that future generations will end up with massive debt, which is not only questionable from an equity perspective but also significantly reduces our chances of limiting warming to 1.5°C in the long run. To ensure the viability of a future net negative carbon economy, the authors argue that funds for carbon debt repayment need to be collected through carbon pricing while emissions are still in the net positive domain. Economic logic dictates that the latest possible time to start doing so is when the carbon budget becomes depleted.
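
As a rough formalization in our own notation (the paper may define it differently), carbon debt can be thought of as cumulative net emissions in excess of the remaining budget:

\[
D(t) \;=\; \max\!\left(0,\ \int_{t_0}^{t} E_{\mathrm{net}}(\tau)\,d\tau \;-\; B\right),
\]

where $E_{\mathrm{net}}$ is global net CO2 emissions, $B$ is the remaining 1.5°C carbon budget at time $t_0$, and every tonne of accumulated debt $D$ must later be matched by a tonne of net negative emissions.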

Using carbon pricing to collect funds for carbon debt repayment works through both carbon taxes and emissions trading schemes. For carbon taxes, a fraction of tax revenues would need to be earmarked for future net negative emissions, somewhat like paying into trust funds for nuclear decommissioning. Carbon taxes, however, carry the risk that politically set prices collect insufficient funding in the near term to cover highly uncertain CO2 removal costs far in the future, or that the savings are appropriated for other political purposes.

The study shows that in the case of an idealized global emissions trading scheme, emission caps would have to accurately reflect the almost-depleted carbon budget. For existing schemes, like the EU emissions trading scheme, this would imply a downward correction of currently scheduled emission caps. The resulting reduction of emission allowances, which would require CO2 emissions to fall to net-zero within this decade, could, however, be compensated: corporations that continue to emit large amounts of CO2 would take on an obligation to remove an equivalent quantity of CO2 in the future. Carbon debt would consequently be managed through so-called carbon removal obligations, which establish legal responsibility for carbon debt repayment.

Emissions trading schemes would then need to deal with the default risk of carbon debtors. The authors suggest this could be addressed by treating carbon debt like financial debt and imposing interest on it. Interest payments can be viewed as a rental fee for temporarily storing CO2 in the atmosphere. And because carbon removal obligations would be tradeable assets, carbon markets could be progressively de-risked over time.
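
As a stylized illustration of the interest mechanism (the numbers are invented for this example, not taken from the study): if an emitter carries a removal obligation $O_0$ and the scheme charges interest at rate $r$ as a rental fee for keeping that CO2 in the atmosphere, the obligation outstanding after $t$ years is

\[
O(t) = O_0\,(1+r)^{t}, \qquad \text{e.g. } O_0 = 1\ \mathrm{MtCO_2},\ r = 3\%,\ t = 20\ \mathrm{yr} \;\Rightarrow\; O(20) \approx 1.8\ \mathrm{MtCO_2},
\]

so postponing removal raises the quantity that must eventually be removed, which is how such a scheme penalizes carbon debt and rewards early CO2 removal.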

"Carbon removal obligations completely change how we see CO2 removals: from magical tools to enable a 30-year long period of the grand atmospheric restoration project, to a technology option that is developed and tested today and flexibly and more incrementally scaled throughout the 21st century and possibly beyond," notes IIASA researcher and study coauthor Fabian Wagner.

According to the authors, this policy proposal resolves some of the large inconsistencies of the current scenario literature and foreseeable long-run climate policy failures. Instead of overburdening future generations, carbon removal obligations imply a much more equitable distribution of financial flows and costs over time. Moreover, in climate mitigation scenarios a larger portfolio of CO2 removal enabling technologies usually goes hand in hand with increased carbon debt and a large reliance on CO2 removal later in the century. With carbon removal obligations in place, carbon debt is penalized through interest payments. In this case, the authors say, CO2 removal helps to minimize carbon debt and associated risks if it is rolled-out at large scale in the near-term to facilitate a more rapid path to net-zero.

"The idea of intertemporal emission trading has been around for some time. However, its crucial importance for dealing with net negative emissions has only now been discovered. Carbon removal obligations are in principle fully compatible with existing emission trading schemes. Nevertheless, for regulators and financial institutions this marks new territory, and frictionless operation will only be possible after some years of pilot testing," says coauthor Michael Obersteiner, a senior IIASA researcher and Director of the Environmental Change Institute at Oxford University. "With the rapid depletion of the carbon budget, we therefore call for immediate action to establish responsibility for carbon debt by implementing carbon removal obligations," he concludes.

###

Reference

Bednar, J., Obersteiner, M., Baklanov, A., Thomson, M., Wagner, F., Geden, O., Allen, M., Hall, J. (2021). Operationalizing the net negative carbon economy. Nature DOI: 10.1038/s41586-021-03723-9

Contacts:

Researcher contact

Johannes Bednar
Research Scholar
Exploratory Modeling of Human-Natural Systems Research Group
Advancing Systems Analysis Program
Tel: +43 2236 807 382
bednar@iiasa.ac.at

Press Officer

Ansa Heyl
IIASA Press Office
Tel: +43 2236 807 574
Mob: +43 676 83 807 574
heyl@iiasa.ac.at

About IIASA:

The International Institute for Applied Systems Analysis (IIASA) is an international scientific institute that conducts research into the critical issues of global environmental, economic, technological, and social change that we face in the twenty-first century. Our findings provide valuable options to policymakers to shape the future of our changing world. IIASA is independent and funded by prestigious research funding agencies in Africa, the Americas, Asia, and Europe. http://www.iiasa.ac.at

A Road Map for Natural Capitalism

by
Amory B. Lovins,
L. Hunter Lovins, and
Paul Hawken
From the Harvard Business Review (July–August 2007)


Summary. Reprint: R0707P. No one would run a business without accounting for its capital outlays. Yet in 1999, when this article was originally published, most companies overlooked one major capital component—the value of the earth’s ecosystem services.


Editor’s Note: The unsettling warning this article delivers has only grown more urgent since 1999, when it first appeared in HBR. But the value here lies not so much in the alarm that sounds as in the vivid and sometimes startling reconceptualization of how we think about the environment and economic value.

BLOG EDITOR'S NOTE
IT IS NOW THE FIRST DECADE OF THE CLIMATE CRISIS (2021)

The value to the economy of the services provided by the earth’s ecosystem—as distinct from the value of the natural resources we extract from it—runs into tens of trillions of dollars annually, say the authors. They provide numerous examples of companies that leverage this insight in the interest of their own bottom lines and the health of the environment as a whole.

On September 16, 1991, a small group of scientists was sealed inside Biosphere II, a glittering 3.2-acre glass and metal dome in Oracle, Arizona. Two years later, when the radical attempt to replicate the earth’s main ecosystems in miniature ended, the engineered environment was dying. The gaunt researchers had survived only because fresh air had been pumped in. Despite $200 million worth of elaborate equipment, Biosphere II had failed to generate breathable air, drinkable water, and adequate food for just eight people. Yet Biosphere I, the planet we all inhabit, effortlessly performs those tasks every day for 6 billion of us.

Disturbingly, Biosphere I is now itself at risk. The earth’s ability to sustain life, and therefore economic activity, is threatened by the way we extract, process, transport, and dispose of a vast flow of resources—some 220 billion tons a year, or more than 20 times the average American’s body weight every day. With dangerously narrow focus, our industries look only at the exploitable resources of the earth’s ecosystems—its oceans, forests, and plains—and not at the larger services that those systems provide for free. Resources and ecosystem services both come from the earth—even from the same biological systems—but they’re two different things. Forests, for instance, not only produce the resource of wood fiber but also provide such ecosystem services as water storage, habitat, and regulation of the atmosphere and climate. Yet companies that earn income from harvesting the wood fiber resource often do so in ways that damage the forest’s ability to carry out its other vital tasks.

Unfortunately, the cost of destroying ecosystem services becomes apparent only when the services start to break down. In China’s Yangtze basin in 1998, for example, deforestation triggered flooding that killed 3,700 people, dislocated 223 million, and inundated 60 million acres of cropland. That $30 billion disaster forced a logging moratorium and a $12 billion crash program of reforestation.

The reason companies (and governments) are so prodigal with ecosystem services is that the value of those services doesn’t appear on the business balance sheet. But that’s a staggering omission. The economy, after all, is embedded in the environment. Recent calculations published in the journal Nature conservatively estimate the value of all the earth’s ecosystem services to be at least $33 trillion a year. That’s close to the gross world product, and it implies a capitalized book value on the order of half a quadrillion dollars. What’s more, for most of these services, there is no known substitute at any price, and we can’t live without them.
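
The "half a quadrillion dollars" follows from capitalizing the annual service flow at a conventional discount rate; the rate below is our assumption, since the article states only the result:

\[
\text{capitalized value} \;\approx\; \frac{\$33\ \text{trillion/yr}}{r} \;\approx\; \$500\ \text{trillion} \quad \text{for } r \approx 6.5\%.
\]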

This article puts forward a new approach not only for protecting the biosphere but also for improving profits and competitiveness. Some very simple changes to the way we run our businesses, built on advanced techniques for making resources more productive, can yield startling benefits both for today’s shareholders and for future generations.

This approach is called natural capitalism because it’s what capitalism might become if its largest category of capital—the “natural capital” of ecosystem services—were properly valued. The journey to natural capitalism involves four major shifts in business practices, all vitally interlinked:
Dramatically increase the productivity of natural resources. Reducing the wasteful and destructive flow of resources from depletion to pollution represents a major business opportunity. Through fundamental changes in both production design and technology, farsighted companies are developing ways to make natural resources—energy, minerals, water, forests—stretch five, ten, even 100 times further than they do today. These major resource savings often yield higher profits than small resource savings do—or even saving no resources at all would—and not only pay for themselves over time but in many cases reduce initial capital investments.
Shift to biologically inspired production models. Natural capitalism seeks not merely to reduce waste but to eliminate the very concept of waste. In closed-loop production systems, modeled on nature’s designs, every output either is returned harmlessly to the ecosystem as a nutrient, like compost, or becomes an input for manufacturing another product. Such systems can often be designed to eliminate the use of toxic materials, which can hamper nature’s ability to reprocess materials.


Move to a solutions-based business model. The business model of traditional manufacturing rests on the sale of goods. In the new model, value is instead delivered as a flow of services—providing illumination, for example, rather than selling lightbulbs. This model entails a new perception of value, a move from the acquisition of goods as a measure of affluence to one where well-being is measured by the continuous satisfaction of changing expectations for quality, utility, and performance. The new relationship aligns the interests of providers and customers in ways that reward them for implementing the first two innovations of natural capitalism—resource productivity and closed-loop manufacturing.
Reinvest in natural capital. Ultimately, business must restore, sustain, and expand the planet’s ecosystems so that they can produce their vital services and biological resources even more abundantly. Pressures to do so are mounting as human needs expand, the costs engendered by deteriorating ecosystems rise, and the environmental awareness of consumers increases. Fortunately, these pressures all create business value.

Natural capitalism is not motivated by a current scarcity of natural resources. Indeed, although many biological resources, like fish, are becoming scarce, most mined resources, such as copper and oil, seem ever more abundant. Indices of average commodity prices are at 28-year lows, thanks partly to powerful extractive technologies, which are often subsidized and whose damage to natural capital remains unaccounted for. Yet even despite these artificially low prices, using resources manyfold more productively can now be so profitable that pioneering companies—large and small—have already embarked on the journey toward natural capitalism.1

Still the question arises—if large resource savings are available and profitable, why haven’t they all been captured already? The answer is simple: Scores of common practices in both the private and public sectors systematically reward companies for wasting natural resources and penalize them for boosting resource productivity. For example, most companies expense their consumption of raw materials through the income statement but pass resource-saving investment through the balance sheet. That distortion makes it more tax efficient to waste fuel than to invest in improving fuel efficiency. In short, even though the road seems clear, the compass that companies use to direct their journey is broken. Later we’ll look in more detail at some of the obstacles to resource productivity—and some of the important business opportunities they reveal. But first, let’s map the route toward natural capitalism.
Dramatically Increase the Productivity of Natural Resources

In the first stage of a company’s journey toward natural capitalism, it strives to wring out the waste of energy, water, materials, and other resources throughout its production systems and other operations. There are two main ways companies can do this at a profit. First, they can adopt a fresh approach to design that considers industrial systems as a whole rather than part by part. Second, companies can replace old industrial technologies with new ones, particularly with those based on natural processes and materials.
Implementing whole-system design.

Inventor Edwin Land once remarked that “people who seem to have had a new idea have often simply stopped having an old idea.” This is particularly true when designing for resource savings. The old idea is one of diminishing returns—the greater the resource saving, the higher the cost. But that old idea is giving way to the new idea that bigger savings can cost less—that saving a large fraction of resources can actually cost less than saving a small fraction of resources. This is the concept of expanding returns, and it governs much of the revolutionary thinking behind whole-system design. Lean manufacturing is an example of whole-system thinking that has helped many companies dramatically reduce such forms of waste as lead times, defect rates, and inventory. Applying whole-system thinking to the productivity of natural resources can achieve even more.

Consider Interface, a leading maker of materials for commercial interiors. In its new Shanghai carpet factory, a liquid had to be circulated through a standard pumping loop similar to those used in nearly all industries. A top European company designed the system to use pumps requiring a total of 95 horsepower. But before construction began, Interface’s engineer, Jan Schilham, realized that two embarrassingly simple design changes would cut that power requirement to only seven horsepower—a 92% reduction. His redesigned system cost less to build, involved no new technology, and worked better in all respects.

What two design changes achieved this 12-fold saving in pumping power? First, Schilham chose fatter-than-usual pipes, which create much less friction than thin pipes do and therefore need far less pumping energy. The original designer had chosen thin pipes because, according to the textbook method, the extra cost of fatter ones wouldn’t be justified by the pumping energy that they would save. This standard design trade-off optimizes the pipes by themselves but “pessimizes” the larger system. Schilham optimized the whole system by counting not only the higher capital cost of the fatter pipes but also the lower capital cost of the smaller pumping equipment that would be needed. The pumps, motors, motor controls, and electrical components could all be much smaller because there’d be less friction to overcome. Capital cost would fall far more for the smaller equipment than it would rise for the fatter pipe. Choosing big pipes and small pumps—rather than small pipes and big pumps—would therefore make the whole system cost less to build, even before counting its future energy savings.
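
A back-of-the-envelope scaling (standard pipe-flow physics, not spelled out in the article) shows why fatter pipes pay off so steeply. For a fixed volumetric flow $Q$ in turbulent flow with a roughly constant friction factor $f$, the Darcy-Weisbach relation gives

\[
\Delta p \;\propto\; f\,\frac{L}{d}\,\rho v^{2}, \qquad v = \frac{4Q}{\pi d^{2}} \;\;\Rightarrow\;\; P_{\mathrm{pump}} = Q\,\Delta p \;\propto\; \frac{1}{d^{5}},
\]

so increasing the pipe diameter by only about 50% cuts friction losses by a factor of roughly $1.5^{5} \approx 7.6$, before counting the smaller pumps and motors that the lower friction allows.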

Schilham’s second innovation was to reduce the friction even more by making the pipes short and straight rather than long and crooked. He did this by laying out the pipes first, then positioning the various tanks, boilers, and other equipment that they connected. Designers normally locate the production equipment in arbitrary positions and then have a pipe fitter connect everything. Awkward placement forces the pipes to make numerous bends that greatly increase friction. The pipe fitters don’t mind: They’re paid by the hour, they profit from the extra pipes and fittings, and they don’t pay for the oversized pumps or inflated electric bills. In addition to reducing those four kinds of costs, Schilham’s short, straight pipes were easier to insulate, saving an extra 70 kilowatts of heat loss and repaying the insulation’s cost in three months.

This small example has big implications for two reasons. First, pumping is the largest application of motors, and motors use three-quarters of all industrial electricity. Second, the lessons are very widely relevant. Interface’s pumping loop shows how simple changes in design mentality can yield huge resource savings and returns on investment. This isn’t rocket science; often it’s just a rediscovery of good Victorian engineering principles that have been lost because of specialization.

Whole-system thinking can help managers find small changes that lead to big savings that are cheap, free, or even better than free (because they make the whole system cheaper to build). They can do this because often the right investment in one part of the system can produce multiple benefits throughout the system. For example, companies would gain 18 distinct economic benefits—of which direct energy savings is only one—if they switched from ordinary motors to premium-efficiency motors or from ordinary lighting ballasts (the transformer-like boxes that control fluorescent lamps) to electronic ballasts that automatically dim the lamps to match available daylight. If everyone in America integrated these and other selected technologies into all existing motor and lighting systems in an optimal way, the nation’s $220-billion-a-year electric bill would be cut in half. The after-tax return on investing in these changes would in most cases exceed 100% per year.

The profits from saving electricity could be increased even further if companies also incorporated the best off-the-shelf improvements into their building structure and their office, heating, cooling, and other equipment. Overall, such changes could cut national electricity consumption by at least 75% and produce returns of around 100% a year on the investments made. More important, because workers would be more comfortable, better able to see, and less fatigued by noise, their productivity and the quality of their output would rise. Eight recent case studies of people working in well-designed, energy-efficient buildings measured labor productivity gains of 6% to 16%. Since a typical office pays about 100 times as much for people as it does for energy, this increased productivity in people is worth about 6 to 16 times as much as eliminating the entire energy bill.
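
The multiplier in that last sentence is straightforward arithmetic: with staff costs roughly 100 times energy costs,

\[
0.06 \times 100 = 6 \qquad\text{and}\qquad 0.16 \times 100 = 16,
\]

so a 6% to 16% gain in labor productivity is worth 6 to 16 times the entire energy bill.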

Energy-saving, productivity-enhancing improvements can often be achieved at even lower cost by piggybacking them onto the periodic renovations that all buildings and factories need. A recent proposal for reallocating the normal 20-year renovation budget for a standard 200,000-square-foot glass-clad office tower near Chicago shows the potential of whole-system design. The proposal suggested replacing the aging glazing system with a new kind of window that lets in nearly six times more daylight than the old sun-blocking glass units. The new windows would reduce the flow of heat and noise four times better than traditional windows do. So even though the glass costs slightly more, the overall cost of the renovation would be reduced because the windows would let in cool, glare-free daylight that, when combined with more efficient lighting and office equipment, would reduce the need for air-conditioning by 75%. Installing a fourfold more efficient, but fourfold smaller, air-conditioning system would cost $200,000 less than giving the old system its normal 20-year renovation. The $200,000 saved would, in turn, pay for the extra cost of the new windows and other improvements. This whole-system approach to renovation would not only save 75% of the building’s total energy use, it would also greatly improve the building’s comfort and marketability. Yet it would cost essentially the same as the normal renovation. There are about 100,000 20-year-old glass office towers in the United States that are ripe for such improvement.

Major gains in resource productivity require that the right steps be taken in the right order. Small changes made at the downstream end of a process often create far larger savings further upstream. In almost any industry that uses a pumping system, for example, saving one unit of liquid flow or friction in an exit pipe saves about ten units of fuel, cost, and pollution at the power station.
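
The roughly tenfold upstream multiplier reflects compounded losses along the delivery chain. With illustrative efficiencies (our own round numbers, not the article's):

\[
\eta_{\text{power plant}} \times \eta_{\text{grid}} \times \eta_{\text{motor}} \times \eta_{\text{pump, pipes, valves}}
\;\approx\; 0.33 \times 0.92 \times 0.90 \times 0.40 \;\approx\; 0.11,
\]

so only about one unit in ten of the fuel burned at the power station survives as useful flow at the end of the pipe - and each unit of flow or friction saved there avoids roughly ten units of fuel, cost, and pollution upstream.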

Of course, the original reduction in flow itself can bring direct benefits, which are often the reason changes are made in the first place. In the 1980s, while California’s industry grew 30%, for example, its water use was cut by 30%, largely to avoid increased wastewater fees. But the resulting reduction in pumping energy (and the roughly tenfold larger saving in power-plant fuel and pollution) delivered bonus savings that were at the time largely unanticipated.

To see how downstream cuts in resource consumption can create huge savings upstream, consider how reducing the use of wood fiber disproportionately reduces the pressure to cut down forests. In round numbers, half of all harvested wood fiber is used for such structural products as lumber; the other half is used for paper and cardboard. In both cases, the biggest leverage comes from reducing the amount of the retail product used. If it takes, for example, three pounds of harvested trees to produce one pound of product, then saving one pound of product will save three pounds of trees—plus all the environmental damage avoided by not having to cut them down in the first place.

The easiest savings come from not using paper that’s unwanted or unneeded. In an experiment at its Swiss headquarters, for example, Dow Europe cut office paper flow by about 30% in six weeks simply by discouraging unneeded information. For instance, mailing lists were eliminated and senders of memos got back receipts indicating whether each recipient had wanted the information. Taking those and other small steps, Dow was also able to increase labor productivity by a similar proportion because people could focus on what they really needed to read. Similarly, Danish hearing-aid maker Oticon saved upwards of 30% of its paper as a by-product of redesigning its business processes to produce better decisions faster. Setting the default on office printers and copiers to double-sided mode reduced AT&T’s paper costs by about 15%. Recently developed copiers and printers can even strip off old toner and printer ink, permitting each sheet to be reused about ten times.

Further savings can come from using thinner but stronger and more opaque paper and from designing packaging more thoughtfully. In a 30-month effort at reducing such waste, Johnson & Johnson saved 2,750 tons of packaging, 1,600 tons of paper, $2.8 million, and at least 330 acres of forest annually. The downstream savings in paper use are multiplied by the savings further upstream, as less need for paper products (or less need for fiber to make each product) translates into less raw paper, less raw paper means less pulp, and less pulp requires fewer trees to be harvested from the forest. Recycling paper and substituting alternative fibers such as wheat straw will save even more.

Comparable savings can be achieved for the wood fiber used in structural products. Pacific Gas and Electric, for example, sponsored an innovative design developed by Davis Energy Group that used engineered wood products to reduce the amount of wood needed in a stud wall for a typical tract house by more than 70%. These walls were stronger, cheaper, more stable, and insulated twice as well. Using them enabled the designers to eliminate heating and cooling equipment in a climate where temperatures range from freezing to 113°F. Eliminating the equipment made the whole house much less expensive both to build and to run while still maintaining high levels of comfort. Taken together, these and many other savings in the paper and construction industries could make our use of wood fiber so much more productive that, in principle, the entire world’s present wood fiber needs could probably be met by an intensive tree farm about the size of Iowa.
Adopting innovative technologies.

Implementing whole-system design goes hand in hand with introducing alternative, environmentally friendly technologies. Many of these are already available and profitable but not widely known. Some, like the “designer catalysts” that are transforming the chemical industry, are already runaway successes. Others are still making their way to market, delayed by cultural rather than by economic or technical barriers.

The automobile industry is particularly ripe for technological change. After a century of development, motorcar technology is showing signs of age. Only 1% of the energy consumed by today’s cars is actually used to move the driver: Only 15% to 20% of the power generated by burning gasoline reaches the wheels (the rest is lost in the engine and drivetrain) and 95% of the resulting propulsion moves the car, not the driver. The industry’s infrastructure is hugely expensive and inefficient. Its convergent products compete for narrow niches in saturated core markets at commodity-like prices. Auto making is capital intensive, and product cycles are long. It is profitable in good years but subject to large losses in bad years. Like the typewriter industry just before the advent of personal computers, it is vulnerable to displacement by something completely different.

Enter the Hypercar. Since 1993, when Rocky Mountain Institute placed this automotive concept in the public domain, several dozen current and potential auto manufacturers have committed billions of dollars to its development and commercialization. The Hypercar integrates the best existing technologies to reduce the consumption of fuel as much as 85% and the amount of materials used up to 90% by introducing four main innovations.

First, making the vehicle out of advanced polymer composites, chiefly carbon fiber, reduces its weight by two-thirds while maintaining crashworthiness. Second, aerodynamic design and better tires reduce air resistance by as much as 70% and rolling resistance by up to 80%. Together, these innovations save about two-thirds of the fuel. Third, 30% to 50% of the remaining fuel is saved by using a “hybrid-electric” drive. In such a system, the wheels are turned by electric motors whose power is made onboard by a small engine or turbine, or even more efficiently by a fuel cell. The fuel cell generates electricity directly by chemically combining stored hydrogen with oxygen, producing pure hot water as its only by-product. Interactions between the small, clean, efficient power source and the ultralight, low-drag auto body then further reduce the weight, cost, and complexity of both. Fourth, much of the traditional hardware—from transmissions and differentials to gauges and certain parts of the suspension—can be replaced by electronics controlled with highly integrated, customizable, and upgradable software.
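
The "as much as 85%" fuel saving quoted above follows from multiplying, not adding, the individual savings:

\[
\underbrace{\tfrac{1}{3}}_{\text{left after weight and drag cuts}} \times \underbrace{(0.5\ \text{to}\ 0.7)}_{\text{left after hybrid drive}} \;\approx\; 0.17\ \text{to}\ 0.23,
\]

that is, only about 17% to 23% of the original fuel demand remains - a saving of roughly 77% to 83%, approaching the 85% headline figure when the most efficient options (such as the fuel cell) are used.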

These technologies make it feasible to manufacture pollution-free, high-performance cars, sport utilities, pickup trucks, and vans that get 80 to 200 miles per gallon (or its energy equivalent in other fuels). These improvements will not require any compromise in quality or utility. Fuel savings will not come from making the vehicles small, sluggish, unsafe, or unaffordable, nor will they depend on government fuel taxes, mandates, or subsidies. Rather, Hypercars will succeed for the same reason that people buy compact discs instead of phonograph records: The CD is a superior product that redefines market expectations. From the manufacturers’ perspective, Hypercars will cut cycle times, capital needs, body part counts, and assembly effort and space by as much as tenfold. Early adopters will have a huge competitive advantage—which is why dozens of corporations, including most automakers, are now racing to bring Hypercar-like products to market.2

In the long term, the Hypercar will transform industries other than automobiles. It will displace about an eighth of the steel market directly and most of the rest eventually, as carbon fiber becomes far cheaper. Hypercars and their cousins could ultimately save as much oil as OPEC now sells. Indeed, oil may well become uncompetitive as a fuel long before it becomes scarce and costly. Similar challenges face the coal and electricity industries because the development of the Hypercar is likely to accelerate greatly the commercialization of inexpensive hydrogen fuel cells. These fuel cells will help shift power production from centralized coal-fired and nuclear power stations to networks of decentralized, small-scale generators. In fact, fuel cell–powered Hypercars could themselves be part of these networks. They’d be, in effect, 20-kilowatt power plants on wheels. Given that cars are left parked—that is, unused—more than 95% of the time, these Hypercars could be plugged into a grid and could then sell back enough electricity to repay as much as half the predicted cost of leasing them. A national Hypercar fleet could ultimately have five to ten times the generating capacity of the national electric grid.
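The "five to ten times" claim can be checked with back-of-the-envelope arithmetic. In the sketch below, the fleet size and grid capacity are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope check of "five to ten times the generating capacity
# of the national electric grid." Fleet size and grid capacity are
# illustrative assumptions, not figures from the article.
vehicles = 200e6            # assumed U.S. light-vehicle fleet
kw_per_hypercar = 20        # "20-kilowatt power plants on wheels"
grid_capacity_gw = 750      # assumed U.S. generating capacity, in gigawatts

fleet_capacity_gw = vehicles * kw_per_hypercar / 1e6   # kW -> GW
print(f"Fleet capacity: {fleet_capacity_gw:,.0f} GW, "
      f"or {fleet_capacity_gw / grid_capacity_gw:.1f}x the grid")
# -> about 4,000 GW, roughly 5x a ~750 GW grid
```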

As radical as it sounds, the Hypercar is not an isolated case. Similar ideas are emerging in such industries as chemicals, semiconductors, general manufacturing, transportation, water and wastewater treatment, agriculture, forestry, energy, real estate, and urban design. For example, the amount of carbon dioxide released for each microchip manufactured can be reduced almost 100-fold through improvements that are now profitable or soon will be.

Some of the most striking developments come from emulating nature’s techniques. In her book, Biomimicry, Janine Benyus points out that spiders convert digested crickets and flies into silk that’s as strong as Kevlar without the need for boiling sulfuric acid and high-temperature extruders. Using no furnaces, abalone can convert seawater into an inner shell twice as tough as our best ceramics. Trees turn sunlight, water, soil, and air into cellulose, a sugar stronger than nylon but one-fourth as dense. They then bind it into wood, a natural composite with a higher bending strength than concrete, aluminum alloy, or steel. We may never become as skillful as spiders, abalone, or trees, but smart designers are already realizing that nature’s environmentally benign chemistry offers attractive alternatives to industrial brute force.

Whether through better design or through new technologies, reducing waste represents a vast business opportunity. The U.S. economy is not even 10% as energy efficient as the laws of physics allow. Just the energy thrown off as waste heat by U.S. power stations equals the total energy use of Japan. Materials efficiency is even worse: only about 1% of all the materials mobilized to serve America is actually made into products and still in use six months after sale. In every sector, there are opportunities for reducing the amount of resources that go into a production process, the steps required to run that process, and the amount of pollution generated and by-products discarded at the end. These all represent avoidable costs and hence profits to be won.
Redesign Production According to Biological Models

In the second stage on the journey to natural capitalism, companies use closed-loop manufacturing to create new products and processes that can totally prevent waste. This plus more efficient production processes could cut companies’ long-term materials requirements by more than 90% in most sectors.

The central principle of closed-loop manufacturing, as architect Paul Bierman-Lytle of the engineering firm CH2M Hill puts it, is “waste equals food.” Every output of manufacturing should be either composted into natural nutrients or remanufactured into technical nutrients—that is, it should be returned to the ecosystem or recycled for further production. Closed-loop production systems are designed to eliminate any materials that incur disposal costs, especially toxic ones, because the alternative—isolating them to prevent harm to natural systems—tends to be costly and risky. Indeed, meeting EPA and OSHA standards by eliminating harmful materials often makes a manufacturing process cost less than the hazardous process it replaced. Motorola, for example, formerly used chlorofluorocarbons for cleaning printed circuit boards after soldering. When CFCs were outlawed because they destroy stratospheric ozone, Motorola at first explored such alternatives as orange-peel terpenes. But it turned out to be even cheaper—and to produce a better product—to redesign the whole soldering process so that it needed no cleaning operations or cleaning materials at all.

Closed-loop manufacturing is more than just a theory. The U.S. remanufacturing industry in 1996 reported revenues of $53 billion—more than consumer-durables manufacturing (appliances; furniture; audio, video, farm, and garden equipment). Xerox, whose bottom line has swelled by $700 million from remanufacturing, expects to save another $1 billion just by remanufacturing its new, entirely reusable or recyclable line of “green” photocopiers. What’s more, policy makers in some countries are already taking steps to encourage industry to think along these lines. German law, for example, makes many manufacturers responsible for their products forever, and Japan is following suit.

Combining closed-loop manufacturing with resource efficiency is especially powerful. DuPont, for example, gets much of its polyester industrial film back from customers after they use it and recycles it into new film. DuPont also makes its polyester film ever stronger and thinner so it uses less material and costs less to make. Yet because the film performs better, customers are willing to pay more for it. As DuPont chairman Jack Krol noted in 1997, “Our ability to continually improve the inherent properties [of our films] enables this process [of developing more productive materials, at lower cost, and higher profits] to go on indefinitely.”

Interface is leading the way to this next frontier of industrial ecology. While its competitors are “downcycling” nylon-and-PVC-based carpet into less valuable carpet backing, Interface has invented a new floor-covering material called Solenium, which can be completely remanufactured into identical new product. This fundamental innovation emerged from a clean-sheet redesign. Executives at Interface didn’t ask how they could sell more carpet of the familiar kind; they asked how they could create a dream product that would best meet their customers’ needs while protecting and nourishing natural capital.

Solenium lasts four times longer and uses 40% less material than ordinary carpets—an 86% reduction in materials intensity. What’s more, Solenium is free of chlorine and other toxic materials, is virtually stainproof, doesn’t grow mildew, can easily be cleaned with water, and offers aesthetic advantages over traditional carpets. It’s so superior in every respect that Interface doesn’t market it as an environmental product—just a better one.

Solenium is only one part of Interface’s drive to eliminate every form of waste. Chairman Ray C. Anderson defines waste as “any measurable input that does not produce customer value,” and he considers all inputs to be waste until shown otherwise. Between 1994 and 1998, this zero-waste approach led to a systematic treasure hunt that helped to keep resource inputs constant while revenues rose by $200 million. Indeed, $67 million of the revenue increase can be directly attributed to the company’s 60% reduction in landfill waste.

Subsequently, president Charlie Eitel expanded the definition of waste to include all fossil fuel inputs, and now many customers are eager to buy products from the company’s recently opened solar-powered carpet factory. Interface’s green strategy has not only won plaudits from environmentalists but has also proved a remarkably successful business strategy. Between 1993 and 1998, revenue more than doubled, profits more than tripled, and the number of employees increased by 73%.
Change the Business Model

In addition to its drive to eliminate waste, Interface has made a fundamental shift in its business model—the third stage on the journey toward natural capitalism. The company has realized that clients want to walk on and look at carpets—but not necessarily to own them. Traditionally, broadloom carpets in office buildings are replaced every decade because some portions look worn out. When that happens, companies suffer the disruption of shutting down their offices and removing their furniture. Billions of pounds of carpets are removed each year and sent to landfills, where they will last up to 20,000 years. To escape this unproductive and wasteful cycle, Interface is transforming itself from a company that sells and fits carpets into one that provides floor-covering services.

Under its Evergreen Lease, Interface no longer sells carpets but rather leases a floor-covering service for a monthly fee, accepting responsibility for keeping the carpet fresh and clean. Monthly inspections detect and replace worn carpet tiles. Since at most 20% of an area typically shows at least 80% of the wear, replacing only the worn parts reduces the consumption of carpeting material by about 80%. It also minimizes the disruption that customers experience—worn tiles are seldom found under furniture. Finally, for the customer, leasing carpets can provide a tax advantage by turning a capital expenditure into a tax-deductible expense. The result: The customer gets cheaper and better services that cost the supplier far less to produce. Indeed, the energy saved from not producing a whole new carpet is in itself enough to produce all the carpeting that the new business model requires. Taken together, the fivefold savings in carpeting material that Interface achieves through the Evergreen Lease and the sevenfold materials savings achieved through the use of Solenium deliver a stunning 35-fold reduction in the flow of materials needed to sustain a superior floor-covering service. Remanufacturing, and even making carpet initially from renewable materials, can then reduce the extraction of virgin resources essentially to the company’s goal of zero.
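The 35-fold figure is the product of the two factors described above. A rough reconstruction from the percentages given in the text (the authors quote the rounded factors of seven and five):

```python
# Reconstructing the 35-fold figure from the percentages given in the text.
solenium_material_per_unit_service = (1 - 0.40) / 4        # 40% less material, lasts 4x longer
solenium_factor = 1 / solenium_material_per_unit_service   # ~7x (an ~86% reduction)

lease_material_per_unit_service = 1 - 0.80                 # replacing only worn tiles cuts use ~80%
lease_factor = 1 / lease_material_per_unit_service         # ~5x

combined = solenium_factor * lease_factor
print(f"Solenium: {solenium_factor:.1f}x  Evergreen Lease: {lease_factor:.1f}x  "
      f"Combined: {combined:.0f}x")
# -> about 6.7 x 5 = 33x, which the authors round to the 7 x 5 = 35-fold figure
```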

Interface’s shift to a service-leasing business reflects a fundamental change from the basic model of most manufacturing companies, which still look on their businesses as machines for producing and selling products. The more products sold, the better—at least for the company, if not always for the customer or the earth. But any model that wastes natural resources also wastes money. Ultimately, that model will be unable to compete with a service model that emphasizes solving problems and building long-term relationships with customers rather than making and selling products. The shift to what James Womack of the Lean Enterprise Institute calls a “solutions economy” will almost always improve customer value and providers’ bottom lines because it aligns both parties’ interests, offering rewards for doing more and better with less.

Interface is not alone. Elevator giant Schindler, for example, prefers leasing vertical transportation services to selling elevators because leasing lets it capture the savings from its elevators’ lower energy and maintenance costs. Dow Chemical and Safety-Kleen prefer leasing dissolving services to selling solvents because they can reuse the same solvent scores of times, reducing costs. United Technologies’ Carrier division, the world’s largest manufacturer of air conditioners, is shifting its mission from selling air conditioners to leasing comfort. Making its air conditioners more durable and efficient may compromise future equipment sales, but it provides what customers want and will pay for—better comfort at lower cost. But Carrier is going even further. It’s starting to team up with other companies to make buildings more efficient so that they need less air-conditioning, or even none at all, to yield the same level of comfort. Carrier will get paid to provide the agreed-upon level of comfort, however that’s delivered. Higher profits will come from providing better solutions rather than from selling more equipment. Since comfort with little or no air-conditioning (via better building design) works better and costs less than comfort with copious air-conditioning, Carrier is smart to capture this opportunity itself before its competitors do. As they say at 3M: “We’d rather eat our own lunch, thank you.”

The shift to a service business model promises benefits not just to participating businesses but to the entire economy as well. Womack points out that by helping customers reduce their need for capital goods such as carpets or elevators, and by rewarding suppliers for extending and maximizing asset values rather than for churning them, adoption of the service model will reduce the volatility in the turnover of capital goods that lies at the heart of the business cycle. That would significantly reduce the overall volatility of the world’s economy. At present, the producers of capital goods face feast or famine because the buying decisions of households and corporations are extremely sensitive to fluctuating income. But in a continuous-flow-of-services economy, those swings would be greatly reduced, bringing a welcome stability to businesses. Excess capacity—another form of waste and source of risk—need no longer be retained for meeting peak demand. The result of adopting the new model would be an economy in which we grow and get richer by using less and become stronger by being leaner and more stable.
Reinvest in Natural Capital

The foundation of textbook capitalism is the prudent reinvestment of earnings in productive capital. Natural capitalists who have dramatically raised their resource productivity, closed their loops, and shifted to a solutions-based business model have one key task remaining. They must reinvest in restoring, sustaining, and expanding the most important form of capital—their own natural habitat and biological resource base.

This was not always so important. Until recently, business could ignore damage to the ecosystem because it didn’t affect production and didn’t increase costs. But that situation is changing. In 1998 alone, violent weather displaced 300 million people and caused upwards of $90 billion worth of damage, representing more weather-related destruction than was reported through the entire decade of the 1980s. The increase in damage is strongly linked to deforestation and climate change, factors that accelerate the frequency and severity of natural disasters and are the consequences of inefficient industrialization. If the flow of services from industrial systems is to be sustained or increased in the future for a growing population, the vital flow of services from living systems will have to be maintained or increased as well. Without reinvestment in natural capital, shortages of ecosystem services are likely to become the limiting factor to prosperity in the next century. When a manufacturer realizes that a supplier of key components is overextended and running behind on deliveries, it takes immediate action lest its own production lines come to a halt. The ecosystem is a supplier of key components for the life of the planet, and it is now falling behind on its orders.

Failure to protect and reinvest in natural capital can also hit a company’s revenues indirectly. Many companies are discovering that public perceptions of environmental responsibility, or its lack thereof, affect sales. MacMillan Bloedel, targeted by environmental activists as an emblematic clear-cutter and chlorine user, lost 5% of its sales almost overnight when dropped as a U.K. supplier by Scott Paper and Kimberly-Clark. Numerous case studies show that companies leading the way in implementing changes that help protect the environment tend to gain disproportionate advantage, while companies perceived as irresponsible lose their franchise, their legitimacy, and their shirts. Even businesses that claim to be committed to the concept of sustainable development but whose strategy is seen as mistaken, like Monsanto, are encountering stiffening public resistance to their products. Not surprisingly, University of Oregon business professor Michael Russo, along with many other analysts, has found that a strong environmental rating is “a consistent predictor of profitability.”

The pioneering corporations that have made reinvestments in natural capital are starting to see some interesting paybacks. The independent power producer AES, for example, has long pursued a policy of planting trees to offset the carbon emissions of its power plants. That ethical stance, once thought quixotic, now looks like a smart investment because a dozen brokers are now starting to create markets in carbon reduction. Similarly, certification by the Forest Stewardship Council of certain sustainably grown and harvested products has given Collins Pine the extra profit margins that enabled its U.S. manufacturing operations to survive brutal competition. Taking an even longer view, Swiss Re and other European reinsurers are seeking to cut their storm-damage losses by pressing for international public policy to protect the climate and by investing in climate-safe technologies that also promise good profits. Yet most companies still do not realize that a vibrant ecological web underpins their survival and their business success. Enriching natural capital is not just a public good—it is vital to every company’s longevity.

It turns out that changing industrial processes so that they actually replenish and magnify the stock of natural capital can prove especially profitable because nature does the production; people need to just step back and let life flourish. Industries that directly harvest living resources, such as forestry, farming, and fishing, offer the most suggestive examples. Here are three:
Allan Savory of the Center for Holistic Management in Albuquerque, New Mexico, has redesigned cattle ranching to raise the carrying capacity of rangelands, which have often been degraded not by overgrazing but by undergrazing and grazing the wrong way. Savory’s solution is to keep the cattle moving from place to place, grazing intensively but briefly at each site, so that they mimic the dense but constantly moving herds of native grazing animals that coevolved with grasslands. Thousands of ranchers are estimated to be applying this approach, improving both their range and their profits. This “management-intensive rotational grazing” method, long standard in New Zealand, yields such clearly superior returns that over 15% of Wisconsin’s dairy farms have adopted it in the past few years.

The California Rice Industry Association has discovered that letting nature’s diversity flourish can be more profitable than forcing it to produce a single product. By flooding 150,000 to 200,000 acres of Sacramento Valley rice fields—about 30% of California’s rice-growing area—after harvest, farmers are able to create seasonal wetlands that support millions of wildfowl, replenish groundwater, improve fertility, and yield other valuable benefits. In addition, the farmers bale and sell the rice straw, whose high silica content—formerly an air-pollution hazard when the straw was burned—adds insect resistance and hence value as a construction material when it’s resold instead.

John Todd of Living Technologies in Burlington, Vermont, has used biological Living Machines—linked tanks of bacteria, algae, plants, and other organisms—to turn sewage into clean water. That not only yields cleaner water at a reduced cost, with no toxicity or odor, but it also produces commercially valuable flowers and makes the plant compatible with its residential neighborhood. A similar plant at the Ethel M Chocolates factory in Las Vegas, Nevada, not only handles difficult industrial wastes effectively but is showcased in its public tours.

Although such practices are still evolving, the broad lessons they teach are clear. In almost all climates, soils, and societies, working with nature is more productive than working against it. Reinvesting in nature allows farmers, fishermen, and forest managers to match or exceed the high yields and profits sustained by traditional input-intensive, chemically driven practices. Although much of mainstream business is still headed the other way, the profitability of sustainable, nature-emulating practices is already being proven. In the future, many industries that don’t now consider themselves dependent on a biological resource base will become more so as they shift their raw materials and production processes more to biological ones. There is evidence that many business leaders are starting to think this way. The consulting firm Arthur D. Little surveyed a group of North American and European business leaders and found that 83% of them already believe that they can derive “real business value [from implementing a] sustainable-development approach to strategy and operations.”
A Broken Compass?

If the road ahead is this clear, why are so many companies straying or falling by the wayside? We believe the reason is that the instruments companies use to set their targets, measure their performance, and hand out rewards are faulty. In other words, the markets are full of distortions and perverse incentives. Of the more than 60 specific forms of misdirection that we have identified,3 the most obvious involve the ways companies allocate capital and the way governments set policy and impose taxes. Merely correcting these defective practices would uncover huge opportunities for profit.

Consider how companies make purchasing decisions. Decisions to buy small items are typically based on their initial cost rather than their full life-cycle cost, a practice that can add up to major wastage. Distribution transformers that supply electricity to buildings and factories, for example, are a minor item at just $320 apiece, and most companies try to save a quick buck by buying the lowest-price models. Yet nearly all the nation’s electricity must flow through transformers, and using the cheaper but less efficient models wastes $1 billion a year. Such examples are legion. Equipping standard new office-lighting circuits with fatter wire that reduces electrical resistance could generate after-tax returns of 193% a year. Instead, wire as thin as the National Electrical Code permits is usually selected because it costs less up front. But the code is meant only to prevent fires from overheated wiring, not to save money. Ironically, an electrician who chooses fatter wire—thereby reducing long-term electricity bills—doesn’t get the job. After paying for the extra copper, he’s no longer the low bidder.

Some companies do consider more than just the initial price in their purchasing decisions but still don’t go far enough. Most of them use a crude payback estimate rather than more accurate metrics like discounted cash flow. A few years ago, the median simple payback these companies were demanding from energy efficiency was 1.9 years. That’s equivalent to requiring an after-tax return of around 71% per year—about six times the marginal cost of capital.
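To see why a 1.9-year simple payback corresponds to such a high annual return, consider the internal rate of return on a dollar invested. The sketch below assumes a 15-year equipment life and ignores taxes, so it yields a pre-tax figure rather than the authors’ 71% after-tax number, but the point stands: the implied hurdle rate dwarfs the marginal cost of capital.

```python
# Sketch: why a short simple payback implies an enormous annual return.
# The 15-year project life and the absence of taxes are illustrative
# assumptions; the authors' 71% after-tax figure rests on their own assumptions.

def irr(cash_flows, lo=0.0, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid          # NPV still positive: the true IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

payback_years = 1.9
life_years = 15                        # assumed equipment life
annual_saving = 1 / payback_years      # annual savings per dollar invested
flows = [-1.0] + [annual_saving] * life_years

print(f"Implied pre-tax return: {irr(flows):.0%} per year")
# -> roughly 52% per year, far above a typical marginal cost of capital
```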

Most companies also miss major opportunities by treating their facilities costs as an overhead to be minimized, typically by laying off engineers, rather than as a profit center to be optimized—by using those engineers to save resources. Deficient measurement and accounting practices also prevent companies from allocating costs—and waste—with any accuracy. For example, only a few semiconductor plants worldwide regularly and accurately measure how much energy they’re using to produce a unit of chilled water or clean air for their clean-room production facilities. That makes it hard for them to improve efficiency. In fact, in an effort to save time, semiconductor makers frequently build new plants as exact copies of previous ones—a design method nicknamed “infectious repetitis.”

Many executives pay too little attention to saving resources because they are often a small percentage of total costs (energy costs run to about 2% in most industries). But those resource savings drop straight to the bottom line and so represent a far greater percentage of profits. Many executives also think they already “did” efficiency in the 1970s, when the oil shock forced them to rethink old habits. They’re forgetting that with today’s far better technologies, it’s profitable to start all over again. Malden Mills, the Massachusetts maker of such products as Polartec, was already using “efficient” metal-halide lamps in the mid-1990s. But a recent warehouse retrofit reduced the energy used for lighting by another 93%, improved visibility, and paid for itself in 18 months.

The way people are rewarded often creates perverse incentives. Architects and engineers, for example, are traditionally compensated for what they spend, not for what they save. Even the striking economics of the retrofit design for the Chicago office tower described earlier wasn’t incentive enough actually to implement it. The property was controlled by a leasing agent who earned a commission every time she leased space, so she didn’t want to wait the few extra months needed to refit the building. Her decision to reject the efficiency-quadrupling renovation proved costly for both her and her client. The building was so uncomfortable and expensive to occupy that it didn’t lease, so ultimately the owner had to unload it at a fire-sale price. Moreover, the new owner will for the next 20 years be deprived of the opportunity to save capital cost.

If corporate practices obscure the benefits of natural capitalism, government policy positively undermines it. In nearly every country on the planet, tax laws penalize what we want more of—jobs and income—while subsidizing what we want less of—resource depletion and pollution. In every state but Oregon, regulated utilities are rewarded for selling more energy, water, and other resources, and penalized for selling less, even if increased production would cost more than improved customer efficiency. In most of America’s arid western states, use-it-or-lose-it water laws encourage inefficient water consumption. Additionally, in many towns, inefficient use of land is enforced through outdated regulations, such as guidelines for ultrawide suburban streets recommended by 1950s civil-defense planners to accommodate the heavy equipment needed to clear up rubble after a nuclear attack.

The costs of these perverse incentives are staggering: $300 billion in annual energy wasted in the United States, and $1 trillion already misallocated to unnecessary air-conditioning equipment and the power supplies to run it (about 40% of the nation’s peak electric load). Across the entire economy, unneeded expenditures to subsidize, encourage, and try to remedy inefficiency and damage that should not have occurred in the first place probably account for most, if not all, of the GDP growth of the past two decades. Indeed, according to former World Bank economist Herman Daly and his colleague John Cobb (along with many other analysts), Americans are hardly better off than they were in 1980. But if the U.S. government and private industry could redirect the dollars currently earmarked for remedial costs toward reinvestment in natural and human capital, they could bring about a genuine improvement in the nation’s welfare. Companies, too, are finding that wasting resources also means wasting money and people. These intertwined forms of waste have equally intertwined solutions. Firing the unproductive tons, gallons, and kilowatt-hours often makes it possible to keep the people, who will have more and better work to do.
Recognizing the Scarcity Shift

In the end, the real trouble with our economic compass is that it points in exactly the wrong direction. Most businesses are behaving as if people were still scarce and nature still abundant—the conditions that helped to fuel the first Industrial Revolution. At that time, people were relatively scarce compared with the present-day population. The rapid mechanization of the textile industries caused explosive economic growth that created labor shortages in the factory and the field. The Industrial Revolution, responding to those shortages and mechanizing one industry after another, made people a hundred times more productive than they had ever been.

The logic of economizing on the scarcest resource, because it limits progress, remains correct. But the pattern of scarcity is shifting: Now people aren’t scarce but nature is. This shows up first in industries that depend directly on ecological health. Here, production is increasingly constrained by fish rather than by boats and nets, by forests rather than by chain saws, by fertile topsoil rather than by plows. Moreover, unlike the traditional factors of industrial production—capital and labor—the biological limiting factors cannot be substituted for one another. In the industrial system, we can easily exchange machinery for labor. But no technology or amount of money can substitute for a stable climate and a productive biosphere. Even proper pricing can’t replace the priceless.

Natural capitalism addresses those problems by reintegrating ecological with economic goals. Because it is both necessary and profitable, it will subsume traditional industrialism within a new economy and a new paradigm of production, just as industrialism previously subsumed agrarianism. The companies that first make the changes we have described will have a competitive edge. Those that don’t make that effort won’t be a problem because ultimately they won’t be around. In making that choice, as Henry Ford said, “Whether you believe you can, or whether you believe you can’t, you’re absolutely right.”

1. Our book, Natural Capitalism, provides hundreds of examples of how companies of almost every type and size, often through modest shifts in business logic and practice, have dramatically improved their bottom lines.

2. Nonproprietary details are posted at www.hypercar.com

3. Summarized in the report “Climate: Making Sense and Making Money,” at www.rmi.org/images/other/Climate/C97-13_ClimateMSMM.pdf

A version of this article appeared in the July–August 2007 issue of Harvard Business Review.


Amory B. Lovins is a cofounder and the chairman of Rocky Mountain Institute (RMI), a nonprofit resource policy center in Snowmass, Colorado.
L. Hunter Lovins is a cofounder of RMI and the president and founder of Natural Capitalism, a firm that helps organizations create sustainability strategies, in Boulder, Colorado.
Paul Hawken is the founder and the executive director of Natural Capital Institute, a research group based in Sausalito, California. He has also founded or cofounded several companies,