Thursday, May 20, 2021

The brain game: What causes engagement and addiction to video games?

Researchers have analyzed the objective and subjective aspects of game-playing experience based on analogies of physical models of motion

JAPAN ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY

Research News

IMAGE: The graph depicts various quantities of the model plotted against the difficulty of solving uncertainty in a game (m). Representative examples of board games, gambling games, and sports are placed...

CREDIT: Hiroyuki Iida

Ishikawa, Japan - History tells us that games are an inseparable facet of humanity, and mainly for good reasons. Advocates of video games laud their pros: they help develop problem-solving skills, socialize, relieve stress, and exercise the mind and body--all at the same time! However, games also have a dark side: the potential for addiction. The explosive growth of the video game industry has spawned all sorts of games targeting different groups of people. This includes digital adaptations of popular board games like chess, but also extends to gambling-type games like online casinos and betting on horse races. While virtually all engaging forms of entertainment lend themselves to addictive behavior under specific circumstances, some video games are more commonly associated with addiction than others. But what exactly makes these games so potentially addictive?

This is a difficult question to answer because it deals directly with aspects of the human mind, and the inner workings of the mind are mostly a mystery. However, there may be a way to answer it by leveraging what we do know about the physical world and its laws. At the Japan Advanced Institute of Science and Technology (JAIST), Japan, Professor Hiroyuki Iida and colleagues have been pioneering a methodology called "motion in mind" that could help us understand what draws us towards games and makes us want to keep reaching for the console.

Their approach is centered around modelling the underlying mechanisms that operate in the mind when playing games through an analogy with actual physical models of motion. For example, the concepts of potential energy, forces, and momentum from classical mechanics are considered to be analogous to objective and/or subjective game-related aspects, including pacing of the game, randomness, and fairness. In their latest study published in IEEE Access, Professor Iida and Assistant Professor Mohd Nor Akmal Khalid, also from JAIST, linked their "motion in mind" model with the concepts of engagement and addiction in various types of games, based on the perceived experience of players and their behaviors.

The researchers employed an analogy of the law of conservation of energy, mass, and momentum to mathematically determine aspects of the game-playing experience in terms of gambling psychology and the subjective/objective perception of the game. Their conjectures and results were supported by a "unified conceptual model of engagement and addiction," previously derived from ethnographic and social science studies, which suggests that engagement and addiction are two sides of the same coin. By comparing and analyzing a variety of games, such as chess, Go, basketball, soccer, online casinos, and pachinko, among others, the researchers showed that their law of conservation model expanded upon this preconception, revealing new measures for engagement while also unveiling some of the mechanisms that underlie addiction.
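To make the analogy concrete, here is a toy numerical sketch, not the study's own code or formulas. It assumes the simplest possible mapping: the game "velocity" v is taken as the rate at which uncertainty is solved, the "mass" m = 1 - v as the difficulty of solving it (the quantity on the horizontal axis of the figure above), and momentum and kinetic energy are borrowed directly from classical mechanics. The per-game values are rough illustrative guesses, not measured data, and the paper's own definitions may differ.

```python
# Toy sketch of the classical-mechanics analogy (not the study's own formulas).
# Assumptions: v = rate at which uncertainty is solved per move/event,
# m = 1 - v = difficulty of solving uncertainty, p = m*v, E = 0.5*m*v**2.
games = {
    # name: illustrative guess for the rate at which uncertainty is solved
    "chess (skilled play)": 0.70,
    "online casino":        0.50,
    "pachinko":             0.35,
}

for name, v in games.items():
    m = 1.0 - v            # "mass": perceived difficulty of solving uncertainty
    p = m * v              # "momentum": how strongly play carries the player forward
    E = 0.5 * m * v ** 2   # "kinetic energy": intensity of the playing experience
    print(f"{name:22s}  v={v:.2f}  m={m:.2f}  p={p:.3f}  E={E:.3f}")
```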

Their approach also provides a clearer view of how the perceived (subjective) difficulty of solving uncertainties during a given game can differ from and even outweigh the real (objective) one, and how this affects our behavior and reactions. "Our findings are valuable for understanding the dynamics of information in different game mechanics that have an impact on the player's state of mind. In other words, this helps us establish the relationship between the game-playing process and the associated psychological feeling," explains Professor Iida. Such insight will help developers make game content more engaging, healthy, and personalized both in the short and long term.

Further studies should make the tailoring of game experiences much easier, as Professor Iida remarks: "Our work is a stepping-stone for linking behavioral psychology and game-playing experiences, and soon we will be able to mechanistically manipulate and adapt the notions of engagement and addiction towards specific needs and use-cases." Let's hope these findings take us one step closer to curbing game addiction while keeping the fun intact!

###

About Japan Advanced Institute of Science and Technology, Japan

Founded in 1990 in Ishikawa prefecture, the Japan Advanced Institute of Science and Technology (JAIST) was the first independent national graduate school in Japan. Now, after 30 years of steady progress, JAIST has become one of Japan's top-ranking universities. JAIST has multiple satellite campuses and strives to foster capable leaders with a state-of-the-art education system where diversity is key; about 40% of its alumni are international students. The university has a unique style of graduate education based on a carefully designed coursework-oriented curriculum to ensure that its students have a solid foundation on which to carry out cutting-edge research. JAIST also works closely with both local and overseas communities by promoting industry-academia collaborative research.

About Professor Hiroyuki Iida from Japan Advanced Institute of Science and Technology, Japan

Dr. Hiroyuki Iida received his Ph.D. in 1994 on Heuristic Theories on Game-Tree Search from the Tokyo University of Agriculture and Technology, Japan. Since 2005, he has been a Professor at JAIST, where he is also Trustee and Vice President of Educational and Student Affairs. He is the head of the Iida laboratory and has published over 300 papers, presentations, and books. His current research interests include artificial intelligence, game informatics, game theory, mathematical modeling, search algorithms, game-refinement theory, game tree search, and entertainment science.

Funding information

This study was funded by a grant from the Japan Society for the Promotion of Science in the framework of the Grant-in-Aid for Challenging Exploratory Research (Grant Number 19K22893).

Model bias corrections for reliable projection of extreme El Niño frequency change

SCIENCE CHINA PRESS

Research News

IMAGE: Figure 2. Tropical Pacific mean-state change during 2011-2098, after removing the impacts of 13 common biases in the CMIP5 original projections, in (a) SST, (b) convection (omega, upward negative), and...

CREDIT: ©Science China Press

A reliable projection of extreme El Niño frequency change in a future warmer climate is critical to managing socio-economic activities and human health, informing strategic policy decisions, and supporting environmental and ecosystem management and disaster mitigation in many parts of the world. Unfortunately, despite enormous efforts on numerical model development over the past decades, long-standing common biases in CMIP5 models make it hard to achieve a reliable projection of the extreme El Niño frequency change in the future. While increasing attention has been paid to estimating the possible impacts of model biases, it is not yet fully understood whether, and by how much, these common biases affect the projection of extreme El Niño frequency change in the coming decades. This is an urgent question.

According to the original projections of CMIP5 models, the frequency of extreme El Niño, defined by Niño3 convection, would double in the future. However, Prof. Luo and his research team find that models which produce a centennial easterly trend in the tropical Pacific during the 20th century project a weak increase, or even a decrease, of the extreme El Niño frequency in the 21st century. Since the centennial easterly trend is systematically underestimated in all CMIP5 models compared to the historical record, a reasonable question is whether this common bias leads to an over-estimated extreme El Niño frequency change in the models' original projections (Figure 1a).

CAPTION

Figure 1. Inter-model correlation between the projected change in the extreme El Niño frequency during 2011-2098 and the simulated zonal wind trend during 1901-2010 (a), and the change in mean-state convection (represented by omega) over Niño3 region (b). Inter-model correlation between the Niño3 mean-state convection change and the Pacific east-minus-west SST gradient change (c).

CREDIT

©Science China Press

Based on their results, the change in the frequency of extreme El Niño, which is defined by the total convection in the eastern equatorial Pacific (i.e. the sum of the mean-state and anomalous vertical velocity in the Niño3 region), is mostly determined by the mean-state change of the Niño3 convection (Figure 1b). In addition, the change in the mean-state convection in the Niño3 region is largely controlled by the change in the east-minus-west sea surface temperature (SST) gradient that determines the tropical Pacific Walker circulation (Figure 1c). Therefore, the change in the extreme El Niño frequency defined by the total convection in the Niño3 region boils down to the change in the tropical Pacific east-minus-west SST gradient (i.e., an "El Niño-like" or "La Niña-like" change).
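The event definition just described lends itself to a very simple illustration. The sketch below is not the study's code and uses synthetic numbers; it counts an "extreme El Niño" year as one in which the total Niño3 vertical velocity, i.e. the projected mean-state omega plus an interannual anomaly, is negative (upward motion, deep convection in the east). It shows why a shift in the mean state alone can change the counted frequency.

```python
# Illustrative sketch only, not the study's code. Counts "extreme El Niño" years
# as those with total Niño3 omega < 0 (mean-state change plus anomaly).
# All numbers below are synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2011, 2099)

omega_anom = rng.normal(0.0, 0.02, size=years.size)   # synthetic DJF omega anomalies (Pa/s)
omega_mean_orig = -0.005                               # original projection: weakened subsidence
omega_mean_corr = +0.010                               # bias-corrected: stronger subsidence

def extreme_el_nino_per_century(mean_state, anomalies):
    """Count years in which total Niño3 omega is negative (upward motion), per 100 years."""
    total = mean_state + anomalies
    return 100.0 * np.sum(total < 0.0) / anomalies.size

print("original projection :", extreme_el_nino_per_century(omega_mean_orig, omega_anom))
print("bias-corrected      :", extreme_el_nino_per_century(omega_mean_corr, omega_anom))
```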

By identifying the systematic impacts of 13 common biases of CMIP5 models in simulating the tropical climate over the past century, they find that the change in the tropical Pacific east-minus-west SST gradient in the future is significantly over-estimated in the original projection. In stark contrast to the original "El Niño-like" SST warming projected by the CMIP5 models, the Pacific SST change, after removing the systematic impacts of the models' 13 common biases, shows the strongest SST warming occurring in the tropical western Pacific rather than in the east (i.e., a "La Niña-like" SST warming change), coupled with stronger trade winds across the Pacific and suppressed convection in the eastern Pacific (Figure 2).
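The release does not spell out the correction procedure, but one common way to remove such systematic impacts is an inter-model, emergent-constraint-style regression: regress the projected change on the historical quantity the models get wrong, then shift the multi-model projection to where the observed value sits on that regression line. The sketch below illustrates this idea with a single predictor (the 20th-century zonal-wind trend) and synthetic numbers; the study itself uses 13 common biases, and its actual method may differ.

```python
# Hedged illustration of an inter-model (emergent-constraint-style) correction,
# using one predictor and synthetic numbers; not the study's actual procedure.
import numpy as np

rng = np.random.default_rng(1)
n_models = 20

# Simulated 20th-century equatorial zonal-wind trend in each model (too weak on
# average) and each model's projected change in the east-minus-west SST gradient.
wind_trend = rng.normal(-0.1, 0.3, n_models)
obs_trend = -0.6                                   # observed (stronger easterly) trend
dsst_grad = 0.4 + 0.8 * wind_trend + rng.normal(0.0, 0.1, n_models)

# Inter-model regression: how the projection depends on the historical quantity.
slope, intercept = np.polyfit(wind_trend, dsst_grad, 1)

raw = dsst_grad.mean()                              # original multi-model mean projection
corrected = raw + slope * (obs_trend - wind_trend.mean())

print(f"original projected SST-gradient change : {raw:+.2f}")
print(f"bias-corrected change                  : {corrected:+.2f}")
```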

As mentioned above, change in the frequency of the so-defined extreme El Niño would be determined by the change in the Pacific mean-states. Therefore, by carefully removing the impacts of the models' common biases on the mean-state changes, Luo and colleagues find that the extreme El Niño frequency would remain almost unchanged in the future (Figure 3).

CAPTION

Figure 3. El Niño frequency change in the 21st century. Red, green, and gray dots indicate extreme El Niño events (i.e. negative Niño3 omega), moderate El Niño events (i.e. positive omega but with a standardized Niño3 SST anomaly greater than 0.5), and non-El Niño years, respectively. Results are based on (a) the historical simulations during 1901-2010 and (b, c) the original and corrected projections of the extreme El Niño frequency in the RCP4.5 scenario during 2011-2098. All frequencies are calculated per 100 years. (d) Histogram of the extreme El Niño frequency in each magnitude bin of the Niño3 SST anomaly (interval: 0.5 standard deviation). The 95% confidence interval of the extreme El Niño frequency is estimated by a bootstrap test. (e) Frequency of extreme and moderate El Niño events, defined by the boreal-winter Niño3 SST anomaly being greater than 1.5 (red line) and between 0.5 and 1.5 (green line) standard deviations, respectively. The frequencies of extreme and moderate El Niño events (per 100 years) in the historical simulations and future projections are given as gray and blue numbers, respectively. (f) As in (e), but defined by the boreal-winter Niño3 omega anomaly.

CREDIT

©Science China Press

In summary, this finding highlights that the impacts of the models' common biases can be large enough to reverse the originally projected change in the tropical Pacific mean state, which in turn strongly affects the projection of extreme El Niño frequency change in the future. It thus sheds new light on the importance of model bias correction for obtaining reliable projections of future climate change. More importantly, this finding suggests that much more effort should be put into improving climate models and reducing major systematic biases in the coming years and decades.

###

See the article:

Over-projected Pacific warming and extreme El Niño frequency change due to CMIP5 common biases. Tao Tang, Jing-Jia Luo, Ke Peng et al. National Science Review. https://doi.org/10.1093/nsr/nwab056

Strong quake, small tsunami

GEOMAR scientists publish unique data set on the northern Chilean subduction zone

HELMHOLTZ CENTRE FOR OCEAN RESEARCH KIEL (GEOMAR)

Research News

IMAGE: For a total of two years, 15 ocean bottom seismometers off northern Chile recorded aftershocks from the 2014 Iquique earthquake.

CREDIT: Jan Steffen/GEOMAR

Northern Chile is an ideal natural laboratory to study the origin of earthquakes. Here, the Pacific Nazca plate slides underneath the South American continental plate at a speed of about 65 millimetres per year. This process, known as subduction, builds up strain between the two plates, and scientists thus expected a mega-earthquake here sooner or later, like the last one in 1877. But although northern Chile is one of the focal points of global earthquake research, there was long no comprehensive data set on the structure of the marine subsurface - until nature itself stepped in to help.

On 1 April 2014, a segment of the subduction zone finally ruptured northwest of the city of Iquique. The earthquake, with a moment magnitude of 8.1, released at least part of the accumulated stress. Subsequent seismic measurements off the coast of Chile, as well as seafloor mapping and land-based data, provided a hitherto unique insight into the architecture of the plate boundary. "Among other things, this allows us to explain why a relatively severe quake like the one in 2014 only triggered a relatively weak tsunami," says Florian Petersen from GEOMAR Helmholtz Centre for Ocean Research Kiel. He is the lead author of the study, which has now been published in the journal Geophysical Research Letters.

As early as December 2014, just eight months after the main earthquake, the Kiel team deployed 15 seismic measuring devices specially developed for the deep sea off the coast of Chile. "The logistical and administrative challenges of deploying these ocean-bottom seismometers are considerable, and eight months of preparation time is very short. However, since the investigations are crucial to better comprehend the hazard potential of the plate margin off northern Chile, even the Chilean Navy finally supported us by making its patrol boat COMANDANTE TORO available," reports project leader and co-author Dr. Dietrich Lange from GEOMAR.

At the end of 2015, these ocean bottom seismometers (OBS) were recovered by the German research vessel SONNE. The team on board serviced the devices, read out the data and placed the OBS on the seabed again. It was not until November 2016 that the American research vessel MARCUS G. LANGSETH finally recovered them. "Together with data from land, we have obtained a seismic data set of the earthquake region over 24 months, in which we can find the signals of numerous aftershocks. This is unique so far," explains Florian Petersen, for whom the study is part of his doctoral thesis.

The evaluation of the long-term measurements, in which colleagues from the Universidad de Chile and Oregon State University (USA) were also involved, showed that an unexpectedly large number of aftershocks were located between the actual earthquake rupture zone and the deep-sea trench. "But what surprised us even more was that many aftershocks were quite shallow. They occurred in the overlying South American continental plate and not along the plate boundary of the dipping Nazca plate," Petersen says.

Over many earthquake cycles, these aftershocks can strongly disturb and rupture the seaward edge of the continental plate. Resulting gaps fill with pore fluids. As a result, the authors conclude, the energy of the quakes can only propagate downwards, but not to the deep-sea trench off the coast of Chile. "Therefore, there were no large, sudden shifts of the seafloor during the 2014 earthquake and the tsunami was fortunately relatively small", says Florian Petersen.

The question still remains whether the Iquique earthquake of 2014 was already the expected major quake in the region or whether it only released some of the stress that had built up since 1877. "The region remains very exciting for us. The current results were only possible due to the close cooperation of several nations and the use of research vessels from Germany, Chile and the USA. This shows the immense effort that is required to study marine natural hazards. However, this is critical for a detailed assessment of the risk to the coastal cities in northern Chile, so everyone was dedicated to the task," says co-author Prof. Dr. Heidrun Kopp from GEOMAR.

###

Coral reef restorations can be optimized to reduce flood risk

New practical guidelines for reef restoration will benefit coral ecosystems while also providing coastal communities with effective protection from flood risk

FRONTIERS

Research News

New guidelines for coral reef restoration aiming to reduce the risk of flooding in tropical coastal communities have been set out in a new study that simulated the behavior of ocean waves travelling over and beyond a range of coral reef structures. Published in Frontiers in Marine Science, the guidelines aim to optimize restoration efforts not only for the benefit of the ecosystem, but also to protect the coast and the people living on it.

"Our research reveals that shallow, energetic areas such as the upper fore reef and middle reef flat, typically characterized by physically-robust coral species, should be targeted for restoration to reduce coastal flooding," says Floortje Roelvink, lead author on the paper and researcher at Deltares, a Dutch research institute. "This will benefit both coral ecosystems and human coastal populations that rely on them for tourism, fisheries, and recreation."

Important structures

Coral reefs help to sustain the economy of 500 million people in tropical coastal communities and can offer protection from wave-driven flooding and coastal erosion, especially in the face of climate change. Reef restoration, which involves coral planting and reef management to improve the health, abundance, and biodiversity of the ecosystem, has been suggested as a way of reducing flood risk.

"Our research can help guide the design of coral reef restorations to best increase the resiliency of coastal communities from flooding," said Curt Storlazzi, U.S. Geological Survey research geologist and project lead. "Such information can increase the efficiency of coral restoration efforts, assisting a range of stakeholders in not only coral reef conservation and management, but also coastal hazard risk reduction."

"Although we know that coral reefs can efficiently attenuate ocean wave energy and reduce coastal flooding, knowledge of specifically where to locate and design coral reef restorations on specific types of reefs is lacking," explains Ap van Dongeren, coastal morphology specialist at Deltares and project co-lead. "We were keen to fill this knowledge gap because the costs and practical constraints of reef recovery efforts necessitate an approach to design and restoration that produces the most benefit for all."

Reef by design

To first understand the range of naturally occurring reef shapes, such as fringing reefs, straight sloping reefs, convex reefs and reefs with an offshore shelf, the researchers analyzed a database of over 30,000 coral reef profiles across the U.S., including those in the Mariana, Hawaii, and Virgin Islands. Using these reef profiles, they numerically "designed" reef restorations to be both feasible from an operational and ecological perspective and to have an expected beneficial impact on coastal flooding.

The researchers established that reef restorations should not be placed too deep, because of operational constraints and limits on wave-reduction efficiency. Nor should they be placed too shallow, to prevent the restored reefs from drying out and degrading due to thermal intolerance.

Two types of coral restorations were also investigated: "green" restorations, entailing solely outplanting corals, and "gray-green hybrid" restorations, entailing emplacement of structures (such as ReefBalls) and then outplanting corals on top of them. The team then used a numerical model to simulate waves travelling over both the restored and unrestored coral reef profiles to see how far those waves ran up the coast, providing an indication of the effect of the different reef restorations on coastal flooding.

"We hope this study will motivate others to continue and expand on this research, among others by conducting field and laboratory experiments to validate our findings," concludes Roelvink.

###

THE SCIENCE OF STAR TREK

How to thermally cloak an object

Theoretical method can make objects invisible to a thermal camera, or mimic a different object

UNIVERSITY OF UTAH

Research News

Can you feel the heat? To a thermal camera, which measures infrared radiation, the heat that we can feel is visible, like the heat of a traveler in an airport with a fever or the cold of a leaky window or door in the winter.

In a paper published in Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, an international group of applied mathematicians and physicists, including Fernando Guevara Vasquez and Trent DeGiovanni from the University of Utah, reports a theoretical way of mimicking thermal objects or making objects invisible to thermal measurements. And it doesn't require a Romulan cloaking device or Harry Potter's invisibility cloak. The research is funded by the National Science Foundation.

The method allows for fine-tuning of heat transfer even in situations where the temperature changes in time, the researchers say. One application could be to isolate a part that generates heat in a circuit (say, a power supply) to keep it from interfering with heat sensitive parts (say, a thermal camera). Another application could be in industrial processes that require accurate temperature control in both time and space, for example controlling the cooling of a material so that it crystallizes in a particular manner.

Cloaking or invisibility devices have long been elements of fictional stories, but in recent years scientists and engineers have explored how to bring science fiction into reality. One approach, using metamaterials, bends light in such a way as to render an object invisible.

Just as our eyes see objects if they emit or reflect light, a thermal camera can see an object if it emits or reflects infrared radiation. In mathematical terms, an object could become invisible to a thermal camera if heat sources placed around it could mimic heat transfer as if the object wasn't there.

The novelty in the team's approach is that they use heat pumps rather than specially crafted materials to hide the objects. A simple household example of a heat pump is a refrigerator: to cool groceries it pumps heat from the interior to the exterior. Using heat pumps is much more flexible than using carefully crafted materials, Guevara says. For example, the researchers can make one object or source appear as a completely different object or source. "So at least from the perspective of thermal measurements," Guevara says, "they can make an apple appear as an orange."

The researchers carried out the mathematical work needed to show that, with a ring of heat pumps around an object, it's possible to thermally hide an object or mimic the heat signature of a different object.
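As a rough illustration of the idea, the finite-difference toy below clamps a closed ring of grid nodes, standing in for ideal heat pumps, to the temperatures the background field would have without the object. By uniqueness of the steady-state heat problem outside the ring, a thermal measurement taken there then sees essentially the background field even though a hot object sits inside. This is a conceptual sketch under simplifying assumptions, not the authors' formulation; the geometry, temperatures and grid size are illustrative choices.

```python
# Conceptual finite-difference sketch of active thermal cloaking with a ring of
# ideal "heat pumps" (nodes held at the background temperatures). Not the
# authors' method; geometry, temperatures and grid size are illustrative.
import numpy as np

N = 101
centre = 50
probe = (centre, 10)                              # probing point source (the "flashlight")
obj = (slice(45, 56), slice(45, 56))              # hot square object in the middle

yy, xx = np.mgrid[0:N, 0:N]
r = np.hypot(xx - centre, yy - centre)
ring = (r > 13.5) & (r < 15.5)                    # closed annulus of heat-pump nodes

def solve(fixed_mask, fixed_vals, iters=20000):
    """Jacobi iteration for steady-state heat conduction with Dirichlet data."""
    T = np.zeros((N, N))
    for _ in range(iters):
        T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] +
                                T[1:-1, 2:] + T[1:-1, :-2])
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 0.0   # cold outer boundary
        T[fixed_mask] = fixed_vals[fixed_mask]           # re-impose fixed temperatures
    return T

# 1) Background field: probing source only, no object.
mask = np.zeros((N, N), bool); vals = np.zeros((N, N))
mask[probe], vals[probe] = True, 1.0
T_bg = solve(mask, vals)

# 2) Object present, no cloak: the hot object distorts the exterior field.
mask_o, vals_o = mask.copy(), vals.copy()
mask_o[obj], vals_o[obj] = True, 0.8
T_obj = solve(mask_o, vals_o)

# 3) Object present, heat-pump ring held at the background temperatures.
mask_c, vals_c = mask_o.copy(), vals_o.copy()
mask_c[ring] = True
vals_c[ring] = T_bg[ring]
T_cloak = solve(mask_c, vals_c)

outside = r > 17                                   # where a thermal camera would look
print("max exterior perturbation, no cloak :", np.abs(T_obj - T_bg)[outside].max())
print("max exterior perturbation, cloaked  :", np.abs(T_cloak - T_bg)[outside].max())
```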

The work remains theoretical, Guevara says, and the simulations assume a "probing" point source of heat that would reflect or bend around the object - the thermal equivalent of a flashlight in a dark room.

The temperature of that probing source must be known ahead of time, a drawback of the work. However, the approach is within reach of current technology by using small heat pumps called Peltier elements, which transport heat by passing an electrical current across a metal-metal junction. Peltier elements are already widely used in consumer and industrial applications.

The researchers envision their work could be used to accurately control the temperature of an object in space and time, which has applications in protecting electronic circuits. The results, the researchers say, could also be applied to accurate drug delivery, since the mathematics of heat transfer and diffusion are similar to those of the transfer and diffusion of medications. And, they add, the mathematics of how light behaves in diffuse media such as fog could lead to applications in visual cloaking as well.

In addition to Guevara and DeGiovanni, Maxence Cassier, CNRS researcher at the Fresnel Institute in Marseille, France, and Sébastien Guenneau, CNRS researcher at UMI 2004 Abraham de Moivre-CNRS, Imperial College London, UK, co-authored the study.

###

Discovering a candidate for the reflex network of walking cats: Understanding animals with robots

OSAKA UNIVERSITY

Research News

IMAGE: The quadruped robot that can reproduce the muscle characteristics and reflexes of animals.

CREDIT: Osaka University

A group of researchers from Osaka University developed a quadruped robot platform that can reproduce the neuromuscular dynamics of animals (Figure 1), discovering that a steady gait and the experimentally observed behaviors of walking cats emerged from a reflex circuit alone in walking experiments on this robot. Their research results were published in Frontiers in Neurorobotics.

It was long thought that the steady gait of animals is generated by complex nerve systems in the brain and spinal cord; however, recent research shows that a steady gait can be produced by a reflex circuit alone. The scientists discovered a candidate reflex circuit that generates the steady walking motion of cats by investigating feline locomotion mechanisms, reproducing their motor control with robots and computer simulations.

Because experiments using animals are strictly controlled and restricted for reasons of animal protection, studying animal locomotion is difficult. It therefore remains unknown how the nerve systems identified in prior research (i.e., the reflex circuits responsible for animal locomotion) are integrated in the animal body.

Toyoaki Tanikawa and his supervisors, Assistant Professor Yoichi Masuda and Professor Masato Ishikawa, developed a four-legged robot that enables the motor control of animals to be reproduced using computers. This quadruped robot, which comprises highly back-drivable legs that reproduce the flexibility of animals and torque-controllable motors, can reproduce the muscle characteristics of animals. This makes it possible to conduct various experiments using the robot instead of the animals themselves.

By searching for the reflex circuit that contributes to the generation of steady walking in cats through robotic experiments, the researchers found a simple reflex circuit that could produce leg trajectories and a steady gait pattern, which they named the "reciprocal excitatory reflex between hip and knee extensors."

In this study, the researchers found that:

    - The robot generated steady walking motions by simply reproducing the reciprocal circuit in each leg of the robot.

    - The robot's gait became unstable when the reciprocal circuit was cut off.

    - When the mutual excitatory circuit was stimulated, the circuit produced a phenomenon called 'prolongation of the stance phase.' This result suggests that this circuit is an important component responsible for walking in cats.
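A minimal conceptual sketch of the kind of mutually excitatory, load-gated coupling described above is given below. It is not the authors' controller or their robot code: two first-order "extensor activations" excite each other while the leg is loaded, and the printout shows that the loaded (stance-like) phase lasts longer with the coupling intact than with it cut. Gains, time constants and the load threshold are arbitrary illustrative choices.

```python
# Toy model of a mutually excitatory, load-gated extensor reflex (illustrative
# only; not the authors' controller). With the coupling on, extensor activity is
# sustained while the leg is loaded, prolonging the stance-like phase.
import numpy as np

def simulate(mutual_gain, t_end=2.0, dt=0.001, tau=0.1):
    hip, knee = 0.0, 0.0            # hip and knee extensor activations (0..1)
    stance_time = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        touchdown = 1.0 if t < 0.05 else 0.0          # brief touchdown stimulus
        load = 1.0 if (hip + knee) > 0.3 else 0.0     # leg bears weight while extensors are active
        # first-order activation dynamics with reciprocal excitation gated by load
        d_hip = (-hip + touchdown + mutual_gain * load * knee) / tau
        d_knee = (-knee + touchdown + mutual_gain * load * hip) / tau
        hip = float(np.clip(hip + dt * d_hip, 0.0, 1.0))
        knee = float(np.clip(knee + dt * d_knee, 0.0, 1.0))
        stance_time += dt * load
    return stance_time

print("stance-like phase, reflex intact:", round(simulate(mutual_gain=0.8), 2), "s")
print("stance-like phase, reflex cut   :", round(simulate(mutual_gain=0.0), 2), "s")
```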

This group's research results will benefit both the biology and robotics fields. In addition to bringing new knowledge to biology, if robotic animals could serve as a replacement for real animals in the future, it will give more scientists the chance to study the mechanisms of animal locomotion under various experimental conditions. Bringing a robot's structure closer to that of an animal will also lead to the development of fundamental technologies for making robots that move and maneuver as effectively as animals.

Co-author Yoichi Masuda says, "Gaining knowledge about animals without using experimental animals is also significant for the humans that live with them. Further combination of robotics and biology through the creation of robots that mimic the structures of animals and their locomotion could become the first step towards understanding the principles underlying the behaviors of animals and humans."

###

The article, "A reciprocal excitatory reflex between extensors reproduces the prolongation of stance phase in walking cats: analysis on a robotic platform," was published in Frontiers in Neurorobotics at DOI: https://doi.org/10.3389/fnbot.2021.636864

A video detailing this research can be viewed on https://youtu.be/-iLHRhvDccA

About Osaka University

Osaka University was founded in 1931 as one of the seven imperial universities of Japan and is now one of Japan's leading comprehensive universities with a broad disciplinary spectrum. This strength is coupled with a singular drive for innovation that extends throughout the scientific process, from fundamental research to the creation of applied technology with positive economic impacts. Its commitment to innovation has been recognized in Japan and around the world, being named Japan's most innovative university in 2015 (Reuters 2015 Top 100) and one of the most innovative institutions in the world in 2017 (Innovative Universities and the Nature Index Innovation 2017). Now, Osaka University is leveraging its role as a Designated National University Corporation selected by the Ministry of Education, Culture, Sports, Science and Technology to contribute to innovation for human welfare, sustainable development of society, and social transformation.

Website: https://resou.osaka-u.ac.jp/en

1.5°C degrowth scenarios suggest need for new mitigation pathways: Research

Wellbeing can be maintained in a degrowth transition

UNIVERSITY OF SYDNEY

Research News

IMAGE: Modelling highlights the reliance on controversial carbon dioxide removal or CCS (size of circle) of various growth pathways to limit global warming, compared to 'degrowth', which is on track...

CREDIT: Lorenz Keyßer

The first comprehensive comparison of 'degrowth' scenarios with established pathways to limit climate change highlights the risk of over-reliance on carbon dioxide removal, renewable energy and energy efficiency to support continued global growth - which is assumed in established global climate modelling.

Degrowth focuses on the global North and is defined as an equitable, democratic reduction in energy and material use while maintaining wellbeing. A decline in GDP is accepted as a likely outcome of this transition.

The new modelling by the University of Sydney and ETH Zürich includes high growth/technological change scenarios and scenarios summarised by the Intergovernmental Panel on Climate Change (IPCC) as a comparison to degrowth pathways. It shows that by combining far-reaching social change focused on sufficiency with technological improvements, net-zero carbon emissions become easier to achieve.

The findings were published today in Nature Communications.

Currently, neither the IPCC nor the established integrated assessment modelling (IAM) community considers degrowth scenarios, in which reduced production and consumption in the global North is combined with maintaining wellbeing and achieving climate goals. In contrast, established scenarios rely on combinations of unprecedented carbon dioxide removal from the atmosphere and other far-reaching technological changes.

The results show the international targets of capping global warming at 1.5C-2C above pre-industrial levels can be achieved more easily in key dimensions, for example:

    1. Degrowth in the global North/high-income countries results in a 0.5% annual decline of global GDP. However, a substantially increased uptake of renewable energy coupled with negative emissions remains necessary, albeit significantly less than in established pathways.

    2. Capping warming at the upper limit of 2C can be achieved with 0 percent growth, while being consistent with low levels of carbon dioxide removal (e.g. from tree planting) and increases in renewable energy as well as energy efficiency.

Lead author Mr Lorenz Keyßer, from ETH Zürich, whose Master's thesis is on degrowth, carried out the research in Australia under the supervision of Professor Manfred Lenzen, a global leader in carbon footprinting from the University of Sydney's centre for Integrated Sustainability Analysis (ISA) in the School of Physics.

Mr Keyßer said he was surprised by the clarity of the results: "Our simple model shows degrowth pathways have clear advantages in many of the central categories; it appears to be a significant oversight that degrowth is not even considered in the conventional climate modelling community.

"The over-reliance on unprecedented carbon dioxide removal and energy efficiency gains means we risk catastrophic climate change if one of the assumptions does not materialise; additionally, carbon dioxide removal shows high potential for severe side-effects, for instance for biodiversity and food security, if done using biomass. It thus remains a risky bet.

"Our study also analysed the other key assumption upon which the modelling of the IPCC and others is based: continued growth of global production and consumption."

Senior author Professor Lenzen said the required technological transformation is particularly extraordinary given the scale of carbon dioxide removal assumed in the IPCC Special Report, Global Warming of 1.5C: between 100 and 1,000 billion tonnes (mostly over 600 GtCO2) by 2100, in large part through bioenergy with carbon capture and storage (BECCS) as well as through afforestation and reforestation (AR).

"Deployment of controversial 'negative emissions' future technologies to try to remove several hundred gigatonnes [hundreds of billion tonnes] of carbon dioxide assumed in the IPCC scenarios to meet the 1.5C target faces substantial uncertainty," Professor Lenzen says.

"Carbon dioxide removal (including carbon capture and storage or CCS) is in its infancy and has never been deployed at scale."

WHAT DEGROWTH MIGHT LOOK LIKE

The new modelling was undertaken pre-COVID-19, but the degrowth pathways involve a GDP reduction that is only a fraction of the roughly 4.2% global GDP shrinkage experienced in the first six months of the pandemic. Degrowth also focuses on structural social change to make wellbeing independent from economic growth.

"We can still satisfy peoples' needs, maintain employment and reduce inequality with degrowth, which is what distinguishes this pathway from recession," Mr Keyßer says.

"However, a just, democratic and orderly degrowth transition would involve reducing the gap between the haves and have-nots, with more equitable distribution from affluent nations to nations where human needs are still unmet - something that is yet to be fully explored."

A 'degrowth' society could include:

    · A shorter working week, resulting in reduced unemployment alongside increasing productivity and stable economic output.

    · Universal basic services independent of income, for necessities such as food, health care and transport.

    · Limits on maximum income and wealth, enabling a universal basic income to be increased and reducing inequality, rather than increasing inequality as is the current global trend.

Among the 1.5C degrowth pathways explored in the new research, the Decent Living Energy (DLE) scenario is closest to historical trends for renewable energy and requires negligible 'negative emissions'. Mr Keyßer says the International Energy Agency projections for renewables growth to 2050, based on past trends, are roughly equivalent to the DLE pathway modelled.

"That non-fossil energy sources could meet 'decent living energy' requirements while achieving 1.5C - under conditions close to business-as-usual - is highly significant.

"However it is clear that the DLE pathway remains extremely challenging due to the substantial reduction in energy use as well as the connected deep social changes required," Mr Keyßer says.

MODELLING CLIMATE PATHWAYS

For the study, a simplified quantitative representation of the fuel-energy-emissions nexus was used as a first step to overcome what the authors see as an absence of comprehensive modelling of degrowth scenarios in mainstream circles such as the IAM community and the IPCC. The model is openly accessible via the paper online.
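The paper's model is available via the link above; as a purely illustrative stand-in (not the authors' model, and with made-up parameter values), a Kaya-identity-style toy like the one below captures the basic trade-off such scenarios explore: cumulative emissions depend on GDP growth, energy-GDP decoupling, decarbonisation of the energy supply, and how much carbon dioxide removal (CDR) is assumed.

```python
# Kaya-identity-style toy pathway model (illustrative only; not the paper's model).
# All parameter values below are assumptions chosen for the example.

def pathway(gdp_growth, energy_intensity_decline, carbon_intensity_decline,
            cdr_gt_per_yr=0.0, co2_2020_gt=36.0, years=80):
    """Return (cumulative net CO2, cumulative CDR) in Gt over `years` from 2020."""
    gdp = e_int = c_int = 1.0        # indices relative to 2020
    net = cdr_total = 0.0
    for yr in range(years):
        gdp *= 1.0 + gdp_growth
        e_int *= 1.0 - energy_intensity_decline
        c_int *= 1.0 - carbon_intensity_decline
        gross = co2_2020_gt * gdp * e_int * c_int              # Kaya identity
        cdr = cdr_gt_per_yr * min(max(yr - 10, 0) / 20.0, 1.0)  # CDR phased in 2030-2050
        net += gross - cdr
        cdr_total += cdr
    return net, cdr_total

# Growth pathway: ~2 %/yr GDP growth, technological change, heavy CDR.
print("growth + tech + CDR:", [round(x) for x in pathway(0.02, 0.02, 0.03, cdr_gt_per_yr=10.0)], "Gt (net, CDR)")
# Degrowth-style pathway: ~-0.5 %/yr GDP, absolute energy reduction, little CDR.
print("degrowth, low CDR  :", [round(x) for x in pathway(-0.005, 0.02, 0.04, cdr_gt_per_yr=1.0)], "Gt (net, CDR)")
```

With these illustrative numbers the two pathways end up with similar cumulative emissions, but the growth pathway leans on roughly ten times more CDR, which is the kind of reliance the circle sizes in the image above are meant to convey.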

A total of 18 scenarios were modelled under three main categories to reach 1.5C-2C:

    1. Degrowth and 'decent living energy', looking at low energy-GDP decoupling.

    2. Medium energy-GDP decoupling including approximated IPCC scenarios.

    3. High energy-GDP decoupling (strong-to-extreme technology pathways/energy efficiency driving separation between economic growth and emissions).

Mr Keyßer says: "This study demonstrates the viability of degrowth in minimising several key feasibility risks associated with technology-driven pathways, so it represents an important first step in exploring degrowth climate scenarios."

Professor Lenzen concludes: "A precautionary approach would suggest degrowth should be considered, and debated, at least as seriously as risky technology-driven pathways upon which the conventional climate policies have relied."

###