Tuesday, June 07, 2022

A 50% reduction in emissions by 2030 can be achieved. Here’s how

Energy and environmental researchers pooled their knowledge to provide recommendations to fulfill the United States’ climate pledge

Peer-Reviewed Publication

DOE/LAWRENCE BERKELEY NATIONAL LABORATORY

EV Charging Station for Electric Car Concept Image 

IMAGE: REDUCING EMISSIONS WILL REQUIRE MAJOR POLICY CHANGES SUCH AS DOUBLING THE AMOUNT OF ELECTRICITY GENERATED FROM RENEWABLES AND A WIDESPREAD SHIFT TO ELECTRIC VEHICLES.

CREDIT: CHOOCHART CHOOCHAIKUPT/ISTOCK

The United States has set an ambitious goal to reduce greenhouse gas (GHG) emissions by at least 50% by 2030. Are we on track to succeed?  

A new study by a team of scientists and policy analysts from across the nation suggests that there are multiple pathways to achieve this goal – but big commitments will need to be made, immediately.

“This study should give policy makers and other energy stakeholders some level of comfort by showing that everybody in the field is pointing in the same direction. The case for clean energy is stronger than ever before and our study shows that the 2030 emission target can be achieved,” said Nikit Abhyankar, one of the study’s authors and a scientist in the Electricity Markets & Policy Department at Lawrence Berkeley National Laboratory (Berkeley Lab). He notes that the most urgent actions will be to double the amount of renewable capacity built each year and to transition predominantly to electric vehicles within the next decade or so.

“With the right policies and infrastructure, we can reduce our emissions, while saving American consumers billions of dollars and generating new employment,” he said. 

Reducing GHG emissions by 50% by 2030 would put the United States on a path to limit global warming to 1.5 degrees Celsius, the target scientists say is required to avoid the worst consequences of the climate crisis.

CAPTION

The chart shows how the key economic sectors will contribute to achieving the U.S. goal of reducing greenhouse gas emissions by 50% by 2030. The electric sector (doubling the rate of renewable energy buildout) and the transport sector (selling predominantly EVs by 2030) would be the most critical to achieving this goal.

CREDIT

Nikit Abhyankar/Berkeley Lab


The study, published in Science, consolidates findings from six recently published techno-economic models that simulate U.S. energy system operations in comprehensive detail. According to the authors, the separate models all agree on four major points:

  • The majority of the country’s greenhouse gas emissions come from power generation and transportation, so to reduce overall emissions by 50%, the electricity grid needs to run on 80% clean energy (up from today’s 40%), and the majority of vehicles sold by 2030 need to be electric. Other important sources of GHG emissions reduction include electrification of buildings and industries.
  • The primary barrier to increased alternative energy use will not be cost but policy. A coordinated policy response between states and the federal government will be necessary to succeed.
  • Thanks to advances in wind, solar, and energy storage technologies, powering the electric grid with renewables will not be more expensive, and electric vehicles could save every household up to $1,000 per year in net benefits.
  • A clean-energy transition would reduce air pollution, prevent up to 200,000 premature deaths, and avoid up to $800 billion in environmental and health costs through 2050. Many of the health benefits will occur in communities of color and frontline communities that are disproportionately exposed to vehicle, power plant, and industrial pollution.

“Our study provides the first detailed roadmap for how the United States can reach its 50% greenhouse gas emissions-reduction target by 2030,” said lead author John Bistline, program manager in the Energy Systems and Climate Analysis Group at the Electric Power Research Institute. “This will require tripling the pace of historic carbon reductions, an ambitious but achievable target if stakeholders collaborate across all sectors. By comparing results across six independent models, we provide greater confidence about the policies and technology deployment needed to achieve near-term climate goals, laying the groundwork for an affordable, reliable, and equitable net-zero future.”

CAPTION

The U.S. electricity grid needs to be 80% clean (up from 40% currently) to reach the U.S. goal of reducing emissions by 50% by 2030. The chart shows the new electricity generation capacity buildout required to achieve this goal (right) and compares it with historical buildout rates (left). The U.S. will need to double its renewable energy buildout rate.

CREDIT

Nikit Abhyankar/Berkeley Lab


According to Abhyankar, who led the development of one of the six models, “By 2030, wind and solar, coupled with energy storage, can provide the bulk of the 80% clean electricity. The findings also show that generating the remaining 20% of grid power won’t require the creation of new fossil fuel generators.” He noted that existing gas plants, used infrequently and combined with energy storage, hydropower, and nuclear power, are sufficient to meet demand during periods of extraordinarily low renewable energy generation or exceptionally high electricity demand. “And if the right policies are in place, the coal and gas power plants that currently provide the majority of the nation’s electricity would recover their initial investment, thereby avoiding the risk of cost under-recovery for investors.”

“Since announcing the nation’s emissions reduction pledge at the 2021 United Nations climate conference, the United States has taken steps in the right direction,” said Abhyankar. “But a lot still needs to happen. What we are hoping is that this study will give some level of a blueprint of how it could be done.”

The other models used for this study were developed by the Electric Power Research Institute, Environmental Defense Fund, Natural Resources Defense Council, and the MIT Joint Program on the Science and Policy of Global Change.

 

# # #

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 14 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy's Office of Science.

DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.

Close encounter more than 10,000 years ago stirred up spirals in accretion disk

Peer-Reviewed Publication

CHINESE ACADEMY OF SCIENCES HEADQUARTERS

A schematic view of the history of the accretion disk and the intruding object 

IMAGE: A SCHEMATIC VIEW OF THE HISTORY OF THE ACCRETION DISK AND THE INTRUDING OBJECT. THE THREE PLOTS STARTING FROM THE BOTTOM LEFT ARE SNAPSHOTS FROM THE NUMERICAL SIMULATION, DEPICTING THE SYSTEM AT THE TIME OF THE FLYBY EVENT, 4000 YEARS LATER, AND 8000 YEARS AFTER THAT, RESPECTIVELY. THE TOP RIGHT IMAGE IS FROM THE ALMA OBSERVATIONS, SHOWING THE DISK WITH SPIRALS AND TWO OBJECTS AROUND IT, CORRESPONDING TO THE SYSTEM 12,000 YEARS AFTER THE FLYBY EVENT.

CREDIT: SHAO

Dr. LU Xing, an associate researcher at the Shanghai Astronomical Observatory (SHAO) of the Chinese Academy of Sciences, and collaborators from Yunnan University, the Harvard-Smithsonian Center for Astrophysics, and the Max Planck Institute have used high-resolution observational data from the Atacama Large Millimeter/submillimeter Array (ALMA) to discover a massive protostellar disk in the Galactic Center and determine how its spiral arms were formed.

The group's research shows that this disk was perturbed by a close encounter with a nearby object, leading to the formation of the spiral arms. This finding demonstrates that the formation of massive stars may be similar to that of lower-mass stars, through accretion disks and flybys.

The results were published in Nature Astronomy on May 30.

During the formation of stars, accretion disks arise around newborn stars. These accretion disks, also known as "protostellar disks," are an essential component in star formation. Accretion disks continuously feed gas into protostars from the environment. In this sense, they are stellar cradles where stars are born and raised.

For massive protostars, especially early O-type ones of more than 30 solar masses, however, the role of accretion disks in their formation has not been clear.

At a distance of about 26,000 light years from Earth, the Galactic Center is a unique and important star-forming environment. Besides the supermassive black hole Sgr A*, the Galactic Center contains a huge reservoir of dense molecular gas, mostly in the form of molecular hydrogen (H2), which is the raw material for star formation. The gas starts to form stars once gravitational collapse is initiated.

However, the environment in the Galactic Center is unique, with strong turbulence and strong magnetic fields as well as tidal forces from Sgr A*, all of which substantially affect star formation in this region.

Since the distance between the Galactic Center and Earth is huge and there is complicated foreground contamination, direct observations of star-forming regions in the Galactic Center have been challenging.

The research team led by Dr. LU used ALMA's long-baseline observations to achieve a resolution of 40 milliarcseconds. To get an idea of how fine that resolution is, it would allow an observer in Shanghai to easily spot a football in Beijing.

With these high-resolution, high-sensitivity ALMA observations, the researchers discovered an accretion disk in the Galactic Center. The disk has a diameter of about 4,000 astronomical units and surrounds a forming, early O-type star with a mass about 32 times that of the Sun. This system is among the most massive protostars with accretion disks and represents the first direct imaging of a protostellar disk in the Galactic Center.
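
As a rough back-of-envelope check of those numbers, the small-angle definition of the parsec (a source at a distance of d parsecs spans 1 AU per arcsecond) converts the 40-milliarcsecond resolution into a physical scale. A minimal Python sketch, using only figures quoted above:

```python
# Physical scale resolved by ALMA at the Galactic Center (back-of-envelope).
# By the definition of the parsec, size [AU] = distance [pc] * angle [arcsec].

LY_PER_PC = 3.2616            # light years per parsec
distance_ly = 26_000          # distance to the Galactic Center (quoted above)
resolution_mas = 40           # ALMA long-baseline resolution (quoted above)

distance_pc = distance_ly / LY_PER_PC
resolution_arcsec = resolution_mas / 1000.0
resolution_au = distance_pc * resolution_arcsec

# ~320 AU: fine enough to resolve the ~4,000 AU disk roughly a dozen times over.
print(f"resolved scale: {resolution_au:.0f} AU")
```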

The discovery suggests that massive early O-type stars go through a formation phase involving accretion disks, and that this conclusion holds even in the unique environment of the Galactic Center.

What is more interesting is that the disk clearly displays two spiral arms. Such arms are often found in spiral galaxies but are rarely seen in protostellar disks. In general, spiral arms emerge in accretion disks due to fragmentation induced by gravitational instability. However, the disk discovered in this research is hot and turbulent, and thus able to support itself against its own gravity, making such instability unlikely.

In trying to explain this phenomenon, the researchers proposed an alternative explanation: the spirals were induced by an external perturbation. They proposed this explanation after detecting an object of about three solar masses—possibly the source of the perturbation—several thousand astronomical units away from the disk.

To verify this proposition, the researchers calculated several dozen possible orbits of this object. They found that only one of these orbits could perturb the disk to the observed level. They subsequently carried out a numerical simulation on the high-performance supercomputing platform of the Shanghai Astronomical Observatory to trace the trajectory of the intruding object. The scientists were able to successfully reproduce the entire history of the object flying by the disk more than 10,000 years ago, when it would have stirred up spirals in the disk.

"The nice match among analytical calculations, the numerical simulation, and the ALMA observations provides robust evidence that the spiral arms in the disk are relics of the flyby of the intruding object," said Dr. LU.

This finding clearly demonstrates that accretion disks at early evolutionary stages of star formation are subject to frequent dynamic processes such as flybys, and that these processes can substantially influence the formation of stars and planets.

Interestingly, flybys may have occurred in our own solar system too: A binary stellar system known as Scholz's Star flew by the solar system about 70,000 years ago, probably penetrating through the Oort cloud and sending comets to the inner solar system.

The current study suggests that for more massive stars, especially in the high-stellar-density environment around the Galactic Center, such flybys should also be frequent. "The formation of this massive protostar is similar to its lower-mass cousins like the Sun, with accretion disks and flyby events involved. Although stellar masses are different, certain physical mechanisms in star formation could be the same. This provides important clues to solving the mystery of massive star formation," said Dr. LU.

Which forces control the elevation of mountains?

It depends, say scientists, who propose a novel classification scheme in a new study

Peer-Reviewed Publication

GFZ GEOFORSCHUNGSZENTRUM POTSDAM, HELMHOLTZ CENTRE

Puna Plateau in the Andes 

IMAGE: THE PUNA PLATEAU IS PART OF THE CENTRAL ANDES, A MOUNTAIN BELT IN WHICH THE ELEVATION IS MAINLY DRIVEN BY LITHOSPHERIC STRENGTH AND TECTONIC FORCES.

CREDIT: MITCHELL D'ARCY FOR GFZ

Scientists have come up with a new classification scheme for mountain belts that uses just a single number to describe whether the elevation of a mountain belt is controlled mainly by weathering and erosion or by properties of the Earth’s crust, i.e., the lithospheric strength: the “Beaumont number” (Bm). It is named after Chris Beaumont, a scientist who, together with his team, developed coupled models of surface processes and tectonic forces. The scientists report their findings in the current issue of Nature.

A Beaumont number between 0.4 and 0.5 means that the mountains are in a so-called flux steady state, in which mountain growth is controlled by tectonic forces and lithospheric strength, balanced by weathering processes, as in Taiwan, for example. With a Bm value lower than 0.4, mountains are also in a flux steady state, but with erosion as the controlling factor, as in the Southern Alps of New Zealand. A Beaumont number above 0.5 means that the mountains are still growing (non-steady state), with lithospheric strength controlling the process. Examples of this type are the Himalaya-Tibet mountains and the Central Andes.
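
Expressed as a decision rule, the scheme above reduces to two thresholds. The Python sketch below encodes only the Bm ranges and example belts named in this article; the sample Bm values in the demonstration loop are illustrative placeholders consistent with those ranges, not published figures:

```python
def classify_mountain_belt(bm: float) -> str:
    """Classify a mountain belt by its Beaumont number (Bm), per the scheme above."""
    if bm < 0.4:
        # Flux steady state with erosion as the controlling factor
        # (e.g., the Southern Alps of New Zealand).
        return "flux steady state, erosion-controlled"
    elif bm <= 0.5:
        # Flux steady state controlled by tectonic forces and lithospheric
        # strength, balanced by weathering (e.g., Taiwan).
        return "flux steady state, tectonics/strength-controlled"
    else:
        # Still growing (non-steady state), with lithospheric strength
        # controlling the process (e.g., Himalaya-Tibet, Central Andes).
        return "non-steady state, strength-controlled"

# Placeholder Bm values chosen to fall in the ranges the article assigns.
for name, bm in [("Southern Alps", 0.3), ("Taiwan", 0.45), ("Central Andes", 0.8)]:
    print(f"{name}: {classify_mountain_belt(bm)}")
```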

This classification resolves a long-standing question: whether mountain elevation is controlled by tectonic forces and the strength of the Earth’s crust or by weathering processes. The new study says it can be either – depending on geographic location, climate and underground properties.

The team of scientists led by Sebastian G. Wolf of Bergen University in Norway used a new coupled surface process and mantle-scale tectonic model for their study by combining the thermomechanical tectonic model FANTOM with the landscape evolution model FastScape. Thus, they were able to reconcile high erosion rates in some active orogens with long-term survival of mountain belts for hundreds of millions of years.

Jean Braun of the GFZ German Research Centre for Geosciences, who co-authored the paper, says: “With our Beaumont number we can determine to which proportion tectonics, climate, and crustal strength control the height of mountain belts. And, for most mountain belts, this can be done without complex measurements or assumptions; all that is needed is a knowledge of the rate of convergence obtained from present-day plate velocities or plate reconstructions, the height of the mountain obtained from a topographic map and the widening rate obtained from the geological record. In a nutshell: Whether a mountain is short or tall is the product of slow or fast convergence, wet or dry climate, or strong or weak crust.” The Beaumont number shows which of these three factors is dominating.

Original study: Sebastian G. Wolf, Ritske S. Huismans, Jean Braun & Xiaoping Yuan: “Topography of mountain belts controlled by rheology and surface processes”; in Nature. DOI: 10.1038/s41586-022-04700-6

Link: https://www.nature.com/articles/s41586-022-04700-6


What oxytocin can tell us about the evolution of human prosociality

Variants unique to modern humans in more than 70% of the population

Peer-Reviewed Publication

UNIVERSITY OF BARCELONA

Human prosociality 

IMAGE: THIS RESEARCH IS PART OF CONSTANTINA THEOFANOPOULOU'S DOCTORAL THESIS CARRIED OUT UNDER THE CO-SUPERVISION OF CEDRIC BOECKX, ICREA RESEARCHER AT THE INSTITUTE OF COMPLEX SYSTEMS AT THE UB (UBICS) AND ERICH D. JARVIS, PROFESSOR AT ROCKEFELLER UNIVERSITY.

CREDIT: UNIVERSITY OF BARCELONA

Modern humans are characterized by their prosociality, a broad term that encompasses intraspecies empathy, social tolerance, cooperation and altruism. These facets of social cognition have been associated with variations in the oxytocin and vasotocin genes (OT and VT) and their receptors (OTR and VTR). To shed light on the genetic basis of this behaviour, scientists from the University of Barcelona (UB) and Rockefeller University carried out a new study comparing the available genomic sequences of these genes between modern humans, non-human primate species (e.g., chimpanzees, bonobos, and macaques) and, for the first time, archaic humans, using all the available genomes of Neanderthals and Denisovans.

In the study, published in the journal Comprehensive Psychoneuroendocrinology, the researchers identified several sites in which modern humans differed from both archaic humans and non-human primates, and others where both modern and archaic humans differed from non-human primates.

“We used an interdisciplinary approach to understand the evolution of hominid prosociality through the lens of the oxytocin and vasotocin receptors, where we combined evidence from modern and archaic genomics, population genetics, transcriptomics, and behavioural and neuroscientific studies, among other methods. These results can shed light on the genetics underlying possible sociality differences identified between modern humans and archaic humans, as well as the similarities between the modern human and bonobo social behaviour”, said first author Constantina Theofanopoulou. This research is part of her doctoral thesis carried out under the co-supervision of Cedric Boeckx, ICREA researcher at the Institute of Complex Systems at the UB (UBICS) and Erich D. Jarvis, professor at Rockefeller University.

Variants unique to modern humans in more than 70% of the population

Considering the evidence on modern human prosociality and on the involvement of the oxytocin and vasotocin genes in social behaviours, the researchers hypothesized that the evolution of these genes might elucidate the genetic basis of the evolution of hominin prosociality. With this aim in mind, the study explored the differences between modern humans, archaic humans and non-human primates in polymorphic heterozygous sites in the human genome – locations where at least two alternative sequences are found in a population. “Past studies that compared the entire modern human genome with the Neanderthal or the chimpanzee genomes have focused on changes that are fixed or nearly fixed in modern humans. This has led them to identify sites where, for example, all Neanderthals had Adenine (one of the four nucleotides that, with guanine, cytosine and thymine, form DNA) and nearly all modern humans (say, 98%) have Guanine. In this study, we searched for differences in locations where, by definition, not all modern humans share the same nucleotide, namely polymorphic sites, where, for example, 70% of the modern human population has Adenine and 30% Cytosine”, adds Theofanopoulou.
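
To make that search criterion concrete, here is a toy Python sketch of the filtering logic just described. The site names, alleles, and frequencies are entirely hypothetical placeholders; the actual analysis was run on published modern human, archaic human, and non-human primate genomes:

```python
# Toy filter: keep polymorphic sites where one modern-human variant is absent
# from archaic humans and non-human primates yet common (>70%) in modern humans.

# site_id: (modern-human allele frequencies, archaic alleles, primate alleles)
sites = {
    "site_hypothetical_1": ({"A": 0.72, "C": 0.28}, {"C"}, {"C"}),
    "site_hypothetical_2": ({"G": 0.55, "T": 0.45}, {"G"}, {"G"}),
}

def unique_common_variants(sites, threshold=0.70):
    hits = []
    for site_id, (freqs, archaic, primate) in sites.items():
        for allele, freq in freqs.items():
            if freq > threshold and allele not in archaic and allele not in primate:
                hits.append((site_id, allele, freq))
    return hits

# Only site 1 qualifies: 'A' is common in modern humans and absent elsewhere.
print(unique_common_variants(sites))
```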

The researchers identified five sites in the oxytocin and vasotocin receptors where modern humans are unique in one of their two (or more) variants compared to archaic humans and non-human primates, and which are at the same time found in more than 70% of the modern human population. Next, they conducted functional and frequency analyses to establish whether the variants are relevant. They performed a range of analyses on the five sites and found that some of the variants are highly functional, indicating that they have an effect on the molecular function of the proteins activated by these genes.

The researchers also found that these sites are encountered in genome regions that are active in the brain, particularly in the cingulate gyrus, a brain region involved in social cognition-relevant pathways. Moreover, all these sites have been associated in other studies with a plethora of social behaviours or social deficits, such as autism, attention deficit hyperactivity disorder (ADHD), aggression, and so on.

These findings may help to explain some of the social differences between modern humans and what we presume to know about the social behaviours of Neanderthals and Denisovans. “For example, they might be relevant to the smaller social groups attributed to Neanderthals and Denisovans or to the decreased modern human androgenization. They might also be relevant to a different social structure, i.e., Neanderthals have been linked to a polygynous social structure and a higher level of male–male competition than most contemporary modern human populations”, says Constantina Theofanopoulou.

Variants present only in modern and archaic humans

The study also found two sites on the oxytocin receptor under positive selection in modern and archaic humans: that is to say, modern and archaic humans showed a variant that was not present in any non-human primate. These sites are found at very high percentages in the modern human population (in this case, more than 85%). The same sites have also been associated with a great many social behaviours or deficits, and one of them was predicted to be a highly functional site in their regulation analyses. “The sites that are unique in both us and archaic humans versus non-human primates can elucidate the genetic underpinnings of the progressive social tolerance needed for the intensive cultural transmission of technological innovations (e.g., fire use) in the evolution of humankind, as well as for the reduced aggression indicated by several markers in early hominid evolution, such as the reduction of male canine size and the accelerated demographic success”, adds Theofanopoulou.

Convergent sites with bonobos

Lastly, the researchers found three sites where modern humans and bonobos, a primate species that shows convergence of prosocial behaviours with humans, have the same nucleotide. “The convergent sites in modern humans and bonobos could be insightful for understanding the posited similarities in prosociality, social tolerance and cooperation between us and bonobos, and the differences of both compared to chimpanzees. For example, bonobos outperform chimpanzees on tasks relevant to social causality or theory of mind and are more attentive to the face and eyes, suggestive of higher empathic sensitivity”, notes the researcher.

All the sites identified in this study have also been independently associated with disorders that include social deficits, such as autism spectrum disorder (ASD). “Understanding developmental disorders through an evolutionary lens can help us achieve what we call an evo-devo (evolutionary and developmental biology) understanding of these disorders. If indeed ‘ontogeny recapitulates phylogeny’, then deciphering our evolutionary trajectory may shed light on new genetic spots for clinical research that might, in turn, lead to earlier disorder diagnosis”, highlights Constantina Theofanopoulou.

Future research in larger sample sizes

The next step in this research would be to test these results in larger sample sizes. This objective, as Theofanopoulou says, “is easier” to achieve in non-human primates such as chimpanzees and bonobos, but “is more difficult in the case of Neanderthals or Denisovans”, since obtaining access to more archaic genomes depends on archaeological finds, among other factors. “Another step is to widen the scope of the research and include more genes of the oxytocin pathway, or other genes highlighted in human evolution, such as dopamine”, concludes the researcher.

 

Brain scans remarkably good at predicting political ideology

Study provides evidence of biological roots of partisan views

Peer-Reviewed Publication

OHIO STATE UNIVERSITY


Brain scans of people taken while they performed various tasks – and even did nothing – accurately predicted whether they were politically conservative or liberal, according to the largest study of its kind.

Researchers found that the “signatures” in the brain revealed by the scans were as accurate at predicting political ideology as the strongest predictor generally used in political science research, which is the ideology of a person’s parents.

“Can we understand political behavior by looking solely at the brain? The answer is a fairly resounding ‘yes,’” said study co-author Skyler Cranmer, the Phillips and Henry Professor of Political Science at The Ohio State University.

“The results suggest that the biological and neurological roots of political behavior run much deeper than we previously thought.”

The study, published recently in the journal PNAS Nexus, is the largest to date to use functional magnetic resonance imaging (fMRI) scans of the brain to study political ideology.

It is also one of the few to examine functional connectivity in connection with ideology – a whole-brain approach that examined which parts of the brain showed similar patterns of activity at the same time while participants performed specific tasks, indicating that those parts are communicating with each other.

The researchers used state-of-the-art artificial intelligence techniques and the resources of the Ohio Supercomputer Center to analyze the scans. They found correlations between the scan results and the participants’ reports of their ideology on a six-point scale from “very liberal” to “very conservative.”
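
For readers unfamiliar with this kind of analysis, the Python sketch below shows its general shape: correlate regional fMRI time series to form a functional connectivity matrix, then use the pairwise connectivity values as features for a predictive model. The data are random placeholders and the ridge model is a simple stand-in, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_subjects, n_regions, n_timepoints = 174, 100, 200  # 174 matches the study's sample

features = []
for _ in range(n_subjects):
    ts = rng.standard_normal((n_timepoints, n_regions))   # placeholder BOLD series
    connectivity = np.corrcoef(ts.T)                      # region-by-region correlations
    features.append(connectivity[np.triu_indices(n_regions, k=1)])  # unique edges
X = np.asarray(features)
y = rng.integers(1, 7, size=n_subjects)  # 1 = "very liberal" ... 6 = "very conservative"

model = RidgeCV().fit(X, y)  # stand-in for the study's AI techniques
print("in-sample R^2:", model.score(X, y))
```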

Data came from the Ohio State University Wellbeing project, which involved 174 healthy adults who performed standard tasks often used in scientific experiments while in an fMRI scanner.

“None of the eight tasks was designed to elicit partisan responses,” said study co-author Seo Eun Yang, now an assistant professor of political science at Northeastern University, who did the work as a doctoral student at Ohio State.

“But we found the scans from all eight tasks were related to whether they identified as liberals or conservatives.”

In fact, even when participants were asked to sit quietly and think of nothing in particular, the resulting scans showed a relationship to political ideology, said co-author James Wilson, assistant professor of psychiatry and biostatistics at the University of Pittsburgh School of Medicine.

“Even without any stimulus at all, functional connectivity in the brain can help us predict a person’s political orientation,” Wilson said.

While the scans from all eight tasks were predictive of the participants’ ideology, three tasks had particularly strong links.

One was an empathy task, in which participants were shown photos of people with neutral, happy, sad and fearful expressions. The second task examined episodic memory, and the third was a reward task in which participants could win or lose money based on how quickly they pushed a button.

Only the scans of the reward task could predict political extremism – those who said they were very conservative or very liberal.  And only the empathy (emotional faces) task was significantly associated with moderate ideology.

“More work needs to be done to understand the relationship of reward decision-making with extreme political views,” Wilson said.

“The results with the empathy task suggest that political thought may be closely tied to emotion and emotional response.”

While this study did find a link between the brain signatures and political ideology, it can’t explain what causes what, Cranmer said.

“What we don’t know is whether that brain signature is there because of the ideology that people choose or whether people’s ideology is caused by the signatures we found,” he said.

“It also could be a combination of both, but our study does not have the data to address this question.”

The fact that the brain scans did as well at predicting ideology as the standard of parental ideology was impressive in itself, the researchers said.  But when the brain results were combined with demographic and socioeconomic indicators, such as age, gender, income and education, the resulting model did even better at predicting ideology than parental ideology.

“Functional connectivity and all survey-based responses provided the strongest predictive capabilities of any model we considered,” Yang said.

Cranmer noted how this study was different from others that have also used brain scans to examine ideology.

“We looked at the brain as a complex system of regions interacting with each other to produce these behaviors. Most other studies have looked at one region of the brain in isolation to see how it was activated or not activated when they were subjected to political stimuli,” he said.

This study showed that activations of specific regions of the brain – the amygdala, inferior frontal gyrus and the hippocampus – were most strongly associated with political affiliation.

Study co-author Zhong-Lin Lu, now at New York University, ran the Ohio State Wellbeing project while director of the university’s Center for Cognitive and Behavioral Brain Imaging.

The research was funded by the National Science Foundation and the National Institutes of Health.

A new framework for web scraping data to ensure its validity for use in marketing studies

News from the Journal of Marketing

Peer-Reviewed Publication

AMERICAN MARKETING ASSOCIATION

Researchers from Erasmus University Rotterdam, Tilburg University, INSEAD, and Oxford University published a new paper in the Journal of Marketing that proposes a methodological framework focused on enhancing the validity of web data.

The study, forthcoming in the Journal of Marketing, is titled “Fields of Gold: Scraping Web Data for Marketing Insights” and is authored by Johannes Boegershausen, Hannes Datta, Abhishek Borah, and Andrew T. Stephen.

The recent ruling of the Ninth Circuit in HiQ Labs v. LinkedIn underscores the importance of navigating legal challenges when using web scraping to collect data for academic research. While it may be permissible to collect information from publicly available sites, researchers still need to be cautious about how they design their extraction software. For example, collecting information from publicly available user profiles may trigger privacy concerns in some jurisdictions, prompting researchers to anonymize their data already during collection.
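
As an illustration of that kind of anonymization at collection time, the sketch below hashes user identifiers before anything is stored. The URL, page structure, and CSS selector are hypothetical, and requests/BeautifulSoup are simply common Python scraping tools, not ones prescribed by the paper:

```python
import hashlib
import requests
from bs4 import BeautifulSoup

def pseudonymize(user_id: str, salt: str = "project-specific-salt") -> str:
    """Replace a raw identifier with a salted one-way hash before storage."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

resp = requests.get("https://example.com/public-profiles", timeout=30)  # hypothetical URL
soup = BeautifulSoup(resp.text, "html.parser")

records = []
for profile in soup.select("div.profile"):  # hypothetical selector
    records.append({
        "user": pseudonymize(profile.get("data-user-id", "")),  # never keep the raw ID
        "bio_length": len(profile.get_text(strip=True)),        # derived, non-identifying
    })
print(records)
```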

While marketing researchers increasingly employ web data, the idiosyncratic and sometimes insidious challenges in its collection have received limited attention. How can researchers ensure that the datasets generated via web scraping and APIs are valid? This research team developed a novel framework that highlights how addressing validity concerns requires the joint consideration of idiosyncratic technical and legal/ethical questions.

The authors say that “Our framework covers the broad spectrum of validity concerns that arise along the three stages of the automatic collection of web data for academic use: selecting data sources, designing the data collection, and extracting the data. In discussing the methodological framework, we offer a stylized marketing example for illustration. We also provide recommendations for addressing challenges researchers encounter during the collection of web data via web scraping and APIs.”

The article further provides a systematic review of more than 300 articles using web data published in the top five marketing journals. Using this review, the researchers explain how web data has advanced marketing thought. Understanding the richness and versatility of web data is invaluable for scholars curious about integrating it into their research programs.

Interested researchers can access the database developed for this review on the companion website at https://web-scraping.org/. This website also features additional useful resources and tutorials for collecting web data via web scraping and APIs.

The researchers add, “We use our methodological framework and typology to unearth new and underexploited ‘fields of gold’ associated with web data. We seek to demystify the use of web scraping and APIs and thereby facilitate broader adoption of web data across the marketing discipline. Our Future Research section highlights novel and creative avenues of using web data that include exploring underutilized sources, creating rich multi-source datasets, and fully exploiting the potential of APIs beyond data extraction.”

Full article and author contact information available at: https://doi.org/10.1177/00222429221100750 

About the Journal of Marketing 

The Journal of Marketing develops and disseminates knowledge about real-world marketing questions useful to scholars, educators, managers, policy makers, consumers, and other societal stakeholders around the world. Published by the American Marketing Association since its founding in 1936, JM has played a significant role in shaping the content and boundaries of the marketing discipline. Christine Moorman (T. Austin Finch, Sr. Professor of Business Administration at the Fuqua School of Business, Duke University) serves as the current Editor in Chief.
https://www.ama.org/jm

About the American Marketing Association (AMA) 

As the largest chapter-based marketing association in the world, the AMA is trusted by marketing and sales professionals to help them discover what is coming next in the industry. The AMA has a community of local chapters in more than 70 cities and 350 college campuses throughout North America. The AMA is home to award-winning content, PCM® professional certification, premier academic journals, and industry-leading training events and conferences.
https://www.ama.org

A ‘Goldilocks amount’ of time spent online could be good for teenagers’ wellbeing

Poor mental wellbeing in adolescents associated with low as well as high levels of digital engagement in study of 6,000 young people

Peer-Reviewed Publication

TRINITY COLLEGE DUBLIN

New research from the Department of Sociology in Trinity College Dublin has found further evidence of a relationship between online engagement and mental wellbeing in teenagers. The study, published recently in the journal ‘Computers in Human Behavior’, contributes to mounting international evidence on the dangers of high levels of digital media use.

Additionally, the researchers found that in today’s connected world low engagement with digital media is also associated with poor mental health outcomes for adolescents who spend less time online than their peers. This finding supports the ‘goldilocks’ hypothesis – that digital media use at moderate levels is not intrinsically harmful and there is a point between low and high use that is ‘just right’ for young people.  

This is the first time the ‘goldilocks’ theory has been examined in Irish teenagers and young adults. It is also the first study to integrate both time spent online and online behaviours when examining associations between digital media and mental wellbeing.

Professor Richard Layte, Professor of Sociology and co-author on the paper, said:

“Evidence is mounting internationally that online engagement among adolescents may be damaging for mental well-being but the evidence is mixed. Our work provides fresh insights on the impact of digital engagement at the age of 17/18 and the results provide worrying evidence of real harms that require urgent action.”

“There is a simple narrative out there that more is worse. It is important to emphasise that online engagement is now a normal channel of social participation and non-use has consequences. Our findings also raise the possibility that moderate use is important in today’s digital world and that low levels of online engagement carry their own risks. The questions for researchers now are: how much is too much, and how little is too little?”

The research, drawing on longitudinal data from the Growing Up in Ireland study, examined the association between online engagement and mental wellbeing in over 6,000 young people, first at the age of 13 and again at the age of 17/18.

The researchers asked participants to report the time they spent online as well as the activities they engaged in: online messaging, sharing videos and pictures, school or college work, watching movies and listening to music. Mental wellbeing was assessed by questions investigating emotional, behavioural and peer issues.

The study characterised each young person’s online behaviour based on the length of time spent online as well as the types of online behaviours engaged in. Adjusting for prior psychiatric disorders and symptoms at the ages of 9 and 13, the study found that high engagement with digital media strongly predicted worse mental health outcomes for both boys and girls. Low use of digital media was likewise associated with worse mental health for both boys and girls, and was also predictive of peer problems for girls.
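
One simple way to picture the resulting ‘goldilocks’ pattern is a model in which wellbeing peaks at moderate use and falls off at both extremes. The Python sketch below fits a quadratic to simulated data purely as an illustration; the study itself analysed real cohort data with latent class methods, not this toy model:

```python
import numpy as np

rng = np.random.default_rng(1)
hours_online = rng.uniform(0, 8, 6000)  # simulated daily hours online
# Simulated wellbeing: best near ~3 hours/day, worse at both extremes.
wellbeing = -(hours_online - 3.0) ** 2 + rng.normal(0, 1.5, 6000)

# Fit wellbeing ~ b0 + b1*hours + b2*hours^2 by ordinary least squares.
X = np.column_stack([np.ones_like(hours_online), hours_online, hours_online**2])
b0, b1, b2 = np.linalg.lstsq(X, wellbeing, rcond=None)[0]

# The fitted optimum recovers the simulated 'just right' point (~3 hours).
print(f"estimated optimum: {-b1 / (2 * b2):.1f} hours/day")
```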

Dr Ross Brannigan, lead author of the study and a former postdoctoral researcher in Trinity’s Department of Sociology, said:

“This study is novel in that it considers the importance of both time and online behaviours when examining associations between digital media and mental wellbeing. We found clear distinctions between groups spending similar time online, but differing in their behaviours online. This indicates the importance of considering both time and behaviours online as well as the quality of these behaviours, for example passive compared to active behaviours, or the types of behaviours such as social, educational, entertainment.”

“Digital media and online usage is a controversial topic when it comes to its effects on mental health, with no real consistency of results overall. While these results are not causal or deterministic, our findings are an important first step on the path to revealing why these relationships exist. It will now be important to build on these findings and further investigate why digital media engagement may be related to mental wellbeing.”

This study is part of the TeenPath Project, a collaborative project between the Department of Sociology in Trinity College Dublin and the Department of Public Health and Epidemiology in The School of Population Health, Royal College of Surgeons in Ireland. 

The paper “Digital engagement and its association with adverse psychiatric symptoms: A longitudinal cohort study utilizing latent class analysis” is available to read here.