Saturday, August 19, 2023

How Airbnb is fuelling gentrification in Toronto

A new study sheds light on how short-term rentals like Airbnb make housing less affordable
Credit: Shutterstock

The average asking price for a rental unit in Canada reached $2,042 in June, marking a 7.5 percent increase from 2022. Metropolitan districts are particularly affected by rising rental costs, with some local families forced to relocate due to a lack of affordable housing.

While several factors may contribute to this, some have pointed to Airbnb as one of the reasons for the rental crisis. Airbnb says it is not the cause of the housing affordability crisis.

Despite significant public interest in how short-term rentals like Airbnb might make housing less affordable, empirical evidence of exactly how, and to what extent, this is happening is sparse.

Our preliminary study of Toronto's rental market (to be submitted later this summer to the Social Science Research Network, an open-access repository of academic research papers) used data from the Toronto Regional Real Estate Board and Airbnb listings from 2015 to 2020. It suggested there were two ways Airbnb was affecting the rental market during this period: reducing the number of available rentals and contributing to the gentrification of neighborhoods.

How Airbnb may lead to gentrification

Short-term rentals, like those offered by Airbnb, bring in outsiders, often with little regard for local community norms, leading to conflicts and complaints.

While dealing with these temporary disturbances is usually possible with traditional policing and communication, such short-term rentals can have lasting impacts on neighborhoods.

When homeowners convert their properties into Airbnb rentals, it may reduce the long-term rental supply in their neighborhoods. This could increase rental prices, stretching the budget of lower-income families.

The lucrative short-term market may also attract new housing investments targeted at Airbnb rentals. This could further squeeze local families, who may find themselves in bidding wars. Eventually, the economic pressure could force these families out of their neighborhoods, leaving only the wealthier population in place.

Property values could increase as vacated homes are filled by wealthier families moving in from outside, who can afford the high prices. Over time, the neighborhood could change to comprise mostly relatively wealthier citizens in a process called gentrification.

A graph comparing a) excess supply in the long-term rental market to b) the ratio of new Airbnb listings relative to the supply of long-term rentals. Credit: Iman Sadeghi and Sourav Ray, Author provided

Is Airbnb driving up prices in Toronto?

With 6.6 million active listings spanning over 220 countries and 100,000 cities, Airbnb offers three types of accommodations: entire homes or apartments, private rooms and shared rooms.

Our analysis focused on the entire homes or apartments category. In the time period of the study, owners of these accommodations were able to choose between the long-term and short-term rental markets, but those who only rented out a portion of their residence were less likely to be part of the long-term market.

We found that Airbnb rentals can squeeze out long-term rentals in neighborhoods. As the number of Airbnb rentals in a neighborhood increased, the availability of long-term rentals decreased and vice versa.

On average, we estimate that a one percent increase in Airbnb listings per square kilometer in a district is associated with a 0.09 percent increase in long-term rental rates. A similar study conducted in the United States estimated an average increase of 0.018 percent. While the numbers may not be directly comparable, since one is for a metropolitan area and the other is for a whole country, they are indicative of the potential impact.

We found evidence that Airbnb may be leading to higher potential rent income for property owners. This difference in income between the potential short-term rentals and traditional long-term rentals, known as the rent gap, draws investors to properties that can be used for short-term rentals.

The reduced availability of long-term rentals can lead to bidding wars for housing, which can push rents even higher. As telltale evidence, we found that a 10 percent increase in this rent gap is associated with a 3.1 percent surge in long-term rental prices. This is equivalent to an $80 monthly rent hike for the average one-bedroom property in Toronto.
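The arithmetic behind these figures can be sketched in a few lines of Python. The 0.31 elasticity (a 3.1 percent rent surge per 10 percent rent-gap increase) comes from the study; the average one-bedroom rent is a hypothetical figure chosen so that a 3.1 percent surge works out to roughly the reported $80 per month:

```python
def rent_increase(avg_monthly_rent: float,
                  rent_gap_change_pct: float,
                  elasticity: float = 0.31) -> float:
    """Estimated monthly rent hike for a given percent change in the rent gap.

    elasticity is the percent change in long-term rent per one percent
    change in the rent gap (the study reports a 3.1 percent surge for a
    10 percent rent-gap increase, i.e. 0.31).
    """
    rent_surge_pct = elasticity * rent_gap_change_pct
    return avg_monthly_rent * rent_surge_pct / 100.0

# Hypothetical average one-bedroom rent in Toronto:
avg_rent = 2580.0
hike = rent_increase(avg_rent, rent_gap_change_pct=10.0)
print(f"A 10% rent-gap increase implies roughly ${hike:.0f} more per month")
```

The same function can be rerun with any district's average rent to translate the study's percentage estimate into dollars.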

These results offer tentative evidence of the potential impact of Airbnb on long-term rental rates during the time period of the study.

The average rent gap in Toronto from 2015 to 2020. Credit: Iman Sadeghi and Sourav Ray, Author provided

Mixed social impact

Despite evidence that Airbnb may be associated with rising rents, its broader social impact remains controversial.

For homeowners, Airbnb offers a new income source. Travelers can boost local employment opportunities as retailers, restaurants and other businesses cater to their needs. A flow of young people can energize neighborhoods with their joie de vivre and creativity.

Yet affordable housing is a basic need for our society. With almost 40,000 total listings in Toronto, Vancouver and Montréal, Airbnb is a big player in the economy, but it is only one part of the larger picture affecting the availability of affordable housing.

Attempts to mitigate Airbnb's effect on housing affordability have had challenges. Toronto's short-term rental bylaw, which was upheld in 2019, limits Airbnb stays in principal residences to a maximum of 180 days per year. The city subsequently began enforcing the licensing and registration of short-term rentals in 2021.

Narrowly focused policy interventions may not only be ineffective, but may have unexpected negative impacts. In fact, there is also evidence that restricting Airbnb rentals reduces the development of new housing units, leading to less housing availability. These factors illustrate how Airbnb is part of a bigger picture and addressing this complex issue will require more studies and creative policy measures.

This is an updated version of a story originally published on Aug. 13, 2023. The updated version makes clear the context of the research cited in the article is for the period 2015-20 only and does not analyze the rental market since then.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


'Forever chemicals'? Maybe not

Heavily modified shipping containers at EDL's research and development facility in Henderson, Auckland, house the company's patented Mechanochemical Destruction (MCD) reactors. Credit: EDL Ltd

Dangerous "forever chemicals" left in the soil from firefighting foam could be destroyed by grinding, according to a proof-of-concept study by University of Auckland scientists collaborating with the U.S. Environmental Protection Agency.

"Ball milling" appears viable for decontaminating soil from airports, refineries, and other sites around the world where the foam was used over decades, according to the University and Environmental Decontamination (NZ) Limited (EDL).

Contaminant chemicals called PFAS (per- and polyfluoroalkyl substances) don't break down naturally and, at certain levels, have been linked to cancers, reduced fertility, liver damage and other adverse health effects.

"Cleaning up PFAS from the environment is a massive task that will require our continuous and dedicated investment in the coming years," US President Joe Biden's White House said in March. Individual sites can have thousands of tons of contaminated soil, with the US Department of Defense estimating in 2021 that its clean-up could cost $31 billion.

Ball milling in a University of Auckland chemistry laboratory destroyed 99.88 percent to 100 percent of PFAS in soil from a decommissioned New Zealand Defence Force firefighting training site and in firefighting foam.

Intense grinding at an extremely high speed by metal balls left a safe by-product, according to Dr. Kapish Gobindlal, an honorary academic at the University and the chief scientist for the company EDL.

Published in the journal Environmental Science: Advances, the research was carried out by Gobindlal and his Ph.D. supervisors, Professor Jon Sperry and Dr. Cameron Weber, of the University's Centre for Green Chemical Science. Collaborating were scientists Erin Shields and Andrew Whitehill of the US EPA.

"We've established proof-of-concept and believe this method can be scaled up faster and cheaper than alternatives," says Gobindlal. "There is a massive need—the US alone has thousands of contaminated sites and regulation is shifting toward mandating remediation of these sites."

It's exactly what Sperry hoped to achieve when the University set up the Centre for Green Chemical Science, which puts the environment at the forefront.

"Work in the lab is flowing quickly toward real-world benefits," Sperry said. "This is an example of green chemistry that can help communities, the environment and, in fact, the world."

Numbering in their thousands, forever chemicals resist water, oil and heat, famously featuring in Teflon non-stick pans but also in everything from burger wrappers and pizza boxes to waterproof clothing.

Firefighters used "aqueous film-forming foam" containing PFAS to blanket and smother flammable liquid fires.

PFAS are in animals—even plankton—and in humans' blood because of carbon-fluorine bonds which prevent the chemicals from breaking down. Like microplastics, they are ubiquitous, turning up in drinking water and even rain.

While contaminated soil is only part of the problem, it's a big part.

In some ways, "ball milling" is not all that different from the grinding of a mortar and pestle, but at an extremely high intensity, with the balls moving at incredible speeds to degrade the PFAS, says Gobindlal.

Crucial to ramping up is cost, including whether the grinding process requires expensive additives. Affordable and easy-to-source quartz sand was used as part of the treatment for firefighting foam, says Gobindlal, while no additive was needed for soil.

Laboratory benchtop experiments at the University from 2018 to 2023 typically involved 10 to 30 small metal balls colliding to destroy PFAS in soil, in firefighting foam, and in media such as activated carbon, which is used to remove PFAS from water. The process left an inert powder suitable for being a grinding additive or non-hazardous fill.

Heavily modified shipping containers at EDL's research and development facility in Henderson, Auckland, house the company's patented Mechanochemical Destruction (MCD) reactors, intended to treat contaminated soil at speed and scale—potentially dealing with several tons per hour.

In New Zealand, PFAS soil contamination occurred at locations such as Royal New Zealand Air Force bases Woodbourne (west of Blenheim) and Ohakea (near Palmerston North). Banned in New Zealand in 2011, the firefighting foams were still found at sites including airports years later, according to New Zealand's Environmental Protection Agency.

Last year, Channel Infrastructure NZ, an operator of the Marsden Point Oil Refinery in Northland, was fined for using firefighting foam containing PFAS.

"In addition to the known PFAS-contaminated locations, there are likely many more unknown sites yet to be identified through active investigation from both governmental and private entities," according to Gobindlal. "We're likely just at the tip of the iceberg."

In the US, the chemical and manufacturing company 3M negotiated a $10 billion settlement with cities and towns over PFAS pollution in water. In Europe, a group of news organisations including Le Monde say at least 17,000 sites across Europe and the UK are contaminated with PFAS.

New Zealand's EPA has proposed a ban on PFAS in cosmetics. The chemicals are in our drinking water, but at lower concentrations than in other countries.

Levels in water "are still concerning because PFAS bioaccumulate and biomagnify; they build up in our bodies, environment, and food web," wrote Dr. Lokesh Padhye, Dr. Erin Leitao, and Dr. Melanie Kah, of the University's faculties of science and engineering, in Newsroom in March.

More information: Kapish Gobindlal et al, Mechanochemical destruction of per- and polyfluoroalkyl substances in aqueous film-forming foams and contaminated soil, Environmental Science: Advances (2023). DOI: 10.1039/D3VA00099K


Provided by University of Auckland 



 

Natural or not? Scientists aid in quest to identify genetically engineered organisms

gmo plants
Credit: Pixabay/CC0 Public Domain

Ever since gene editing became feasible, researchers and health officials have sought tools that can quickly and reliably distinguish genetically modified organisms from those that are naturally occurring. Though scientists can make these determinations after careful genetic analysis, the research and national security communities have shared a longstanding unmet need for a streamlined screening tool. Following the emergence of SARS-CoV-2, the world at large became aware of this need.

Now, such tools are being built.

A suite of techniques—one lab-based platform and four computational DNA sequence analysis models—was developed and refined over the course of a six-year program funded by the United States Intelligence Advanced Research Projects Activity (IARPA). These approaches have the potential to dramatically shift current screening capabilities for detecting engineered organisms.

Susan Celniker's team at Lawrence Berkeley National Laboratory (Berkeley Lab) was chosen to lead the testing and evaluation phase of the program, called Finding Engineering-Linked Indicators, or FELIX. She and her colleagues designed and produced increasingly challenging samples and assessed how well the tools made by participating academic and industry groups performed.

"What the FELIX program revealed in its initial months was that the capability to efficiently identify modified organisms in the environment does not exist. And so, the program really started at the foundations to developing first-in-class capabilities to identify modified organisms," said Ben Brown, a staff scientist and computational biologist in Berkeley Lab's Biosciences Area, who co-led the project design with Celniker. "It's a very important program in that it created the tools to fill an important segment of our national security space."

Testing the testers

To evaluate the work accomplished by its research teams, IARPA leveraged national laboratories to perform Test and Evaluation. This process ensures that capabilities and tools developed under programs like FELIX achieve the same results as reported by the researchers and meet program metrics, enabling evaluation of progress within the program. To ensure the tests would be as useful as possible for national security applications, the teams evaluated their performance with samples based on current and potential real-world scenarios.

"We got a list of every virus and microbe that people are worried about, and they went into the samples. The idea is that these testing systems will be prepared for a situation where it becomes necessary to confidently evaluate if an organism, be it mammalian, plant, microbe, or virus, has been engineered and is now circulating in the environment uncontained," says Celniker.

In total, the scientists at Berkeley Lab, Pacific Northwest National Laboratory, and the United States Department of Agriculture produced nearly 200 unique sample organisms with innocuous modifications, ranging from large DNA sequence deletions or insertions all the way down to very subtle single nucleotide alterations made using CRISPR. Each testing group was given samples containing altered organisms as well as control samples containing unmodified organisms—known as "wild type"—that had never been fully sequenced before, so the genomes were not available in any database for comparison. The samples included virus particles and cells from bacteria, mammals, and fungi. These blinded samples represented potential human pathogens, such as HIV and E. coli, plant-infecting pathogens, and engineered complex species. To ensure health and security for participants, all of the microbial or viral samples created for testing were noninfectious and all were controlled under strict biosafety procedures.

The Testing and Evaluation portion of FELIX was divided into four phases, where each subsequent phase had more difficult samples. Groups with candidate tests were eliminated along the way if their technique did not perform well enough.

In the beginning, testing groups received purified samples with only one organism each, and they got multiples of every sample to determine whether the testing technique generated reproducible results. At the end, the testers received mixed samples designed to approximate real-world testing conditions. "For the final round, we gave them mixtures of up to 10 wild type and engineered organisms with different mutations in them to mimic what a soil sample might look like. And we actually did give them two soil samples as well as actual microbiome samples from a cow digestive tract and a mouse digestive tract," said Celniker. "So they got very complex samples that were really challenging."

Celniker and Brown further challenged the testing groups by designing samples that incorporated naturally occurring genetic oddities. For example, they presented samples containing bacteria that had acquired new genes by swapping plasmids—circular pieces of DNA that are separate from the cell's main genome—with other species of microbes. Gene acquisition from plasmids is very common in single-celled organisms, and it is through this mechanism that strains of bacteria can very quickly gain new traits such as antibiotic resistance.

They also threw in some hybridized influenza samples that could not have formed naturally (despite the virus's penchant for genetic cross-over) because the strains never circulated at the same time or on the same continents. Real-world gene scrambling events like these make it difficult to differentiate between natural and synthetic gene additions, but being able to do so is an essential capability of a modified organism detection tool.

To that end, the IARPA program leaders set an ambitious goal for the testing technologies of 99% specificity (no more than 1% of wild types misidentified as modified) and 90% sensitivity (no more than 10% of tests could misidentify a modified organism as wild type). The four techniques that passed through to the end of phase four testing and will be useful for identifying biological threats were a lab-based test from the company Draper and computational models from Raytheon, Ginkgo Bioworks, and Noblis. These techniques were shown to be excellent at identifying wild type organisms, and a Berkeley Lab-developed ensemble of the computational models achieved 99% specificity.

The sensitivity of individual models in identifying engineered organisms was between 55% and 70%. But the ensemble was able to achieve approximately 72% sensitivity under cross-validation, when it was tested on new sequence datasets. Overall performance of individual models and the ensemble demonstrated considerable improvement over existing state-of-the-art capabilities.
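The two program metrics can be stated concretely as functions of a confusion matrix. A minimal sketch (the confusion counts below are hypothetical; only the 99 percent specificity and roughly 72 percent sensitivity figures echo the results reported above):

```python
def specificity(true_negatives: int, false_positives: int) -> float:
    """Fraction of wild-type samples correctly called wild type."""
    return true_negatives / (true_negatives + false_positives)

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of engineered samples correctly called engineered."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts for 100 wild-type and 100 engineered samples:
spec = specificity(true_negatives=99, false_positives=1)
sens = sensitivity(true_positives=72, false_negatives=28)

print(f"specificity={spec:.2f} (target 0.99), "
      f"sensitivity={sens:.2f} (target 0.90)")
```

Framed this way, the program goals translate to at most 1 wild-type miscall per 100 wild-type samples and at most 10 missed detections per 100 engineered samples.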

A new resource

One reason why it's so hard to tell natural and engineered organisms apart is that scientists around the world use many different databases and programs to review and store genome sequence data. And on top of that, people use different names and terms to describe genes and predict their functions based on the sequences—a process called annotation. So, despite the fact that more and more species have had their genomes sequenced, the data isn't necessarily easy to use.

To remedy this issue, Celniker recruited her Biosciences Area colleague Chris Mungall, a staff computer scientist, to lead the development of an open-access software program and database. The result was Synbio Schema, which catalogs the annotated genomes of national security-relevant engineered and wild type organisms using standardized language. Each sample that Celniker's team created for the testers was also added to the new database and annotated with the standardized language, providing an easy-to-use resource for future researchers.

"This is the first curated database and common language for engineered vs. non-engineered organisms, and they really had to build the airplane in flight because nothing like it existed previously, and the program would have been crippled without it," Brown said.

"The real problem arises when multiple research groups are trying to share and compare results," explained Mark Miller, a software developer in Mungall's group. "If there are any internal inconsistencies or other issues within a team's database, or if there are structural or nomenclature differences between the teams' databases, then nobody can tell whether one team's data agrees with the other teams." This forces scientists to tediously review annotations manually for accurate comparisons.

Growing the biodefense industry

Building on the success of the FELIX program, the Berkeley Lab scientists plan to expand the database by adding new organisms that could be exploited as bioweapons, and call on other groups to add new sequences as well. Meanwhile, Brown is looking forward to using the neatly organized database to train machine learning models, which will lead to even better modified organism detection tools in the future.

Looking to next steps, the team hopes to use the knowledge and techniques gained from the FELIX program to develop better detection tools capable of ecosystem-scale monitoring to detect threats in the environment in real time—a capability that Brown describes as "NORAD for biology."


 

Atlatl weapon use by prehistoric females equalized the division of labor while hunting, experimental study shows

Atlatl experiment on the Kent Campus with Bob Berg of Thunderbird Atlatl. Michelle Bebber is holding the radar gun. Credit: Metin I. Eren

A new study led by archaeologist Michelle Bebber, Ph.D., an assistant professor in Kent State University's Department of Anthropology, has demonstrated that the atlatl (i.e., spear thrower) functions as an "equalizer," a finding which supports women's potential active role as prehistoric hunters.

Bebber co-authored the article "Atlatl use equalizes female and male projectile weapon velocity," which was published in the journal Scientific Reports. Her co-authors include Metin I. Eren and Dexter Zirkle (a recent Ph.D. graduate), also of the Department of Anthropology at Kent State, Briggs Buchanan of the University of Tulsa, and Robert Walker of the University of Missouri.

The atlatl is a handheld, rod-shaped device that employs leverage to launch a dart, and represents a major human technological innovation used in hunting and warfare since the Stone Age. The first javelins are at least hundreds of thousands of years old; the first atlatls are likely at least tens of thousands of years old.

"One hypothesis for forager atlatl adoption over its presumed predecessor, the thrown javelin, is that a diverse array of people could achieve equal performance results, thereby facilitating inclusive participation of more people in hunting activities," Bebber said.

Bebber's study tested this hypothesis via a systematic assessment of 2,160 weapon launch events by 108 people, all novices (many of whom were Kent State students), who used both javelins and atlatls. The results are consistent with the "atlatl equalizer" hypothesis, showing that the atlatl not only increases the velocity of projectile weapons relative to thrown javelins, but also equalizes the velocity of female- and male-launched projectiles.

"This result indicates that a javelin to atlatl transition would have promoted a unification, rather than division, of labor," Bebber said. "Our results suggest that female and male interments with atlatl weaponry should be interpreted similarly, and in some archaeological contexts females could have been the atlatl's inventor."

"Many people tend to view women in the past as passive and that only males were hunters, but increasingly that does not seem to be the case," Bebber said. "Indeed, and perhaps most importantly, there seems to be a growing consilience among different fields—archaeology, ethnography, and now modern experiments—that women were likely active and successful hunters of game, big and small."

Since 2019, every semester Bebber takes her class outside to use the atlatl. She noticed that females picked it up very easily and could launch darts as far as the males with little effort.

"Often males became frustrated because they were trying too hard and attempting to use their strength to launch the darts," Bebber said. "However, since the atlatl functions as a simple lever, it reduces the advantage of males' generally greater muscle strength."
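The lever effect Bebber describes can be illustrated with a toy calculation (all numbers below are hypothetical, not from the study): for the same rotation speed of the throwing arm, lengthening the effective lever multiplies the dart's launch speed, so technique matters more than raw arm strength.

```python
def tip_speed(angular_velocity_rad_s: float, effective_arm_m: float) -> float:
    """Launch speed of a dart at the end of a rotating lever: v = omega * r."""
    return angular_velocity_rad_s * effective_arm_m

omega = 20.0           # rad/s, hypothetical arm rotation speed
arm = 0.7              # m, arm alone (thrown javelin)
arm_plus_atlatl = 1.3  # m, arm plus a hypothetical 0.6 m atlatl

v_javelin = tip_speed(omega, arm)
v_atlatl = tip_speed(omega, arm_plus_atlatl)
print(f"javelin: {v_javelin:.0f} m/s, atlatl: {v_atlatl:.0f} m/s")
```

With the same arm motion, the atlatl here nearly doubles launch speed, which is why a thrower's extra muscle contributes proportionally less to the result.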

"Given that females appear to benefit the most from atlatl use, it is certainly within the realm of possibility that in some contexts females invented the atlatl," Bebber said. "Likewise, among some primates, females invent tool technologies for hunting, as documented among the Fongoli chimpanzees."

More information: Michelle R. Bebber et al, Atlatl use equalizes female and male projectile weapon velocity, Scientific Reports (2023). DOI: 10.1038/s41598-023-40451-8

Cylindrical autonomous drilling bot could reach buried Martian water

Artist’s depiction of a Borebot (the red & blue cylinder) being deployed from a rover. Credit: Morley & Bowen / James Vaughn

The south pole of Mars is a likely candidate for future exploration efforts. It is also an area of interest for astrobiologists, as there is a decent chance of finding signs of ancient water there and, therefore, signs of ancient life—if there ever was any on the red planet.

But to access that water, explorers would have to get to it, which means digging much deeper than has ever been dug on Mars before. Typical deep-bore drilling equipment is bulky, heavy, and difficult to set up on remote terrain like the Martian south pole. So a group of engineers from Planet Enterprises, a space technology incubator based in Washington, developed a new deep-bore drilling concept they call Borebots.

NASA's Innovative Advanced Concepts (NIAC) program supported the Borebots concept back in 2021, and the engineers, led by Quinn Morley and Tom Bowen, produced a mammoth 96-page report of their efforts. That report details how Borebots are unique in the world of extraterrestrial drilling and how widely applicable the concept is in numerous other exploration contexts.

But the context it was designed for was to look for water at the Martian south pole. The engineers estimated they could collect interesting scientific data from a borehole reaching about 50 meters down.

Typically, a borehole that deep, even on Earth, would require some sort of tether back to the surface. Usually, that would include a cable or a rigid piping system to provide power and control to the drill. That means a lot of material, most of it heavy, making it costly in space exploration.

So the team at Planet Enterprises came up with a solution—make autonomous bots that could do the drilling without being tethered to the surface. The bots themselves look like pieces of drilling tubing, but they are autonomous robots with a self-contained battery, drill bit, motor, and electronic system, all contained in a cylindrical housing 64 mm in diameter by 1.1 meters long.

They could be deployed by a rover similar to the Perseverance rover already trundling around Mars. The rover would extend a deployment tube, down which the bot would descend and start busily drilling away at the surface. Since it carries its own power, its primary constraint would be its battery life, as using a drill bit to dig through regolith is power intensive. However, once it begins to run low on battery, it can simply engage a series of traction spikes on its side and climb back up the hole it has just dug.

Once the Borebot makes it back into the deployment tube and safely into the rover, it can be shunted aside to a cleaning and recharging station while another one takes its place. In this way, the Borebot system could dig down almost continuously without the need for heavy support equipment—just a set of Borebots to keep hacking away at the rock.

The engineers thought of plenty of potential problems, including how to power a dead Borebot in the hole—they could be developed to power each other. And how could one make a branching bore if something exciting is found in a particular area? By employing an articulated joint that would allow the next Borebot to proceed at a slight angle—and hopefully not disrupt the climbing of any other Borebots that continue down the central hole.

Plenty of interesting CAD designs and even some 3D-printed devices are described in the final report. It's not short on math either, describing calculations such as the torque necessary for the drill head. They also mention some expressed interest from The Mars Society in fleshing out the concept for resource extraction, as well as an idea to potentially utilize the concept on ocean worlds.

But for now, it's unclear if the project will take the next step in development. While the paper details a clear plan to increase the Technology Readiness Level, it does not appear to have received further funding from NIAC or any other source. However, the Planet Enterprises engineers haven't let that get them down—their TitanAir concept received a NIAC Phase I award in 2023. So they'll have plenty of time to keep working on their wildly innovative ideas.

More information: Borebots: Tetherless Deep Drilling into the Mars South Polar Layered Deposits: www.nasa.gov/sites/default/fil … _borebots_tagged.pdf

 

Study demonstrates the value of citizen science to monitor natural enemy in fight against invasive Siam weed

Chromolaena odorata (Syn. Eupatorium odoratum)—Siam Weed, Bitter bush, Devilweed, Hagonoy, Jack in the bush, Triffid weed. Credit: J.M.Garg/Wikimedia Commons, CC BY

CABI has led new research which demonstrates the value of using citizen science to monitor the establishment and spread of a natural enemy to fight the invasive shrub Chromolaena odorata—also known as Siam weed—in South and South-East Asia.

Dr. Matthew Cock, a CABI Emeritus Fellow, used the iNaturalist.org platform to assess the establishment and spread of the moth Pareuchaetes pseudoinsulata which was released in six countries in South and South-East Asia to control C. odorata.

Dr. Cock, together with colleagues from Australia's Department of Agriculture and Fisheries and MIA Consulting, Utah, U.S., found that—"adding to existing knowledge"—P. pseudoinsulata is established in Thailand and Vietnam and has spread to China, Cambodia and West Malaysia.

The researchers, whose findings are published as a short communication in the CABI Agriculture and Bioscience journal, say the results extracted from observations shared by citizen scientists on iNaturalist also confirm widespread establishment of P. pseudoinsulata in southern India and Sri Lanka.

They argue that iNaturalist can provide an additional source of information regarding the incidence and spread of introduced species, including biological control agents, but will be most effective where the subjects are readily identifiable from photographs.

Citizen science is described as scientific research conducted with participation from the general public, who are sometimes referred to as amateur/non-professional scientists.

According to a "Green Paper on Citizen Science," which was published in 2013 by the European Commission's Digital Science Unit and Socientize.eu, participants "provide experimental data and facilities for researchers, raise new questions and co-create a new scientific culture."

C. odorata is a weedy pioneering shrub native to the Americas, from southern U.S. to Argentina and has become one of the worst invasive plants in the Old-World humid tropics and subtropics.

P. pseudoinsulata was released in selected countries in Africa, South and South-East Asia and parts of the Pacific, and became established in parts of these areas.

For several releases of P. pseudoinsulata, the agent was reported not to have established (e.g., Thailand and Vietnam), or there have been no published follow-up studies to assess whether or not introductions were successful.

Dr. Cock said, "This paper investigates the validity of some of these reports and also discusses the value of using citizen science to monitor the establishment and spread of weed biological control agents.

"The images shared by citizen scientists on iNaturalist confirm the presence of P. pseudoinsulata in several areas, including some where it had not been previously reported.

"Areas where it has been reported as established but there were no images to confirm this indicate opportunities for a more targeted citizen science project or some other form of on-the-ground truthing involving biological control researchers."

More information: Matthew J. W. Cock et al, Citizen science to monitor the establishment and spread of a biological control agent: the case of Pareuchaetes pseudoinsulata (Lepidoptera, Erebidae) for the control of Chromolaena odorata (Asteraceae) in South and South-East Asia, CABI Agriculture and Bioscience (2023). DOI: 10.1186/s43170-023-00171-5

Object recognition through vision, hearing and touch—it's time to let go of the learning styles myth

Examples of tasks that tap into object recognition ability, from top left: (1) Are these two objects identical despite the change in viewpoint? (2) Which lung has a tumor? (3) Which of these dishes is the oddball? (4) Which option is the average of the four robots on the right? Answers: (1) no (2) left (3) third (4) fourth. Credit: Isabel Gauthier, CC BY-ND

The idea that individual people are visual, auditory or kinesthetic learners and learn better if instructed according to these learning styles is one of the most enduring neuroscience myths in education.

There is no proof of the value of learning styles as educational tools. According to experts, believing in learning styles amounts to believing in astrology. But this "neuromyth" keeps going strong.

A 2020 review of teacher surveys revealed that 9 out of 10 educators believe students learn better in their preferred learning style. There has been no decrease in this belief since the approach was debunked as early as 2004, despite efforts by scientists, journalists, popular science magazines, centers for teaching and YouTubers over that period. A cash prize offered since 2004 to whoever can prove the benefits of accounting for learning styles remains unclaimed.

Meanwhile, licensing exam materials for teachers in 29 states and the District of Columbia include information on learning styles. Eighty percent of popular textbooks used in pedagogy courses mention learning styles. What teachers believe can also trickle down to learners, who may falsely attribute any learning challenges to a mismatch between their instructor's teaching style and their own learning style.

Myth of learning styles is resilient

Without any evidence to support the idea, why do people keep believing in learning styles?

One possibility is that people who have incomplete knowledge about the brain might be more susceptible to these ideas. For instance, someone might learn about distinct brain areas that process visual and auditory information. This knowledge may increase the appeal of models that include distinct visual and aural learning styles. But this limited understanding of how the brain works misses the importance of multisensory brain areas that integrate information across senses.

Another reason that people may stick with the belief about learning styles is that the evidence against the model mostly consists of studies that have failed to find support for it. To some people, this could suggest that enough good studies just haven't been done. Perhaps they imagine that finding support for the intuitive—but wrong—notion of learning styles simply awaits more sensitive experiments, done in the right context, using the latest flavor of learning styles. Despite scientists' efforts to improve the reputation of null results and encourage their publication, finding "no effect" may simply not capture attention.

But our recent research results do in fact contradict predictions from learning styles models.

We are psychologists who study individual differences in perception. We do not directly study learning styles, but our work provides evidence against models that split "visual" and "auditory" learners.

Object recognition skills related across senses

A few years ago, we became interested in why some people become visual experts more easily than others. We began measuring individual differences in visual object recognition. We tested people's abilities in performing a variety of tasks like matching or memorizing objects from several categories such as birds, planes and computer-generated artificial objects.

In a task measuring haptic object recognition ability, participants touch pairs of 3D-printed objects without looking at them and decide if they are exactly the same. Credit: Isabel Gauthier

Using statistical methods historically applied to the study of intelligence, we found that almost 90% of the differences between people in these tasks were explained by a general ability we called "o" for object recognition. We found that "o" was distinct from general intelligence, concluding that book smarts may not be enough to excel in domains that rely heavily on visual abilities.
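As a rough sketch of what "almost 90% of the differences explained by a general ability" means, the toy simulation below (synthetic data, not the authors' actual analysis) generates scores on six recognition tasks that all draw on one latent ability, then checks how much of the between-person variance the first principal component captures:

```python
import numpy as np

# Hypothetical illustration: six task scores driven by one latent ability "o"
# plus task-specific noise. All numbers here are made up for the sketch.
rng = np.random.default_rng(1)
n_people, n_tasks = 500, 6
o = rng.standard_normal((n_people, 1))                  # latent ability "o"
scores = 0.9 * o + 0.3 * rng.standard_normal((n_people, n_tasks))

# Share of variance along the first principal component of the task
# correlation matrix -- a simple stand-in for a general-factor analysis.
eigvals = np.linalg.eigvalsh(np.corrcoef(scores.T))
share = eigvals[-1] / eigvals.sum()
print(round(share, 2))
```

With these made-up loadings the first component accounts for roughly ninety percent of the variance, mirroring the pattern the researchers describe for "o".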

Discussing this work with colleagues, they often asked whether this recognition ability was only visual. Unfortunately we just didn't know, because the kinds of tests required to measure individual differences in object perception in nonvisual modalities did not exist.

To address the challenge, we chose to start with touch, because vision and touch share their ability to provide information about the shape of objects. We tested participants with a variety of new touch tasks, varying the format of the tests and the kinds of objects participants touched. We found that people who excelled at recognizing new objects visually also excelled at recognizing them by touch.

Moving from touch to listening, we were more skeptical. Sound is different from touch and vision and unfolds in time rather than space.

In our latest studies, we created a battery of auditory object recognition tests that you can try for yourself. We measured how well people could learn to recognize different bird songs, different people's laughs and different keyboard sounds.

Quite surprisingly, the ability to recognize by listening was positively correlated with the ability to recognize objects by sight—we measured the correlation at about 0.5. A correlation of 0.5 is not perfect, but it signifies quite a strong effect in psychology. As a comparison, the mean correlation of IQ scores between identical twins is around 0.86, between siblings around 0.47, and between cousins 0.15.
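To give a feel for what a 0.5 correlation looks like, the brief simulation below (entirely synthetic; the factor structure is an assumption for illustration) builds two ability scores that share half their variance through a common factor and measures their Pearson correlation:

```python
import numpy as np

# Hypothetical illustration: visual and auditory scores sharing half their
# variance through one common factor, giving an expected correlation of 0.5.
rng = np.random.default_rng(0)
n = 10_000
shared = rng.standard_normal(n)          # common recognition ability
visual = np.sqrt(0.5) * shared + np.sqrt(0.5) * rng.standard_normal(n)
auditory = np.sqrt(0.5) * shared + np.sqrt(0.5) * rng.standard_normal(n)

r = np.corrcoef(visual, auditory)[0, 1]
print(round(r, 2))                       # close to 0.5
```

An r of 0.5 means the shared factor accounts for about a quarter of the variance in either score alone, which is why the authors call it strong by psychology's standards rather than anything like a perfect overlap.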

This relationship between recognition abilities in different senses stands in contrast to learning styles studies' failure to find expected correlations among variables. For instance, people's preferred learning styles do not predict performance on measures of pictorial, auditory or tactile learning.

Better to measure abilities than preferences?

The myth of learning styles is resilient. Fans stick with the idea and the perceived possible benefits of asking students how they prefer to learn.

Our results add something new to the mix, beyond evidence that accounting for learning preferences does not help, and beyond evidence supporting better teaching methods—like active learning and multimodal instruction—that actually do foster learning.

Our work reveals that people vary much more than typically expected in perceptual abilities, and that these abilities are correlated across touch, vision and hearing. Just as we can expect that a student excelling in English is likely also to excel in math, we should expect that the student who learns best from visual instruction may also learn just as well when manipulating objects. And because general intelligence and perceptual skills are not strongly related, measuring them both can provide a more complete picture of a person's abilities.

In sum, measuring perceptual abilities should be more useful than measuring perceptual preferences, because perceptual preferences consistently fail to predict student learning. It's possible that learners may benefit from knowing they have weak or strong general perceptual skills, but critically, this has yet to be tested. Nevertheless, there remains no support for the "neuromyth" that teaching to specific learning styles facilitates learning.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Belief in learning styles myth may be detrimental