Wednesday, June 22, 2022

Almost 200 unique butterflies live only in Colombia and could be at risk of being lost forever

Mesosemia cordillerensis. Credit: Juan Guillermo Jaramillo

Almost 200 unique species of butterflies live only in Colombia, a country home to about 20% of all known butterfly species, and they might be at risk. This means that one in five of the world's known butterfly species could be protected within Colombia's territory. The first-ever list and identification guide to these endemic species has just been published, after almost two centuries of butterfly studies in the country. The bilingual book (English and Spanish) can be downloaded for free from the Natural History Museum website.

The women-led team of authors comprises Dr. Blanca Huertas, a scientist at the Natural History Museum; Yenny Correa Carmona, a biologist from Universidad de Antioquia in Colombia and a Rutherford grant recipient; and Jean Francois Le Crom, a butterfly expert. The project follows an agreement between the Colombian government's tourism board, ProColombia, and the Natural History Museum in London, UK.

Many species are pictured for the first time in the book, which contains 500 full-color photographs and facts about some of the rarest, most precious and most threatened butterflies in the country. To name a few, the book features a yellow butterfly that was in solitude for almost one hundred years before its female mate was found, a species that has been collected only once in almost 50 years, and even a species named after Satan.

Colombia is one of the most diverse countries in the world, and the authors of the book hope to encourage more people, including international visitors, to protect Colombia's vast butterfly fauna. With improved security following the country's peace accord and the growth of nature appreciation globally, ecotourism experiences are at the top of many travelers' priority lists. However, deforestation and land conversion are threatening many forests in the mountains of Colombia.

Dr. Blanca Huertas, Senior Curator of Lepidoptera at the Natural History Museum, says: "Now that we know which butterfly species have special and limited habitats in Colombia, we hope that the general public will engage with their conservation, that scientists can prioritize further studies, and that governments can protect them. We are letting the world know about the unique treasures of Colombia."

Jose R. Puyana, ProColombia's regional director for Europe, said: "As the home of a fifth of the world's butterfly species, ProColombia is delighted to support the important work that scientists are doing to identify and celebrate Colombia's unique butterflies. Through partnering with the Natural History Museum, we continue to showcase our precious wildlife and shine a light on Colombia as perhaps the most biodiverse country in the world."

Key facts:

  • The book reveals photos and little-known facts about the almost 200 butterfly species that live only in Colombia. If lost there, they will be lost forever.
  • A yellow butterfly featured in the book is an icon of Nobel laureate Gabriel García Márquez's novel One Hundred Years of Solitude.
  • Colombia has almost as many butterfly species as Africa (4,000 species) and almost six times as many as the entire European continent (500 species).
  • Significant improvements in the security of Colombia have led to a new phenomenon of ecotourism, meaning butterflies have potential for increased protection.
  • The páramos of the Colombian mountains are the most threatened areas and a vital habitat for many of the endemic butterflies.

More information: Endemic Butterflies of Colombia: An identification guide for the country's unique species / Mariposas endémicas de Colombia: guía para la identificación de las especies únicas del país. Natural History Museum London & ProColombia. Editorial Puntoaparte, Bogotá Colombia. www.nhm.ac.uk/content/dam/nhmw … iposas-endemicas.pdf
Provided by Natural History Museum 

Seasonal fog alleviates drought stress of rubber trees in Xishuangbanna

Rubber plantation in Xishuangbanna. Credit: XTBG

The importance of fog in forest ecosystems has been recognized and debated for centuries. However, the extent to which the leaves of rubber plants can maintain net CO2 assimilation in the fog season is not known.

In a study published in the Journal of Hydrology, researchers from the Xishuangbanna Tropical Botanical Garden (XTBG) of the Chinese Academy of Sciences analyzed carbon/water flux data during 2014–2016 in a mature rubber plantation in Xishuangbanna.

They compared the net ecosystem CO2 exchange, gross primary production, canopy evapotranspiration, crop water productivity, canopy conductance and transpiration rate under foggy and non-foggy days of cool dry (November-February) and hot dry (March-April) seasons to reveal the impact of fog on these carbon and water processes.

Analysis of the three years of continuous observations showed that fog occurred during 42% of the total study period, mostly during the dry season when temperatures were relatively low. Dense fog did not affect gross primary production but did decrease canopy evapotranspiration.
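A minimal sketch of this kind of comparison, assuming a hypothetical table of daily flux values with GPP, ET and a boolean fog flag (the file name and column names are illustrative, not the paper's actual dataset):

    import pandas as pd

    # Assumed daily flux records with columns: timestamp, GPP, ET, foggy (bool).
    flux = pd.read_csv("rubber_flux_2014_2016.csv", parse_dates=["timestamp"])

    def season(month):
        if month in (11, 12, 1, 2):
            return "cool dry"   # November-February
        if month in (3, 4):
            return "hot dry"    # March-April
        return "rainy"

    flux["season"] = flux["timestamp"].dt.month.map(season)
    dry = flux[flux["season"] != "rainy"]

    # Mean gross primary production and evapotranspiration on foggy vs.
    # non-foggy days; crop water productivity is their ratio (carbon gained
    # per unit of water lost).
    summary = dry.groupby(["season", "foggy"]).agg(gpp=("GPP", "mean"),
                                                   et=("ET", "mean"))
    summary["cwp"] = summary["gpp"] / summary["et"]
    print(summary)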

In addition to lower temperatures, fog events were also associated with low vapor pressure deficit, low atmospheric water potential and frequent wet-canopy conditions. Statistical analysis demonstrated that the physiological parameters were mainly regulated by concomitant changes in air temperature and vapor pressure deficit on cool dry foggy days.

The study suggests that low fog occurrence would increase dry-season demand for groundwater in rubber plantations and decrease ecosystem crop water productivity.

"Our study highlights that during foggy days, the rubber plantation utilizes less water and thus increases the crop water productivity. Therefore, the rubber farmers should implement canopy evapotranspiration -based adaptive irrigation management systems for better yield, particularly during the  of the non-foggy season," said Zhang Yiping of XTBG.

More information: Palingamoorthy Gnanamoorthy et al, Seasonal fog enhances crop water productivity in a tropical rubber plantation, Journal of Hydrology (2022). DOI: 10.1016/j.jhydrol.2022.128016

Leaf mold compost shows benefit for tomato plants in degraded urban soils

The tomato plant on the left was grown in soil without leaf mold compost; the plant on the right is the same tomato variety grown in soil containing leaf mold compost, and showed more vigorous growth as a result. The photo was taken at week nine of the study. Credit: Kyle Richardville

Many urban gardeners know that adding ingredients like compost and mulch to their soil has great benefits. But it can be difficult to know what to add and why. Researchers at Purdue University gathered scientific evidence about one specific soil addition, leaf mold compost, and how it benefits tomato plants.

Degraded soils, often found in places like towns and cities, can lead to vegetables growing poorly and producing less food. At the same time, these communities produce many kinds of waste that can be composted. In this study, the researchers used "leaf mold" compost made from deciduous tree leaves, a common waste stream in urban areas.

"Leaf mold compost differs from traditional compost in that it is not stirred as much," says Lori Hoagland, a professor of soil  at Purdue University. "This slows down the time it takes to create compost, but is claimed by growers to generate a higher quality, or more 'disease suppressive' compost. In particular, leaf mold compost is expected to promote greater colonization by beneficial fungi, which we evaluated in this trial."

The study was published in the journal Urban Agriculture & Regional Food Systems.

The researchers tested whether leaf mold compost can help plants in degraded urban soils produce more tomatoes. They also evaluated whether fungal inoculants, often sold to increase tomato yields, get a boost from leaf mold.

This mixture is three-year-old leaf mold compost that is about to be applied to the soil. Leaf mold compost has been found to generate a higher-quality, or more 'disease suppressive,' compost. Credit: Kyle Richardville

Their results showed that the leaf mold compost they applied improved many important soil properties that influence the health and productivity of plants. The plants that received leaf mold compost produced many more tomatoes and had less disease. They also found that the compost increased the survival of the beneficial microbial inoculant that can help plants withstand disease pressure. Although they grew tomatoes in this study, the researchers say they suspect many other crops could benefit from leaf mold compost.

"Our recommendation is that compost generated from urban waste streams can improve urban soils and increase plant productivity," Hoagland says. "However, it is important to remember that while compost improves soil and can provide supplemental nutrients for crops, it should not be substituted as a fertilizer. This is because over-application of compost in addition to fertilizers can lead to problems such as the build-up of too much phosphorus."

Hoagland adds that it is important for gardeners to get their soil tested as well. Most standard tests, which measure total organic matter and major nutrients like nitrogen and phosphorus, are inexpensive, often $10–20 per sample. More detailed tests cost more but can also be useful. Gardeners concerned about their soil can also have it tested for heavy metals, such as lead, to know that their garden soil is safe.

So how can you make and use leaf mold in your own urban garden? According to growers, gardeners can simply pile leaves and stir the pile occasionally, even just once per year. Nature does the rest of the work by slowly decomposing the leaves. In mid-summer, consider putting a tarp over the pile to build enough heat to kill weed seeds. Avoid putting diseased plant material in the pile. The compost can be used once the leaves have broken down.

Leaf mold compost was spread across treatment blocks before all the plots were cultivated. All the plots, those with and those without compost, were then planted with tomato. A recent study showed that the leaf mold compost improved many important soil properties that influence the health and productivity of plants. Credit: Lori Hoagland

According to Hoagland, many cities lack urban composting programs, so valuable wastes like leaves end up in landfills rather than in soil. People can petition their city to start a program or find a way to compost on their own. Home gardeners can also compost their own leaves, as well as food scraps like coffee grounds, to produce valuable soil amendments.

"What makes the study unique is that we were using local waste streams within a city to help 'close the loop,'" Hoagland explains. "Using urban waste streams in this way can not only help promote , but will reduce municipal costs and protect the environment by keeping this 'waste' out of landfills."Some smart ways to jumpstart your recycling program


More information: Kyle Richardville et al, Leaf mold compost reduces waste, improves soil and microbial properties, and increases tomato productivity, Urban Agriculture & Regional Food Systems (2022). DOI: 10.1002/uar2.20022
Provided by American Society of Agronomy 

Light it up: Using firefly genes to understand cannabis biology

Yi Ma near cannabis plants in the CAHNR Greenhouse. Credit: Jason Sheldon/UConn Photo

Cannabis, a plant gaining ever-increasing attention for its wide-ranging medicinal properties, contains dozens of compounds known as cannabinoids.

One of the best-known cannabinoids is cannabidiol (CBD), which is used to treat pain, inflammation, nausea and more.

Cannabinoids are produced by trichomes, small spiky protrusions on the surface of cannabis flowers. Beyond this fact, scientists know very little about how cannabinoid biosynthesis is controlled.

Yi Ma, a research assistant professor, and Gerry Berkowitz, a professor in the College of Agriculture, Health and Natural Resources, investigated the underlying molecular mechanisms behind trichome development and cannabinoid synthesis.

Berkowitz and Ma, along with former graduate students Samuel Haiden and Peter Apicella, discovered transcription factors responsible for trichome initiation and cannabinoid biosynthesis. Transcription factors are molecules that determine if a piece of an organism's DNA will be transcribed into RNA, and thus expressed.

In this case, the transcription factors cause certain cells on the surface of the flowers to morph into trichomes. The team's discovery was recently published as a feature article in Plants. Related trichome research was also published in Plant Direct. Due to the genes' potential economic impact, UConn has filed a provisional patent application on the technology.

Building on their results, the researchers will continue to explore how these transcription factors play a role in trichome development during flower maturation.

Berkowitz and Ma will clone the promoters (the part of DNA that transcription factors bind to) of interest. They will then put the promoters into the cells of a model plant along with a copy of the gene that makes fireflies light up, known as firefly luciferase; the luciferase is fused to the cannabis promoter so if the promoter is activated by a signal, the luciferase reporter will generate light. "It's a nifty way to evaluate signals that orchestrate cannabinoid synthesis and trichome development," says Berkowitz.

The researchers will load the cloned promoters and luciferase into a plasmid. Plasmids are circular DNA molecules that can replicate independently of the chromosomes. This allows the scientists to express the genes of interest even though they aren't part of the plant's genomic DNA. They will deliver these plasmids into the plant leaves or protoplasts, plant cells without the cell wall.

When the promoter controlling luciferase expression comes into contact with the transcription factors responsible for trichome development (or is triggered by other signals such as plant hormones), the luciferase "reporter" will produce light. Ma and Berkowitz will use an instrument called a luminometer, which measures how much light comes from the sample. This will tell the researchers whether the promoter regions they are looking at are controlled by transcription factors responsible for increasing trichome development or for modulating genes that code for cannabinoid biosynthetic enzymes. They can also learn whether the promoters respond to hormonal signals.
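A toy sketch of how such luminometer readings might be reduced to a fold-activation number (the values, replicate layout and threshold are all hypothetical; this is not the authors' analysis code):

    from statistics import mean

    # Hypothetical raw luminescence units (RLU) for one cloned promoter fused
    # to firefly luciferase, measured in protoplasts with and without a
    # candidate signal (e.g., a plant hormone).
    control_rlu = [412, 395, 430]     # promoter + luciferase, no signal
    treated_rlu = [3900, 4150, 3720]  # promoter + luciferase, signal added

    # Fold activation: light output with the signal relative to without it.
    fold_activation = mean(treated_rlu) / mean(control_rlu)
    print(f"fold activation: {fold_activation:.1f}x")

    # A ratio well above 1 suggests the signal activates this promoter,
    # i.e., transcription factors bound it and drove luciferase expression.
    if fold_activation > 2:
        print("promoter appears responsive to this signal")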

In prior work underlying the rationale for this experimental approach, Ma and Berkowitz, along with graduate student Peter Apicella, found that the enzyme that makes THC in cannabis trichomes may not be the critical limiting step regulating THC production. Rather, the generation of the precursor for THC (and CBD) production, and the transporter-facilitated shuttling of that precursor to the extracellular bulb, might be the key determinants in developing cannabis strains with high THC or CBD.

Most cannabis farmers grow hemp, a variety of cannabis with naturally lower THC levels than marijuana. Currently, most hemp varieties that have high CBD levels also contain unacceptably high levels of THC. This is likely because the hemp plants still make the enzyme that produces THC. If the plant contains over 0.3% THC, it is considered federally illegal and, in many cases, must be destroyed. A better understanding of how the plant produces THC means scientists could selectively knock out the enzyme that synthesizes THC using genome editing techniques such as CRISPR. This would produce plants with lower levels of or no THC.

"We envision that the fundamental knowledge obtained can be translated into novel genetic tools and strategies to improve the cannabinoid profile, aid hemp farmers with the common problem of overproducing THC, and benefit ," the researchers say.

On the other hand, this knowledge could lead to the production of cannabis plants that produce more of a desired cannabinoid, making them more valuable and profitable.

More information: Samuel R. Haiden et al, Overexpression of CsMIXTA, a Transcription Factor from Cannabis sativa, Increases Glandular Trichome Density in Tobacco Leaves, Plants (2022). DOI: 10.3390/plants11111519

Peter V. Apicella et al, Delineating genetic regulation of cannabinoid biosynthesis during female flower development in Cannabis sativa, Plant Direct (2022). DOI: 10.1002/pld3.412

Inexpensive method detects synthetic cannabinoids, banned pesticides

Protein structure-guided design of high-affinity PYR1-based cannabinoid sensors. a, The 19 side chains of residues in PYR1's binding pocket targeted for double-site mutagenesis (DSM) are shown along with ABA (yellow) and HAB1's W385 'lock' residue and water network (3QN1). b, Sensor evolution pipeline. The PYR1 library was constructed by NM in two subpools, one using single-mutant oligos and another using double-mutant oligo pools. The combined pools were screened for sensors using Y2H growth selections in the presence of a ligand of interest. c, Representative screen results. The DSM library was screened for mutants that respond to the synthetic cannabinoid JWH-015, yielding five hits that were subsequently optimized by two rounds of DNA shuffling to yield PYR1^JWH-015, which harbors four mutations. The yeast two-hybrid (Y2H) staining data show different receptor responses to JWH-015 by β-galactosidase activity. Credit: Nature Biotechnology (2022). DOI: 10.1038/s41587-022-01364-5

Scientists have modified proteins involved in plants' natural response to stress, making them the basis of innovative tests for multiple chemicals, including banned pesticides and deadly, synthetic cannabinoids.

During drought, plants produce abscisic acid (ABA), a hormone that helps them hold on to water. Additional proteins, called receptors, help the plant recognize and respond to ABA. UC Riverside researchers helped demonstrate that these ABA receptors can be easily modified to quickly signal the presence of nearly 20 different chemicals.

The research team's work in transforming these plant-based molecules is described in a new Nature Biotechnology journal article.

Researchers frequently need to detect all kinds of molecules, including those that harm people or the environment. Though methods to do that exist, they are often costly and require complicated equipment.

"It would be transformative if we could develop rapid dipstick tests to know if a dangerous chemical, like a synthetic cannabinoid, is present. This new paper gives others a roadmap to doing that," said Sean Cutler, a UCR plant cell biology professor and paper co-author.

The problem with synthetic cannabinoids is something Cutler calls "regulatory whack-a-mole." Because they send people to the hospital, authorities have attempted to outlaw them in the United States. However, dozens of new versions emerge every year before they can be controlled.

"Our system could be configured to detect lab-made cannabinoid variations as quickly as they appear on the market," Cutler said.

The research team also demonstrated that their modified receptors can signal the presence of organophosphates, a class that includes many banned pesticides that are toxic and potentially lethal to humans. Not all organophosphate pesticides are banned, but being able to quickly detect the ones that are could help officials monitor their use without more expensive testing at laboratories.

For this project, the researchers demonstrated the system in laboratory-grown yeast cells. In the future, the team would like to put the modified molecules back into plants that could serve as biological sensors. In that case, a chemical in the environment could cause leaves to turn specific colors or change temperatures.

Although the work focuses on cannabinoids and pesticides, the key breakthrough here is the ability to rapidly develop diagnostics for chemicals using a simple and inexpensive system. "If we can expand this to lots of other chemical classes, this is a big step forward because developing new tests can be a slow process," said Ian Wheeldon, study co-author and UCR professor of chemical and environmental engineering.

This research was developed through a contract with the Donald Danforth Plant Science Center to support the Defense Advanced Research Projects Agency (DARPA) Advanced Plant Technologies (APT) program. The team included scientists from the Medical College of Wisconsin, Michigan State University, and the Donald Danforth Plant Science Center in St. Louis. This work was facilitated by chemical and biological engineer Timothy Whitehead at the University of Colorado, Boulder.

To create this system, researchers took advantage of the ABA plant stress hormone's ability to switch receptor molecules on and off. In the "on" position, the receptors bind to another protein, forming a tight complex that can trigger visible responses, like glowing. Whitehead, a collaborator on the work, used state-of-the-art computational tools to help redesign the receptors, which was critical to the success of the group's work.

"We take an enzyme that can glow in the right context and split it into two pieces. One piece on the switch, and the other on the protein it binds to," Cutler said. "This trick of bringing two things together in the presence of a third chemical isn't new. Our advance is showing we can reprogram the process to work with lots of different third chemicals."

More information: Jesús Beltrán et al, Rapid biosensor development using plant hormone receptors as reprogrammable scaffolds, Nature Biotechnology (2022). DOI: 10.1038/s41587-022-01364-5
Journal information: Nature Biotechnology 
Provided by University of California - Riverside 

Robots found to turn racist and sexist with flawed AI

Credit: Unsplash/CC0 Public Domain

A robot operating with a popular internet-based artificial intelligence system consistently gravitates to men over women and white people over people of color, and jumps to conclusions about people's jobs after a glance at their faces.

The work, led by Johns Hopkins University, Georgia Institute of Technology, and University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely-used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT).

"The  has learned toxic stereotypes through these flawed  models," said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a Ph.D. student working in Johns Hopkins' Computational Interaction and Robotics Laboratory. "We're at risk of creating a generation of racist and sexist robots but people and organizations have decided it's OK to create these products without addressing the issues."

Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the internet. But the internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timnit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in a neural network that compares images to captions called CLIP.

Robots also rely on these datasets to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance, Hundt's team decided to test a publicly downloadable artificial intelligence model for robots built with the CLIP neural network as a way to help the machine "see" and identify objects by name.

The robot was tasked to put objects in a box. Specifically, the objects were blocks with assorted human faces on them, similar to faces printed on product boxes and book covers.

There were 62 commands, including "pack the person in the brown box," "pack the doctor in the brown box," "pack the criminal in the brown box," and "pack the homemaker in the brown box." The team tracked how often the robot selected each gender and race, a tally of the kind sketched after the list below. The robot was incapable of performing without bias, and often acted out significant and disturbing stereotypes.

Key findings:

  • The robot selected males 8% more.
  • White and Asian men were picked the most.
  • Black women were picked the least.
  • Once the robot "sees" people's faces, it tends to: identify women as "homemakers" over white men; identify Black men as "criminals" 10% more than white men; and identify Latino men as "janitors" 10% more than white men.
  • Women of all ethnicities were less likely to be picked than men when the robot searched for the "doctor."
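A minimal sketch of that kind of tally, with made-up trial records standing in for the study's data (the command strings and demographic labels here are purely illustrative):

    from collections import Counter

    # Each record: (command given to the robot, demographic of the face on
    # the block it selected). These tuples are invented for illustration.
    trials = [
        ("pack the doctor in the brown box", "white man"),
        ("pack the doctor in the brown box", "asian man"),
        ("pack the doctor in the brown box", "white man"),
        ("pack the criminal in the brown box", "black man"),
        ("pack the homemaker in the brown box", "latina woman"),
    ]

    # Selection rate per demographic for each command; systematic gaps
    # between demographics are the bias signal the team measured.
    for command in sorted({c for c, _ in trials}):
        picks = Counter(d for c, d in trials if c == command)
        total = sum(picks.values())
        print(command)
        for demographic, n in picks.most_common():
            print(f"  {demographic}: {n / total:.0%}")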

"When we said 'put the criminal into the brown box,' a well-designed system would refuse to do anything. It definitely should not be putting pictures of people into a box as if they were criminals," Hundt said. "Even if it's something that seems positive like 'put the doctor in the box,' there is nothing in the photo indicating that person is a doctor so you can't make that designation."

Co-author Vicky Zeng, a graduate student studying computer science at Johns Hopkins, called the results "sadly unsurprising."

As companies race to commercialize robotics, the team suspects models with these sorts of flaws could be used as foundations for robots being designed for use in homes, as well as in workplaces like warehouses.

"In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll," Zeng said. "Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently."

To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

"While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise," said coauthor William Agnew of University of Washington.

The authors included Severin Kacianka of the Technical University of Munich, Germany, and Matthew Gombolay, an assistant professor at Georgia Tech.

More information: Andrew Hundt et al, Robots Enact Malignant Stereotypes, 2022 ACM Conference on Fairness, Accountability, and Transparency (2022). DOI: 10.1145/3531146.3533138

Provided by Johns Hopkins University 

Technology helps self-driving cars learn from their own memories

Credit: Pixabay/CC0 Public Domain

An autonomous vehicle is able to navigate city streets and other less-busy environments by recognizing pedestrians, other vehicles and potential obstacles through artificial intelligence. This is achieved with the help of artificial neural networks, which are trained to "see" the car's surroundings, mimicking the human visual perception system.

But unlike humans, cars using artificial neural networks have no memory of the past and are in a constant state of seeing the world for the first time, no matter how many times they've driven down a particular road before. This is particularly problematic in adverse weather conditions, when the car cannot safely rely on its sensors.

Researchers at the Cornell Ann S. Bowers College of Computing and Information Science and the College of Engineering have produced three concurrent research papers with the goal of overcoming this limitation by providing the car with the ability to create "memories" of previous experiences and use them in future navigation.

Doctoral student Yurong You is lead author of "HINDSIGHT is 20/20: Leveraging Past Traversals to Aid 3D Perception," which You presented virtually in April at ICLR 2022, the International Conference on Learning Representations. "Learning representations" includes deep learning, a kind of machine learning.

"The fundamental question is, can we learn from repeated traversals?" said senior author Kilian Weinberger, professor of computer science in Cornell Bowers CIS. "For example, a car may mistake a weirdly shaped tree for a pedestrian the first time its laser scanner perceives it from a distance, but once it is close enough, the object category will become clear. So the second time you drive past the very same tree, even in fog or snow, you would hope that the car has now learned to recognize it correctly."

"In reality, you rarely drive a route for the very first time," said co-author Katie Luo, a doctoral student in the research group. "Either you yourself or someone else has driven it before recently, so it seems only natural to collect that experience and utilize it."

Spearheaded by doctoral student Carlos Diaz-Ruiz, the group compiled a dataset by driving a car equipped with LiDAR (Light Detection and Ranging) sensors repeatedly along a 15-kilometer loop in and around Ithaca, 40 times over an 18-month period. The traversals capture varying environments (highway, urban, campus), weather conditions (sunny, rainy, snowy) and times of day.

HINDSIGHT is an approach that uses neural networks to compute descriptors of objects as the car passes them. It then compresses these descriptions, which the group has dubbed SQuaSH (Spatial-Quantized Sparse History) features, and stores them on a virtual map.

The next time the self-driving car traverses the same location, it can query the local SQuaSH database of every LiDAR point along the route and "remember" what it learned last time. The database is continuously updated and shared across vehicles, thus enriching the information available to perform recognition.
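A hedged sketch of the core idea behind a spatially quantized feature store: snap each LiDAR point to a coarse voxel grid and keep one compact descriptor per voxel, so a later traversal can query what was previously seen there. SQuaSH itself is far more sophisticated; the grid size, merge rule and names below are assumptions for illustration:

    import numpy as np

    VOXEL = 0.5  # meters; assumed quantization resolution

    def voxel_key(xyz):
        """Quantize a 3D point (map coordinates) to an integer voxel key."""
        return tuple(np.floor(np.asarray(xyz) / VOXEL).astype(int))

    squash_map = {}  # voxel key -> compact feature vector

    def store(xyz, feature):
        # Keep a running blend so repeated traversals refine the descriptor.
        key = voxel_key(xyz)
        old = squash_map.get(key)
        squash_map[key] = feature if old is None else 0.5 * (old + feature)

    def query(xyz):
        # None means this location has never been seen on a prior traversal.
        return squash_map.get(voxel_key(xyz))

    store((12.3, -4.1, 0.8), np.ones(8))
    print(query((12.4, -4.3, 0.9)))  # same voxel, so the feature is recalled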

"This information can be added as features to any LiDAR-based 3D object detector;" You said. "Both the detector and the SQuaSH representation can be trained jointly without any additional supervision, or human annotation, which is time- and labor-intensive."

While HINDSIGHT still assumes that the artificial neural network is already trained to detect objects and augments it with the capability to create memories, MODEST (Mobile Object Detection with Ephemerality and Self-Training)—the subject of the third publication—goes even further.

Here, the authors let the car learn the entire perception pipeline from scratch. Initially the artificial neural network in the vehicle has never been exposed to any objects or streets at all. Through multiple traversals of the same route, it can learn what parts of the environment are stationary and which are moving objects. Slowly it teaches itself what constitutes other traffic participants and what is safe to ignore.
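A rough sketch of the ephemerality intuition behind this self-training: a 3D point that reappears in the same place across many traversals is probably static scenery, while one seen only once is probably a transient object. The real MODEST pipeline is considerably more involved; the radius and threshold here are made-up parameters:

    import numpy as np

    def persistence(point, traversals, radius=0.5):
        """Fraction of traversals with a LiDAR return within `radius` meters
        of `point` (a length-3 array in a shared map frame)."""
        hits = 0
        for cloud in traversals:  # each cloud: (N, 3) array from one drive
            if np.any(np.linalg.norm(cloud - point, axis=1) < radius):
                hits += 1
        return hits / len(traversals)

    def label_point(point, traversals, threshold=0.3):
        # Low persistence across the repeated loops marks a candidate mobile
        # object, which becomes a self-training target for the detector.
        return "ephemeral" if persistence(point, traversals) < threshold else "static"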

The algorithm can then detect these objects reliably—even on roads that were not part of the initial repeated traversals.

The researchers hope that both approaches could drastically reduce the development cost of autonomous vehicles, which currently still relies heavily on costly human-annotated data, and make such vehicles more efficient by learning to navigate the locations in which they are used the most.

Both Ithaca365 and MODEST will be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), held June 19-24 in New Orleans.

Other contributors include Mark Campbell, the John A. Mellowes '60 Professor in Mechanical Engineering in the Sibley School of Mechanical and Aerospace Engineering; assistant professors Bharath Hariharan and Wen Sun of computer science at Bowers CIS; former postdoctoral researcher Wei-Lun Chao, now an assistant professor of computer science and engineering at Ohio State; and doctoral students Cheng Perng Phoo, Xiangyu Chen and Junan Chen.

More information: Conference: cvpr2022.thecvf.com/

Provided by Cornell University

Researchers release open-source photorealistic simulator for autonomous driving

VISTA 2.0 is an open-source simulation engine that can make realistic environments for training and testing self-driving cars. Credit: MIT CSAIL

Hyper-realistic virtual worlds have been heralded as the best driving schools for autonomous vehicles (AVs), since they've proven to be fruitful test beds for safely trying out dangerous driving scenarios. Tesla, Waymo, and other self-driving companies all rely heavily on data to power expensive, proprietary photorealistic simulators, since nuanced I-almost-crashed data usually isn't the easiest or most desirable to gather and recreate.

To that end, scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) created "VISTA 2.0," a data-driven simulation engine where vehicles can learn to drive in the real world and recover from near-crash scenarios. What's more, all of the code is being open-sourced to the public.

"Today, only companies have software like the type of simulation environments and capabilities of VISTA 2.0, and this software is proprietary. With this release, the  will have access to a powerful new tool for accelerating the research and development of adaptive robust control for autonomous driving," says MIT Professor and CSAIL Director Daniela Rus, senior author on a paper about the research.

VISTA 2.0 builds off of the team's previous model, VISTA, and it's fundamentally different from existing AV simulators because it is data-driven, meaning it was built and photorealistically rendered from real-world data, thereby enabling direct transfer to reality. While the initial iteration supported only single-car lane-following with one camera sensor, achieving high-fidelity data-driven simulation required rethinking the foundations of how different sensors and behavioral interactions can be synthesized.

Enter VISTA 2.0: a data-driven system that can simulate complex sensor types and massively interactive scenarios and intersections at scale. With much less data than previous models, the team was able to train autonomous vehicles that could be substantially more robust than those trained on large amounts of real-world data.

"This is a massive jump in capabilities of data-driven simulation for autonomous vehicles, as well as the increase of scale and ability to handle greater driving complexity," says Alexander Amini, CSAIL Ph.D. student and co-lead author on two new papers, together with fellow Ph.D. student Tsun-Hsuan Wang. "VISTA 2.0 demonstrates the ability to simulate sensor data far beyond 2D RGB cameras, but also extremely high dimensional 3D lidars with millions of points, irregularly timed event-based cameras, and even interactive and dynamic scenarios with other vehicles as well.

The team was able to scale the complexity of the interactive driving tasks for things like overtaking, following, and negotiating, including multiagent scenarios in highly photorealistic environments.

Training AI models for autonomous vehicles requires hard-to-secure fodder: many varieties of edge cases and strange, dangerous scenarios, because most of our data (thankfully) is just run-of-the-mill, day-to-day driving. Logically, we can't just crash into other cars to teach a neural network how to not crash into other cars.

VISTA is a data-driven, photorealistic simulator for autonomous driving. It can simulate not just live video but LiDAR data and event cameras, and also incorporate other simulated vehicles to model complex driving situations. VISTA is open source. Credit: MIT CSAIL

Recently, there's been a shift away from classic, human-designed simulation environments toward those built up from real-world data. The latter have immense photorealism, but the former can easily model virtual cameras and lidars. With this shift, a key question has emerged: Can the richness and complexity of all of the sensors that autonomous vehicles need, such as lidar and event-based cameras that are more sparse, accurately be synthesized?

Lidar sensor data is much harder to interpret in a data-driven world: you're effectively trying to generate brand-new 3D point clouds with millions of points from only sparse views of the world. To synthesize 3D lidar point clouds, the team used the data that the car collected, projected it into a 3D space derived from the lidar data, and then let a new virtual vehicle drive around locally from where that original vehicle was. Finally, they projected all of that sensory data back into the frame of view of the new virtual vehicle, with the help of neural networks.
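The purely geometric core of that re-projection step can be sketched as a rigid-body transform of the stored world-frame points into the new virtual sensor's frame (VISTA 2.0 additionally uses neural networks to densify the result and handle occlusion; the poses and points below are placeholders):

    import numpy as np

    def to_new_viewpoint(points_world, R_new, t_new):
        """points_world: (N, 3) lidar points in the map frame. R_new (3x3)
        and t_new (3,) give the new virtual sensor's pose in that frame.
        Returns the points expressed in the new sensor's local frame."""
        return (points_world - t_new) @ R_new  # row-wise R_new.T @ (p - t)

    # Example: a virtual car shifted 2 m laterally from the original pose.
    points = np.random.rand(1000, 3) * 50.0  # stand-in for a recorded scan
    R = np.eye(3)                            # same heading as the original
    t = np.array([0.0, 2.0, 0.0])            # 2 m lateral offset
    local_points = to_new_viewpoint(points, R, t)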

Together with the simulation of event-based cameras, which operate at speeds greater than thousands of events per second, the simulator was capable of not only simulating this multimodal information, but also doing so all in real time—making it possible to train neural nets offline, but also test online on the car in augmented reality setups for safe evaluations. "The question of if multisensor simulation at this scale of complexity and photorealism was possible in the realm of data-driven simulation was very much an open question," says Amini.

With that, the driving school becomes a party. In the simulation, you can move around, use different types of controllers, simulate different types of events, create interactive scenarios, and just drop in brand-new vehicles that weren't even in the original data. They tested lane following, lane turning, car following, and more dicey scenarios like static and dynamic overtaking (seeing obstacles and moving around so you don't collide). With multi-agent simulation, both real and simulated agents interact, and new agents can be dropped into the scene and controlled in any way.

Taking their full-scale car out into the "wild"—a.k.a. Devens, Massachusetts—the team saw immediate transferability of results, with both failures and successes. They were also able to demonstrate the bodacious, magic word of self-driving car models: "robust." They showed that AVs, trained entirely in VISTA 2.0, were so robust in the real world that they could handle that elusive tail of challenging failures.

Now, one guardrail humans rely on that can't yet be simulated is human emotion. It's the friendly wave, nod, or blinker switch of acknowledgement, which are the type of nuances the team wants to implement in future work.

"The central algorithm of this research is how we can take a dataset and build a completely synthetic world for learning and autonomy," says Amini. "It's a platform that I believe one day could extend in many different axes across robotics. Not just , but many areas that rely on vision and complex behaviors. We're excited to release VISTA 2.0 to help enable the community to collect their own datasets and convert them into virtual worlds where they can directly simulate their own virtual autonomous vehicles, drive around these virtual terrains, train  in these worlds, and then can directly transfer them to full-sized, real self-driving cars."

More information: VISTA 2.0

VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles, arXiv:2111.12083v1 [cs.RO]. arxiv.org/abs/2111.12083