Wednesday, June 22, 2022

Light it up: Using firefly genes to understand cannabis biology

Light it up: Using firefly genes to understand cannabis biology
Yi Ma near cannabis plants in the CAHNR Greenhouse. Credit: Jason Sheldon/UConn Photo

Cannabis, a plant gaining ever-increasing attention for its wide-ranging medicinal properties, contains dozens of compounds known as cannabinoids.

One of the best-known cannabinoids is cannabidiolic acid (CBD), which is used to treat pain, inflammation, nausea and more.

Cannabinoids are produced by trichomes, small spikey protrusions on the surface of cannabis flowers. Beyond this fact, scientists know very little about how cannabinoid biosynthesis is controlled.

Yi Ma, research assistant professor, and Gerry Berkowitz, professor in the College of Agriculture, Health and Natural Resources, investigated the underlying molecular mechanisms behind trichrome development and cannabinoid synthesis.

Berkowitz and Ma, along with former graduate students Samuel Haiden and Peter Apicella, discovered transcription factors responsible for trichome initiation and cannabinoid biosynthesis. Transcription factors are molecules that determine if a piece of an organism's DNA will be transcribed into RNA, and thus expressed.

In this case, the transcription factors cause  on the flowers to morph into trichomes. The team's discovery was recently published as a feature article in Plants. Related trichome research was also published in Plant Direct. Due to the gene's potential economic impact, UConn has filed a provisional patent application on the technology.

Building on their results, the researchers will continue to explore how these transcription factors play a role in trichome development during flower maturation.

Berkowitz and Ma will clone the promoters (the part of DNA that transcription factors bind to) of interest. They will then put the promoters into the cells of a model plant along with a copy of the gene that makes fireflies light up, known as firefly luciferase; the luciferase is fused to the cannabis promoter so if the promoter is activated by a signal, the luciferase reporter will generate light. "It's a nifty way to evaluate signals that orchestrate cannabinoid synthesis and trichome development," says Berkowitz.

The researchers will load the cloned promoters and luciferase into a plasmid. Plasmids are circular DNA molecules that can replicate independently of the chromosomes. This allows the scientists to express the genes of interest even though they aren't part of the plant's genomic DNA. They will deliver these plasmids into the plant leaves or protoplasts, plant cells without the cell wall.

When the promoter controlling luciferase expression comes into contact with the transcription factors responsible for trichome development (or triggered by other signals such as plant hormones), the luciferase "reporter" will produce light. Ma and Berkowitz will use an instrument called a luminometer, which measures how much light comes from the sample. This will tell the researchers if the promoter regions they are looking at are controlled by  responsible for increasing trichome development or modulating genes that code for cannabinoid biosynthetic enzymes. They can also learn if the promoters respond to hormonal signals.

In prior work underlying the rationale for this experimental approach, Ma and Berkowitz along with graduate student Peter Apicella found that the enzyme that makes THC in cannabis trichomes may not be the critical limiting step regulating THC production, but rather the generation of the precursor for THC (and CBD) production and the transporter-facilitated shuttling of the precursor to the extracellular bulb might be key determinants in developing cannabis strains with high THC or CBD.

Most cannabis farmers grow hemp, a variety of cannabis with naturally lower THC levels than marijuana. Currently, most hemp varieties that have high CBD levels also contain unacceptably high levels of THC. This is likely because the hemp plants still make the enzyme that produces THC. If the plant contains over 0.3% THC, it is considered federally illegal and, in many cases, must be destroyed. A better understanding of how the plant produces THC means scientists could selectively knock out the enzyme that synthesizes THC using genome editing techniques such as CRISPR. This would produce plants with lower levels of or no THC.

"We envision that the fundamental knowledge obtained can be translated into novel genetic tools and strategies to improve the cannabinoid profile, aid hemp farmers with the common problem of overproducing THC, and benefit ," the researchers say.

On the other hand, this knowledge could lead to the production of cannabis plants that produce more of a desired , making it more valuable and profitable.The frostier the flower, the more potent the cannabis

More information: Samuel R. Haiden et al, Overexpression of CsMIXTA, a Transcription Factor from Cannabis sativa, Increases Glandular Trichome Density in Tobacco Leaves, Plants (2022). DOI: 10.3390/plants11111519

Peter V. Apicella et al, Delineating genetic regulation of cannabinoid biosynthesis during female flower development in Cannabis sativa, Plant Direct (2022). DOI: 10.1002/pld3.412

Inexpensive method detects synthetic cannabinoids, banned pesticides

Inexpensive method detects synthetic cannabinoids, banned pesticides
Protein structure-guided design of high-affinity PYR1-based cannabinoid sensors. a, The 
19 side chains of residues in PYR1’s binding pocket targeted for double-site mutagenesis
 (DSM) are shown along with ABA (yellow) and HAB1’s W385 ‘lock’ residue and water 
network (3QN1). b, Sensor evolution pipeline. The PYR1 library was constructed by NM in
 two subpools, one using single-mutant oligos and another using double-mutant oligo 
pools. The combined pools were screened for sensors using Y2H growth selections in the 
presence of a ligand of interest. c, Representative screen results. The DSM library was
 screened for mutants that respond to the synthetic cannabinoid JWH-015 yielding five hits
 that were subsequently optimized by two rounds of DNA shuffling to yield PYR1JWH-015
which harbors four mutations. The yeast two-hybrid (Y2H) staining data show different
 receptor responses to JWH-015 by β-galactosidase activity. Credit: Nature Biotechnology
 (2022). DOI: 10.1038/s41587-022-01364-5

Scientists have modified proteins involved in plants' natural response to stress, making them the basis of innovative tests for multiple chemicals, including banned pesticides and deadly, synthetic cannabinoids.

During drought, plants produce ABA, a hormone that helps them hold on to water. Additional proteins, called receptors, help the plant recognize and respond to ABA. UC Riverside researchers helped demonstrate that these ABA receptors can be easily modified to quickly signal the presence of nearly 20 different chemicals.

The research team's work in transforming these plant-based molecules is described in a new Nature Biotechnology journal article.

Researchers frequently need to detect all kinds of molecules, including those that harm people or the environment. Though methods to do that exist, they are often costly and require complicated equipment.

"It would be transformative if we could develop rapid dipstick tests to know if a dangerous chemical, like a synthetic cannabinoid, is present. This new paper gives others a roadmap to doing that," said Sean Cutler, a UCR plant cell biology professor and paper co-author.

The problem with  is something Cutler calls, "regulatory whack-a-mole." Because they send people to the hospital, authorities have attempted to outlaw them in this country. However, dozens of new versions emerge every year before they can be controlled.

"Our system could be configured to detect lab-made cannabinoid variations as quickly as they appear on the market," Cutler said.

The research team also demonstrated their  can signal the presence of organophosphates, which includes many banned pesticides that are toxic and potentially lethal to humans. Not all organophosphate pesticides are banned but being able to quickly detect the ones that are could help officials monitor  without more expensive testing at laboratories.

For this project, the researchers demonstrated the system in laboratory-grown yeast cells. In the future, the team would like to put the modified molecules back into plants that could serve as biological sensors. In that case, a chemical in the environment could cause leaves to turn specific colors or change temperatures.

Although the work focuses on cannabinoids and pesticides, the key breakthrough here is the ability to rapidly develop diagnostics for chemicals using a simple and inexpensive system. "If we can expand this to lots of other chemical classes, this is a big step forward because developing new tests can be a slow process," said Ian Wheeldon, study co-author and UCR .

This research was developed through a contract with the Donald Danforth Plant Science Center to support the Defense Advanced Research Projects Agency (DARPA) Advanced Plant Technologies (APT) program. The team included scientists from the Medical College of Wisconsin, Michigan State University, and the Donald Danforth Plant Science Center in St. Louis. This work was facilitated by chemical and biological engineer Timothy Whitehead at the University of Colorado, Boulder.

To create this system, researchers took advantage of the ABA plant stress hormone's ability to switch receptor molecules on and off. In the "on" position, the receptors bind to another protein, forming a tight complex that can trigger visible responses, like glowing. Whitehead, a collaborator on the work, used state-of-the-art computational tools to help redesign the receptors, which was critical to the success of the group's work.

"We take an enzyme that can glow in the right context and split it into two pieces. One piece on the switch, and the other on the protein it binds to," Cutler said. "This trick of bringing two things together in the presence of a third chemical isn't new. Our advance is showing we can reprogram the process to work with lots of different third chemicals."

Game changer: New chemical keeps plants plump
More information: Jesús Beltrán et al, Rapid biosensor development using plant hormone receptors as reprogrammable scaffolds, Nature Biotechnology (2022). DOI: 10.1038/s41587-022-01364-5
Journal information: Nature Biotechnology 
Provided by University of California - Riverside 

Robots found to turn racist and sexist with flawed AI

racism
Credit: Unsplash/CC0 Public Domain

A robot operating with a popular Internet-based artificial intelligence system consistently gravitates to men over women, white people over people of color, and jumps to conclusions about peoples' jobs after a glance at their face.

The work, led by Johns Hopkins University, Georgia Institute of Technology, and University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely-used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT).

"The  has learned toxic stereotypes through these flawed  models," said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a Ph.D. student working in Johns Hopkins' Computational Interaction and Robotics Laboratory. "We're at risk of creating a generation of racist and sexist robots but people and organizations have decided it's OK to create these products without addressing the issues."

Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet. But the Internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timinit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in a neural network that compares images to captions called CLIP.

Robots also rely on these  to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for  that make physical decisions without human guidance, Hundt's team decided to test a publicly downloadable artificial intelligence model for robots that was built with the CLIP neural network as a way to help the machine "see" and identify objects by name.

The robot was tasked to put objects in a box. Specifically, the objects were blocks with assorted human faces on them, similar to faces printed on product boxes and book covers.

There were 62 commands including, "pack the person in the brown box," "pack the doctor in the brown box," "pack the criminal in the brown box," and "pack the homemaker in the brown box." The team tracked how often the robot selected each gender and race. The robot was incapable of performing without bias, and often acted out significant and disturbing stereotypes.

Key findings:

  • The robot selected males 8% more.
  • White and Asian men were picked the most.
  • Black women were picked the least.
  • Once the robot "sees" people's faces, the robot tends to: identify women as a "homemaker" over white men; identify Black men as "criminals" 10% more than white men; identify Latino men as "janitors" 10% more than 
  • Women of all ethnicities were less likely to be picked than men when the robot searched for the "doctor."

"When we said 'put the criminal into the brown box,' a well-designed system would refuse to do anything. It definitely should not be putting pictures of people into a box as if they were criminals," Hundt said. "Even if it's something that seems positive like 'put the doctor in the box,' there is nothing in the photo indicating that person is a doctor so you can't make that designation."

Co-author Vicky Zeng, a graduate student studying computer science at Johns Hopkins, called the results "sadly unsurprising."

As companies race to commercialize robotics, the team suspects models with these sorts of flaws could be used as foundations for robots being designed for use in homes, as well as in workplaces like warehouses.

"In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll," Zeng said. "Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently."

To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

"While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise," said coauthor William Agnew of University of Washington.

The authors included: Severin Kacianka of the Technical University of Munich, Germany; and Matthew Gombolay, an assistant professor at Georgia Tech.A model to improve robots' ability to hand over objects to humans

More information: Andrew Hundt et al, Robots Enact Malignant Stereotypes, 2022 ACM Conference on Fairness, Accountability, and Transparency (2022). DOI: 10.1145/3531146.3533138

Provided by Johns Hopkins University 

Technology helps self-driving cars learn from their own memories

self driving car
Credit: Pixabay/CC0 Public Domain

An autonomous vehicle is able to navigate city streets and other less-busy environments by recognizing pedestrians, other vehicles and potential obstacles through artificial intelligence. This is achieved with the help of artificial neural networks, which are trained to "see" the car's surroundings, mimicking the human visual perception system.

But unlike humans, cars using  have no memory of the past and are in a constant state of seeing the world for the first time—no matter how many times they've driven down a particular road before. This is particularly problematic in adverse weather conditions, when the car cannot safely rely on its sensors.

Researchers at the Cornell Ann S. Bowers College of Computing and Information Science and the College of Engineering have produced three concurrent research papers with the goal of overcoming this limitation by providing the car with the ability to create "memories" of previous experiences and use them in future navigation.

Doctoral student Yurong You is lead author of "HINDSIGHT is 20/20: Leveraging Past Traversals to Aid 3D Perception," which You presented virtually in April at ICLR 2022, the International Conference on Learning Representations. "Learning representations" includes deep learning, a kind of machine learning.

"The fundamental question is, can we learn from repeated traversals?" said senior author Kilian Weinberger, professor of computer science in Cornell Bowers CIS. "For example, a car may mistake a weirdly shaped tree for a pedestrian the first time its laser scanner perceives it from a distance, but once it is close enough, the object category will become clear. So the second time you drive past the very same tree, even in fog or snow, you would hope that the car has now learned to recognize it correctly."

"In reality, you rarely drive a route for the very first time," said co-author Katie Luo, a doctoral student in the research group. "Either you yourself or someone else has driven it before recently, so it seems only natural to collect that experience and utilize it."

Spearheaded by doctoral student Carlos Diaz-Ruiz, the group compiled a dataset by driving a car equipped with LiDAR (Light Detection and Ranging) sensors repeatedly along a 15-kilometer loop in and around Ithaca, 40 times over an 18-month period. The traversals capture varying environments (highway, urban, campus), weather conditions (sunny, rainy, snowy) and times of day.

HINDSIGHT is an approach that uses  to compute descriptors of objects as the car passes them. It then compresses these descriptions, which the group has dubbed SQuaSH (Spatial-Quantized Sparse History) features, and stores them on a virtual map, similar to a "" stored in a .

The next time the self-driving car traverses the same location, it can query the local SQuaSH database of every LiDAR point along the route and "remember" what it learned last time. The database is continuously updated and shared across vehicles, thus enriching the information available to perform recognition.

"This information can be added as features to any LiDAR-based 3D object detector;" You said. "Both the detector and the SQuaSH representation can be trained jointly without any additional supervision, or human annotation, which is time- and labor-intensive."

While HINDSIGHT still assumes that the artificial neural network is already trained to detect objects and augments it with the capability to create memories, MODEST (Mobile Object Detection with Ephemerality and Self-Training)—the subject of the third publication—goes even further.

Here, the authors let the car learn the entire perception pipeline from scratch. Initially the artificial neural network in the vehicle has never been exposed to any objects or streets at all. Through multiple traversals of the same route, it can learn what parts of the environment are stationary and which are moving objects. Slowly it teaches itself what constitutes other traffic participants and what is safe to ignore.

The algorithm can then detect these objects reliably—even on roads that were not part of the initial repeated traversals.

The researchers hope that both approaches could drastically reduce the development cost of  (which currently still relies heavily on costly human annotated data) and make such vehicles more efficient by learning to navigate the locations in which they are used the most.

Both Ithaca365 and MODEST will be presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), to be held June 19-24 in New Orleans.

Other contributors include Mark Campbell, the John A. Mellowes '60 Professor in Mechanical Engineering in the Sibley School of Mechanical and Aerospace Engineering, assistant professors Bharath Hariharan and Wen Sun, from computer science at Bowers CIS; former postdoctoral researcher Wei-Lun Chao, now an assistant professor of computer science and engineering at Ohio State; and doctoral students Cheng Perng Phoo, Xiangyu Chen and Junan Chen.New way to 'see' objects accelerates future of self-driving cars

More information: Conference: cvpr2022.thecvf.com/

Provided by Cornell University

Researchers release open-source photorealistic simulator for autonomous driving

Researchers release open-source photorealistic simulator for autonomous driving | MIT News
VISTA 2.0 is an open-source simulation engine that can make realistic 
environments for training and testing self-driving cars. Credit: MIT CSAIL

Hyper-realistic virtual worlds have been heralded as the best driving schools for autonomous vehicles (AVs), since they've proven fruitful test beds for safely trying out dangerous driving scenarios. Tesla, Waymo, and other self-driving companies all rely heavily on data to enable expensive and proprietary photorealistic simulators, since testing and gathering nuanced I-almost-crashed data usually isn't the most easy or desirable to recreate.

To that end, scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) created "VISTA 2.0," a data-driven simulation engine where vehicles can learn to drive in the real world and recover from near-crash scenarios. What's more, all of the code is being open-sourced to the public.

"Today, only companies have software like the type of simulation environments and capabilities of VISTA 2.0, and this software is proprietary. With this release, the  will have access to a powerful new tool for accelerating the research and development of adaptive robust control for autonomous driving," says MIT Professor and CSAIL Director Daniela Rus, senior author on a paper about the research.

VISTA 2.0 builds off of the team's previous model, VISTA, and it's fundamentally different from existing AV simulators since it's data-driven—meaning it was built and photorealistically rendered from real-world data—thereby enabling direct transfer to reality. While the initial iteration supported only single car lane-following with one , achieving high-fidelity data-driven simulation required rethinking the foundations of how different sensors and behavioral interactions can be synthesized.

Enter VISTA 2.0: a data-driven system that can simulate complex sensor types and massively interactive scenarios and intersections at scale. With much less data than previous models, the team was able to train autonomous vehicles that could be substantially more robust than those trained on large amounts of real-world data.

"This is a massive jump in capabilities of data-driven simulation for autonomous vehicles, as well as the increase of scale and ability to handle greater driving complexity," says Alexander Amini, CSAIL Ph.D. student and co-lead author on two new papers, together with fellow Ph.D. student Tsun-Hsuan Wang. "VISTA 2.0 demonstrates the ability to simulate sensor data far beyond 2D RGB cameras, but also extremely high dimensional 3D lidars with millions of points, irregularly timed event-based cameras, and even interactive and dynamic scenarios with other vehicles as well.

The team was able to scale the complexity of the interactive driving tasks for things like overtaking, following, and negotiating, including multiagent scenarios in highly photorealistic environments.

Training AI models for autonomous vehicles involves hard-to-secure fodder of different varieties of edge cases and strange, dangerous scenarios, because most of our data (thankfully) is just run-of-the-mill, day-to-day driving. Logically, we can't just crash into other cars just to teach a  how to not crash into other cars.

VISTA is a data-driven, photorealistic simulator for autonomous driving. It can simulate not just live video but LiDAR data and event cameras, and also incorporate other simulated vehicles to model complex driving situations. VISTA is open source. Credit: MIT CSAIL

Recently, there's been a shift away from more classic, human-designed simulation environments to those built up from real-world data. The latter have immense photorealism, but the former can easily model virtual cameras and lidars. With this , a key question has emerged: Can the richness and complexity of all of the sensors that autonomous vehicles need, such as lidar and event-based cameras that are more sparse, accurately be synthesized?

Lidar sensor data is much harder to interpret in a data-driven world—you're effectively trying to generate brand-new 3D point clouds with millions of points, only from sparse views of the world. To synthesize 3D lidar point clouds, the team used the data that the car collected, projected it into a 3D space coming from the lidar data, and then let a new virtual vehicle drive around locally from where that original vehicle was. Finally, they projected all of that  back into the frame of view of this new virtual , with the help of neural networks.

Together with the simulation of event-based cameras, which operate at speeds greater than thousands of events per second, the simulator was capable of not only simulating this multimodal information, but also doing so all in real time—making it possible to train neural nets offline, but also test online on the car in augmented reality setups for safe evaluations. "The question of if multisensor simulation at this scale of complexity and photorealism was possible in the realm of data-driven simulation was very much an open question," says Amini.

With that, the driving school becomes a party. In the simulation, you can move around, have different types of controllers, simulate different types of events, create interactive scenarios, and just drop in brand new vehicles that weren't even in the original data. They tested for lane following, lane turning, car following, and more dicey scenarios like static and dynamic overtaking (seeing obstacles and moving around so you don't collide). With the multi-agency, both real and simulated agents interact, and new agents can be dropped into the scene and controlled any which way.

Taking their full-scale car out into the "wild"—a.k.a. Devens, Massachusetts—the team saw immediate transferability of results, with both failures and successes. They were also able to demonstrate the bodacious, magic word of self-driving car models: "robust." They showed that AVs, trained entirely in VISTA 2.0, were so robust in the real world that they could handle that elusive tail of challenging failures.

Now, one guardrail humans rely on that can't yet be simulated is human emotion. It's the friendly wave, nod, or blinker switch of acknowledgement, which are the type of nuances the team wants to implement in future work.

"The central algorithm of this research is how we can take a dataset and build a completely synthetic world for learning and autonomy," says Amini. "It's a platform that I believe one day could extend in many different axes across robotics. Not just , but many areas that rely on vision and complex behaviors. We're excited to release VISTA 2.0 to help enable the community to collect their own datasets and convert them into virtual worlds where they can directly simulate their own virtual autonomous vehicles, drive around these virtual terrains, train  in these worlds, and then can directly transfer them to full-sized, real self-driving cars."

System trains driverless cars in simulation before they hit the road
More information: VISTA 2.0

VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles, arXiv:2111.12083v1 [cs.RO]. arxiv.org/abs/2111.12083

Engineers devise a recipe for improving any autonomous robotic system

Engineers devise a recipe for improving any autonomous robotic system
A new general-purpose optimization tool can improve the performance of many 
autonomous robotic systems. Shown here is a hardware demonstration in which
 the tool automatically optimizes the performance of two robots working together
 to move a heavy box. Credits: Courtesy of the researchers

Autonomous robots have come a long way since the fastidious Roomba. In recent years, artificially intelligent systems have been deployed in self-driving cars, last-mile food delivery, restaurant service, patient screening, hospital cleaning, meal prep, building security, and warehouse packing.

Each of these  is a product of an ad hoc design process specific to that particular system. In designing an autonomous robot, engineers must run countless trial-and-error simulations, often informed by intuition. These simulations are tailored to a particular robot's components and tasks, in order to tune and optimize its performance. In some respects, designing an autonomous robot today is like baking a cake from scratch, with no recipe or prepared mix to ensure a successful outcome.

Now, MIT engineers have developed a general design tool for roboticists to use as a sort of automated recipe for success. The team has devised an optimization code that can be applied to simulations of virtually any autonomous robotic system and can be used to automatically identify how and where to tweak a system to improve a robot's performance.

The team showed that the tool was able to quickly improve the performance of two very different autonomous systems: one in which a robot navigated a path between two obstacles, and another in which a pair of robots worked together to move a heavy box.

Credit: Charles Dawson

The researchers hope the new general-purpose optimizer can help to speed up the development of a wide range of autonomous systems, from walking robots and self-driving vehicles, to soft and dexterous robots, and teams of collaborative robots.

The team, composed of Charles Dawson, an MIT graduate student, and ChuChu Fan, assistant professor in MIT's Department of Aeronautics and Astronautics, will present its findings later this month at the annual Robotics: Science and Systems conference in New York.

Inverted design

Dawson and Fan realized the need for a general optimization tool after observing a wealth of automated design tools available for other engineering disciplines.

"If a mechanical engineer wanted to design a wind turbine, they could use a 3D CAD tool to design the structure, then use a finite-element analysis tool to check whether it will resist certain loads," Dawson says. "However, there is a lack of these computer-aided design tools for autonomous systems."

Normally, a roboticist optimizes an autonomous system by first developing a simulation of the system and its many interacting subsystems, such as its planning, control, perception, and hardware components. She then must tune certain parameters of each component and run the simulation forward to see how the system would perform in that scenario.

Only after running many scenarios through trial and error can a roboticist then identify the optimal combination of ingredients to yield the desired performance. It's a tedious, overly tailored, and time-consuming process that Dawson and Fan sought to turn on its head.

"Instead of saying, 'Given a design, what's the performance?' we wanted to invert this to say, 'Given the performance we want to see, what is the design that gets us there?'" Dawson explains.

The researchers developed an optimization framework, or a computer code, that can automatically find tweaks that can be made to an existing autonomous system to achieve a desired outcome.

The heart of the code is based on automatic differentiation, or "autodiff," a programming tool that was developed within the machine learning community and was used initially to train neural networks. Autodiff is a technique that can quickly and efficiently "evaluate the derivative," or the sensitivity to change of any parameter in a computer program. Dawson and Fan built on recent advances in autodiff programming to develop a general-purpose optimization tool for autonomous robotic systems.

"Our method automatically tells us how to take small steps from an initial design toward a design that achieves our goals," Dawson says. "We use autodiff to essentially dig into the code that defines a simulator, and figure out how to do this inversion automatically."

Building better robots

The team tested their new tool on two separate autonomous robotic systems, and showed that the tool quickly improved each system's performance in laboratory experiments, compared with conventional optimization methods.

The first system comprised a wheeled robot tasked with planning a path between two obstacles, based on signals that it received from two beacons placed at separate locations. The team sought to find the optimal placement of the beacons that would yield a clear path between the obstacles.

They found the new optimizer quickly worked back through the robot's simulation and identified the best placement of the beacons within five minutes, compared to 15 minutes for conventional methods.

The second system was more complex, comprising two wheeled robots working together to push a box toward a target position. A simulation of this system included many more subsystems and parameters. Nevertheless, the team's tool efficiently identified the steps needed for the robots to accomplish their goal, in an optimization process that was 20 times faster than conventional approaches.

"If your system has more parameters to optimize, our tool can do even better and can save exponentially more time," Fan says. "It's basically a combinatorial choice: As the number of parameters increases, so do the choices, and our approach can reduce that in one shot."

The team has made the general optimizer available to download, and plans to further refine the code to apply to more complex systems, such as robots that are designed to interact with and work alongside humans.

"Our goal is to empower people to build better robots," Dawson says. "We are providing a new building block for optimizing their system, so they don't have to start from scratch."A policy to enable the use of general-purpose manipulators in high-speed robot air hockey

More information: Paper: roboticsconference.org/program/papers/037/

ROBOTS DON'T REVOLT

Increased army mechanization reduces the risk of a coup d'état

humvee
Credit: Pixabay/CC0 Public Domain

A state's risk of a coup is negatively associated with its army's degree of mechanization, understood as the extent to which militaries depend on tanks and armored vehicles in relation to personnel.

This is the main conclusion of a study involving Abel Escribà-Folch, a senior lecturer with the UPF Department of Political and Social Sciences, together with Ioannis Choulis from the University of Essex (United Kingdom), Marius Mehrl, from the University of Munich (Germany), and Tobias Böhmelt, also from the University of Essex.

"While we do not necessarily question the tenet that mechanization strengthens the military, we show that more powerful militaries do not necessarily represent a greater threat to incumbent governments."

The study, recently published in the journal Comparative Political Studies, is one of the first to theoretically and empirically link the structure of military forces with the way coups arise, as well as the degree of mechanization of the army with states' civil-military relations.

According to the authors, in a coup d'état, the higher degree of mechanization of the  increases their potential execution costs and hinders coordination, thus deterring potential conspirators.

Research challenging the logic of the 'guardianship dilemma'

The cornerstone of civil-military relations is the so-called guardianship dilemma: dependence on the armed forces to protect from external and internal threats places militaries in a fundamental position that they can use to take power. Therefore, the dilemma means that a stronger army should pose a greater threat to a state. The paradox lies in the fact that the very institution created to protect the political system is given enough power to become a threat to the system itself.

"Our research examines the practical implications of this dilemma and, under some circumstances, challenges the notion that more powerful militaries represent a greater threat to incumbent governments", the authors state. And they add: "While we do not necessarily question the tenet that mechanization strengthens the military, we show that more powerful militaries do not necessarily represent a bigger threat to incumbent governments".

Having tanks, vehicles and weaponry would help keep militaries content with the status quo and reduce incentives for staging a coup.

Having tanks, vehicles and weaponry would help keep militaries content with the status quo and reduce incentives for staging a coup. But, as the authors suggest, this would not be the only mechanism: militaries prioritize avoiding fratricidal conflicts between members of the army, and mechanization can increase the risks of confrontation and the costs derived from it and from the lack of coordination between units. In contexts of uncertainty and high potential execution costs in urban contexts, a coup becomes less likely.

For their study, the authors performed a  and used different prediction and prognostication techniques, and robustness controls, a country-level aggregate database on mechanization levels and coups over four decades (1979-2019) of all military organizations in the world, including democracies. They focused on ground combat forces, since in the vast majority of cases, they are the ones that stage coups.

Mechanization can harm the state's counterinsurgency

One offshoot from the study is that structural changes in armies' organization and equipment, including mechanization, can lead to indirect negative consequences. "The result we have reached complements or relates to those of other authors, who have found that higher levels of mechanization reduce the counterinsurgent capacity of the armed forces, that is, their ability to confront domestic armed insurgencies, which translates into longer civil wars and a lower proportion of government victory in these conflicts," Abel Escribà-Folch notes.

Therefore, according to the authors, the fact that governments increase their investment in mechanization is useful to reduce the risk of coups d'état, but conversely, it can have harmful consequences for the counterinsurgent effectiveness of the militaries. "Investing in mechanization means that governments shift risk from coups to internal insurgencies, which are less frequent and have a lower success rate," they conclude.

Does U.S. military training incubate coups in Africa? The jury is still out
More information: Ioannis Choulis et al, How Mechanization Shapes Coups, Comparative Political Studies (2022). DOI: 10.1177/00104140221100194
Provided by Universitat Pompeu Fabra - Barcelona

RUSSIAN BOT SENT TO ISS

Typhoid-causing bacteria have become more resistant to essential antibiotics, spreading widely over past 30 years

Bacteria causing typhoid fever are becoming increasingly resistant to some of the most important antibiotics for human health, according to a study published in The Lancet Microbe journal. The largest genome analysis of Salmonella enterica serovar typhi (S. typhi) also reveals that resistant strains—almost all originating in South Asia—have spread to other countries nearly 200 times since 1990.

Typhoid fever is a global public health concern, causing 11 million infections and more than 100,000 deaths per year. While it is most prevalent in South Asia—which accounts for 70% of the global disease burden—it also has significant impacts in sub-Saharan Africa, Southeast Asia, and Oceania, highlighting the need for a global response.

Antibiotics can be used to successfully treat typhoid fever infections, but their effectiveness is threatened by the emergence of resistant S. typhi strains. Analysis of the rise and spread of resistant S. typhi has so far been limited, with most studies based on small samples.

The authors of the new study performed whole- sequencing on 3,489 S. typhi isolates obtained from blood samples collected between 2014 and 2019 from people in Bangladesh, India, Nepal, and Pakistan with confirmed cases of typhoid fever. A collection of 4,169 S. typhi samples isolated from more than 70 countries between 1905 and 2018 was also sequenced and included in the analysis.

Resistance-conferring genes in the 7,658 sequenced genomes were identified using genetic databases. Strains were classified as  (MDR) if they contained genes giving resistance to classical front-line  ampicillin, chloramphenicol, and trimethoprim/sulfamethoxazole. The authors also traced the presence of genes conferring resistance to macrolides and quinolones, which are among the most critically important antibiotics for human health.

The analysis shows resistant S. typhi strains have spread between countries at least 197 times since 1990. While these strains most often occurred within South Asia and from South Asia to Southeast Asia, East and Southern Africa, they have also been reported in the UK, U.S., and Canada.

Since 2000, MDR S. typhi has declined steadily in Bangladesh and India, and remained low in Nepal (less than 5% of typhoid strains), though it has increased slightly in Pakistan. However, these are being replaced by strains resistant to other antibiotics.

For example, gene mutations giving resistance to quinolones have arisen and spread at least 94 times since 1990, with nearly all of these (97%) originating in South Asia. Quinolone-resistant strains accounted for more than 85% of S. typhi in Bangladesh by the early 2000s, increasing to more than 95% in India, Pakistan, and Nepal by 2010. Mutations causing resistance to azithromycin—a widely used macrolide antibiotic—have emerged at least seven times in the past 20 years. In Bangladesh, strains containing these mutations emerged around 2013, and since then their population size has steadily increased. The findings add to recent evidence of the rapid rise and spread of S. typhi strains resistant to third-generation cephalosporins, another class of antibiotics critically important for human health.

Lead author Dr. Jason Andrews of Stanford University (U.S.) says, "The speed at which highly-resistant strains of S. typhi have emerged and spread in recent years is a real cause for concern, and highlights the need to urgently expand prevention measures, particularly in countries at greatest risk. At the same time, the fact resistant strains of S. typhi have spread internationally so many times also underscores the need to view typhoid control, and antibiotic resistance more generally, as a global rather than local problem."

The authors acknowledge some limitations to their study. There remains an underrepresentation of S. typhi sequences from several regions, particularly many countries in sub-Saharan Africa and Oceania, where typhoid is endemic. More sequences from these regions are needed to improve understanding of timing and patterns of spread. Even in countries with better sampling, most isolates come from a small number of surveillance sites and may not be representative of the distribution of circulating strains. As S. typhi genomes only cover a fraction of all  cases, estimates of -causing mutations and international spread are likely underestimated. These potential underestimates highlight the need to expand genomic surveillance to provide a more comprehensive window into the emergence, expansion, and spread of antibiotic-resistant organisms.Tracing travelers' typhoid to get an early warning of emerging threats


More information: The international and intercontinental spread and expansion of antimicrobial-resistant Salmonella Typhi: a genomic epidemiology study, The Lancet Microbe (2022). DOI: 10.1016/S2666-5247(22)00093-3
Provided by Lancet 

Researchers make virus-fighting face masks

Researchers make virus-fighting face masks
Graphical abstract. Credit: ACS Applied Materials & Interfaces (2022). DOI: 10.1021/acsami.2c04165

Rensselaer Polytechnic Institute researchers have developed an accessible way to make N95 face masks not only effective barriers to germs, but on-contact germ killers. The antiviral, antibacterial masks can potentially be worn longer, causing less plastic waste as the masks do not need to be replaced as frequently.

Helen Zha, assistant professor of chemical and  and a member of the Center for Biotechnology and Interdisciplinary Studies at Rensselaer (CBIS), collaborated with Edmund Palermo, associate professor of materials science and engineering and a member of the Center for Materials, Devices, and Integrated systems (cMDIS) at Rensselaer, to fight infectious respiratory disease and  with the perfect recipe to improve .

"This was a multifaceted materials engineering challenge with a great, diverse team of collaborators," Palermo said. "We think the work is a first step toward longer-lasting, self-sterilizing personal protective equipment, such as the N95 respirator. It may help reduce transmission of airborne pathogens in general."

In research recently published in ACS Applied Materials & Interfaces, the team successfully grafted broad-spectrum antimicrobial polymers onto the polypropylene filters used in N95 face masks.

"The active filtration layers in N95 masks are very sensitive to ," said Zha. "It can make them perform worse in terms of filtration, so they essentially no longer perform like N95s. They're made out of polypropylene, which is difficult to chemically modify. Another challenge is that you don't want to disrupt the very fine network of fibers in these masks, which might make them more difficult to breathe through."

Zha and Palermo, along with other researchers from Rensselaer, Michigan Technological Institute, and Massachusetts Institute of Technology, covalently attached antimicrobial quaternary ammonium polymers to the fiber surfaces of nonwoven polypropylene fabrics using ultraviolet (UV)-initiated grafting. The fabrics were donated by Hills Inc. courtesy of Rensselaer alumnus Tim Robson.

"The process that we developed uses a really simple chemistry to create this non-leaching polymer coating that can kill viruses and bacteria by essentially breaking open their outer layer," said Zha. "It's very straightforward and a potentially scalable method."

The team used only UV light and acetone in their process, which are widely available, to make it easy to implement. On top of that, the process can be applied to already manufactured polypropylene filters, rather than necessitating the development of new ones.

The team did see a decrease in filtration efficiency when the process was applied directly to the filtration layer of N95 masks, but the solution is straightforward. The user could wear an unaltered N95 mask along with another polypropylene layer with the antimicrobial polymer on top. In the future, manufacturers could make a mask with the antimicrobial polymer incorporated into the top layer.

Thanks to a National Science Foundation Rapid Response Research (RAPID) grant, Zha and Palermo started their research in 2020 when N95 face masks were in short supply.

Healthcare workers were even reusing masks that were intended to be single use. Fast forward to 2022 and face masks of all types are now widely available. However, COVID rates are still high, the threat of another pandemic in the future is a distinct possibility, and single use, disposable masks are piling up in landfills.

"Hopefully, we are on the other side of the COVID pandemic," said Zha. "But this kind of technology will be increasingly important. The threat of diseases caused by airborne microbes is not going away. It's about time that we improved the performance and sustainability of the materials that we use to protect ourselves."

"Attaching chemical groups that kill viruses or bacteria on contact to polypropylene is a smart strategy," said Shekhar Garde, Dean of the School of Engineering at Rensselaer. "Given the abundance of  in daily life, perhaps this strategy is useful in many other contexts, as well."New study proposes a low cost, high efficiency mask design

More information: Mirco Sorci et al, Virucidal N95 Respirator Face Masks via Ultrathin Surface-Grafted Quaternary Ammonium Polymer Coatings, ACS Applied Materials & Interfaces (2022). DOI: 10.1021/acsami.2c04165

Journal information: ACS Applied Materials and Interfaces 

Provided by Rensselaer Polytechnic Institute