Saturday, May 04, 2024

 

Toxic chemicals can be detected with new AI method




CHALMERS UNIVERSITY OF TECHNOLOGY

Model architecture for the AI method that predicts toxicity of chemicals 

IMAGE: 

A representation of the molecule's structure is used as input to a pretrained transformer, which interprets the molecular structure. The transformer creates a so-called “vector embedding” – a numerical representation of the toxicity of the structure. That embedding is then used as input to a deep neural network (DNN), together with information about the type of toxic effect to be assessed and the exposure duration. The output of the neural network is the predicted molecule concentration that causes the requested effect.


CREDIT: CHALMERS UNIVERSITY OF TECHNOLOGY AND THE UNIVERSITY OF GOTHENBURG




Swedish researchers at Chalmers University of Technology and the University of Gothenburg have developed an AI method that improves the identification of toxic chemicals – based solely on knowledge of the molecular structure. The method can contribute to better control and understanding of the ever-growing number of chemicals used in society, and can also help reduce the number of animal tests.

The use of chemicals in society is extensive, and they occur in everything from household products to industrial processes. Many chemicals reach our waterways and ecosystems, where they may cause negative effects on humans and other organisms. One example is PFAS, a group of problematic substances that have recently been found in concerning concentrations in both groundwater and drinking water. PFAS have been used, for example, in firefighting foam and in many consumer products.

Negative effects for humans and the environment arise despite extensive chemical regulations, which often require time-consuming animal testing to demonstrate when chemicals can be considered safe. In the EU alone, more than two million animals are used annually to comply with various regulations. At the same time, new chemicals are developed at a rapid pace, and it is a major challenge to determine which of these need to be restricted due to their toxicity to humans or the environment.

Valuable help in the development of chemicals

The new method developed by the Swedish researchers utilises artificial intelligence for rapid and cost-effective assessment of chemical toxicity. It can therefore be used to identify toxic substances at an early phase and help reduce the need for animal testing.

"Our method is able to predict whether a substance is toxic or not based on its chemical structure. It has been developed and refined by analysing large datasets from laboratory tests performed in the past. The method has thereby been trained to make accurate assessments for previously untested chemicals," says Mikael Gustavsson, researcher at the Department of Mathematical Sciences at Chalmers University of Technology, and at the Department of Biology and Environmental Sciences at the University of Gothenburg.

"There are currently more than 100,000 chemicals on the market, but only a small part of these have a well-described toxicity towards humans or the environment. To assess the toxicity of all these chemicals using conventional methods, including animal testing, is not practically possible. Here, we see that our method can offer a new alternative," says Erik Kristiansson, professor at the Department of Mathematical Sciences at Chalmers and at the University of Gothenburg.

The researchers believe that the method can be very useful within environmental research, as well as for authorities and companies that use or develop new chemicals. They have therefore made it open and publicly available.

Broader and more accurate than today's computational tools

Computational tools for finding toxic chemicals already exist, but so far their applicability domains have been too narrow, or their accuracy too low, to replace laboratory tests to any great extent. In their study, the researchers compared their method with three other commonly used computational tools and found that the new method is both more accurate and more generally applicable.

"The type of AI we use is based on advanced deep learning methods," says Erik Kristiansson. "Our results show that AI-based methods are already on par with conventional computational approaches, and as the amount of available data continues to increase, we expect AI methods to improve further. Thus, we believe that AI has the potential to markedly improve computational assessment of chemical toxicity.”

The researchers predict that AI systems will be able to replace laboratory tests to an increasingly greater extent.

"This would mean that the number of animal experiments could be reduced, as well as the economic costs when developing new chemicals. The possibility to rapidly prescreen large and diverse bodies of data can therefore aid the development of new and safer chemicals and help find substitutes for toxic substances that are currently in use. We thus believe that AI-based methods will help reduce the negative impacts of chemical pollution on humans and on ecosystem services," says Erik Kristiansson.


More about: the new AI method

The method is based on transformers, an AI model for deep learning originally developed for language processing. ChatGPT – short for Generative Pretrained Transformer – is one well-known application.

The model has recently also proved highly efficient at capturing information from chemical structures. Transformers can identify properties in the structure of molecules that cause toxicity in a more sophisticated way than was previously possible.

Using this information, the toxicity of the molecule can then be predicted by a deep neural network. Neural networks and transformers belong to the type of AI that continuously improves itself by using training data – in this case, large amounts of data from previous laboratory tests of the effects of thousands of different chemicals on various animals and plants.
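The data flow described above – molecular structure in, predicted effect concentration out – can be sketched as follows. This is a minimal, illustrative mock-up, not the published model: the `embed` function merely stands in for the pretrained transformer, the network weights are random and untrained, and the input encodings (`effect_id`, `exposure_h`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(smiles: str, dim: int = 8) -> np.ndarray:
    """Placeholder for the pretrained transformer: in the real method,
    this would return a learned embedding of the molecular structure."""
    seed = sum(ord(c) for c in smiles)
    return np.random.default_rng(seed).standard_normal(dim)

def predict_effect_concentration(smiles, effect_id, exposure_h,
                                 w1, b1, w2, b2):
    """Toy deep-neural-network head: the structure embedding is concatenated
    with the effect type and exposure duration, then passed through one
    hidden layer to produce a scalar predicted effect concentration."""
    x = np.concatenate([embed(smiles), [effect_id, exposure_h]])
    h = np.maximum(0.0, w1 @ x + b1)   # ReLU hidden layer
    return float(w2 @ h + b2)          # predicted concentration (arbitrary units)

# random, untrained weights purely to show the shapes and data flow
w1, b1 = rng.standard_normal((16, 10)), rng.standard_normal(16)
w2, b2 = rng.standard_normal(16), rng.standard_normal()
print(predict_effect_concentration("CCO", effect_id=1, exposure_h=48.0,
                                   w1=w1, b1=b1, w2=w2, b2=b2))
```

In the actual method, both the transformer and the DNN head are trained jointly on the laboratory-test datasets described above; the sketch only shows how the pieces connect.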

 

More about: the research

The study, Transformers enable accurate prediction of acute and chronic chemical toxicity in aquatic organisms, has been published in Science Advances. It was carried out by Mikael Gustavsson and Erik Kristiansson at Chalmers University of Technology and the University of Gothenburg, Styrbjörn Käll, Juan S. Inda-Diaz, and Sverker Molander at Chalmers University of Technology, and Patrik Svedberg, Jessica Coria and Thomas Backhaus at the University of Gothenburg.

 

New AI tool efficiently detects asbestos in roofs so it can be removed



The computer vision system uses freely available aerial photographs and has demonstrated a level of accuracy of over 80%



Peer-Reviewed Publication

UNIVERSITAT OBERTA DE CATALUNYA (UOC)




A team of researchers from the Universitat Oberta de Catalunya (UOC) has designed and tested a new system for detecting asbestos that has not yet been removed from the roofs of buildings, despite regulatory requirements. The software, developed in partnership with DetectA, applies artificial intelligence, deep learning and computer vision methods to aerial photographs, using RGB images, which are the most common and economical type. This represents a very important competitive advantage over previous attempts to create a similar system, which required multiband images that are more complex and difficult to obtain. The success of this much more scalable project will allow the removal of this highly toxic building material to be more systematically and effectively monitored.

"Unlike infrared or hyperspectral imaging methods, our decision to train AI with RGB images ensures the methodology is versatile and adaptable. In Europe and many other countries around the world this type of aerial imaging is freely available in very high resolutions," explained Javier Borge Holthoefer, lead researcher of the Complex Systems group (CoSIN3) at the Internet Interdisciplinary Institute (IN3). Borge Holthoefer is leading this research, together with Àgata Lapedriza, researcher with the eHealth Center's Artificial Intelligence for Human Well-being group (AIWELL) and a member of the UOC's Faculty of Computer Science, Multimedia and Telecommunications. Their research has been published as open access in Remote Sensing. UOC doctoral students Davoud Omarzadeh, Adonis González-Godoy, Cristina Bustos and Kevin Martín Fernández also contributed to the project, together with the founders of DetectA, Carles Scotto and César Sánchez.

The researchers trained the deep learning system using thousands of photographs held by the Cartographic and Geological Institute of Catalonia, teaching the AI tool which roofs contain asbestos and which do not. A total of 2,244 images were used (1,168 positive for asbestos and 1,076 negative); 80% were used to train and validate the system, with the remaining images reserved for the final test. The software is now able to determine if asbestos is present in new images by assessing different patterns, such as the colour, texture and structure of the roofs, as well as the area surrounding the buildings. The project will be useful in urban, industrial, coastal and rural areas. By law, municipalities should have performed a survey of buildings containing asbestos by April 2023, but not all of them have yet done so.
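The dataset split described above can be sketched with a few lines of code. This is an illustrative mock-up assuming a simple shuffled 80/20 split of the reported image counts; the researchers' actual protocol (e.g. how training and validation subsets were divided, or whether the split was stratified by label) may differ.

```python
import random

random.seed(0)
# dataset sizes as reported: 1,168 asbestos-positive and 1,076 negative images
labels = ["asbestos"] * 1168 + ["no_asbestos"] * 1076
random.shuffle(labels)

cut = int(0.8 * len(labels))          # 80% for training and validation
train_val, test = labels[:cut], labels[cut:]
print(len(train_val), len(test))      # 1795 449
```

With 2,244 images in total, the 80/20 rule leaves 1,795 images for training and validation and 449 held out for the final test.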

Hyperspectral photographs make it easier to detect asbestos, because they contain many more layers of information, but they are not ideal for developing an efficient detection method, due to their limited availability and the high cost of obtaining them. The system developed by the UOC researchers is the first to use RGB images, which can be taken from aircraft and are commonly used by many countries' cartographic services. "Although these images contain less information, we have achieved comparable results by training the deep learning system well, with a success rate of over 80%," explained the CoSIN3 researcher.

 

Banned for over two decades

More than twenty years after its use in construction was banned, asbestos remains a major public health problem. It is estimated that, in Catalonia alone, over four million tonnes of asbestos fibre cement is still in place. According to the World Health Organization it causes more than 100,000 deaths a year globally, mainly from lung cancer, but also other conditions including pleural tumours and pulmonary fibrosis. The legal target for removing asbestos from public buildings is 2028 and the target for private buildings is 2032.

The development of this technological solution will contribute to tackling one of the key issues in the fight against asbestos: how authorities can identify which roofs contain asbestos, so it can be removed by qualified, accredited professionals. "There is currently no protocol or effective system for locating the asbestos that is still out there, because it is expensive and time-consuming to inventory using people on the ground," said Borge Holthoefer.

Now his team is looking into expanding the AI system's training base to make it as effective in rural environments as it is in urban and industrial locations. The system is currently somewhat more reliable in urban and industrial areas, both because it was trained with more data from these areas and because asbestos weathers and is conserved differently in rural conditions, where it may also be covered by layers of vegetation.

 

This research project contributes to the UN's Sustainable Development Goals (SDGs) 3 (Good Health and Well-being), 9 (Industry, Innovation and Infrastructure) and 11 (Sustainable Cities and Communities).

 

UOC R&I

The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century by studying interactions between technology and human & social sciences with a specific focus on the network society, e-learning and e-health.

Over 500 researchers and more than 50 research groups work in the UOC's seven faculties, its eLearning Research programme and its two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC).

The university also develops online learning innovations at its eLearning Innovation Center (eLinC), as well as UOC community entrepreneurship and knowledge transfer via the Hubbik platform.

Open knowledge and the goals of the United Nations 2030 Agenda for Sustainable Development serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu.

 

Random robots are more reliable


New AI algorithm for robots consistently outperforms state-of-the-art systems



NORTHWESTERN UNIVERSITY

NoodleBot simulation 

VIDEO: 

Researchers tested the new AI algorithm's performance with simulated robots, such as NoodleBot.


CREDIT: NORTHWESTERN UNIVERSITY





Northwestern University engineers have developed a new artificial intelligence (AI) algorithm designed specifically for smart robotics. By helping robots rapidly and reliably learn complex skills, the new method could significantly improve the practicality — and safety — of robots for a range of applications, including self-driving cars, delivery drones, household assistants and automation.

Called Maximum Diffusion Reinforcement Learning (MaxDiff RL), the algorithm’s success lies in its ability to encourage robots to explore their environments as randomly as possible in order to gain a diverse set of experiences. This “designed randomness” improves the quality of data that robots collect regarding their own surroundings. And, by using higher-quality data, simulated robots demonstrated faster and more efficient learning, improving their overall reliability and performance.

When tested against other AI platforms, simulated robots using Northwestern’s new algorithm consistently outperformed state-of-the-art models. The new algorithm works so well, in fact, that robots learned new tasks and then successfully performed them within a single attempt — getting it right the first time. This starkly contrasts with current AI models, which learn more slowly through trial and error.

The research will be published on Thursday (May 2) in the journal Nature Machine Intelligence.

“Other AI frameworks can be somewhat unreliable,” said Northwestern’s Thomas Berrueta, who led the study. “Sometimes they will totally nail a task, but, other times, they will fail completely. With our framework, as long as the robot is capable of solving the task at all, every time you turn on your robot you can expect it to do exactly what it’s been asked to do. This makes it easier to interpret robot successes and failures, which is crucial in a world increasingly dependent on AI.”

Berrueta is a Presidential Fellow at Northwestern and a Ph.D. candidate in mechanical engineering at the McCormick School of Engineering. Robotics expert Todd Murphey, a professor of mechanical engineering at McCormick and Berrueta’s adviser, is the paper’s senior author. Berrueta and Murphey co-authored the paper with Allison Pinosky, also a Ph.D. candidate in Murphey’s lab.

The disembodied disconnect

To train machine-learning algorithms, researchers and developers use large quantities of data, which humans carefully filter and curate. AI learns from this training data, using trial and error until it reaches optimal results. While this process works well for disembodied systems, like ChatGPT and Google Gemini (formerly Bard), it does not work for embodied AI systems like robots. Robots, instead, collect data by themselves — without the luxury of human curators.

“Traditional algorithms are not compatible with robotics in two distinct ways,” Murphey said. “First, disembodied systems can take advantage of a world where physical laws do not apply. Second, individual failures have no consequences. For computer science applications, the only thing that matters is that it succeeds most of the time. In robotics, one failure could be catastrophic.”

To solve this disconnect, Berrueta, Murphey and Pinosky aimed to develop a novel algorithm that ensures robots will collect high-quality data on-the-go. At its core, MaxDiff RL commands robots to move more randomly in order to collect thorough, diverse data about their environments. By learning through self-curated random experiences, robots acquire necessary skills to accomplish useful tasks.

Getting it right the first time

To test the new algorithm, the researchers compared it against current, state-of-the-art models. Using computer simulations, the researchers asked simulated robots to perform a series of standard tasks. Across the board, robots using MaxDiff RL learned faster than the other models. They also correctly performed tasks much more consistently and reliably than others. 

Perhaps even more impressive: Robots using the MaxDiff RL method often succeeded at correctly performing a task in a single attempt. And that’s even when they started with no knowledge.

“Our robots were faster and more agile — capable of effectively generalizing what they learned and applying it to new situations,” Berrueta said. “For real-world applications where robots can’t afford endless time for trial and error, this is a huge benefit.”

Because MaxDiff RL is a general algorithm, it can be used for a variety of applications. The researchers hope it addresses foundational issues holding back the field, ultimately paving the way for reliable decision-making in smart robotics.

“This doesn’t have to be used only for robotic vehicles that move around,” Pinosky said. “It also could be used for stationary robots — such as a robotic arm in a kitchen that learns how to load the dishwasher. As tasks and physical environments become more complicated, the role of embodiment becomes even more crucial to consider during the learning process. This is an important step toward real systems that do more complicated, more interesting tasks.”

The study, “Maximum diffusion reinforcement learning,” was supported by the U.S. Army Research Office (grant number W911NF-19-1-0233) and the U.S. Office of Naval Research (grant number N00014-21-1-2706).


Future direction: NoodleBot [VIDEO]

The published study includes tests performed with simulated robots. Next, the researchers will test the algorithm on robots in the real world. They developed this snake-like robot, called "NoodleBot," for future testing.


Simulated robots learn in one shot [VIDEO]

This video illustrates the single-shot learning capabilities of MaxDiff RL.


Although the current study tested the AI algorithm only on simulated robots, the researchers have developed NoodleBot for future testing of the algorithm in the real world.

CREDIT

Northwestern University

 

Significant new discovery in teleportation research — Noise can improve the quality of quantum teleportation




UNIVERSITY OF TURKU





In teleportation, the state of a quantum particle, or qubit, is transferred from one location to another without sending the particle itself. This transfer requires quantum resources, such as entanglement between an additional pair of qubits. In an ideal case, the transfer and teleportation of the qubit state can be done perfectly. However, real-world systems are vulnerable to noise and disturbances — and this reduces and limits the quality of the teleportation.
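The ideal, noiseless protocol referred to above can be simulated in a few lines. The sketch below is the standard textbook qubit-teleportation scheme, not the hybrid-entanglement variant introduced in this study: for every possible Bell-measurement outcome on the sender's side, the receiver's corrected qubit reproduces the input state exactly.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])    # Pauli X
Z = np.array([[1, 0], [0, -1]])   # Pauli Z

psi = np.array([0.6, 0.8])                  # state to teleport, a|0> + b|1>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # shared pair (|00> + |11>)/sqrt(2)
state = np.kron(psi, bell)  # qubit order: input, sender's half, receiver's half

# the four Bell states the sender can measure, with the matching correction
bell_basis = {
    "Phi+": (np.array([1, 0, 0, 1]) / np.sqrt(2), I2),
    "Psi+": (np.array([0, 1, 1, 0]) / np.sqrt(2), X),
    "Phi-": (np.array([1, 0, 0, -1]) / np.sqrt(2), Z),
    "Psi-": (np.array([0, 1, -1, 0]) / np.sqrt(2), Z @ X),
}

for name, (b, correction) in bell_basis.items():
    # project the first two qubits onto this outcome; the receiver keeps the rest
    bob = np.kron(b.conj().reshape(1, 4), I2) @ state
    bob = bob / np.linalg.norm(bob)
    fidelity = abs(psi.conj() @ (correction @ bob))
    print(name, round(float(fidelity), 6))  # 1.0 for every outcome
```

Noise degrades the shared entangled pair, which is exactly where the fidelity of this ideal protocol breaks down; the study's contribution is a way to make certain noise helpful rather than harmful.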

Researchers from the University of Turku, Finland, and the University of Science and Technology of China, Hefei, have now proposed a theoretical idea and made corresponding experiments to overcome this problem. In other words, the new approach enables reaching high-quality teleportation despite the presence of noise.

“The work is based on an idea of distributing entanglement — prior to running the teleportation protocol — beyond the used qubits, i.e., exploiting the hybrid entanglement between different physical degrees of freedom”, says Professor Jyrki Piilo from the University of Turku.

Conventionally, the polarisation of photons has been used for the entanglement of qubits in teleportation, while the current approach exploits the hybrid entanglement between the photons’ polarisation and frequency.

“This allows for a significant change in how the noise influences the protocol, and as a matter of fact our discovery reverses the role of the noise from being harmful to being beneficial to teleportation”, Piilo describes.

With conventional qubit entanglement in the presence of noise, the teleportation protocol does not work. In a case where there is initially hybrid entanglement and no noise, the teleportation does not work either.

“However, when we have hybrid entanglement and add noise, the teleportation and quantum state transfer occur in an almost perfect manner”, says Dr Olli Siltanen, whose doctoral dissertation presented the theoretical part of the current research.

In general, the discovery enables almost ideal teleportation despite the presence of a certain type of noise when using photons for teleportation.

“While we have done numerous experiments on different facets of quantum physics with photons in our laboratory, it was very thrilling and rewarding to see this very challenging teleportation experiment successfully completed”, says Dr Zhao-Di Liu from the University of Science and Technology of China, Hefei.

“This is a significant proof-of-principle experiment in the context of one of the most important quantum protocols”, says Professor Chuan-Feng Li from the University of Science and Technology of China, Hefei.

Teleportation has important applications, e.g., in transmitting quantum information, and it is of utmost importance to have approaches that protect this transmission from noise and can be used for other quantum applications. The results of the current study can be considered as basic research that carries significant fundamental importance and opens intriguing pathways for future work to extend the approach to general types of noise sources and other quantum protocols.

 

Researchers create new chemical compound to solve 120-year-old problem


Accessing these molecules can have major impacts on agriculture, pharmaceuticals, and electronics



Peer-Reviewed Publication

UNIVERSITY OF MINNESOTA

Synthetic access to N-coordinated 7-azaindolynes 

IMAGE: 

This graphic depicts the chemical compound that the team of chemists was able to discover.


CREDIT: THE ROBERTS GROUP/UNIVERSITY OF MINNESOTA




MINNEAPOLIS/ST. PAUL (05/02/2024) — For the first time, chemists in the University of Minnesota Twin Cities College of Science and Engineering have created a highly reactive chemical compound that has eluded scientists for more than 120 years. The discovery could lead to new drug treatments, safer agricultural products, and better electronics.

For decades, researchers have been investigating molecules called N-heteroarenes, which are ring-shaped chemical compounds that contain one or more nitrogen atoms. Bioactive molecules with an N-heteroarene core are widely used in medicinal applications, lifesaving pharmaceuticals, pesticides and herbicides, and even electronics.

“While the average person does not think about heterocycles on a daily basis, these unique nitrogen-containing molecules are widely applied across all facets of human life,” said Courtney Roberts, the senior author of the study and a University of Minnesota Department of Chemistry assistant professor who holds the 3M Alumni Professorship. 

These molecules are highly sought after by many industries, but are extremely challenging for chemists to make. Previous strategies have been able to target individual molecules of this kind, but scientists have not been able to create a whole series of them. One reason is that these molecules are extremely reactive. They are so reactive that chemists have used computational modeling to predict that they should be impossible to make. This has posed a challenge for more than a century and prevented the creation of this chemical substance.

“What we were able to do was to run these chemical reactions with specialized equipment while getting rid of elements commonly found in our atmosphere,” said Jenna Humke, a University of Minnesota chemistry graduate student and lead author on the paper. “Luckily, we have the tools to do that at the University of Minnesota. We ran experiments under nitrogen in a closed-chamber glovebox, which creates a chemically inactive environment to test and move samples.”

These experiments were accomplished by using organometallic catalysis—the interaction between metals and organic molecules. The research required collaboration between both organic and inorganic chemists. This is something that is common at the University of Minnesota.

“We were able to solve this long-standing challenge because the University of Minnesota Department of Chemistry is unique in that we don’t have formal divisions,” Roberts added. “This allows us to put together a team of experts in all fields of chemistry, which was a vital component in completing this project.”

After introducing the chemical compound in this paper, the next steps will be to make it widely available to chemists across multiple fields to streamline the creation process. This could help solve important problems like preventing food scarcity and treating illnesses to save lives. 

Along with Roberts and Humke, the University of Minnesota research team included postdoctoral researcher Roman Belli, graduate students Erin Plasek, Sallu S. Kargbo, and former postdoctoral researcher Annabel Ansel. 

This work was primarily funded by the National Institutes of Health and the National Science Foundation. Funding was also provided by four University of Minnesota-sponsored graduate research fellowships and start-up funding provided by the Department of Chemistry.

To read the entire research paper titled, “Nickel binding enables isolation and reactivity of previously inaccessible 7-Aza-2,3-indolynes”, visit the Science website.

 

Unveiling a polarized world – in a single shot


Researchers develop a compact, single-shot and complete polarization imaging system using metasurfaces



HARVARD JOHN A. PAULSON SCHOOL OF ENGINEERING AND APPLIED SCIENCES

Mueller matrix imaging 

IMAGE: 

A unique species of beetle, Chrysina gloriosa, has a distinct response to circularly polarized light reflecting off its shell. This chiral response is correctly imaged by the new system.


CREDIT: AUN ZAIDI/HARVARD SEAS




Think of all the information we get based on how an object interacts with wavelengths of light — a.k.a. color. Color can tell us if food is safe to eat or if a piece of metal is hot. Color is an important diagnostic tool in medicine, helping practitioners diagnose diseased tissue, inflammation, or problems in blood flow.

Companies have invested heavily to improve color in digital imaging, but wavelength is just one property of light. Polarization – how the electric field oscillates as light propagates – is also rich with information, but polarization imaging remains mostly confined to table-top laboratory settings, relying on traditional optics such as waveplates and polarizers on bulky rotational mounts.

Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a compact, single-shot polarization imaging system that can provide a complete picture of polarization. By using just two thin metasurfaces, the imaging system could unlock the vast potential of polarization imaging for a range of existing and new applications, including biomedical imaging, augmented and virtual reality systems and smart phones. 

The research is published in Nature Photonics.

“This system, which is free of any moving parts or bulk polarization optics, will empower applications in real-time medical imaging, material characterization, machine vision, target detection, and other important areas,” said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior author of the paper.

In previous research, Capasso and his team developed a first-of-its-kind compact polarization camera to capture so-called Stokes images, images of the polarization signature reflecting off an object – without controlling the incident illumination.

“Just as the shade or even the color of an object can appear different depending on the color of the incident illumination, the polarization signature of an object depends on the polarization profile of the illumination,” said Aun Zaidi, a recent PhD graduate from Capasso’s group and first author of the paper.  “In contrast to conventional polarization imaging, ‘active’ polarization imaging, known as Mueller matrix imaging, can capture the most complete polarization response of an object by controlling the incident polarization.” 

Currently, Mueller matrix imaging requires a complex optical setup with multiple rotating plates and polarizers, which sequentially capture a series of images that are then combined into a matrix representation of the image.
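The matrix representation works as follows: each optical element or sample is described by a 4x4 Mueller matrix M that maps an incident Stokes vector S_in (intensity plus three polarization components) to the outgoing one, S_out = M S_in. A minimal numerical example, using the textbook Mueller matrix of an ideal horizontal linear polarizer rather than anything specific to the metasurface system:

```python
import numpy as np

# textbook Mueller matrix of an ideal horizontal linear polarizer
M = 0.5 * np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]])

S_in = np.array([1.0, 0.0, 0.0, 0.0])  # unpolarized light, unit intensity
S_out = M @ S_in
# half the light passes, and what remains is fully horizontally polarized
print(S_out)
```

Mueller matrix imaging recovers such a matrix for every pixel of the scene, which is why it captures the most complete polarization response of an object.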

The simplified system developed by Capasso and his team uses two extremely thin metasurfaces — one to illuminate an object and the other to capture and analyze the light on the other side. 

The first metasurface generates what’s known as polarized structured light, in which the polarization is designed to vary spatially in a unique pattern.  When this polarized light reflects off or transmits through the object being illuminated, the polarization profile of the beam changes. That change is captured and analyzed by the second metasurface to construct the final image – in a single shot. 

The technique allows for real-time advanced imaging, which is important for applications such as endoscopic surgery, facial recognition in smartphones, and eye tracking in AR/VR systems. It could also be combined with powerful machine learning algorithms for applications in medical diagnostics, material classification and pharmaceuticals. 

“We have brought together two seemingly separate fields of structured light and polarized imaging to design a single system that captures the most complete polarization information. Our use of nanoengineered metasurfaces, which replace many components that would traditionally be required in a system such as this, greatly simplifies its design,” said Zaidi.

“Our single-shot and compact system provides a viable pathway for the widespread adoption of this type of imaging to empower applications requiring advanced imaging,” said Capasso. 

The Harvard Office of Technology Development has protected the intellectual property associated with this project out of Prof. Capasso’s lab and licensed the technology to Metalenz for further development.

The research was co-authored by Noah Rubin, Maryna Meretska, Lisa Li, Ahmed Dorrah and Joon-Suh Park. It was supported by the Air Force Office of Scientific Research under award Number FA9550-21-1-0312, the Office of Naval Research (ONR) under award number N00014-20-1-2450, the National Aeronautics and Space Administration (NASA) under award numbers 80NSSC21K0799 and 80NSSC20K0318, and the National Science Foundation under award no. ECCS-2025158.


This highly reflective black paint makes objects more visible to autonomous cars



AMERICAN CHEMICAL SOCIETY





Driving at night might be a scary challenge for a new driver, but with hours of practice it soon becomes second nature. For self-driving cars, however, practice may not be enough because the lidar sensors that often act as these vehicles’ “eyes” have difficulty detecting dark-colored objects. Research published in ACS Applied Materials & Interfaces describes a highly reflective black paint that could help these cars see dark objects and make autonomous driving safer.

Lidar, short for light detection and ranging, is a system used in a variety of applications, including geologic mapping and self-driving vehicles. The system works like echolocation, but instead of emitting sound waves, lidar emits tiny pulses of near-infrared light. The light pulses bounce off objects and back to the sensor, allowing the system to map the 3D environment it’s in. But lidar falls short when objects absorb more of that near-infrared light than they reflect, which can occur on black-painted surfaces. Lidar can’t detect these dark objects on its own, so one common solution is to have the system rely on other sensors or software to fill in the information gaps. However, this solution could still lead to accidents in some situations. Rather than reinventing the lidar sensors, though, Chang-Min Yoon and colleagues wanted to make dark objects easier to detect with existing technology by developing a specially formulated, highly reflective black paint.
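The ranging principle described above reduces to a simple time-of-flight relation: the target distance is d = c*t/2, where t is the pulse's round-trip time and c is the speed of light (the factor of 2 accounts for the out-and-back path). A quick sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Target distance from a lidar pulse's round-trip time."""
    return C * round_trip_s / 2.0

# a pulse that returns after 200 nanoseconds left a target about 30 m away
print(round(lidar_range_m(200e-9), 2))  # 29.98
```

The timing works regardless of the surface, but only if enough near-infrared light returns to the sensor at all, which is exactly the failure mode the reflective black paint addresses.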

To produce the new paint, the team first formed a thin layer of titanium dioxide (TiO2) on small fragments of glass. Then the glass was etched away with hydrofluoric acid, leaving behind a hollow layer of white, highly reflective TiO2. This was reduced with sodium borohydride to produce a black material that maintained its reflective qualities. By mixing this material with varnish, it could be applied as a paint. The team next tested the new paint with two types of commercially available lidar sensors: a mirror-based sensor and a 360-degree rotating type sensor. For comparison, a traditional carbon black-based version was also evaluated. Both sensors easily recognized the specially formulated, TiO2-based paint but did not readily detect the traditional paint. The researchers say that their highly reflective material could help improve safety on the roads by making dark objects more visible to autonomous vehicles already equipped with existing lidar technology.

The authors acknowledge funding from the Korea Ministry of SMEs and Startups and the National Research Foundation of Korea.

###

The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS’ mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and all its people. The Society is a global leader in promoting excellence in science education and providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a leader in scientific information solutions, its CAS division partners with global innovators to accelerate breakthroughs by curating, connecting and analyzing the world’s scientific knowledge. ACS’ main offices are in Washington, D.C., and Columbus, Ohio.

To automatically receive news releases from the American Chemical Society, contact newsroom@acs.org.

Note: ACS does not conduct research, but publishes and publicizes peer-reviewed scientific studies.
