Thursday, November 04, 2021

A framework to automatically identify wildlife in collaboration with humans

In real-world applications, AI models do not stop at one training stage. As data collection progresses over time, there is a continuous cycle of inference, annotation, and model updating. When there are novel and difficult samples, human annotation is inevitable. Credit: Miao et al.

Over the past few decades, computer scientists have developed numerous machine learning tools that can recognize specific objects or animals in images and videos. While some of these techniques have achieved remarkable results on common objects and animals (e.g., cats, dogs, houses), they typically struggle to recognize wildlife and less well-known plants and animals.

Researchers at the University of California, Berkeley (UC Berkeley) have recently developed a new wildlife identification approach that performs far better than techniques developed in the past. The approach, presented in a paper published in Nature Machine Intelligence, was conceived by Zhongqi Miao, who initially started exploring the idea that artificial intelligence (AI) tools could classify wildlife images collected by movement-triggered camera traps. These are cameras that wildlife ecologists and researchers often set up to monitor species inhabiting specific geographic locations and estimate their numbers.

The use of AI for identifying species in wildlife images captured by camera traps could significantly simplify the work of ecologists and reduce their workload, sparing them from having to look through hundreds of thousands of images to generate maps of the distribution of species in specific locations. The framework developed by Miao and his colleagues is different from other methods proposed in the past, as it merges machine learning with an approach dubbed 'humans in the loop' to generalize better on real-world tasks.

"An important aspect of our 'humans in the loop innovation' is that it addresses the 'long-tailed distribution problem," Wayne M. Getz, one of the researchers who carried out the study, told TechXplore. "More specifically, in a set of hundreds of thousands of images generated using camera traps deployed in an area over a season, images of common species may appear hundreds or even thousands of times, while those of rare species may appear just a few times. This produces a long-tailed distribution of the frequency of images of different species."

If all species were captured by camera traps with equal frequency, their distribution would be what is known as 'rectangular.' On the other hand, if these frequencies are highly imbalanced, the most common frequencies (plotted first on the graph) would be far larger than the least common frequencies (plotted at the tail end), resulting in a long-tailed distribution.
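
As a rough illustration of the difference (a sketch, not material from the study), sorting per-species image counts makes the shape of the distribution obvious; the species names and counts below are invented:

from collections import Counter

# Hypothetical camera-trap detections: a few common species dominate, rare ones appear only a handful of times.
labels = ["impala"] * 4200 + ["zebra"] * 1800 + ["warthog"] * 350 + ["aardvark"] * 12 + ["pangolin"] * 3
counts = Counter(labels)
for species, n in counts.most_common():  # sorted from most to least frequent
    print(f"{species:10s} {n:6d}")
# A roughly flat ("rectangular") list would mean balanced classes; the steep drop-off
# from the first rows to the last is the long-tailed case described above.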

"If standard AI image recognition software were applied to long-tailed distributional data, then the method would fail miserably when it comes to identifying rare species," Getz explained. "The primary purpose of our study was to find a way to improve the identification of rare species by incorporating humans into the process in an iterative manner."

When trying to apply conventional AI tools in real-world settings, computer scientists can encounter several challenges. As mentioned by Getz, the first is that data collected in the real world often follows a long-tail distribution and current state-of-the-art AI models do not perform as well on this data, compared to data with a rectangular or normal distribution.

"In other words, when applied to data with a long-tailed distribution, large or more frequent categories always lead to much better performance than smaller and rare categories," Miao, lead author of the paper, told TechXplore. "Furthermore, instances of rare categories (especially images of rare animals) are not easy to collect, making it even harder to get around this long-tail distribution issue through data collection."

Another challenge of applying AI in real-world settings is that the problems it is meant to solve are typically open-ended. For instance, wildlife monitoring projects can continue indefinitely and span long periods of time, during which new camera traps will be set up and a variety of new data will be collected.

In addition, new animal species might suddenly appear in the sites monitored by the cameras due to several possible factors, including unexpected invasions, animal reintroduction projects or recolonizations. All of these changes will be reflected in the data, ultimately impairing the performance of pre-trained machine learning techniques.

"So far, the human contribution to the training of AI has been inevitable," Miao said. "As real-world applications are open-ended, ensuring that AI models learn and adapt to new content requires additional human annotations, especially when we want the models to identify new animal species. Thus, we think there is a loop of AI recognition system of new data collection, human annotation on new data and model update to the novel categories."

In their previous research, the researchers tried to address the factors impairing the performance of AI in real-world settings in several different ways. While the approaches they devised were in some ways promising, their performance was not as good as they had hoped, achieving a classification accuracy below 70 percent when tested on standardized long-tailed datasets.

"It's hard for people to trust an AI model that could only produce ~70 percent accuracy," Miao said. "Overall, we think a deployable AI model should: achieve a balanced performance across imbalanced distribution (long-tailed recognition), be able to adapt to different environments (multi-domain adaptation), be able to recognize novel samples (out-of-distribution detection), and be able to learn from novel samples as fast as possible (few-shot learning, life-long learning, etc.). However, each one these characteristics have proved difficult to realize, and none of them have been fully solved yet, let alone combining them together and coming up with a perfect AI solution."

Instead of relying on existing AI tools or trying to develop an 'ideal' method, therefore, Miao and his colleagues decided to create a high-performing tool that relies on a certain amount of input from humans. As human annotations have so far proved particularly valuable for enhancing the performance of deep learning-based models, they focused their efforts on maximizing the efficiency of those annotations.

"The goal of our project was to minimize the need for human intervention as much as possible, by applying human annotation solely on difficult images or novel species, while maximizing the recognition performance/accuracy of each model update procedure (i.e., update efficiency)," Miao said.

By combining machine learning techniques with human efforts in an efficient way, the researchers hoped to achieve a system that was better at recognizing animals in real-world wildlife images, overcoming some of the issues they encountered in their past studies. Remarkably, they found that their method could achieve 90 percent accuracy on wildlife image classification tasks, using 1/5 of the annotations that standard AI approaches would require to achieve this accuracy.

"Putting AI techniques into practice has always been significantly challenging, no matter how promising theoretical results are in previous studies on standard datasets," Miao said. "We thus tried to propose an AI recognition framework that can be deployed in the field even when the AI models are not perfect. And our solution is to introduce efficient human efforts back into the recognition system. And in this project, we use wildlife recognition as a practical use case of our framework."

Instead of evaluating AI models using a single dataset, the framework devised by Miao and his colleagues focuses on how efficiently a previously trained model can analyze newly collected datasets containing images of previously unobserved species. Their approach incorporates an active learning technique, which uses a prediction confidence metric to select low-confidence predictions, so that they can be annotated further by humans. When a model identifies animals with high levels of confidence, on the other hand, their framework stores these predictions as pseudo labels.

"Models are then updated according to both human annotations and pseudo labels," Miao explained. "The model is evaluated based on: the overall validation accuracy of each category after the update (i.e., update performance); percentage of high-confidence predictions on validation (i.e., saved human effort for annotation); accuracy of high-confidence predictions; and percentage of novel categories that are detected as low-confidence predictions (i.e., sensitivity to novelty)."

The overall aim of the optimization algorithm used by Miao and his colleagues is to minimize human effort (i.e., to maximize a model's high-confidence percentage), while maximizing performance and accuracy. Technically speaking, the researchers' framework is a combination of active learning and semi-supervised learning with humans in the loop. All of the code and data used by Miao and his colleagues are publicly available and can be accessed online.
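
A minimal sketch of a single update cycle, assuming images arrive as a NumPy array and the classifier exposes a scikit-learn-style fit/predict_proba interface; the threshold, function names, and the simplified retraining step are illustrative, not the authors' code:

import numpy as np

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cut-off between "high" and "low" confidence predictions

def update_cycle(model, new_images, annotate_fn):
    """One human-machine update: pseudo-label confident predictions, send the rest to humans, retrain on both."""
    probs = model.predict_proba(new_images)             # shape (n_images, n_classes)
    confidence = probs.max(axis=1)
    confident = confidence >= CONFIDENCE_THRESHOLD
    pseudo_labels = probs.argmax(axis=1)[confident]     # high-confidence predictions kept as pseudo labels
    human_labels = annotate_fn(new_images[~confident])  # low-confidence (possibly novel) samples go to annotators
    images = np.concatenate([new_images[confident], new_images[~confident]])
    labels = np.concatenate([pseudo_labels, human_labels])
    model.fit(images, labels)                           # update the model for the next cycle
    saved_effort = confident.mean()                     # fraction of images no human had to inspect
    return model, saved_effort

In this sketch, saved_effort corresponds to the percentage of high-confidence predictions the authors use to quantify how much annotation work the machine takes over from humans.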

"We proposed a deployable human-machine recognition framework that is also applicable when the models are not perfectly performing by themselves," Miao said. "With the iterative human-machine updating procedure, the framework can keep updated be deployed when new data are continuously collected. Furthermore, each technical component in this framework can be replaced with more advanced methods in the future to achieve better results."

The experimental setting outlined by Miao and his colleagues is arguably more realistic than those considered in previous works. In fact, instead of focusing on a single cycle of model training, validation and testing, it focuses on numerous cycles or stages, which allows models to better adapt to changes in the data.

"Another unique aspect of our work is that we proposed a synergistic relationship between humans and machines," Miao said." Machines help relieve the burden of humans (e.g., ~80 percent annotation requirements), and humans help annotate novel and challenging samples, which are then used to update the machines, such that the machines are more powerful and more generalized in the future. This is a continuous and long-term relationship."

In the future, the framework introduced by this team of researchers could allow ecologists to monitor animal species in different places more efficiently, reducing the time they spend examining images collected by trap cameras. In addition, their framework could be adapted to tackle other real-world problems that involve the analysis of data with a long-tailed distribution or that continuously changes over time.

"Miao is now working on the problem of trying to identify species from satellite or aerial images which present two challenges compared with camera trap images: the resolution is much lower because cameras are much more distant from the subjects that are capturing and the individual being imaged may be one of many in the overall frame; images generally show only a 1-d projection (i.e., from the top) rather than the 2-d projections (front/back and leftside/rightside) of camera trap data," Getz said.

Miao, Getz and their colleagues now also plan to deploy and test the framework they created in real-world settings, such as camera trap wildlife monitoring projects in Africa organized by some of their collaborators. Meanwhile, Miao is working on other deep learning tools for the analysis of aerial images and audio recordings, as these could be particularly useful for identifying birds or marine animals. His overall goal is to make deep learning more accessible for ecologists and researchers analyzing wildlife images.

"On a broader scale, we think that the synergistic relationship between humans and machines is an exciting topic and that the goal of AI research should be to develop tools that augment people's abilities (or intelligence), rather than to eliminate the existence of humans (e.g., looking for perfect machines that can handle everything without the need for humans)," Miao added. "It is more like a loop where machines make humans better, and humans make machines more powerful in return, just like in the iterative framework we proposed in the paper. We call this Artificial Augmented Intelligence (A2I or A-square I), where ultimately, people's intelligence is augmented with artificial intelligence and vice versa. In the future, we want to keep exploring the possibilities of A2I."Researchers successfully train computers to identify animals in photos

More information: Zhongqi Miao et al, Iterative human and automated identification of wildlife images, Nature Machine Intelligence (2021). DOI: 10.1038/s42256-021-00393-0

Ziwei Liu et al, Large-scale long-tailed recognition in an open world. arXiv:1904.05160v2 [cs.CV], arxiv.org/abs/1904.05160

Ziwei Liu et al, Open compound domain adaptation. arXiv:1909.03403v2 [cs.CV], arxiv.org/abs/1909.03403

Journal information: Nature Machine Intelligence 

Using ocean plastic waste to power ocean cleanup ships

Proposals for ocean plastic cleanup currently require traveling back to port to unload the plastics and refuel the vessel. Credit: Worcester Polytechnic Institute.

A team of researchers from Worcester Polytechnic Institute, Woods Hole Oceanographic Institution and Harvard University believes that the plastic amassing in floating islands in the oceans could be used to power the ships that are sent to clean them up. In their paper published in Proceedings of the National Academy of Sciences, the group describes how ocean plastics could be converted to ship fuel.

Prior research has shown that millions of tons of plastics enter the oceans each year—some of it is ground into fragments and disperses, and some of it winds up in colossal garbage patches floating in remote parts of the ocean. Because of the danger that such plastics present to ocean life, some environmentalists have begun cleanup operations. Such operations typically involve sending a ship to a garbage patch, collecting as much as the ship will hold and then bringing it back to port for processing. In this new effort, the researchers suggest it would be far more efficient and greener to turn the collected plastic into fuel for both a processing machine and for uninterrupted operation of the ships.

The researchers note that the plastic in a garbage patch could be converted to a type of oil via hydrothermal liquefaction (HTL). In this process, the plastic is heated to 300–550 degrees Celsius at pressures 250 to 300 times that of sea-level conditions. The researchers have calculated that a ship carrying an HTL converter would be capable of producing enough oil to run the HTL converter and the ship's engine. Under their scenario, plastic collection booms would be permanently stationed at multiple sites around a large garbage patch, able to load the plastic they collect onto ships.
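
The feasibility argument boils down to an energy balance: the oil recovered from the collected plastic has to cover both the converter's own energy needs and the ship's. A back-of-the-envelope sketch in Python, in which every number is a hypothetical placeholder rather than a figure from the paper:

# All figures below are illustrative placeholders, not values from the PNAS study.
plastic_collected_kg_per_day = 10_000
oil_yield_kg_per_kg_plastic = 0.6        # fraction of the feedstock recovered as oil
oil_energy_mj_per_kg = 40                # rough heating value of a hydrocarbon fuel
converter_cost_mj_per_kg_plastic = 5     # energy spent heating and pressurizing the feedstock
ship_demand_mj_per_day = 150_000         # propulsion plus onboard systems

oil_energy = plastic_collected_kg_per_day * oil_yield_kg_per_kg_plastic * oil_energy_mj_per_kg
process_cost = plastic_collected_kg_per_day * converter_cost_mj_per_kg_plastic
surplus = oil_energy - process_cost - ship_demand_mj_per_day
print(f"Daily energy surplus: {surplus:,.0f} MJ")  # a positive surplus is what makes self-powered operation plausible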

The researchers acknowledge that burning the oil produced would release carbon into the atmosphere, but note that the amount emitted would still be less than that emitted by a ship burning conventional oil making trips back and forth to ports. They also note that HTL does produce a small amount of solid waste, which would have to be taken back to port, likely every few months—excess fuel produced by the HTL could be used for these trips.


More information: Elizabeth R. Belden et al, Thermodynamic feasibility of shipboard conversion of marine plastics to blue diesel for self-powered ocean cleanup, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2107250118

Journal information: Proceedings of the National Academy of Sciences 

© 2021 Science X Network

Hybrid cars' green credentials under scrutiny

Sales of hybrid cars, which use both a conventional combustion engine and a small electric motor, could soon overtake those of petrol vehicles in the EU.

Hybrid cars are increasingly popular in the European Union as eco-conscious drivers turn away from their more polluting petrol and diesel counterparts, but environmentalists warn they're not as green as they seem.

Sales of the cars, which use both a conventional combustion engine and a small electric motor, allowing owners to drive a few kilometres without emitting CO2, could soon overtake those of petrol vehicles in the EU.

In the third quarter of this year, 20.7 percent of cars sold in the bloc were new hybrid versions whose batteries are recharged by collecting wasted energy from elsewhere, like braking, and 9.1 percent were hybrid plug-ins that can be charged from an electric outlet.

Close to 40 percent were petrol-powered, 17.6 percent diesel and just 9.8 percent were fully electric.

Cheaper than fully electric cars, hybrids also provide some reassurance for those worried about their battery running out of power at a time when charging stations are still not widespread.

Auto giants like Toyota, Stellantis, Renault and Hyundai-Kia are banking on hybrids, not least because they allow them to comply with EU norms on CO2 emissions at a lesser cost than fully electric cars.

'Barely cleaner'

But are they truly less polluting, or more of a transition solution as the world edges towards ditching petrol and diesel altogether?

Greenpeace and the pressure group Transport & Environment believe that hybrids actually slow down this transition.

They want to accelerate the shift to fully electric and to other forms of transport, pointing out that hybrids aren't that green.

"Conventional 'full' hybrids in particular, which run for the majority of the time on fossil fuel energy, are barely any cleaner than traditional petrol and diesel engines," Greenpeace said last year.

Marie Cheron of France's Nicolas Hulot Foundation, an environmental group, concurred.

"For example, some hybrids have been bought for fleets (of cars), they do not have a system that allows them to recharge, people don't charge them, and so they don't drive electric."

But Philippe Degeilh, an engineer at IFP Energies Nouvelles (Ifpen), an energy, transport and environment research group, said people just need to be educated in how to use hybrids correctly.

According to an Ifpen study published at the end of 2020, hybrids emit an average of 12 percent less CO2 than a similar petrol-powered car.

That rises to 33 percent in town, while it drops to almost zero on highways.

Plug-ins that are driven smoothly—draining batteries less—and often recharged are "capable of nearing zero emissions," according to Ifpen.

"A household that has just one car can have a better environmental record with a hybrid rather than with an electric car equipped with a large battery. It's designed to do 50 kilometres a day and sometimes to go on holiday," said Degeilh.

To stay or not?

Meanwhile, fully electric cars aren't necessarily all that green either.

Their batteries, which are getting bigger and bigger, require a lot of energy in their production.

Where the electricity comes from is also important to determine their environmental credentials.

The debate around hybrids is also a political one.

As the EU plans to ban the sale of petrol and diesel engines from 2035, some of the auto industry wants to ensure a role for hybrids.

"We think the  is here to stay," Jim Crosbie, head of Toyota Motor Manufacturing France, told AFP.

Hybrids—excluding plug-ins—represent 70 percent of the Japanese group's sales in Western Europe.

"If we're talking about a model life cycle of seven to nine years, it will remain an important asset for us in the years to come," he said.Sales of electric cars charge ahead in Europe

© 2021 AFP

Augmented reality: an early taste of the metaverse?

Under Peggy Johnson, Magic Leap has pivoted to developing augmented reality goggles for professionals, including surgeons.

When Facebook unveiled a mock-up last week of the "metaverse"—supposedly the internet of the future—it showed people transported to a psychedelic world of flying fish and friendly robots.

But while even Facebook CEO Mark Zuckerberg acknowledges these kinds of experiences could be many years away, some enthusiasts argue that a more modest version of the metaverse is already here.

"We're in the early stages of the metaverse, in some ways," Peggy Johnson, CEO of Magic Leap, told AFP at the Web Summit in Lisbon on Tuesday.

Magic Leap makes augmented reality (AR) headsets, which have already been used by surgeons preparing to separate a pair of conjoined twins, and by factory supervisors carrying out site inspections.

In both cases, information popped up before the users' eyes about what they were seeing.

It might not feel quite as immersive—or as kooky—as the virtual reality (VR) experiences that Zuckerberg wants to eventually bring to people's homes. But it nonetheless blurs the divide between the physical world and the digital one, a key idea behind the metaverse.

"With VR, you put on a device, and then you're in another world," Johnson said. "With AR, you put on a device, you're still in your world, but we're augmenting it with digital content."

So far, many people's experiences of AR have been limited to playing Pokemon Go or experimenting with image filters that transplant a comical pair of ears onto someone's face.

But it is in healthcare that the true potential of AR is starting to be realised, Johnson said.

Magic Leap's first augmented reality headset, released in 2018, failed to take off among the general public.

"You can call in experts who can look at the same thing as you are, from another part of the world," she said. "During surgery, you can lay down digital lines where perhaps the incision is going to occur."

Founded in 2010, Magic Leap's initial mission to bring AR to the masses generated huge hype and nearly $2.3 billion in venture funding.

Early promo material imagined it being used to bring a killer whale into a gymnasium full of schoolchildren.

But when Magic Leap's first headset was finally revealed in 2018, there was widespread disappointment; the product was too bulky and expensive to catch on among the general public.

The company was forced to lay off around half its staff last year.

Restaurant reviews and forgotten names

Johnson, a former Microsoft executive, took over as CEO in August 2020 and pivoted towards developing the goggles for use by professionals.

The Florida-based company last month announced that it has raised another $500 million in funding, with a new headset, the Magic Leap 2, set to be released in 2022.

The updated version is more lightweight, but it is still set to be used mostly by people accustomed to wearing goggles at work—like surgeons performing delicate work, or defence industry specialists.

If the AR revolution arrives, the market may be a crowded one, with companies like Snapchat's developer, Snap, trialling spectacles.

Google Glass, a pair of "smart glasses" that failed to take off when they launched in 2014, has similarly re-emerged as a product aimed at professional users.

Johnson predicted it might still be "a few more years" before Magic Leap or one of its competitors creates an AR headset that could feasibly be worn by consumers everywhere.

But that's the moment when Johnson predicts that AR could really transform our everyday lives.

It might, she suggested, allow us to see reviews for restaurants pinging before our eyes as we walk down a street perusing the options.

Forgotten someone's name? No problem. As they walk towards you, it could appear above their head.

"Right now we're all looking down at our mobile phones," Johnson said. Augmented reality, she hopes, could help us to soak up the world around us—a world with extra information layered over the top of it.

If that revolution arrives, the market may be a crowded one. Facebook is working on its own AR headset, while Apple is rumoured to be following suit. Snapchat's developer, Snap, is meanwhile trialling a new pair of its "Spectacles" on AR artists.

What does Johnson think the metaverse will look like in 15 years?

"I think you'll go back home to pick up your glasses because you left them at home," she predicted. "The same way you do with your mobile phone today."


© 2021 AFP

Study finds public support for nuclear energy in Southeast Asia generally low


Nuclear energy may be the world's second-largest low carbon energy source for generating electricity after hydroelectric power, but reception to its adoption remains lukewarm in Southeast Asia, an NTU Singapore study has found.

Conducted by NTU's Wee Kim Wee School of Communication and Information, the study surveyed 1,000 people each in Singapore, Malaysia, Indonesia, Vietnam, and Thailand through door-to-door questionnaires and found that more than half of the respondents in every country were against the idea of nuclear energy development.

Based on its surveys, the study found that about one in five (22 percent) of those surveyed in Singapore were in favor of nuclear energy development. The level of support in the other four countries surveyed ranged from 3 percent to 39 percent.

The NTU scientists also found that the respondents tended to use "cognitive shortcuts" such as risk and benefit perception (an individual's belief in the threat or benefits of nuclear energy), religious beliefs, and trust in various entities such as university scientists, business leaders, and the government to aid their decision on their level of support for nuclear energy development.

With the five countries surveyed in this NTU study geographically close to each other, having a nuclear power plant in any of the five Southeast Asian countries will impact the others, said Prof. Shirley Ho, who led the study.

Prof. Ho, who is NTU's Research Director for Arts, Humanities, Education, and Social Sciences, added that the findings are a key point for consideration for policymakers in these countries, given that data suggests the public is collectively unsupportive of having a nuclear power plant in their own country.

More information: The study is available as a PDF at www.ntu.edu.sg/docs/default-so … df?sfvrsn=6d2991a3_1

Provided by Nanyang Technological University 

'Trojan Source' bug a novel way to attack program encodings


A pair of security researchers has found a novel way to attack computer source code—one that fools a compiler (and human reviewers) into thinking code is safe. Nicholas Boucher and Ross Anderson, both with the University of Cambridge, have posted a paper on the Trojan Source web page detailing the vulnerability and ways that it might be fixed.

As Boucher and Anderson describe it, the vulnerability involves code being committed by nefarious types who use Unicode control characters to reorder characters in source code that appears to programmers to be legitimate. More specifically, the vulnerability involves the use of the 'Bidi' algorithm in Unicode (an international encoding standard that can be used in source code), where characters can be placed both left to right and right to left—because some languages, such as Hebrew and Arabic, are written and read right to left.

The vulnerability exists because the algorithms that process such code do not take into consideration that some of the characters being read left to right can have a different meaning or purpose if they are read right to left. Because virtually all of the most popular programming languages in use today—C, C++, Java, Python, Go, Rust and JavaScript—allow Unicode, virtually all programs are potentially at risk.

As an example, Boucher and Anderson show that a line of code such as:

/* begin admins only */ if (isAdmin) {

Could be changed to:

/* if (isAdmin) { begin admins only */

The first line is a harmless comment inserted by a programmer; the second is code that a hacker could use to achieve a desired outcome. The researchers suggest the vulnerability represents a serious threat to software supply chains—if such vulnerabilities were exploited, they could impact downstream software by allowing it to inherit the same vulnerability.
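
One straightforward mitigation is to scan source files for these control characters before code review or compilation. A short Python sketch of such a check; the character set and the sample line are illustrative, not the checker the authors propose:

import unicodedata

# Unicode Bidi control characters that can silently reorder how text is displayed.
BIDI_CONTROLS = {
    "\u202A", "\u202B", "\u202C", "\u202D", "\u202E",  # LRE, RLE, PDF, LRO, RLO
    "\u2066", "\u2067", "\u2068", "\u2069",            # LRI, RLI, FSI, PDI
    "\u200E", "\u200F",                                # LRM, RLM
}

def find_bidi_controls(text):
    """Yield (line, column, character name) for every Bidi control character in the text."""
    for line_no, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in BIDI_CONTROLS:
                yield line_no, col, unicodedata.name(ch)

# A toy line of source code with a right-to-left override hidden inside the comment.
sample = "/* begin admins only \u202E */ if (isAdmin) {"
for hit in find_bidi_controls(sample):
    print(hit)  # e.g. (1, 22, 'RIGHT-TO-LEFT OVERRIDE')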

Because the vulnerability exists for such a wide variety of programming languages, its disclosure was first coordinated with officials charged with maintaining the rules for such languages, giving them time to add changes to compilers and interpreters to account for and mitigate such a threat.


More information: Report: www.trojansource.codes/trojan-source.pdf

TrojanSource: www.trojansource.codes/

© 2021 Science X Network

Keeping one step ahead of earthquakes

As technologies continue to improve, earthquake-prone cities will be better prepared. Credit: © Marco Iacobucci Epp, Shutterstock

While accurately predicting earthquakes is in the realm of science fiction, early warning systems are very much a reality. As advances in research and technology make these systems increasingly effective, they're vital to reducing an earthquake's human, social and economic toll.

Damaging earthquakes can strike at any time. While we can't prevent them from occurring, we can make sure casualties, economic loss and disruption of essential services are kept to a minimum.

Building more resilient cities is key to withstanding such disasters. If we had a better idea of when earthquakes would strike, authorities could initiate local emergency, evacuation and shelter plans. But unfortunately, this is not the case.

"Because earthquakes occur on faults, we know where they will occur. The problem is that we don't know how to predict when an earthquake will strike," explained Quentin Bletery, from the Research Institute for Development (IRD) in France. He is a researcher at the Géoazur laboratory at Université Côte d'Azur.

"Successful earthquake prediction must provide the location, time and magnitude of a future event with high accuracy, [something] which as of now, can't be done," added Johannes Schweitzer, Principal Research Geophysicist at NORSAR, an independent research foundation specialized in seismology and .

Potential of AI to improve the accuracy and speed of early warning systems

Earthquake early warning (EEW) systems are evolving rapidly thanks to advances in computer power and network communication.

EEW systems work by identifying the first signals generated by an earthquake rupture before the strongest shaking and tsunami reach populated areas. These signals radiate outward from the earthquake's origin and can be recorded seconds before the strongest seismic waves arrive.

A promising, recently identified early signal is the prompt elasto-gravity signal (PEGS), which travels at the speed of light but is a million times smaller than seismic waves, and therefore, often goes undetected.
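
To see why detecting PEGS is worth the effort, compare arrival times: the signal propagates at the speed of light, whereas the fastest seismic (P) waves travel through the crust at roughly 6 to 8 km/s. A quick back-of-the-envelope comparison, using approximate textbook wave speeds rather than figures from the EARLI project:

# Approximate values for illustration only.
P_WAVE_SPEED_KM_S = 7.0          # typical crustal P-wave speed (roughly 6-8 km/s)
SPEED_OF_LIGHT_KM_S = 299_792.458

for distance_km in (50, 100, 300):
    p_wave_delay = distance_km / P_WAVE_SPEED_KM_S
    pegs_delay = distance_km / SPEED_OF_LIGHT_KM_S   # effectively instantaneous
    print(f"{distance_km:4d} km: P wave in {p_wave_delay:5.1f} s, PEGS in {pegs_delay:.4f} s "
          f"-> ~{p_wave_delay:.0f} s of potential extra warning")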

According to Bletery, artificial intelligence (AI) could play a key role in identifying this signal. With the support of the EARLI project, he is leading an effort to develop an AI algorithm capable of doing exactly that.

"Our AI system aims to increase the accuracy and speed of early  systems by enabling them to pick up an extremely weak signal that precedes even the fastest seismic waves," said Bletery.

Albeit still in its very early stages, if the project succeeds, Bletery says public authorities will have access to nearly instantaneous information about an earthquake's magnitude and location. "This would allow them to take such immediate mitigation efforts as, for example, shutting down infrastructure like trains and nuclear power plants and moving people to earthquake- and tsunami-safe zones," he noted.

Statistical technique to enhance seismic resilience

Another approach to improving seismic resilience and reducing human losses is operational earthquake forecasting (OEF). TURNkey, led by NORSAR, aims to improve the effectiveness of this statistical technique, which is used to study seismic sequences in order to provide timely warnings.

"OEF can inform us about changing seismic hazards over time, enabling emergency managers and public authorities to prepare for a potentially damaging earthquake," explained Ivan Van Bever, TURNkey project manager. "What OEF can't do, is provide warnings with a high level of accuracy."

In addition to improving existing methods, TURNkey is developing the "Forecasting—Early Warning—Consequence Prediction—Response" (FWCR) platform to increase the accuracy of earthquake warnings and ensure that all warning-related information is sent to end-users in a format that is both understandable and useful.

"The platform will forecast and issue warnings for aftershocks and will improve the ability for users to estimate both direct and indirect losses," said Van Bever

Better prepared than ever

The platform is currently being tested at six locations across Europe: Bucharest (Romania), the Pyrenees mountain range (France), the towns of Hveragerdi and Husavik (Iceland), the cities of Patras and Aigio (Greece), and the port of Gioia Tauro (Southern Italy). It is also being tested in Groningen province (Netherlands), which is affected by induced seismicity—minor earthquakes and tremors caused by human activity that alters the stresses and strains on the Earth's crust.

Johannes Schweitzer, who is the project coordinator, is confident the multi-sensor-based earthquake information system will prove capable of enabling early warning and rapid response. "The TURNkey platform will close the gap between theoretical systems and their practical application in Europe," remarked Schweitzer. "In doing so, it will improve a city's seismic resilience before, during and after a damaging earthquake."

"As these technologies and systems continue to improve, they could reduce an earthquake's human, social and economic toll," added Bletery.

Earthquake-prone cities will be better prepared than ever before. At the very least these new systems will give people a heads up to drop, cover and hold on during an earthquake.

Laboratory will illuminate formation, composition, activity of comets

New experiments will measure the properties of comet material in space-like conditions.

Peer-Reviewed Publication

AMERICAN INSTITUTE OF PHYSICS

Image: The new chamber, which will simulate space-like conditions and has 14 associated instruments to measure comet properties. Credit: Kreuzig et al.

WASHINGTON, November 3, 2021 -- Comets are icy and dusty snowballs of material that have remained relatively unchanged since they first formed billions of years ago. Studying the small bodies provides clues about the formation of the solar system.

In Review of Scientific Instruments, published by AIP Publishing, researchers from the Technische Universität Braunschweig, the Austrian Academy of Sciences, the University of Bern, the German Aerospace Center, and the Max Planck Institute for Solar System Research describe a laboratory they developed to simulate comets under space-like conditions.

The goal of the international research group, the Comet Physics Laboratory (CoPhyLab), is to understand the internal structure of comets, as well as how their constituent materials form and react. While comets are made of ice and dust, the composition and ratios of that material remain a mystery.

Many of the lab's future experiments will involve creating sample comet materials with differing compositions. By testing those materials in the space-like chamber, the researchers can compare each sample to what has been observed on actual comets.

To accomplish this, the scientists place a sample in their chamber, then pump it down to low pressures and cool it down to low temperatures. One window of the chamber lets in radiation from an artificial star, which heats the comet material much like it would in space.

"Before [this project], every group was using different samples. That made it very hard to compare if what they were seeing was the same as what we were seeing," said author Christopher Kreuzig. "A major goal of this project is to establish a comparable standard for comet experiments where everyone is using the same equipment and production protocol for the sample material."

Combining 14 instruments into one chamber allows the scientists to measure the comet material's evolution, as well as the conditions inside the experiment, all at once.

In space, radiation from the sun causes ice to evaporate and particles to fly away from comets, creating a tail that is visible on Earth. In the chamber, high-speed cameras track any particles that fly away from the sample. The chamber also uses a unique cooling system to accommodate a scale that can detect if those same particles land near the sample and track gas evaporation in real time.

"Underneath our sample sits a scale, which is capable of measuring the weight of the sample over the whole experiment time," said Kreuzig. "You can really see how much water ice or CO2 ice we lose over time due to evaporation."

The team completed construction of the lab and is now optimizing their sample production. They are planning the next big experiment run for early 2022.

###

The article "The CoPhyLab comet-simulation chamber" is authored by Christopher Kreuzig, Guenter Kargl, Antoine Pommerol, Joerg Knollenberg, Anthony Lethuillier, Noah Salomon Molinski, Thorben Gilke, Dorothea Bischoff, Clément Feller, Ekkehard Kührt, Holger Sierks, Nora Hänni, Holly Capelo, Carsten Güttler, David Haack, Katharina Otto, Erika Kaufmann, Maria Schweighard, Wolfgang Macher, Patrick Tiefenbacher, Bastian Gundlach, and Jürgen Blum.  The article appeared in Review of Scientific Instruments on Nov. 2, 2021 (DOI: 10.1063/5.0057030 and can be accessed at https://aip.scitation.org/doi/full/10.1063/5.0057030.

ABOUT THE JOURNAL

Review of Scientific Instruments publishes novel advancements in scientific instrumentation, apparatuses, techniques of experimental measurement, and related mathematical analysis. Its content includes publication on instruments covering all areas of science including physics, chemistry, materials science, and biology. See https://aip.scitation.org/journal/rsi.

###

Cannabis use disorder rising significantly during pregnancy

Columbia and Weill Cornell researchers found cannabis use disorders increased 150 percent in prenatal hospitalizations from 2010 to 2018

Peer-Reviewed Publication

COLUMBIA UNIVERSITY IRVING MEDICAL CENTER

As more states legalize cannabis for medical or recreational purposes (now 37), its use during pregnancy is increasing, along with the potential for abuse or dependence.

A new study, co-led by researchers from Columbia University and Weill Cornell Medicine, has captured the magnitude and issues related to cannabis use disorders during pregnancy by examining diagnostic codes for more than 20 million U.S. hospital discharges. Most of those hospitalizations were for childbirth.

The study, “Association of Comorbid Behavioral and Medical Conditions with Cannabis Use Disorder in Pregnancy,” published in the online edition of JAMA Psychiatry Nov. 3, found that the proportion of hospitalized pregnant patients identified with cannabis use disorder—defined as cannabis use with clinically significant impairment or distress—rose 150 percent from 2010 to 2018.

“This is the largest study to document the scale of cannabis use disorder in prenatal hospitalizations,” said Claudia Lugo-Candelas, PhD, assistant professor of clinical medical psychology in Columbia’s Department of Psychiatry and one of the study’s co-authors. She notes the study found that pregnant patients with the condition had sharply higher levels of depression, anxiety, and nausea—results warranting clinical concern.  

“It’s a red flag that patients may not be getting the treatment they need,” Lugo-Candelas said.

Cannabis legalization has likely lessened fears about its risks in pregnancy. Some pregnant patients use cannabis instead of prescribed medications, thinking it’s a safer choice. Both the American Academy of Pediatrics (AAP) and the American College of Obstetricians and Gynecologists (ACOG) have recommended against using cannabis while pregnant, chiefly because of known and unknown fetal effects. Concerns for maternal effects focus on smoking or vaping risks, not mental health.

The study identified 249,084 hospitalized pregnant patients with cannabis use disorder and classified them into three sub-groups: those with cannabis use disorder only; those with use disorders for cannabis and other substances, including at least one controlled substance; and those with cannabis use disorder and other substances (alcohol, tobacco) not related to controlled substances. Data from hospitalized pregnant patients without any substance use disorders were analyzed for comparison.

Those with the cannabis condition were more likely to be younger (ages 15 to 24), Black non-Hispanic, and covered by Medicaid rather than private insurance.

Patients’ records were analyzed for depression, anxiety, trauma, and ADHD, and a broader category of mood-related disorders. Medical conditions measured included chronic pain, epilepsy, multiple sclerosis, nausea, and vomiting.

All disorder sub-groups had elevated rates of nearly every factor studied. Patients with cannabis use disorder alone had levels of depression and anxiety three times higher than patients with no use conditions. Mood-related disorders affected 58 percent of cannabis disorder patients but only 5 percent of those without any substance use disorders. 

“The least other substance use you have, the more that cannabis use makes a difference,” Lugo-Candelas said. “That’s really striking.”

Nausea was also high in the cannabis use disorder hospitalizations. Whether that was because patients were using cannabis to mitigate nausea, because cannabis use itself can cause a vomiting syndrome, or simply because nausea is a symptom of pregnancy is unknown. Study co-author Angélica Meinhofer, PhD, assistant professor of population health sciences at Weill Cornell Medicine, noted that many states allow medical use of cannabis for nausea and vomiting.

Screening for cannabis use during pregnancy could help, but state mandatory reporting requirements may deter some clinicians from asking about use. Better patient education could reduce the problem and get treatment to patients sooner, especially for those identified with co-occurring cannabis dependency and psychiatric disorders.  

“Hopefully these findings will motivate better conversations between pregnant patients and their health care providers,” said Meinhofer.

The authors emphasize they aren’t arguing for or against cannabis use in pregnancy. The science on prenatal effects of the disorder is still largely unknown, although frequent use has been linked to low birth weight and other adverse outcomes. Their study, the researchers say, instead underscores the need to further explore the disorder and its links to psychiatric and medical conditions.

The rising rate of cannabis use by pregnant patients shows that such investigations are needed now. “This is a population that’s showing a level of distress that is very, very high,” said Lugo-Candelas. “Care and attention need to be rolled out.”

###

Katherine M. Keyes, PhD, MPH, associate professor of epidemiology at Columbia’s Mailman School of Public Health, and Jesse Hinde, PhD, Community Health Research Division, RTI International, were also on the study’s research team.
