Friday, May 19, 2023

Hanging by a purple thread

Endangered plant species critical to survival of cultural legacy

Peer-Reviewed Publication

KYOTO UNIVERSITY

Endangered murasaki legacy 

IMAGE: INTRICATE TAPESTRY INTERTWINED WITH THE SURVIVAL OF NATIVE PURPLE GROMWELL

CREDIT: KYOTOU GLOBAL COMMS/KAZUFUMI YAZAKI

Kyoto, Japan -- Purple is a color that has historically been associated with nobility around the world. Japan is no exception. However, its distinct murasaki hue is threatened as the native gromwell plant -- synonymous with murasaki -- has become an endangered species.

Disease and cross-breeding with non-native species are partly to blame for murasaki's decline.

Now, a research group including Kyoto University is leading a movement to raise awareness of gromwell's importance in preserving Japanese culture. For example, murasaki revival projects currently underway throughout Japan are investigating the seed's origins and educating the public on the importance of protecting the plant's homogeneity.

"Various non-profits involved in the revival of gromwell are also keen to maintain the silk staining technique through collaboration with plant scientists," says lead author Kazufumi Yazaki.

Purple gromwell -- or Lithospermum erythrorhizon -- accumulates shikonin derivatives, which are red naphthoquinones, in its root surfaces. Its natural pigment and medicinal properties are linked to ancient East Asian traditions. Among the range of hues, the most sought-after dye was the dark purple reserved for members of the top levels of government and the Imperial family, as well as the highest-ranking Buddhist monks.

"The purple color was also used for a national treasure called Koku-Bun-Ji Kyo, the ten-volume Buddhist scripture papers on which letters were written with gold," says co-author Ryosuke Munakata of KyotoU's Research Institute for Sustainable Humanosphere.

For medicinal purposes, the roots are prescribed in several remedies as an ointment called Shi-Un-Koh, which is still popular today in treating hemorrhoids, burns, frostbite, and other wounds.

Recovery initiatives, such as the Mitaka Gromwell Restoration Project, are focused on ensuring the survival of the native gromwell, which has been impacted by the spread of cucumber mosaic virus and sudden environmental changes. Cross-breeding with the European species L. officinale is another factor in this plant's uncertain future.

Excavated official wooden documents from Kyushu -- found to have been used to transport cargo during the Asuka dynasty -- were unexpectedly related to gromwell, highlighting its purple dye's crucial administrative role.

"We hope our research raises awareness of murasaki's importance in Japanese history and culture," comments co-author Emi Ito of Ochanomizu University.

###

The paper "Gromwell, a purple link between traditional Japanese culture and plant science" appeared on 18 May 2023 in Plant and Cell Physiology, with doi: 10.1093/pcp/pcad038

About Kyoto University

Kyoto University is one of Japan and Asia's premier research institutions, founded in 1897 and responsible for producing numerous Nobel laureates and winners of other prestigious international prizes. A broad curriculum across the arts and sciences at undergraduate and graduate levels complements several research centers, facilities, and offices around Japan and the world. For more information, please see: http://www.kyoto-u.ac.jp/en

Before worrying about AI's threat to humankind, here's what else Canada can do

Story by Benjamin Shingler • May 6, 2023

A visitor speaks with a PAL Robotic robot at the Mobile World Congress in Barcelona, Spain, last month. Experts say the Canadian government should strengthen its proposed legislation that would govern emerging AI technologies.© AP

The headlines have been, to say the least, troubling.

Most recently, Geoffrey Hinton, the so-called Godfather of AI, quit his post at Google and warned the rapid advances in artificial intelligence could ultimately pose an existential threat to humankind.

"I think that it's conceivable that this kind of advanced intelligence could just take over from us," the renowned British-Canadian computer scientist told CBC's As It Happens.

"It would mean the end of people."

While such stark comments are impossible to ignore, some experts say they risk obscuring more immediate, practical concerns for Canada.

"Whether deliberately or inadvertently, folks who are talking about the existential risk of AI – even in the negative – are kind of building up and hyping the field," said Luke Stark, an assistant professor of information and media studies at Western University in London, Ont.

"I think it's a bit of a red herring from many of the concerns about the ways these systems are being used by institutions and businesses and governments right now around the world and in Canada."

Stark, who researches the social impacts of technologies such as artificial intelligence, is among the signatories of an open letter critical of the federal government's proposed legislation on artificial intelligence, Bill C-27.

The letter argues the government's Artificial Intelligence and Data Act (AIDA), which is part of C-27, is too short on details, leaving many important aspects of the rules around AI to be decided after the law is passed.

Look to EU for guidance, experts say

The legislation, tabled last June, recently completed its second reading in the House of Commons and will be sent to committee for study.

In a statement, a spokesperson for Innovation, Science and Economic Development Canada said "the government expects that amendments will be proposed in response to testimony from experts at committee, and is open to considering amendments that would improve the bill."

Experts say other jurisdictions, including the European Union and the United Kingdom, have moved more quickly toward putting in place strong rules governing AI.

Related video: Report: 61% Americans believe AI can threaten humanity (WION)

They cite a long list of human rights and privacy concerns related to the technology, ranging from its use by law enforcement to misinformation and instances where it reinforces patterns of racism and discrimination.

The proposed legislation wouldn't adequately address such concerns, said Maroussia Lévesque, a PhD candidate in law at Harvard University who previously led the AI and human rights file at Global Affairs Canada.

Lévesque described the legislation as an "empty shell" in a recent essay, saying it lacks "basic legal clarity."

In an interview over Zoom, Lévesque held up a draft of the law covered in blue sticky tabs – each one marking an instance where a provision of the law remains undefined.

"This bill leaves really important concepts to be defined later in regulation," she said.

The bill also proposes the creation of a new commissioner to oversee AI and data in Canada, which seems like a positive step on the surface for those hoping for greater oversight.

But Lévesque said the position is a "misnomer," since unlike some other commissioners, the AI and Data appointee won't be an independent agent heading a regulatory agency.

"From a structural standpoint, it is really problematic," she said.

"You're folding protection into an innovation-driven mission and sometimes these will be at odds. It's like putting the brakes and stepping on the accelerator at the same time."

Lévesque said the EU has a "much more robust scheme" when it comes to proposed legislation on artificial intelligence.

The European Commission began drafting its legislation in 2021 and is nearing the finish line.

Under the legislation, companies deploying generative AI tools, such as ChatGPT, will have to disclose any copyrighted material used to develop their systems.

Lévesque likened their approach to the checks required before a new airplane or pharmaceutical drug is brought to market.

"It's not perfect — people can disagree about it. But it's on the brink of being adopted now, and it bans certain types of AI systems."

In Stark's view, the Liberal government has put an emphasis on AI as a driver of economic growth and tried to brand Canada as an "ethical AI centre."

"To fulfil the promise of that kind of messaging, I'd like to see the government being much more, broadly, consultative and much more engaged outside the kind of technical communities in Montreal and Toronto that I think have a lot of sway with the government," he said.

'Hurry up and slow down'

The Canadian Civil Liberties Association is among the groups hoping to be heard in this next round of consultations.

"We have not had sufficient input from key stakeholders, minority groups and people who we think are likely to be disproportionately affected by this bill," said Tashi Alford-Duguid, a privacy lawyer with CCLA.

Alford-Duguid said the government needs to take a "hurry up and slow down" approach.

"The U.K. has undertaken much more extensive consultations; we know that the EU is in the midst of very extensive consultations. And while neither of those laws look like they're going to be perfect, the Canadian government is coming in at this late hour, and trying to give us such rushed and ineffective legislation instead," he said.

"We can just look around and see we can already do better than this."

New use for A.I.: correctly estimating fish stocks

First-ever A.I. algorithm correctly estimates fish stocks, could save millions and bridge global data and sustainability divide

Peer-Reviewed Publication

WILDLIFE CONSERVATION SOCIETY

Healthy reef 

IMAGE: NEW AI ALGORITHM COULD LEVEL THE PLAYING FIELD FOR COUNTRIES WITH HISTORICALLY “DATA POOR” FISHERIES, QUICKLY GENERATING A HIGHLY ACCURATE SNAPSHOT OF FISH STOCK LEVELS IN COASTAL WATERS

CREDIT: RENATA ROMEO / OCEAN IMAGE BANK

For the first time, a newly published artificial intelligence (AI) algorithm is allowing researchers to quickly and accurately estimate coastal fish stocks without ever entering the water. This breakthrough could save millions of dollars in annual research and monitoring costs while giving least-developed countries access to data about the sustainability of their fish stocks.

Understanding “fish stocks” – the amount of living fish found in an area’s waters – is critical to understanding the health of our oceans. This is especially true in coastal areas where 90 percent of people working in the fisheries industry live and work. In the wealthiest countries, millions of dollars are spent each year on “stock assessments” – expensive and labor-intensive efforts to get people and boats out into the water to count fish and calculate stocks. That extremely high cost has long been a barrier for tropical countries in Africa and Asia, home to the highest percentage of people who depend on fishing for food and income. Small-scale fishers working coastal waters in many countries are essentially operating blindly, with no real data about how many fish are available in their fisheries. Without data, coastal communities and their governments cannot create management plans to help keep their oceans healthy and productive for the long-term.

Now, thanks to advances in satellite data and machine learning algorithms, researchers have created a model that has successfully estimated fish stocks with 85 percent accuracy in the Western Indian Ocean pilot region. This tool has the potential to get data quickly and cheaply into the hands of local and national governments, so they can make informed decisions about their natural resources and keep “blue foods” on the table.

“Our goal is to give people the information required to know the status of their fish resources and whether their fisheries need time to recover or not. The long term goal is that they, their children, and their neighbors can find a balance between peoples’ needs and ocean health,” said Tim McClanahan, Director of Marine Science at WCS. “This tool can tell us how fish stocks are doing, and how long it will take for them to recover to healthy levels using various management options. It can also tell you how much money you’re losing or can recoup every year by managing your fishery – and in the Western Indian Ocean region where we piloted this tool, it’s no less than $50 to $150 million each year.”

WCS’ McClanahan and fellow co-authors used years of fish abundance data combined with satellite measurements and an AI tool to produce this model. The result? A simple, easy-to-use pilot tool to better understand and manage our oceans. With further development, anyone from anywhere in the world would be able to input seven easily accessible data points - things like distance from shore, water temperature, ocean productivity, existing fisheries management, and water depth - and receive back an accurate fish stock estimate for their nearshore ecosystems.
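The release does not publish the model itself, but the workflow it describes - fit a model on historical survey data, then predict stock from a handful of site features - can be sketched in a few lines. Everything below is illustrative: the feature names, the synthetic training data, and the ordinary-least-squares fit are all stand-ins for the authors' real data and their (unspecified) machine-learning model.

```python
import numpy as np

# Hypothetical feature names loosely based on the predictors the article
# mentions; the last two are invented to round out the seven inputs.
FEATURES = ["dist_from_shore_km", "sst_celsius", "chlorophyll_mg_m3",
            "management_score", "depth_m", "wave_energy", "pop_density"]

rng = np.random.default_rng(0)

# Synthetic "training survey": 200 sites standing in for years of fish counts.
X = rng.normal(size=(200, len(FEATURES)))
true_w = rng.normal(size=len(FEATURES))
y = X @ true_w + rng.normal(scale=0.1, size=200)  # biomass, arbitrary units

# Ordinary least squares with an intercept column (the real tool would use
# a more sophisticated learner, but the fit/predict shape is the same).
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def estimate_stock(site: dict) -> float:
    """Predict biomass for one site from its seven feature values."""
    x = np.array([site[f] for f in FEATURES])
    return float(x @ w[:-1] + w[-1])

site = dict(zip(FEATURES, rng.normal(size=len(FEATURES))))
print(f"estimated biomass: {estimate_stock(site):.2f}")
```

The point of the sketch is the interface, not the estimator: once trained, the model needs only the seven site-level numbers, which is what makes the approach cheap compared with putting boats and divers in the water.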

“We know that during times of crisis and hardship, from climate change-induced weather events to the COVID-19 pandemic, people living on the coast increasingly rely on fishing to feed themselves and their families,” said Simon Cripps, Executive Director of Marine Conservation at WCS. “The value of this model is that it tells managers, scientists, and importantly, local communities how healthy a fishery is and how well it can support the communities that depend on it, especially during times of crisis. Once a fishery’s status is known, it gives communities and managers the information to move forward to design solutions to improve fish stocks and improve the resilience of local communities, the fishing industry, and local and national economies.” 

The algorithm has been shown to work with high accuracy for coral reef fisheries in the Western Indian Ocean pilot region. WCS is currently seeking new partnerships and funding to scale the tool so it can be deployed and fill critical data gaps around the world.

This work was completed over a number of years and with the support of grants from The Tiffany and Co. Foundation, the John D. and Catherine T. MacArthur Foundation, the Bloomberg Ocean Initiative, the UK Darwin Initiative, and the Western Indian Ocean Marine Science Association’s Marine Science for Management Program (WIOMSA-MASMA).



Special topic: Artificial intelligence innovation in remote sensing


Peer-Reviewed Publication

SCIENCE CHINA PRESS

Artificial Intelligence (AI) plays a growing role in remote sensing. In particular, the last decade has seen exponentially increasing interest in deep learning research for the analysis of optical satellite images, hyperspectral images, and radar images. The main reasons for this interest are the increased availability of a wealth of data streaming from different Earth observation instruments and the fact that AI techniques enable a learning-based “data model” in remote sensing. To promote research in this area, we have organized a special focus on Artificial Intelligence Innovation in Remote Sensing in SCIENCE CHINA Information Sciences (Vol. 66, Issue 4, 2023). Eight papers are included in this special focus, as detailed below.

Multimodal remote sensing imagery interpretation (MRSII) is an emerging direction in the Earth Observation and Computer Vision communities. In the contribution entitled “From single- to multi-modal remote sensing imagery interpretation: a survey and taxonomy”, Sun et al. provide a comprehensive overview of the developments in this field. Importantly, the paper develops an easily understandable hierarchical taxonomy for the categorization of MRSII, along with a systematic discussion of recent advances and guidance for researchers on many realistic MRSII problems.

Hyperspectral imaging integrates 2D plane imaging and spectroscopy to capture the spectral signatures and spatial distribution of objects in a region of interest. However, the reflectance received by the imaging instruments may be degraded owing to environmental disturbances, atmospheric effects, and hardware limitations of the sensors. Hyperspectral image (HSI) restoration aims at reconstructing a high-quality clean hyperspectral image from a degraded one. In the contribution entitled “A survey on hyperspectral image restoration: from the view of low-rank tensor approximation”, Liu et al. present a cutting-edge and comprehensive technical survey of low-rank tensor approximation for HSI restoration, with a specific focus on denoising, fusion, destriping, inpainting, deblurring, and super-resolution, along with their state-of-the-art methods and quantitative and visual performance assessments.

Recently, hyperspectral and multispectral image fusion (aimed at generating images with both high spectral and spatial resolutions) has been a popular topic. However, it remains a challenging and underdetermined problem. In the contribution entitled “Learning the external and internal priors for multispectral and hyperspectral image fusion”, Li et al. propose two kinds of priors, i.e., external priors and internal priors, to regularize the fusion problem. The external prior represents the general image characteristics and is learned from abundant sample data by using a Gaussian denoising convolutional neural network trained with additional grayscale images. On the other hand, the internal prior represents the unique characteristics of the hyperspectral and multispectral images to be fused. Experiments on simulated and real datasets demonstrate the superiority of the proposed method. The source code for this paper is available at https://github.com/renweidian.

Wide-beam autofocus processing is essential for high-precision imaging of airborne synthetic aperture radar (SAR) data when inertial navigation system/global positioning system (INS/GPS) data are absent or insufficiently accurate. In the contribution entitled “Wide-beam SAR autofocus based on blind resampling”, Chen and Yu propose a full-aperture autofocus method for wide-beam SAR based on blind resampling. Unlike baseline methods, the proposed method does not require INS/GPS data and can significantly improve overall image quality. Processing results on measured wide-beam SAR data verify the effectiveness of the newly proposed algorithm.

Remote sensing image (RSI) semantic segmentation has attracted increased research interest during the last few years. However, RSI is difficult to process holistically on currently available graphics processing unit cards because of the imagery's large field of view (FOV). Furthermore, prevailing practices such as image downsampling and cropping inevitably decrease the quality of semantic segmentation. In the contribution entitled “MFVNet: a deep adaptive fusion network with multiple field-of-views for remote sensing image semantic segmentation”, Li et al. propose a new deep adaptive fusion network with multiple FOVs (MFVNet) for RSI semantic segmentation, surpassing previous state-of-the-art models on three typical RSI datasets. Codes and pre-trained models for this paper are publicly available at https://github.com/weichenrs/MFVNet.

Change detection of buildings -- detecting and localizing image regions where buildings have been added or torn down between two registered aerial images captured at different times -- is challenging. The main challenges are the mismatch of nearby buildings and the semantic ambiguity of building facades. In the contribution entitled “Detecting building changes with off-nadir aerial images”, Pang et al. present a multi-task guided change detection network model, named MTGCD-Net, which provides indispensable and complementary building parsing and matching information, along with extensive comparisons to existing methods. More importantly, a new benchmark dataset, named BANDON, was created for this research and is available at https://github.com/fitzpchao/BANDON.

Photovoltaic devices, a typical new energy source, have progressed rapidly and become one of the main sources of power generation in the world. In the contribution “AIR-PV: a benchmark dataset for photovoltaic panel extraction in optical remote sensing imagery”, Yan et al. propose a large-scale benchmark dataset, AIR-PV, for photovoltaic panel extraction in remote sensing imagery. The main features of this benchmark are: (1) large scale, with wide distribution across five provinces of western China -- covering more than 3 million square kilometers and more than 300,000 photovoltaic panels -- capturing a wide range of geographical styles and background diversity; (2) one of the earliest publicly available datasets (https://github.com/AICyberTeam) for photovoltaic panel extraction, providing a standard data foundation for applying advanced deep learning technology to photovoltaic panel extraction in remote sensing, thereby promoting various social applications related to photovoltaic power.

In the last contribution, “Multi-layer composite autoencoders for semi-supervised change detection in heterogeneous remote sensing images”, Shi et al. develop concise multi-layer composite autoencoders for change detection in heterogeneous remote sensing images, which avoid the complex alignment or transformations of traditional change detection frameworks and require only 0.1% of true labels (approaching the cost of unsupervised models).

Please find below details of this Special Topic: Artificial Intelligence Innovation in Remote Sensing.

Sun X, Tian Y, Lu W X, et al. From single- to multi-modal remote sensing imagery interpretation: a survey and taxonomy. Sci China Inf Sci, 2023, 66(4): 140301

https://link.springer.com/article/10.1007/s11432-022-3588-0

Liu N, Li W, Wang Y J, et al. A survey on hyperspectral image restoration: from the view of low-rank tensor approximation. Sci China Inf Sci, 2023, 66(4): 140302

https://link.springer.com/article/10.1007/s11432-022-3609-4

Li S T, Dian R W, Liu H B. Learning the external and internal priors for multispectral and hyperspectral image fusion. Sci China Inf Sci, 2023, 66(4): 140303

https://link.springer.com/article/10.1007/s11432-022-3610-5

Chen J L, Yu H W. Wide-beam SAR autofocus based on blind resampling. Sci China Inf Sci, 2023, 66(4): 140304

https://link.springer.com/article/10.1007/s11432-022-3574-7

Li Y S, Chen W, Huang X, et al. MFVNet: a deep adaptive fusion network with multiple field-of-views for remote sensing image semantic segmentation. Sci China Inf Sci, 2023, 66(4): 140305

https://link.springer.com/article/10.1007/s11432-022-3599-y

Pang C, Wu J, Ding J, et al. Detecting building changes with off-nadir aerial images. Sci China Inf Sci, 2023, 66(4): 140306

https://link.springer.com/article/10.1007/s11432-022-3691-4

Yan Z Y, Wang P J, Xu F, et al. AIR-PV: a benchmark dataset for photovoltaic panel extraction in optical remote sensing imagery. Sci China Inf Sci, 2023, 66(4): 140307

https://link.springer.com/article/10.1007/s11432-022-3663-1

Shi J, Wu T C, Yu H W, et al. Multi-layer composite autoencoders for semi-supervised change detection in heterogeneous remote sensing images. Sci China Inf Sci, 2023, 66(4): 140308

https://link.springer.com/article/10.1007/s11432-022-3693-0


Visual processing before moving hands: insights into our visual sensory system

TOHOKU UNIVERSITY

NEWS RELEASE 

Figure 1 

IMAGE: VISUAL STIMULI WERE PRESENTED THROUGH A HALF MIRROR SO THAT THEIR HANDS WERE NOT VISIBLE TO PARTICIPANTS DURING THE EXPERIMENT. EEG SIGNALS AND HAND MOVEMENTS WERE MEASURED AND ANALYZED LATER.

CREDIT: TOHOKU UNIVERSITY

Our hands do more than just hold objects. They also facilitate the processing of visual stimuli. When you move your hands, your brain first perceives and interprets sensory information, then selects the appropriate motor plan before initiating and executing the desired movement. The successful execution of that task is influenced by numerous factors, such as the task's difficulty, the presence of external stimuli (distractions), and how many times someone has performed the task.

Take, for example, a baseball outfielder catching a ball. They want to make sure that when the ball heads their way, it ends up in their glove (the hand-movement goal). Once the batter hits the ball and it flies towards the outfielder, they begin to visually perceive and select what course of action is best (hand-movement preparation). They will then anticipate where they should position their hand and body in relation to the ball to ensure they catch it (future-hand location).

Researchers have long pondered whether the hand-movement goal influences endogenous attention. Sometimes referred to as top-down attention, endogenous attention acts like our own personal spotlight; we choose where to shine it. This can take the form of searching for an object, trying to block out distractions while working, or talking in a noisy environment. Elucidating the mechanisms behind hand movements and attention may help develop AI systems that support the learning of complicated movements and manipulations.

Now, a team of researchers at Tohoku University has identified that attention to the hand-movement goal acts independently of endogenous attention.

"We conducted two experiments to determine whether hand-movement preparation shifts endogenous attention to the hand-movement goal, or whether it is a separate process that facilitates visual processing," said Satoshi Shioiri, a researcher at Tohoku University's Research Institute of Electrical Communication (RIEC), and co-author of the paper.

In the first experiment, researchers isolated attention to the hand-movement goal from top-down visual attention by having participants move their hands, based on cues, to either the same location as a visual target or a different location from the visual target. Participants could not see their hands. In both cases, there was a control condition in which participants were not asked to move their hand.

The second experiment examined whether the order of cues to the hand-movement goal and the visual target impacted visual performance.

Satoshi and his team employed an electroencephalogram (EEG) to measure the brain activity of participants. They also focused on the steady-state visual evoked potential (SSVEP). When a person is exposed to a visual stimulus, such as a flashing light or moving pattern, their brain produces rhythmic electrical activity at the same frequency. The SSVEP is the resulting change in the EEG signal, and it helps assess the extent to which the brain selectively attends to or processes visual information, i.e., the spatial window.
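Because the SSVEP follows the stimulus frequency, its strength can be read out as the Fourier amplitude of the EEG at that frequency. The sketch below illustrates the idea on a synthetic signal; the sampling rate, flicker frequency, and amplitudes are all invented for the example and do not come from the study's pipeline.

```python
import numpy as np

fs = 500.0          # sampling rate in Hz (assumed for the example)
flicker_hz = 12.0   # stimulus flicker frequency (assumed)
t = np.arange(0, 4.0, 1 / fs)  # 4 seconds of synthetic "EEG"

rng = np.random.default_rng(1)
# Synthetic trace: an SSVEP component locked to the flicker, plus noise.
eeg = 2.0 * np.sin(2 * np.pi * flicker_hz * t) + rng.normal(scale=1.0, size=t.size)

# Single-sided amplitude spectrum.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# SSVEP readout: the amplitude in the bin nearest the flicker frequency,
# compared against the median spectral amplitude as a rough noise floor.
ssvep_amp = spectrum[np.argmin(np.abs(freqs - flicker_hz))]
noise_floor = np.median(spectrum)
print(f"amplitude at {flicker_hz} Hz: {ssvep_amp:.2f} (noise floor ~{noise_floor:.3f})")
```

Comparing this amplitude across conditions (for example, stimuli at attended versus unattended locations) is what lets the frequency-tagged response index how strongly a given location is being processed.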

"Based on the experiments, we concluded that when top-down attention is oriented to a location far from the future hand location, the visual processing of future hand location still occurs. We also found that this process has a much narrower spatial window than top-down attention, suggesting that the processes are separate," adds Satoshi.

The research group is hopeful the knowledge from the study can be applied to develop systems that maintain appropriate attention states in different occasions.

Details of the research were published in the Journal of Cognitive Neuroscience on May 8, 2023.

People can perform tasks simultaneously, directing their attention to different locations for different tasks. For example, when reaching for a coffee mug while working on a PC, attention can be directed to the cup while attention is also kept on the display. Attention to the cup is related to hand movement, which can differ from top-down attention to the display. The study's results showed a difference in spatial profile between the two types of attention: the spatial extent of attention to the hand-movement goal (bottom right) is much narrower than that of top-down attention (top right). This suggests that there is an attention mechanism that moves to the location where the hand intends to go, independent of top-down attention.

CREDIT: Tohoku University