Saturday, November 18, 2023

New deep learning AI tool helps ecologists monitor rare birds through their songs


Peer-Reviewed Publication

BRITISH ECOLOGICAL SOCIETY

IMAGE: Dunlin spectrogram. Credit: Nicolas Lecomte




Researchers have developed a new deep learning AI tool that generates life-like birdsongs to train bird identification tools, helping ecologists to monitor rare species in the wild. The findings are presented in the British Ecological Society journal, Methods in Ecology and Evolution.

Identifying common bird species through their song has never been easier, with numerous phone apps and software available to both ecologists and the public. But what if the identification software has never heard a particular bird before, or only has a small sample of recordings to reference? This is a problem facing ecologists and conservationists monitoring some of the world’s rarest birds.

To overcome this problem, researchers at the University of Moncton, Canada, have developed ECOGEN, a first of its kind deep learning tool, that can generate lifelike bird sounds to enhance the samples of underrepresented species. These can then be used to train audio identification tools used in ecological monitoring, which often have disproportionately more information on common species.

The researchers found that adding artificial birdsong samples generated by ECOGEN to a birdsong identifier improved the bird song classification accuracy by 12% on average.

Dr Nicolas Lecomte, one of the lead researchers, said: “Due to significant global changes in animal populations, there is an urgent need for automated tools, such as acoustic monitoring, to track shifts in biodiversity. However, the AI models used to identify species in acoustic monitoring lack comprehensive reference libraries.

“With ECOGEN, you can address this gap by creating new instances of bird sounds to support AI models. Essentially, for species with limited wild recordings, such as those that are rare, elusive, or sensitive, you can expand your sound library without further disrupting the animals or conducting additional fieldwork.”

The researchers say that creating synthetic bird songs in this way can contribute to the conservation of endangered bird species and also provide valuable insight into their vocalisations, behaviours and habitat preferences.

The ECOGEN tool has other potential applications. For instance, it could be used to help conserve extremely rare species, like the critically endangered regent honeyeater, whose young are unable to learn their species’ songs because there aren’t enough adult birds to learn from.

The tool could benefit other types of animal as well. Dr Lecomte added: “While ECOGEN was developed for birds, we’re confident that it could be applied to mammals, fish (yes, they can produce sounds!), insects and amphibians.”

As well as its versatility, a key advantage of the ECOGEN tool is its accessibility: it is open source and can be run on even basic computers.

ECOGEN works by converting real recordings of bird songs into spectrograms (visual representations of sounds) and then generating new AI images from these to increase the dataset for rare species with few recordings. These spectrograms are then converted back into audio to train bird sound identifiers. In this study the researchers used a dataset of 23,784 wild bird recordings from around the world, covering 264 species.
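The first step of that pipeline, turning audio into a spectrogram, can be illustrated with a minimal NumPy sketch. This is not ECOGEN's own code; the function name, window choice, and parameters here are illustrative assumptions:

```python
import numpy as np

def spectrogram(y, n_fft=256, hop=64):
    """Magnitude short-time Fourier spectrogram -- the 'visual
    representation of sound' that models like ECOGEN are trained on."""
    window = np.hanning(n_fft)
    frames = [y[i:i + n_fft] * window
              for i in range(0, len(y) - n_fft + 1, hop)]
    # rfft keeps only the non-negative frequencies; transpose to (freq, time)
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

# A pure 1 kHz tone sampled at 8 kHz stands in for a bird recording.
sr = 8000
t = np.arange(sr) / sr
S = spectrogram(np.sin(2 * np.pi * 1000 * t))
```

In the workflow described above, a generative model would then synthesise new spectrogram images like `S` for underrepresented species, and those images would be inverted back to audio (e.g. with a phase-reconstruction method such as Griffin-Lim) before being fed to a bird sound identifier.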

-ENDS-

IMAGE: AudioMoth acoustic monitoring box, used by ecologists to record wild animals. Credit: Nicolas Lecomte

IMAGE: Long-tailed Jaeger. Credit: Nicolas Lecomte


HKU develops novel ‘AI virtual patients’ diagnostic application, breaking spatial and geographical barriers for medical training and revolutionizing global medical education exchanges


Business Announcement

THE UNIVERSITY OF HONG KONG

IMAGE: Dr Michael Co Tiong-hong (middle) and Dr John Yuen Tsz-hon (second right) along with their students, showcasing the novel ‘AI virtual patients’ diagnostic application, which breaks spatial and geographical barriers. Credit: The University of Hong Kong




With the rapid development and extensive applications of generative artificial intelligence (AI) technology across various sectors, Dr Michael Co Tiong-hong from the LKS Faculty of Medicine, the University of Hong Kong (HKUMed), and Dr John Yuen Tsz-hon from the Department of Computer Science, HKU, have jointly developed Hong Kong’s first ‘AI virtual patients’ diagnostic application for training medical students. Leveraging generative AI technology and real-life surgical cases, the research team has designed ‘humanised’ AI virtual patients with distinct personalities and medical histories, which allow medical students to virtually simulate interactions with patients during bedside consultations. This initiative greatly enhances the students’ professional skills and ability to accurately gather patients’ medical history.

To provide students with a more diverse range of clinical learning opportunities, HKUMed collaborated with the National University of Singapore (NUS) to introduce cross-regional medical cases in the diagnostic app. This revolutionary approach has redefined traditional medical teaching methods. Looking ahead, HKUMed also plans to collaborate with other overseas medical schools.

About the ‘AI virtual patients’ diagnostic application
The virtual mode of clinical teaching provides personalised patient cases tailored to the specific needs of individual medical students. In 2020, Dr Co and Dr Yuen initiated the development of an AI chatbot to help HKUMed students who could not attend hospital-based classes amid the pandemic. In 2021, a system prototype was available for trial with a selected group of HKUMed students. Teachers could design virtual patients suited to each student’s diagnostic skill level. Students would compile the medical records for case discussions and analysis with their teachers. In 2022, the outcomes of this innovative teaching mode were published in an internationally renowned journal (link to the publication).

Through continuous research and improvement, the HKU team developed Hong Kong’s first ‘AI virtual patient’ diagnostic application. Integrated with generative AI technology, the latest model of the chatbot goes beyond standardised and monotonous replies, providing highly dynamic and lively responses. Even for the same medical case, the ‘AI virtual patient’ is capable of providing distinct responses, interacting with students in a remarkably human-like and personality-driven manner.

Significance and Impact
This innovative virtual clinical teaching mode provides personalised teaching cases with equal access for all students, and addresses the limitations of the traditional teaching mode. Dr Co explained, ‘Traditional clinical teaching relies heavily on in-person interaction with real patients. But for various reasons, like scheduling difficulties, not all medical students have equal opportunities to engage in face-to-face consultations. The “AI virtual patients” app allows us to overcome time and geographical barriers, offering our students access to practice with rare cases and providing them with invaluable clinical experience. Through a virtual learning environment, equipped with a wide range of diverse patient cases, medical students can enhance their patient history-taking skills and improve the accuracy of their diagnoses.’

Dr John Yuen Tsz-hon, from the Department of Computer Science, HKU, said, ‘The “AI virtual patients” app has the capacity to accumulate information, resulting in each response it generates having a slight variation in tone and wording. This enables more authentic interactions between doctors and “patients”. Additionally, teachers can utilise the data collected by the system to conduct in-depth analysis and assessment of students’ performance, which allows them to provide specific feedback and guidance to individual students, ultimately enhancing the efficiency of clinical teaching.’

Virtual clinical teaching can remove spatial and geographical barriers, fostering international exchange in medical education. In early October this year, Dr Co collaborated with Dr Serene Goh, a specialist surgeon from the National University of Singapore, to launch the world's first cross-regional virtual clinical teaching programme. The two doctors devised distinct patient cases for students in their respective locations to practise consultations using the ‘AI virtual patients’ app. Through online case discussions, the medical students jointly analysed patients’ imaging studies, endoscopic images and pathological slides.

‘Collaboration and exchange with medical schools in other regions will enable medical students to learn from each other's strengths, broaden their horizons and knowledge, and promote international cooperation and development in medical education. This will set the foundation for boundless educational innovations in the future,’ Dr Co added.

The cross-regional virtual clinical teaching collaboration between Hong Kong and Singapore has set a remarkable precedent for international medical teaching. The Department of Surgery at the University of Edinburgh's Western General Hospital has expressed interest in joining future endeavours in virtual surgical clinical teaching.

Media enquiries
Please contact LKS Faculty of Medicine of The University of Hong Kong by email (medmedia@hku.hk).


AI supporting creative industries


NYC Media Lab at NYU Tandon School of Engineering and Bertelsmann partner on the Creative Industries and AI Challenge, focusing on books, music, film and television


Business Announcement

NYU TANDON SCHOOL OF ENGINEERING



NYC Media Lab (NYCML) and Bertelsmann unveiled the latest cohort joining the AI & the Creative Industries Challenge, a nine-week program in which teams explore new ways to use artificial intelligence (AI) to create digital content and reach new audiences for three Bertelsmann companies: Fremantle, Penguin Random House, and BMG. The teams are tasked with addressing how AI will impact these important creative industries.

This ongoing partnership, NYCML’s third project with Bertelsmann, will continue to build on new business frontiers enabled by technology. The four selected teams, from around the globe, come from various multidisciplinary backgrounds. 

“Bertelsmann is deeply involved in experimenting with AI. Our Bertelsmann team will broaden their perspectives on this technology by teaming up with NYC Media Lab to work with this new cohort,” said Bertelsmann, Inc. Senior Director of Human Resources Freddie Helrich.

“Ensuring that the newest technologies are applied to the creative industries we have held near and dear to our hearts is a perfect example of why industry - Bertelsmann in this case - and academia should work hand in hand,” said Sayar Lonial, NYC Media Lab Interim Executive Director and Associate Dean for Communications & Public Affairs at NYU Tandon. “We are excited to work with Bertelsmann to see how AI can support communications in all forms.”
 

The AI & the Creative Industries Challenge Teams

Author AI from Abelana VR   

Mik Labanok, Denis Chernitsyn

Brooklyn, New York

Author AI is a tool for publishers, producers, and digital marketers to create interactive, virtual experiences based on their characters and authors. Author AI represents the lore of a property through virtual assistants immersed in a theme-based environment and connected to a variety of third-party resources.

Abelana VR is a developer and publisher of virtual applications for education, training, and other knowledge-driven content. Its main production is focused on online multiplayer experiences created with a VR-first approach and designed to fit across a wide range of ecosystems, including VR, AR, mobile, and web.
 

Smartplayr from SAOViVO & Axle.ai

Elisa Hecker, Emiliano Billi, Nicolas J. Russo, Sam Bogoch

Buenos Aires (Argentina) and Boston (USA)

Smartplayr repurposes existing media into live streams while keeping it current and relevant using AI.  They are leveraging AI in a number of ways to automate and optimize live streaming: Face detection for better screen composition and dynamic chyrons; Adaptive user interface to fit different aspect ratios; Scene detection to facilitate the reuse of pre-recorded content; Live transcription of breaking news to highlight important information. 

They are a multidisciplinary team composed of two companies whose tools are used by current newsrooms: SAOViVO, an open source software that turns video playlists into live streams, and Axle.ai, a powerful media asset manager (MAM) and publishing solution.

 

Theater of Latent Possibilities from Speculative Devices + Cohab Labs

Ash Eliza Smith, Ryan Schmaltz, Robert Twomey, Jinku Kim, Patrick Coleman

Lincoln, Nebraska

Theater of Latent Possibilities focuses on the construction of workflows for pre-production and performance for TV, film, and theater, utilizing generative AI with sound, visuals, and writing. Their “writer’s room” tool allows for worldbuilding and co-creation with generative AI. Their system surfaces unique moments, unexpected connections, and latent narratives present in input datasets. At runtime, they employ these generative techniques for real-time performance, creating live, immersive, participatory experiences that hinge on the improvisatory dynamics of human-machine co-authorship.

The team is composed of artists, writers, musicians, engineers, and business practitioners exploring the frontiers of worldbuilding, co-creation, and generative AI in media and performance. They have published and performed their work in a range of international venues spanning academic conferences, arts festivals, and research institutes.

 

Wavetable

Johann Diedrick, Sylvia Ke

New York City

Wavetable is an innovative web-based platform for sound and music production, offering a swift and efficient solution for professionals in the music, publishing, and film/TV industries. Using text-to-audio generative models, Wavetable empowers creators to articulate their sonic visions using natural language. It then transforms these descriptions into tangible audio outputs that can serve as preliminary placeholders before custom, polished audio content is crafted. Wavetable expedites the realization of creative concepts, substantially shortening project timelines, while also affording creators the freedom to develop audio content autonomously.

The team possesses a distinctive blend of industry experience spanning both the technical and creative realms. Their unique background enables them to pursue seamless integrations of AI-driven solutions into the established workflows and industry-standard products of the creative sectors. Their approach to product development is characterized by a design-first mindset, prioritizing the creation of AI tools that are not only powerful but also intuitive, versatile, and accessible.

 

Program Details

Teams will work with mentors from Bertelsmann’s music, book publishing, film and TV production, digital and investment arms. NYC Media Lab colleagues and academic partners will also provide direction and feedback to the teams. The Challenge will conclude in December with an internal Demo Day, where teams will demonstrate their project outcomes and discoveries.

 

About Bertelsmann

Bertelsmann is a media, services and education company that operates in about 50 countries around the world. It includes the entertainment group RTL Group, the trade book publisher Penguin Random House, the music company BMG, the service provider Arvato Group, Bertelsmann Marketing Services, the Bertelsmann Education Group and Bertelsmann Investments, an international network of funds. The company has 165,000 employees worldwide and generated revenues of €20.2 billion in the 2022 financial year. Bertelsmann stands for creativity and entrepreneurship. This combination promotes first-class media content and innovative service solutions that inspire customers around the world. Bertelsmann aspires to achieve climate neutrality by 2030.

 

About The NYC Media Lab

The NYC Media Lab connects media and technology companies with both NYU Tandon and industry affiliates to drive innovation, entrepreneurship and talent development. Our interdisciplinary community of innovators from industry and academia allows our network to gain valuable insights, explore the potential of emerging technology and address the challenges and opportunities created by the rapidly evolving digital media landscape. Learn more at engineering.nyu.edu/nyc-media-lab.

A deep-sea fish inspired researchers to develop supramolecular light-driven machinery


Peer-Reviewed Publication

TAMPERE UNIVERSITY

VIDEO: Disequilibration by sensitization under confinement (DESC). Video by Rafal Klajn, The Weizmann Institute of Science. Credit: Rafal Klajn




The vision system, evolved over millions of years, is highly complex. To make vision sensitive throughout the whole range of visible wavelengths, Nature employs a supramolecular chemistry approach. The visual pigment, cis-retinal, changes its shape upon capturing a photon. This shape transformation is accompanied by changes in the supramolecular organization of the surrounding proteins, subsequently triggering a cascade of chemical signaling events that get amplified and eventually lead to visual perception in the brain.

“Some deep-sea fish have evolved antenna-like molecules capable of absorbing photons in the red wavelength range, whose abundance at great depths is close to zero. After absorbing a photon, this antenna molecule transfers its energy to the nearby retinal molecule, thus inducing its conformational change from cis- to trans-retinal. In synthetic systems, such a process would enable using low-energy light for applications in, for instance, energy storage or controlled drug release,” explains the lead author of the work, Prof. Rafal Klajn from the Weizmann Institute of Science.

Inspired by this phenomenon, the researchers developed a superior supramolecular machine capable of efficiently converting widely used synthetic photoswitchable molecules – azobenzenes – from the stable to the metastable conformation with almost any wavelength of visible light. The approach uses a metal–organic cage filled with one azobenzene molecule and one light-absorbing antenna molecule, the sensitizer. In the close confinement inside the supramolecular cage, chemical processes that would not take place under normal conditions become possible.

“A common problem with azobenzenes is that they cannot efficiently undergo photoswitching from the stable trans form to the metastable cis form under low-energy red and near-infrared light; the process has to be driven by UV light instead. This substantially limits their applications in fields such as photocatalysis or photopharmacology. Now, using the supramolecular caging approach, we can reach almost quantitative trans-to-cis isomerization with any color of the visible range,” says Dr. Nikita Durandin, Academy of Finland Research Fellow in the Supramolecular Chemistry of Bio- and Nanomaterials group, who has been working with sensitization approaches at Tampere University for the last 7 years.

“Time-resolved spectroscopic studies done at Tampere University revealed that the photochemical processes triggering the isomerization happen superfast, in the nanosecond range. In other words, almost 1 billion times faster than the blink of your eyes,” continues Dr. Tero-Petri Ruoko, Marie Sklodowska-Curie Fellow in Smart Photonics Materials group, and expert in ultrafast spectroscopy.

“Once you shine light on this supramolecular cage, it quickly converts almost all of the trans isomers into cis isomers. Simple mixing of components and light that matches the absorption profile of the sensitizer is enough to make this machinery work,” he adds.

According to Prof. Arri Priimägi, the leader of Smart Photonics Materials group specializing in light-active materials, the study presents a new approach for activating photoresponsive molecules with low-energy light, pushing them out from their thermodynamic equilibrium utilizing chemistry that only takes place under confinement.

It took millions of years of evolution for the eye of deep-sea fish to emerge. Learning from that, the research led by Rafal Klajn’s group extended these concepts to synthetic materials in less than 5 years.

“We are already working on the next generation of the light-driven supramolecular machines, aiming at applying the developed methodologies in soft robotics and light-activated drug delivery systems,” concludes Priimägi.

The scientific article on the research “Disequilibrating azobenzenes by visible-light sensitization under confinement” has been published in the journal Science.


HKU State Key Laboratory of Brain and Cognitive Sciences provides a roadmap for unlocking the brain secrets of social media


Peer-Reviewed Publication

THE UNIVERSITY OF HONG KONG

IMAGE: Professor Christian Montag, Professor at Ulm University in Germany (left) and Professor Benjamin Becker, Professor of the Department of Psychology and Principal Investigator of the State Key Laboratory of Brain and Cognitive Sciences of the University of Hong Kong (right). Credit: The University of Hong Kong




With nearly 5 billion users worldwide spending an average of over two hours daily on platforms like TikTok, Instagram, and Facebook, the impact of social media on mental health and well-being has garnered increasing attention. Concerns about excessive and problematic usage, particularly among vulnerable adolescents, have led to discussions around terms such as 'brain hacking,' 'dopamine trigger,' and 'social media addiction.' However, there is limited scientific understanding of the relationship between social media and the brain.

Professor Benjamin Becker, from the Department of Psychology and State Key Laboratory of Brain and Cognitive Sciences at the University of Hong Kong, collaborated with Professor Christian Montag from Ulm University in Germany to assemble an international expert team. Together, they called for promoting neuroscientific research to determine social media's effects on the brain, aiming to provide evidence-based information for policy makers, public health initiatives, and users. Their call to action was published in Trends in Cognitive Sciences entitled ‘Unlocking the brain secrets of social media through neuroscience’.

The team noted that despite a growing number of studies on the adverse impacts of social media on mental health and well-being, current understanding remains patchy and critically limited by the reliance on self-reported measures; past studies have shown that people can exhibit subjective time distortions when estimating their time online.

During the last ten years, only a handful of studies have employed modern brain imaging technologies, i.e., Magnetic Resonance Imaging (MRI), to determine the impact of social media usage on the brain and studies in adolescents are scant. While these studies suggest that neural changes in motivational, affective and cognitive brain systems may mediate the detrimental impact of social media usage, interpretation of the findings remained strongly hampered by methodological shortcomings and the current findings do not allow a clear evaluation of the subject.

The researchers emphasized the need for evidence-based policy making, such as determining an appropriate age for platform access. They outline the following areas that are in urgent need of neuroscientific evidence:
1. Does excessive social media use share brain mechanisms of addiction?
2. Which emotional and motivational brain mechanisms keep users engaged while they are spending time on social media?
3. How does social media use affect the adolescent brain and are there particular vulnerable time windows in adolescent brain development for the effects?
4. Does social media act as a trigger of dopamine, a neurotransmitter in the brain related to pleasure and addiction?

Professor Becker concluded that “it is essential to support multidisciplinary research projects to determine the impact of social media on brain development and mental health in adolescents with the aim to develop brain-based strategies to strengthen resilience and improve the treatment of addictive behavior, psychosocial stress and depression in adolescents.”

Professor Montag added that “Social media has opened tremendous opportunities for communication, self-expression and social connection but social media should be redesigned to better protect and promote mental health and well-being”. This will ultimately require a better understanding of the brain mechanisms that keep users online and impact their well-being.

Link to the journal article: https://www.sciencedirect.com/science/article/abs/pii/S1364661323002528

Media enquiries:
Professor Benjamin Becker, State Key Laboratory of Brain & Cognitive Sciences (Email: bbecker@hku.hk)

 

HKU Engineering ‘Super Steel’ team develops new ultra stainless steel for hydrogen production


Peer-Reviewed Publication

THE UNIVERSITY OF HONG KONG

IMAGE: Professor Mingxin Huang and Dr Kaiping Yu. Credit: The University of Hong Kong




A research project led by Professor Mingxin Huang at the Department of Mechanical Engineering of the University of Hong Kong (HKU) has achieved a brand-new breakthrough over conventional stainless steel with the development of stainless steel for hydrogen (SS-H2).

This marks another major achievement by Professor Huang’s team in its ‘Super Steel’ Project, following the development of the anti-COVID-19 stainless steel in 2021, and ultra-strong and ultra-tough Super Steel in 2017 and 2020 respectively.

The new steel developed by the team exhibits high corrosion resistance, enabling its potential application in green hydrogen production from seawater, a field where a sustainable materials solution is still in the pipeline.

The performance of the new steel in a salt water electrolyser is comparable to the current industrial practice of using titanium as structural parts to produce hydrogen from desalted seawater or acid, while the new steel is much cheaper.

The discovery has been published in Materials Today in a paper titled “A sequential dual-passivation strategy for designing stainless steel used above water oxidation.” Patent applications for the research achievements have been filed in multiple countries, and two of them have already been granted.

Since its discovery a century ago, stainless steel has always been an important material widely used in corrosive environments. Chromium is an essential element in establishing the corrosion resistance of stainless steel. A passive film is generated through the oxidation of chromium (Cr) and protects stainless steel in natural environments. Unfortunately, this conventional single-passivation mechanism based on Cr has halted further advancement of stainless steel. Owing to the further oxidation of stable Cr2O3 into soluble Cr(VI) species, transpassive corrosion inevitably occurs in conventional stainless steel at ~1000 mV (saturated calomel electrode, SCE), which is below the potential required for water oxidation at ~1600 mV.

254SMO super stainless steel, for instance, is a benchmark among Cr-based anti-corrosion alloys and has superior pitting resistance in seawater; however, transpassive corrosion limits its application at higher potentials.

By using a “sequential dual-passivation” strategy, Professor Huang’s research team developed the novel SS-H2 with superior corrosion resistance. In addition to the single Cr2O3-based passive layer, a secondary Mn-based layer forms on the preceding Cr-based layer at ~720 mV. The sequential dual-passivation mechanism prevents the SS-H2 from corrosion in chloride media to an ultra-high potential of 1700 mV. The SS-H2 demonstrates a fundamental breakthrough over conventional stainless steel.

“Initially, we did not believe it because the prevailing view is that Mn impairs the corrosion resistance of stainless steel. Mn-based passivation is a counter-intuitive discovery, which cannot be explained by current knowledge in corrosion science.  However, when numerous atomic-level results were presented, we were convinced. Beyond being surprised, we cannot wait to exploit the mechanism,” said Dr Kaiping Yu, the first author of the article, whose PhD is supervised by Professor Huang.

From the initial discovery of the innovative stainless steel to achieving a breakthrough in scientific understanding, and ultimately preparing for the official publication and hopefully its industrial application, the team devoted nearly six years to the work.

“Different from the current corrosion community, which mainly focuses on resistance at natural potentials, we specialise in developing high-potential-resistant alloys. Our strategy overcame the fundamental limitation of conventional stainless steel and established a paradigm for alloy development applicable at high potentials. This breakthrough is exciting and brings new applications,” Professor Huang said.

At present, for water electrolysers in desalted seawater or acid solutions, expensive Au- or Pt-coated Ti is required for structural components. For instance, the total cost of a 10-megawatt PEM electrolysis tank system at its current stage is approximately HK$17.8 million, with the structural components contributing up to 53% of the overall expense. The breakthrough made by Professor Huang’s team makes it possible to replace these expensive structural components with more economical steel. The employment of SS-H2 is estimated to cut the cost of structural material by about 40 times, demonstrating great prospects for industrial application.
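The figures quoted above imply a rough cost picture. The HK$17.8 million total, the 53% structural share, and the 40-fold reduction are from the release; the rest is back-of-envelope arithmetic, not a published costing:

```python
# Back-of-envelope estimate from the figures quoted in the release.
total_cost_hkd = 17_800_000   # 10 MW PEM electrolysis tank system
structural_share = 0.53       # fraction attributed to structural components
reduction_factor = 40         # claimed saving from replacing Ti with SS-H2

structural_cost = total_cost_hkd * structural_share       # ~HK$9.4 million
structural_cost_ss_h2 = structural_cost / reduction_factor  # ~HK$0.24 million
```

On these assumptions, swapping the structural components alone would shave roughly half the system's total cost.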

“From experimental materials to real products, such as meshes and foams, for water electrolysers, there are still challenging tasks at hand. Currently, we have made a big step toward industrialisation. Tons of SS-H2-based wire has been produced in collaboration with a factory from the Mainland. We are moving forward in applying the more economical SS-H2 in hydrogen production from renewable sources,” added Professor Huang.

Link to the paper:
https://www.sciencedirect.com/science/article/abs/pii/S1369702123002390

Please click here for a short video showing how the new stainless steel produces hydrogen in salt water.


Ultra stainless steel