Saturday, November 18, 2023

New deep learning AI tool helps ecologists monitor rare birds through their songs


Peer-Reviewed Publication

BRITISH ECOLOGICAL SOCIETY

Image: Dunlin spectrogram. Credit: Nicolas Lecomte




Researchers have developed a new deep learning AI tool that generates life-like birdsongs to train bird identification tools, helping ecologists to monitor rare species in the wild. The findings are presented in the British Ecological Society journal, Methods in Ecology and Evolution.

Identifying common bird species through their song has never been easier, with numerous phone apps and software available to both ecologists and the public. But what if the identification software has never heard a particular bird before, or only has a small sample of recordings to reference? This is a problem facing ecologists and conservationists monitoring some of the world’s rarest birds.

To overcome this problem, researchers at the University of Moncton, Canada, have developed ECOGEN, a first-of-its-kind deep learning tool that can generate lifelike bird sounds to enhance the samples of underrepresented species. These can then be used to train audio identification tools used in ecological monitoring, which often have disproportionately more information on common species.

The researchers found that adding artificial birdsong samples generated by ECOGEN to the training data of a birdsong identifier improved birdsong classification accuracy by 12% on average.

Dr Nicolas Lecomte, one of the lead researchers, said: “Due to significant global changes in animal populations, there is an urgent need for automated tools, such as acoustic monitoring, to track shifts in biodiversity. However, the AI models used to identify species in acoustic monitoring lack comprehensive reference libraries.

“With ECOGEN, you can address this gap by creating new instances of bird sounds to support AI models. Essentially, for species with limited wild recordings, such as those that are rare, elusive, or sensitive, you can expand your sound library without further disrupting the animals or conducting additional fieldwork.”

The researchers say that creating synthetic bird songs in this way can contribute to the conservation of endangered bird species and also provide valuable insight into their vocalisations, behaviours and habitat preferences.

The ECOGEN tool has other potential applications. For instance, it could be used to help conserve extremely rare species, such as the critically endangered regent honeyeater, whose young are unable to learn their species’ song because there aren’t enough adult birds to learn from.

The tool could benefit other types of animal as well. Dr Lecomte added: “While ECOGEN was developed for birds, we’re confident that it could be applied to mammals, fish (yes they can produce sounds!), insects and amphibians.”

Alongside its versatility, a key advantage of the ECOGEN tool is its accessibility: it is open source and can be run on even basic computers.

ECOGEN works by converting real recordings of bird songs into spectrograms (visual representations of sounds) and then generating new AI images from these to increase the dataset for rare species with few recordings. These spectrograms are then converted back into audio to train bird sound identifiers. In this study the researchers used a dataset of 23,784 wild bird recordings from around the world, covering 264 species.
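The release doesn’t include implementation details, but the spectrogram round trip it describes can be sketched in a few lines of Python. The sketch below is a minimal illustration assuming the librosa library; the np.roll call stands in for ECOGEN’s actual generative model, and the file names are hypothetical.

```python
import numpy as np
import librosa
import soundfile as sf

# Load a real recording of the under-represented species (hypothetical file name).
y, sr = librosa.load("rare_species_call.wav", sr=22050)

# Convert the waveform into a magnitude spectrogram (a visual representation of the sound).
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

# Stand-in for ECOGEN's generative step: here we simply shift the spectrogram in time.
# The real tool generates new spectrogram "images" with a deep learning model.
S_new = np.roll(S, shift=5, axis=1)

# Convert the new spectrogram back into audio with Griffin-Lim phase reconstruction,
# producing a synthetic sample that can be added to the identifier's training set.
y_new = librosa.griffinlim(S_new, n_fft=1024, hop_length=256)
sf.write("synthetic_call.wav", y_new, sr)
```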

-ENDS-

Image: AudioMoth acoustic monitoring box, used by ecologists to record wild animals. Credit: Nicolas Lecomte

Image: Long-tailed Jaeger. Credit: Nicolas Lecomte


HKU develops novel ‘AI virtual patients’ diagnostic application, breaking spatial and geographical barriers for medical training and revolutionizing global medical education exchanges


Business Announcement

THE UNIVERSITY OF HONG KONG

Image: Dr Michael Co Tiong-hong (middle) and Dr John Yuen Tsz-hon (second right) with their students, showcasing the novel ‘AI virtual patients’ diagnostic application. Credit: The University of Hong Kong




With the rapid development and extensive applications of generative artificial intelligence (AI) technology across various sectors, Dr Michael Co Tiong-hong from the LKS Faculty of Medicine, the University of Hong Kong (HKUMed), and Dr John Yuen Tsz-hon from the Department of Computer Science, HKU, have jointly developed Hong Kong’s first ‘AI virtual patients’ diagnostic application for training medical students. Leveraging generative AI technology and real-life surgical cases, the research team has designed ‘humanised’ AI virtual patients with distinct personalities and medical histories, which allow medical students to virtually simulate interactions with patients during bedside consultations. This initiative greatly enhances the students’ professional skills and ability to accurately gather patients’ medical history.

To provide students with a more diverse range of clinical learning opportunities, HKUMed collaborated with the National University of Singapore (NUS) to introduce cross-regional medical cases in the diagnostic app. This revolutionary approach has redefined traditional medical teaching methods. Looking ahead, HKUMed also plans to collaborate with other overseas medical schools.

About the ‘AI virtual patients’ diagnostic application
The virtual mode of clinical teaching provides personalised patient cases tailored to the specific needs of individual medical students. In 2020, Dr Co and Dr Yuen initiated the development of an AI chatbot to help HKUMed students who could not attend hospital-based classes amid the pandemic. In 2021, a system prototype was available for trial with a selected group of HKUMed students. Teachers could design virtual patients suited to each student’s diagnostic skill level. Students would compile the medical records for case discussions and analysis with their teachers. In 2022, the outcomes of this innovative teaching mode were published in an internationally renowned journal (link to the publication).

Through continuous research and improvement, the HKU team developed Hong Kong’s first ‘AI virtual patient’ diagnostic application. Integrated with generative AI technology, the latest model of the chatbot goes beyond standardised and monotonous replies, providing highly dynamic and lively responses. Even for the same medical case, the ‘AI virtual patient’ is capable of providing distinct responses, interacting with students in a remarkably human-like and personality-driven manner.
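The release does not say which generative model or API underpins the chatbot. Purely as an illustration of the general pattern of an LLM-backed virtual patient, the sketch below assumes the OpenAI Python SDK and an invented persona; the model name and case details are placeholders, not HKU’s actual system.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment; the HKU backend is not disclosed

# Hypothetical patient persona; in the real app, teachers author cases from anonymised surgical cases.
PERSONA = (
    "You are a 58-year-old retired teacher with a two-month history of painless jaundice "
    "and weight loss. Answer only what the student asks, stay in character, sound mildly "
    "anxious, and never state the diagnosis yourself."
)

def virtual_patient_reply(conversation: list[dict]) -> str:
    """Return the virtual patient's next utterance given the chat history so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # placeholder model name
        messages=[{"role": "system", "content": PERSONA}, *conversation],
        temperature=0.9,              # higher temperature gives varied, non-monotonous replies
    )
    return response.choices[0].message.content

print(virtual_patient_reply([{"role": "user", "content": "Good morning, what brings you in today?"}]))
```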

Significance and Impact
This innovative virtual clinical teaching mode provides personalised teaching cases with equal access for all students, and addresses the limitations of the traditional teaching mode. Dr Co explained, ‘Traditional clinical teaching relies heavily on in-person interaction with real patients. But for various reasons, like scheduling difficulties, not all medical students have equal opportunities to engage in face-to-face consultations. The “AI virtual patients” app allows us to overcome time and geographical barriers, offering our students access to practice with rare cases and providing them with invaluable clinical experience. Through a virtual learning environment, equipped with a wide range of diverse patient cases, medical students can enhance their patient history-taking skills and improve the accuracy of their diagnoses.’

Dr John Yuen Tsz-hon, from the Department of Computer Science, HKU, said, ‘The “AI virtual patients” app has the capacity to accumulate information, resulting in each response it generates having a slight variation in tone and wording. This enables more authentic interactions between doctors and “patients”. Additionally, teachers can utilise the data collected by the system to conduct in-depth analysis and assessment of students’ performance, which allows them to provide specific feedback and guidance to individual students, ultimately enhancing the efficiency of clinical teaching.’

Virtual clinical teaching can remove spatial and geographical barriers, fostering international exchange in medical education. In early October this year, Dr Co collaborated with Dr Serene Goh, a specialist surgeon from the National University of Singapore, to launch the world's first cross-regional virtual clinical teaching programme. The two doctors devised distinct patient cases for students in their respective locations to practise consultations utilising the ‘AI virtual patients’ app. Through online case discussions, the medical students jointly analysed patients’ imaging studies, endoscopic images and pathological slides.

‘Collaboration and exchange with medical schools in other regions will enable medical students to learn from each other's strengths, broaden their horizons and knowledge, and promote international cooperation and development in medical education. This will set the foundation for boundless educational innovations in the future,’ Dr Co added.

The cross-regional virtual clinical teaching collaboration between Hong Kong and Singapore has set a remarkable precedent for international medical teaching. The Department of Surgery at the University of Edinburgh's Western General Hospital has expressed interest in joining future endeavours in virtual surgical clinical teaching.

Media enquiries
Please contact LKS Faculty of Medicine of The University of Hong Kong by email (medmedia@hku.hk).


AI supporting creative industries


NYC Media Lab at NYU Tandon School of Engineering and Bertelsmann partner on the Creative Industries and AI Challenge, focusing on books, music, film and television


Business Announcement

NYU TANDON SCHOOL OF ENGINEERING



NYC Media Lab (NYCML) and Bertelsmann unveiled the latest cohort joining the AI & the Creative Industries Challenge, a nine-week program in which teams explore new ways to use artificial intelligence (AI) to create digital content and reach new audiences for three Bertelsmann companies: Fremantle, Penguin Random House, and BMG. The teams are tasked with addressing how AI will impact these important creative industries. 

This ongoing partnership, NYCML’s third project with Bertelsmann, will continue to build on new business frontiers enabled by technology. The four selected teams, from around the globe, come from various multidisciplinary backgrounds. 

“Bertelsmann is deeply involved in experimenting with AI. Our Bertelsmann team will broaden their perspectives on this technology by teaming up with NYC Media Lab to work with this new cohort,” said Bertelsmann, Inc. Senior Director of Human Resources Freddie Helrich.

“Ensuring that the newest technologies are applied to the creative industries we have held near and dear to our hearts is a perfect example of why industry - Bertelsmann in this case - and academia should work hand in hand,” said Sayar Lonial, NYC Media Lab Interim Executive Director and Associate Dean for Communications & Public Affairs at NYU Tandon. “We are excited to work with Bertelsmann to see how AI can support communications in all forms.”
 

The AI & the Creative Industries Challenge Teams

Author AI from Abelana VR   

Mik Labanok, Denis Chernitsyn

Brooklyn, New York

Author AI is a tool for publishers, producers, and digital marketers to create interactive virtual experiences based on their characters and authors. Author AI represents the lore of a property through virtual assistants immersed in a theme-based environment and connected to a variety of third-party resources. 

Abelana VR is a developer and publisher of virtual applications for education, training, and other knowledge-driven content. Its main production is focused on online multiplayer experiences created with a VR-first approach and designed to fit across a wide range of ecosystems, including VR, AR, mobile, and web.
 

Smartplayr from SAOViVO & Axle.ai

Elisa Hecker, Emiliano Billi, Nicolas J. Russo, Sam Bogoch

Buenos Aires (Argentina) and Boston (USA)

Smartplayr uses AI to repurpose existing media into live streams while keeping it current and relevant. The team is leveraging AI in a number of ways to automate and optimize live streaming: face detection for better screen composition and dynamic chyrons; an adaptive user interface to fit different aspect ratios; scene detection to facilitate the reuse of pre-recorded content; and live transcription of breaking news to highlight important information. 
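As an illustration of the first of those items, face detection for screen composition, here is a minimal sketch using OpenCV’s stock Haar cascade. Smartplayr’s actual detector and cropping logic are not described in the announcement, so the detector choice and the crop rule below are assumptions.

```python
import cv2

# Stock frontal-face detector shipped with OpenCV (not necessarily what Smartplayr uses).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_centred_crop(frame, target_aspect=9 / 16):
    """Crop a landscape frame to a portrait aspect ratio, centred on the largest detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    h, w = frame.shape[:2]
    crop_w = int(h * target_aspect)
    if len(faces) == 0:
        x0 = (w - crop_w) // 2                               # no face found: fall back to a centre crop
    else:
        x, _, fw, _ = max(faces, key=lambda f: f[2] * f[3])  # pick the largest face by area
        x0 = min(max(x + fw // 2 - crop_w // 2, 0), w - crop_w)
    return frame[:, x0:x0 + crop_w]
```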

They are a multidisciplinary team drawn from two companies whose tools are used in newsrooms today: SAOViVO, open-source software that turns a video playlist into a live stream, and Axle.ai, a powerful media asset manager (MAM) and publishing solution. 

 

Theater of Latent Possibilities from Speculative Devices + Cohab Labs

Ash Eliza Smith, Ryan Schmaltz, Robert Twomey, Jinku Kim, Patrick Coleman

Lincoln, Nebraska

Theater of Latent Possibilities focuses on the construction of workflows for pre-production and performance for TV, film, and theater, utilizing generative AI with sound, visuals, and writing. Their “writer’s room” tool allows for worldbuilding and co-creation with generative AI. Their system surfaces unique moments, unexpected connections, and latent narratives present in input datasets. At runtime, they employ these generative techniques for real-time performance, creating live, immersive, participatory experiences that hinge on the improvisatory dynamics of human-machine co-authorship.

The team is composed of artists, writers, musicians, engineers, and business practitioners exploring the frontiers of worldbuilding, co-creation, and generative AI in media and performance. They have published and performed their work in a range of international venues spanning academic conferences, arts festivals, and research institutes. 

 

Wavetable

Johann Diedrick, Sylvia Ke

New York City

Wavetable is an innovative web-based platform for sound and music production, offering a swift and efficient solution for professionals in the music, publishing, and film/TV industries. Using text-to-audio generative models, Wavetable empowers creators to articulate their sonic visions using natural language. It then transforms these descriptions into tangible audio outputs that can serve as preliminary placeholders before custom, polished audio content is crafted. Wavetable expedites the realization of creative concepts, substantially shortening project timelines, while also affording creators the freedom to develop audio content autonomously.
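Wavetable’s own models are not described in the announcement. Purely to illustrate the text-to-audio idea, the sketch below uses AudioLDM, an open text-to-audio diffusion model available through the diffusers library; the model choice, prompt, and file name are assumptions, not Wavetable’s product.

```python
import torch
import soundfile as sf
from diffusers import AudioLDMPipeline

# Open text-to-audio model, used here only for illustration; not Wavetable's own model.
pipe = AudioLDMPipeline.from_pretrained(
    "cvssp/audioldm-s-full-v2", torch_dtype=torch.float16
).to("cuda")

# A natural-language description of the desired sound.
prompt = "soft rain on a tin roof with distant thunder, cinematic ambience"
audio = pipe(prompt, num_inference_steps=25, audio_length_in_s=5.0).audios[0]

# Save a rough placeholder clip that an editor could later replace with polished audio.
sf.write("placeholder_ambience.wav", audio, samplerate=16000)
```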

The team possesses a distinctive blend of industry experience spanning both the technical and creative realms. Their unique background enables them to pursue seamless integrations of AI-driven solutions into the established workflows and industry-standard products of the creative sectors. Their approach to product development is characterized by a design-first mindset, prioritizing AI tools that are not only powerful but also intuitive, versatile, and accessible.

 

Program Details

Teams will work with mentors from Bertelsmann’s music, book publishing, film and TV production, digital and investment arms. NYC Media Lab colleagues and academic partners will also provide direction and feedback to the teams. The Challenge will conclude in December with an internal Demo Day, where teams will demonstrate their project outcomes and discoveries.

 

About Bertelsmann

Bertelsmann is a media, services and education company that operates in about 50 countries around the world. It includes the entertainment group RTL Group, the trade book publisher Penguin Random House, the music company BMG, the service provider Arvato Group, Bertelsmann Marketing Services, the Bertelsmann Education Group and Bertelsmann Investments, an international network of funds. The company has 165,000 employees worldwide and generated revenues of €20.2 billion in the 2022 financial year. Bertelsmann stands for creativity and entrepreneurship. This combination promotes first-class media content and innovative service solutions that inspire customers around the world. Bertelsmann aspires to achieve climate neutrality by 2030.

 

About The NYC Media Lab

The NYC Media Lab connects media and technology companies with both NYU Tandon and industry affiliates to drive innovation, entrepreneurship and talent development. Our interdisciplinary community of innovators from industry and academia allows our network to gain valuable insights, explore the potential of emerging technology and address the challenges and opportunities created by the rapidly evolving digital media landscape. Learn more at engineering.nyu.edu/nyc-media-lab.
