ETRI breathes digital life into cultural heritage
ETRI is leading cooperation with the National Museum of Korea on the digital transformation of world-class cultural heritage, developing and demonstrating an intelligent platform that combines AI with cultural heritage digitization.
NATIONAL RESEARCH COUNCIL OF SCIENCE & TECHNOLOGY
South Korean researchers are revitalizing the nation's world-class cultural heritage through digital transformation. By collaborating with museums, they are bringing Korea's rich history and culture to life using AI-based technologies.
Since 2020, the Electronics and Telecommunications Research Institute (ETRI) and the National Museum of Korea have been working together under a Ministry of Culture, Sports and Tourism R&D project to develop and demonstrate key technologies for the digital transformation of Korean cultural heritage.
The two institutions have been applying AI to enhance the quality and usability of museum data, while promoting research on foundational technologies and developing an intelligent heritage platform that manages and utilizes new types of data across various environments and purposes.
ETRI has focused on AI-based data analysis and standardization of cultural heritage. Notable efforts include:
- Data fabric-based archives: technology that connects and provides access to relevant data anytime, anywhere, based on AI (see the sketch after this list)
- AI-based cultural heritage analysis: technology for analyzing cultural heritage data and automatically generating metadata
- Digital heritage standards: standards for the comprehensive utilization of growing volumes of digital cultural heritage data
- Generative AI-based data expansion: technology supporting the generation of the resolutions, qualities, and styles required by different content and devices
- Sharing platforms for various demands: platform technology that can support purposes such as preservation, exhibition, education, and management
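To make the data-fabric idea concrete, the following is a minimal sketch of a unified catalog that registers heterogeneous heritage assets and answers queries across them. The class names, fields, and storage URIs are hypothetical illustrations, not ETRI's actual platform.

```python
# Minimal sketch of a data-fabric-style catalog: heterogeneous digital heritage
# assets (scans, photos, documents) are registered once with metadata, then
# discoverable through a single query interface. All names, fields, and URIs
# here are hypothetical illustrations, not ETRI's platform.
from dataclasses import dataclass, field

@dataclass
class HeritageAsset:
    asset_id: str
    title: str
    media_type: str              # e.g. "3d-scan", "photo", "document"
    storage_uri: str             # where the asset actually lives
    tags: set[str] = field(default_factory=set)

class HeritageCatalog:
    def __init__(self):
        self._assets: dict[str, HeritageAsset] = {}

    def register(self, asset: HeritageAsset) -> None:
        self._assets[asset.asset_id] = asset

    def search(self, tag: str) -> list[HeritageAsset]:
        """Answer a query across all connected sources by shared metadata."""
        return [a for a in self._assets.values() if tag in a.tags]

catalog = HeritageCatalog()
catalog.register(HeritageAsset("nt-083", "Pensive Bodhisattva", "3d-scan",
                               "s3://museum/scans/nt-083.ply",
                               {"sculpture", "national-treasure"}))
catalog.register(HeritageAsset("p-112", "Pyeongsaengdo", "photo",
                               "s3://museum/images/p-112.tiff", {"painting"}))
print([a.title for a in catalog.search("national-treasure")])
```

In a real data fabric, the catalog would federate live storage systems and use AI to enrich and link the metadata; this sketch only shows the single-point-of-access pattern.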
Using these technologies, the research team has been striving to create an intelligent digital heritage sharing platform to support:
- Museum artifact management
- Cultural heritage preservation research
- Immersive content creation
- Interactive cultural heritage education
The digital heritage sharing platform under development uses AI-based data fabric technology, enabling immediate use of the diverse and continuously growing forms of digital cultural heritage data held by museums, with the aim of world-class usability.
Globally, attempts to enhance the value and usability of cultural heritage by integrating digital and AI technologies are increasing. However, museums often find it difficult to adopt a practical platform that integrates these technologies, given the complexity and specific needs of digital heritage.
Around the world, experts in cultural heritage and digital technology are collaborating, yet significant trial and error occurs due to differences in experience and knowledge, along with issues related to data structure, usability limitations, and conflicts with museum processes.
In this context, developing and applying the new platform to the work of the National Museum of Korea staff is considered crucial in advancing South Korea's leadership in the digital transformation of cultural heritage.
Moreover, a digital standard process for storing and utilizing the high-quality digital cultural heritage data generated and used by new technologies each year is being completed in collaboration with museums, a world-first achievement.
This digital heritage standard process guarantees the availability of data not only for existing cultural heritage applications but also for newer ones such as virtual reality, digital twins, and the metaverse. The transformation allows museums to become proprietors of their data, adapting it into various forms.
Data produced through the standard process creates a foundation for cultural heritage content of the highest level, usable in exhibitions, preservation, education, and more. In the digital age, strategic digital incursions and unfounded claims from countries seeking to advance their own interests can be expected. At such times, it is crucial for nations to secure the digital scalability of their cultural heritage, which makes the completion of a comprehensive digital standards process especially significant.
Through four years of joint effort, ETRI and the National Museum of Korea have established a high-quality digitalization process for cultural heritage and are sharing and spreading it to affiliated research institutes, related industries, and academic institutions, including the Technology Research Institute for Culture & Heritage, LiST Co., Ltd., Chung-Ang University, and the Korea National University of Cultural Heritage.
Using the digital standard technology developed last year, ETRI, the National Museum of Korea, and the Technology Research Institute for Culture & Heritage created digital content of the National Treasure “Pensive Bodhisattva” at the Millennium Hall of Incheon International Airport Terminal 1. The work won the public branding category of Germany's iF Design Award, one of the world's top three design awards, last year.
Additionally, the National Museum of Korea's “Pyeongsaengdo” content, which presents world-class, high-quality cultural heritage content, won a Red Dot Award last year.
The research team has utilized ultra-high-resolution digital asset data for the digital “Gwanggaeto Stele” content in the main lobby of the National Museum of Korea, “The Path of History.” In addition, the Korea Heritage Service and the Cleveland Museum of Art in the U.S. co-exhibited immersive cultural heritage content based on the “Chilbo Sando Folding Screen,” showcasing the world's top-level digitalization of cultural heritage built on the developed technology.
The foundation of these achievements lies in the following:
- Improving the quality of cultural heritage digital data
- Developing technology for visualizing cultural heritage networks
- Developing text-mining technology for generating knowledge-based cultural heritage relationships
This includes research on developing AI technologies specialized for cultural heritage and studies on the creation and utilization of cultural heritage assets.
The collaboration model between the two institutions addresses digital cultural heritage data utilization and field issues in areas like preservation, exhibition, education, archives, and open storage.
Complete digital transformation will be achieved when a platform is built that allows broad searchability and easy utilization and sharing of cultural heritage information and data within people's lives.
ETRI, together with the National Museum of Korea, has advanced numerous cutting-edge studies, including:
- Cultural property database modeling
- AI-based automatic digital conversion of traditional cultural heritage data
- Standardization research on ultra-high-resolution digital cultural heritage assets
Tae-hee Lee, a researcher at the National Museum of Korea, said, “We expect that the long-term collaboration between these two leading institutions in cultural heritage and advanced technology will set the stage for developing AI technology and application models usable in the specialized field of Korean cultural heritage.”
Jae-Ho Lee, the head researcher at ETRI's Content Convergence Research Section, added, “The numerous digital projects on cultural heritage data at the National Museum of Korea can be considered the starting point of South Korea's digital transformation. Both institutions have prepared for the digitalization of heritage-related information, such as descriptions of each cultural property, related materials, and relationships with other heritage items.”
ETRI explained that their new challenge this year in the data fabric field is innovative and promising, offering a positive opportunity to secure international technological competitiveness in the era of digital transformation.
Digital Pyeongsaengdo - 2023 Red Dot Award Winner
Joint exhibition and digital repatriation of “Mountain Chilbo Screen” owned by the Cleveland Museum of Art
CREDIT
Electronics and Telecommunications Research Institute (ETRI)
###
These results were achieved as part of the Ministry of Culture, Sports and Tourism's project, “Development of Intelligent Heritage Sharing Platform Technology Leading Digital Standards for Cultural Heritage.”
About Electronics and Telecommunications Research Institute (ETRI)
ETRI is a non-profit, government-funded research institute. Since its foundation in 1976, ETRI, a global ICT research institute, has made immense efforts to drive Korea's remarkable growth in the ICT industry. ETRI has helped establish Korea as one of the world's top ICT nations by unceasingly developing world-first and world-best technologies.
Revolutionizing the abilities of adaptive radar with AI
AI approaches and an enormous open-source dataset could spark rapid advancements in adaptive radar systems similar to those seen in computer vision over the past two decades.
DUKE UNIVERSITY
DURHAM, N.C. – The world around us is constantly being flash photographed by adaptive radar systems. From salt flats to mountains and everything in between, adaptive radar is used to detect, locate and track moving objects. Just because human eyes can’t see these ultra-high frequency (UHF) ranges doesn’t mean they’re not taking pictures.
Although adaptive radar systems have been around since World War II, they’ve hit a fundamental performance wall in the past couple of decades. But with the help of modern AI approaches and lessons learned from computer vision, researchers at Duke University have broken through that wall, and they want to bring everyone else in the field along with them.
In a new paper published July 16 in the journal IET Radar, Sonar & Navigation, Duke engineers show that using convolutional neural networks (CNNs) — a type of AI that revolutionized computer vision — can greatly enhance modern adaptive radar systems. And in a move that parallels the impetus of the computer vision boom, they have released a large dataset of digital landscapes for other AI researchers to build on their work.
“Classical radar methods are very good, but they aren’t good enough to meet industry demands for products such as autonomous vehicles,” said Shyam Venkatasubramanian, a graduate research assistant working in the lab of Vahid Tarokh, the Rhodes Family Professor of Electrical and Computer Engineering at Duke. “We’re working to bring AI into the adaptive radar space to tackle problems like object detection, localization and tracking that industry needs solved.”
At its most basic level, radar is not difficult to understand. A pulse of high-frequency radio waves is broadcast, and an antenna gathers data from any waves that bounce back. As technology has advanced, however, so too have the concepts used by modern radar systems. With the ability to shape and direct signals, process multiple contacts at once, and filter out background noise, the technology has come a long way in the past century.
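As a back-of-the-envelope illustration of that basic principle (not taken from the paper), the round-trip delay of an echo converts directly to range:

```python
# Minimal illustration of the basic radar principle described above:
# a pulse's round-trip delay gives the target's range.
C = 3.0e8  # speed of light, m/s

def echo_delay_to_range(delay_s: float) -> float:
    """Range in meters; the pulse travels out and back, hence the division by 2."""
    return C * delay_s / 2

print(echo_delay_to_range(66.7e-6))  # ~10 km for a 66.7-microsecond echo
```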
But radar has come just about as far as it can using these techniques alone. Adaptive radar systems still struggle to accurately localize and track moving objects, especially in complex environments like mountainous terrain.
To move adaptive radar into the age of AI, Venkatasubramanian and Tarokh were inspired by the history of computer vision. In 2010, researchers at Stanford University released an enormous image database consisting of over 14 million annotated images called ImageNet. Researchers around the world used ImageNet to test and compare new AI approaches that became industry standard.
In the new paper, Venkatasubramanian and his collaborators show that using the same AI approaches greatly improves the performance of current adaptive radar systems.
“Our research parallels the research of the earliest users of AI in computer vision and the creators of ImageNet, but within adaptive radar,” Venkatasubramanian said. “Our proposed AI takes as input processed radar data and outputs a prediction of the target's location through a simple architecture that can be thought of as paralleling the predecessor of most modern computer vision architectures.”
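As a rough sketch of that idea (an illustrative stand-in, not the architecture published in the paper), a convolutional network for this task can take a processed radar heatmap as input and regress the target's coordinates; the input size, layer widths, and PyTorch usage below are assumptions for demonstration.

```python
# Illustrative sketch: a small CNN that regresses a target's (x, y) location
# from a processed radar heatmap. Shapes and layers are assumptions for
# demonstration, not the Duke team's published architecture.
import torch
import torch.nn as nn

class RadarLocalizerCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # heatmap -> feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),  # assumes 64x64 input heatmaps
            nn.ReLU(),
            nn.Linear(128, 2),             # predicted (x, y) target location
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = RadarLocalizerCNN()
heatmap = torch.randn(8, 1, 64, 64)   # batch of processed radar data
predicted_xy = model(heatmap)          # shape: (8, 2)
loss = nn.MSELoss()(predicted_xy, torch.zeros(8, 2))  # train against known locations
```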
While the group has yet to test their methods in the field, they benchmarked their AI’s performance on a modeling and simulation tool called RFView®, which gains its accuracy by incorporating the Earth's topography and terrain into its modeling toolbox. Then, following in the footsteps of computer vision, they created 100 airborne radar scenarios based on landscapes from across the contiguous United States and released the collection as an open-source asset called “RASPNet.”
This is a valuable asset, as only a handful of teams have access to RFView®. The researchers, however, received special permission from the creators of RFView® to build the dataset — which contains more than 16 terabytes of data built over the course of several months — and make it publicly available.
“I am delighted that this groundbreaking work has been published, and particularly that the associated data is being made available in the RASPNet repository,” said Hugh Griffiths, Fellow Royal Academy of Engineering, Fellow IEEE, Fellow IET, OBE, and the THALES/Royal Academy Chair of RF Sensors at University College London, who was not involved with the work. “This will undoubtedly stimulate further work in this important area, and ensure that the results can readily be compared with each other.”
The scenarios included were handpicked by radar and machine learning experts and have a wide range of geographical complexity. On the easiest side for adaptive radar systems to handle is the Bonneville Salt Flats, while the hardest is Mount Rainier. Venkatasubramanian and his group hope that others will take their ideas and dataset and build even better AI approaches.
For example, in a previous paper, Venkatasubramanian showed that an AI tailored to a specific geographical location could achieve up to a seven-fold improvement over classical methods in localizing objects. If an AI could select a scenario it had already been trained on that is similar to its current environment, its performance should improve substantially.
“We think this will have a really big impact on the adaptive radar community,” Venkatasubramanian said. “As we move forward and continue adding capabilities to the dataset, we want to provide the community with everything it needs to push the field forward into using AI.”
This work was supported by the Air Force Office of Scientific Research (FA9550-21-1-0235, 20RYCORO51, 20RYCOR052).
CITATIONS: “Data-Driven Target Localization Using Adaptive Radar Processing and Convolutional Neural Networks,” Shyam Venkatasubramanian, Sandeep Gogineni, Bosung Kang, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh. IET Radar, Sonar & Navigation, July 16, 2024. DOI: 10.1049/rsn2.12600
“RASPNet: A Benchmark Dataset for Radar Adaptive Signal Processing Applications,” Shyam Venkatasubramanian, Bosung Kang, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh. arXiv preprint arXiv:2406.09638
# # #
JOURNAL
IET Radar, Sonar & Navigation
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Data-Driven Target Localization Using Adaptive Radar Processing and Convolutional Neural Networks
ARTICLE PUBLICATION DATE
16-Jul-2024
Texas A&M engineers explore intelligence augmentation to improve safety
Researchers are developing a framework to merge AI and human intelligence in hopes of improving process safety systems
Artificial intelligence (AI) has grown rapidly in the last few years, and with that growth, industries have been able to automate their operations and improve efficiency.
A feature article published in AIChE Journal identifies the challenges and benefits of using Intelligence Augmentation (IA) in process safety systems.
Contributors to this work are Dr. Faisal Khan, professor and chemical engineering department head at Texas A&M University; Dr. Stratos Pistikopoulos, professor and director of the Energy Institute; and Drs. Rajeevan Arunthavanathan, Tanjin Amin, and Zaman Sajid from the Mary Kay O’Connor Process Safety Center.
Additionally, Dr. Yuhe Tian from West Virginia University contributed the novel perspective of using AI in process plants from a safety perspective.
The basis of the research is to use AI in process safety alongside humans, rather than replacing them in operational decision-making, according to Khan.
“This research aims to develop a comprehensive framework based on IA that integrates AI and Human intelligence (HI) into process safety systems, ensuring enhanced safety and efficiency,” Arunthavanathan said. “We aim to provide a clear understanding of the potential and limitations of AI, propose IA strategies for their effective implementation to minimize risks and improve safety outcomes.”
Helping Humans, Not Replacing Them
Khan believes that AI and human intelligence can be combined, dispelling the fear that AI may eventually replace humans as it advances in its ability to perform tasks.
“The study examines the challenges in incorporating AI technology in real-world industrial applications and how IA can improve process monitoring, fault detection, and decision-making to improve process safety,” Amin said.
Khan contends that AI will improve safety by analyzing real-time data, predicting maintenance needs, and automatically detecting faults. However, the IA approach, using human decision-making, is also expected to reduce incident rates, lower operational costs, and increase reliability.
“The application of AI in chemical engineering presents significant challenges, which means it is not enough to ensure comprehensive process safety,” Sajid said. “To overcome these limitations, IA is introduced to work alongside human expertise rather than replace it.”
The research identifies several risks associated with implementing AI and IA in process industries. AI risks include data quality issues, overreliance on AI, lack of contextual understanding, model misinterpretation, and training and adaptation challenges. On the other hand, the risks associated with IA include human error in feedback, conflict in AI-HI decision-making, biased judgment, complexity in implementation, and reliability issues.
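To make the division of labor concrete, one common way to structure an IA loop (a hypothetical sketch, not the framework proposed in the article) is to let the AI act autonomously only on high-confidence detections and route uncertain cases to a human operator:

```python
# Illustrative sketch of an intelligence-augmentation (IA) loop: an AI fault
# detector handles high-confidence cases automatically and defers uncertain
# ones to a human operator. Thresholds, names, and logic are assumptions,
# not the framework from the AIChE Journal article.
from dataclasses import dataclass

@dataclass
class Alarm:
    sensor_id: str
    fault_probability: float  # AI model's confidence that a fault is present

AUTO_ACTION_THRESHOLD = 0.95   # act automatically above this confidence
HUMAN_REVIEW_THRESHOLD = 0.60  # below this, treat as normal operation

def route_alarm(alarm: Alarm) -> str:
    """Decide whether the AI acts alone or defers to human judgment."""
    if alarm.fault_probability >= AUTO_ACTION_THRESHOLD:
        return "automatic shutdown sequence"   # AI strength: fast, data-driven
    if alarm.fault_probability >= HUMAN_REVIEW_THRESHOLD:
        return "escalate to operator"          # HI strength: context, ethics
    return "log and continue monitoring"

for alarm in [Alarm("P-101", 0.98), Alarm("T-204", 0.72), Alarm("F-330", 0.10)]:
    print(alarm.sensor_id, "->", route_alarm(alarm))
```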
“The researchers are particularly interested in the challenges of AI and conceptualize IA to augment human decision-making in process safety,” Tian said. “They are fascinated by how AI can provide accurate and prompt responses based on data analysis while human intelligence can offer broader insights and considerations, including ethical and social factors.”
Khan believes that this research emphasizes the importance of developing reliable, trustworthy, and safe AI systems tailored to industrial applications.
“The collaboration between AI and human intelligence is seen as essential for advancing process safety,” Khan said. “Ongoing exploration of this synergy to meet the evolving demands of industrial safety will continue to enhance AI’s capabilities while ensuring robust risk management frameworks are in place,” Pistikopoulos added.
By Raven Wuebker, Texas A&M University Engineering
###
JOURNAL
AIChE Journal
ARTICLE TITLE
Process safety 4.0: Artificial intelligence or intelligence augmentation for safer process operation?
Estimating rainfall intensity using surveillance audio and deep-learning
A new approach for high-resolution hydrological sensing for environmental resilience
EURASIA ACADEMIC PUBLISHING GROUP
Surveillance cameras generate both video and audio outputs. Unlike the recorded video images, the audio can serve as a reliable supplement because audio sources resist background interference and lighting variability. A reliable way to use these audio sources to estimate rainfall intensity could open a new chapter in rainfall intensity estimation.
In a study published in Environmental Science and Ecotechnology, researchers created an audio dataset of six real-world rainfall events, named the Surveillance Audio Rainfall Intensity Dataset (SARID). This dataset's audio recordings were segmented into 12,066 pieces and annotated with rainfall intensity and environmental information, such as underlying surfaces, temperature, humidity, and wind.
The researchers developed a deep learning-based baseline to estimate rainfall intensity from surveillance audio. Validated against ground-truth data, the baseline achieved a root mean absolute error of 0.88 mm h⁻¹ and a correlation coefficient of 0.765.
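As a rough sketch of how such an audio-based baseline can be structured (an illustrative assumption, not the authors' exact model), each audio segment can be converted to a log-mel spectrogram and fed to a small regression CNN; the librosa and PyTorch usage below, along with all sizes, are assumptions.

```python
# Illustrative sketch: estimating rainfall intensity (mm/h) from a surveillance
# audio segment. The feature choice (log-mel spectrogram) and the network are
# assumptions for demonstration, not the SARID paper's exact baseline.
import librosa
import torch
import torch.nn as nn

def audio_to_logmel(path: str, sr: int = 16000, n_mels: int = 64) -> torch.Tensor:
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)                      # (n_mels, frames)
    return torch.from_numpy(logmel).float().unsqueeze(0)   # (1, n_mels, frames)

class RainfallRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # handles variable-length segments
        )
        self.head = nn.Linear(32, 1)       # scalar rainfall intensity (mm/h)

    def forward(self, x):
        return self.head(self.conv(x).flatten(1))

model = RainfallRegressor()
segments = torch.randn(4, 1, 64, 300)      # batch of log-mel spectrograms
intensity = model(segments)                # shape: (4, 1), mm/h after training
```

Trained with a regression loss against rain-gauge labels, a model like this can then be scored with error and correlation metrics such as those reported above.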
These findings demonstrate the potential of surveillance audio-based models as practical and effective tools for rainfall observation systems.
The work offers a new approach for high-resolution hydrological sensing and contributes to the broader landscape of urban sensing, emergency response, and environmental resilience.
JOURNAL
Environmental Science and Ecotechnology
METHOD OF RESEARCH
Data/statistical analysis
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Estimating Rainfall Intensity Based on Surveillance Audio and Deep-Learning
ARTICLE PUBLICATION DATE
18-Jul-2024