KAIST develops AI that automatically detects defects in smart factory manufacturing processes even when conditions change
Image: (From left) Ph.D. candidate Jihye Na and Professor Jae-Gil Lee. Credit: KAIST
Recently, defect detection systems that use artificial intelligence (AI) to analyze sensor data have been installed at smart factory manufacturing sites. However, when the manufacturing process changes due to machine replacement or variations in temperature, pressure, or speed, existing AI models fail to properly understand the new situation and their performance drops sharply. KAIST researchers have developed AI technology that can accurately detect defects even in such situations without retraining, achieving performance improvements of up to 9.42%. This achievement is expected to help reduce AI operating costs and expand applicability in fields such as smart factories, healthcare devices, and smart cities.
KAIST (President Kwang Hyung Lee) announced on the 26th of August that a research team led by Professor Jae-Gil Lee from the School of Computing has developed a new “time-series domain adaptation” technology that allows existing AI models to be utilized without additional defect labeling, even when manufacturing processes or equipment change.
Time-series domain adaptation technology enables AI models that handle time-varying data (e.g., temperature changes, machine vibrations, power usage, sensor signals) to maintain stable performance without additional training, even when the training environment (domain) and the actual application environment differ.
Professor Lee’s team observed that the core reason AI models become confused by environmental (domain) changes lies not only in differences in data distribution but also in changes in the defect occurrence patterns (label distribution) themselves. For example, in semiconductor wafer processes, the ratio of ring-shaped defects to scratch defects may change after equipment modifications.
The research team developed a method that decomposes sensor data from the new process into three components (trend, non-trend, and frequency) and analyzes the characteristics of each individually. Just as humans detect anomalies by combining the pitch, vibration patterns, and periodic changes of machine sounds, the AI was enabled to analyze the data from multiple perspectives.
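The article does not specify the exact decomposition, but a minimal sketch of the general idea, assuming a moving-average trend, the detrended residual as the non-trend component, and an FFT magnitude spectrum as the frequency view, might look like the following (an illustration only, not the authors’ implementation):

```python
import numpy as np

def decompose_series(x: np.ndarray, window: int = 25):
    """Split a 1-D sensor signal into trend, non-trend, and frequency views.
    Illustrative stand-in for the three-component decomposition described
    above; the paper's actual decomposition may differ."""
    pad = window // 2
    padded = np.pad(x, pad, mode="reflect")
    kernel = np.ones(window) / window
    # Trend: slow-moving behavior captured by a centered moving average.
    trend = np.convolve(padded, kernel, mode="valid")[: len(x)]
    # Non-trend: the residual left after removing the trend.
    non_trend = x - trend
    # Frequency: magnitude spectrum of the detrended signal (periodic patterns).
    freq = np.abs(np.fft.rfft(non_trend))
    return trend, non_trend, freq

# Example: a drifting, periodic sensor trace with noise.
t = np.linspace(0.0, 10.0, 1000)
signal = 0.5 * t + np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(1000)
trend, non_trend, freq = decompose_series(signal)
```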
Building on this decomposition, the team developed TA4LS (Time-series domain Adaptation for mitigating Label Shifts), a technology that automatically corrects predictions by comparing the existing model’s predicted results against clustering information extracted from the new process data. Through this comparison, predictions biased toward the defect occurrence patterns of the existing process can be precisely adjusted to match the new process.
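As a rough illustration of this kind of prediction correction (a generic label-shift adjustment, not the published TA4LS procedure), one could compare the label distribution implied by the source-trained model’s predictions with a clustering-based estimate of the new process’s label distribution and reweight the predicted probabilities accordingly. The sketch below assumes scikit-learn’s KMeans and hypothetical inputs probs (predicted class probabilities on the new data) and features (representations of the new process data):

```python
import numpy as np
from sklearn.cluster import KMeans

def correct_for_label_shift(probs: np.ndarray, features: np.ndarray, n_classes: int) -> np.ndarray:
    """Reweight a source-trained model's class probabilities on new (target)
    process data using a clustering-based estimate of the target label
    distribution. Illustrative sketch only, not the published TA4LS algorithm."""
    # Label distribution implied by the source-biased predictions.
    source_prior = probs.mean(axis=0) + 1e-8

    # Cluster the target data; assign each cluster to its majority predicted
    # class and use cluster sizes as a rough target label-distribution estimate.
    clusters = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(features)
    target_prior = np.zeros(n_classes)
    for c in range(n_classes):
        members = probs[clusters == c]
        if len(members) > 0:
            target_prior[members.mean(axis=0).argmax()] += len(members)
    target_prior = target_prior / target_prior.sum() + 1e-8

    # Shift each prediction toward the estimated target label distribution.
    adjusted = probs * (target_prior / source_prior)
    return adjusted / adjusted.sum(axis=1, keepdims=True)
```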
This technology is also highly practical because it can be attached to existing AI systems like a plug-in module, without separate, complex development. In other words, regardless of the AI technology currently in use, it can be applied immediately with only a few simple additional steps.
In experiments on four benchmark datasets for time-series domain adaptation (i.e., four types of sensor data in which such changes had occurred), the research team achieved up to a 9.42% improvement in accuracy over existing methods.
Especially when process changes caused large differences in label distribution (e.g., defect occurrence patterns), the AI showed remarkable performance gains by autonomously detecting and correcting for those differences. These results show that the technology can be used effectively, without additional defect labeling, in environments that produce small batches of diverse products, one of the main strengths of smart factories.
Professor Jae-Gil Lee, who supervised the research, said, “This technology solves the retraining problem, which has been the biggest obstacle to the introduction of artificial intelligence in manufacturing. Once commercialized, it will greatly contribute to the spread of smart factories by reducing maintenance costs and improving defect detection rates.”
This research was carried out with Jihye Na, a Ph.D. student at KAIST, as the first author, and with Ph.D. student Youngeun Nam and Junhyeok Kang, a researcher at LG AI Research, as co-authors. The results were presented in August 2025 at KDD (the ACM SIGKDD Conference on Knowledge Discovery and Data Mining), one of the world’s top academic conferences in artificial intelligence and data mining.
※Paper Title: “Mitigating Source Label Dependency in Time-Series Domain Adaptation under Label Shifts”
※DOI: https://doi.org/10.1145/3711896.3737050
This technology was developed as part of the research outcome of the SW Computing Industry Original Technology Development Program’s SW StarLab project (RS-2020-II200862, DB4DL: Development of Highly Available and High-Performance Distributed In-Memory DBMS for Deep Learning), supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP).
Method of Research: Meta-analysis
Subject of Research: Not applicable
Article Title: Mitigating Source Label Dependency in Time-Series Domain Adaptation under Label Shifts
Article Publication Date: 23-Aug-2025
University of Tennessee collaborates on NSF grants to improve outcomes through AI
University of Tennessee at Knoxville
Image: Tabitha Samuel. Credit: University of Tennessee
Faculty members from the Min H. Kao Department of Electrical Engineering and Computer Science at the University of Tennessee are involved in two collaborative National Science Foundation grants that aim to advance health disparities research and enhance the performance and productivity of AI science.
Tabitha Samuel, the interim director and operations group leader for UT’s National Institute for Computational Sciences (NICS), is the principal investigator for UT on both projects.
AI Advancement in Health Research
The first grant ($82,824) is a statewide collaboration among Tennessee Tech University, UT, Meharry Medical College, and Vanderbilt University that is part of the NSF’s National Artificial Intelligence Research Resource (NAIRR) Pilot.
The project, Mid-South Conferences on Cyberinfrastructure Advances to Enable Interdisciplinary AI Research in Health, will train participants to use high-performance computing, cloud-based AI applications, and open data tools in medical research and healthcare delivery.
According to the 2023 America’s Health Rankings report, Tennessee ranks 44th among the 50 states in national health outcomes. This project will advance the use of modern, AI/ML-enabled computer technology in medical research and healthcare delivery while fostering sustained collaboration among medical professionals, engineers, scientists, and students who participate. Adhering to UT’s land-grant mission, the researchers will share content and outcomes with the NSF NAIRR program and broadly with the public at no charge.
The project consists of three workshops to be held in Knoxville, Nashville, and Memphis every six months over an 18-month period.
The co-PIs on this grant from UT are Vasileios Maroulas, director of the AI Tennessee Initiative; Courtney Cronley, a professor in the College of Social Work; Hector Santos-Villalobos, EECS assistant professor; and Fatima Zahra, an assistant professor of evaluation, statistics, and research methodology in the Department of Educational Leadership and Policy Studies.
“We focused on bringing AI training for health disparities research in Tennessee and the Mid-South area because we are aware of the magnitude of research being done around health disparities unique to the region,” Samuel said. “We hope that this AI training, coupled with exposure to the expanse of NAIRR resources, will empower Tennessee researchers with a distinct advantage in addressing and mitigating health disparities through innovative and impactful research.”
Boosting AI Speed and Efficiency
The second grant ($800,000) is a collaboration among Tennessee Tech, UT, Illinois Institute of Technology, and Stony Brook University. This NSF Cyber Infrastructure for Sustained Scientific Innovation grant aims to improve how massively parallel computers run large-scale artificial intelligence (AI) applications by enhancing the Message Passing Interface (MPI), a widely used standard for coordinating work across many high-performance-computing nodes in parallel programs.
Currently, the enabling data-transfer software that AI uses for communication between computers equipped with graphics processing units (GPUs) is often proprietary and/or limited in scope; it cannot be expanded or enhanced by an open community. That situation restricts innovation, making it harder for scientists to collaborate and increase their scientific output on limited computing resources, while also creating dependency on a few vendors.
By contrast, this project, Enhancing Performance and Productivity of AI Science through Next-generation High Performance Communication Abstractions, builds on and advances Open MPI, a major open-source implementation of MPI with a long history of broad impact, to make it more efficient, flexible, and better suited for modern AI tasks.
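For context, the communication pattern that large-scale data-parallel AI training relies on can be illustrated with a short, generic mpi4py sketch in which each rank averages its locally computed gradients with all other ranks via an allreduce; this is an illustration of MPI usage in general, not code from the funded project:

```python
# Generic illustration of MPI-based gradient averaging in data-parallel training.
# Not project code. Run with something like: mpirun -n 4 python allreduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank stands in for one node/GPU computing gradients on its own data shard.
local_gradients = (rank + 1) * np.ones(1_000_000, dtype=np.float32)

# Sum the gradient buffers across all ranks, then divide to get the average.
global_gradients = np.empty_like(local_gradients)
comm.Allreduce(local_gradients, global_gradients, op=MPI.SUM)
global_gradients /= size

if rank == 0:
    print(f"Averaged gradients across {size} ranks; first value = {global_gradients[0]:.3f}")
```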
“In this age of AI, how do we instrument and improve MPI to perform better for AI codes?” Samuel said. “The hardware is at the stage where AI can do fairly well on HPC hardware. But the next question is, how do AI codes perform across multiple nodes and scaling? That’s where this project comes in.”