Training AI to identify ancient artists
Griffith researchers built and tested a digital archaeology framework to learn more about the ancient humans who created one of the oldest forms of rock art, finger fluting.
Image: Participant creates finger flutings in VR setup. Credit: Andrea Jalandoni
Finger flutings are marks drawn by fingers through a soft mineral film called moonmilk on cave walls.
Experiments were conducted, both with adult participants in a tactile setup and with VR headsets in a custom-built program, to explore whether image-recognition methods could learn enough from finger-fluting images made by modern people to identify the sex of the person who created them.
Finger flutings appear in pitch-dark caves across Europe and Australia. The oldest known examples in France have been attributed to Neanderthals around 300,000 years ago.
Dr Andrea Jalandoni, a digital archaeologist from the Griffith Centre for Social and Cultural Research who led the study, said one of the key questions around finger flutings was who made them.
“Whether the marks were made by men or women can have real-world implications,” she said.
“This information has been used to decide who can access certain sites for cultural reasons.”
Past attempts to identify who made cave marks often relied on finger measurements and ratios, or hand size measurements.
Those methods proved inconsistent or vulnerable to error: finger pressure varied, surfaces weren’t uniform, pigments distorted outlines, and the same measurements could overlap heavily between males and females.
“The goal of this research was to avoid those assumptions and use digital archaeology instead,” Dr Jalandoni said.
Two controlled experiments were conducted with 96 adult participants, each person creating nine flutings twice: once on a moonmilk clay substitute developed to mimic the look and feel of cave surfaces, and once in virtual reality (VR) using a Meta Quest 3 headset.
Images of all the flutings were captured and curated, and two common image-recognition models were trained on them.
The team evaluated performance using standard metrics and, crucially, looked for signs that models were simply memorising the training data (overfitting), rather than learning patterns that generalised.
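To make that workflow concrete, here is a minimal sketch of the kind of pipeline described: fine-tuning a common pretrained image-recognition model for binary classification while watching for overfitting. The model choice (ResNet-18), folder layout, and hyperparameters are illustrative assumptions, not the team’s released code.

```python
# Minimal sketch of the pipeline described above (not the authors' released
# code): fine-tune a common pretrained CNN on fluting images for binary sex
# classification. Dataset layout and ResNet-18 are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: flutings/train/{female,male}/*.jpg
train_set = datasets.ImageFolder("flutings/train", transform=transform)
val_set = datasets.ImageFolder("flutings/val", transform=transform)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
val_loader = DataLoader(val_set, batch_size=16)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: female/male

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Track validation accuracy; a widening train/val gap signals the
    # memorisation (overfitting) the study checked for.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    print(f"epoch {epoch}: val accuracy {correct / total:.3f}")
```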
Team member Dr Gervase Tuxworth, from the School of Information and Communication Technology, said the results were mixed but revealed some promising insights.
The VR images did not yield reliable sex classification; even when accuracy looked acceptable in places, overall discrimination and balance were weak.
But the tactile images performed much better.
“Under one training condition, models reached about 84 per cent accuracy, and one model achieved a relatively strong discrimination score,” Dr Tuxworth said.
However, the models also learned patterns specific to the dataset, such as subtle artefacts of the setup, rather than robust features of fluting that would hold elsewhere, which meant there was more work to be done.
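One standard way to probe this kind of dataset-specific learning is to split the data by participant, so a model can never score well simply by recognising an individual or a session. The sketch below is a hedged illustration of that idea, with hypothetical feature vectors and participant IDs; it is not the study’s exact evaluation protocol.

```python
# Sketch of a participant-grouped evaluation (an assumed way to probe the
# overfitting described above, not the study's exact method): images from the
# same participant never appear in both the train and test folds.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(96 * 9, 128))    # hypothetical image feature vectors
y = rng.integers(0, 2, size=96 * 9)   # sex labels (0/1)
groups = np.repeat(np.arange(96), 9)  # participant ID for each image

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(balanced_accuracy_score(y[test_idx],
                                          clf.predict(X[test_idx])))

# Near-chance grouped scores alongside high ungrouped scores would suggest
# the model is keying on per-participant or setup artefacts.
print(f"grouped balanced accuracy: {np.mean(scores):.3f}")
```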
The study showed that a computational pipeline, from a realistic tactile representation and a VR capture environment to an open machine-learning workflow, could be built, replicated, and improved by others for a more rigorous scientific approach.
“We’ve released the code and materials so others can replicate the experiment, critique it, and scale it,” said Dr Robert Haubt, co-author and Information Scientist from the Australian Research Centre for Human Evolution (ARCHE).
"That’s how a proof of concept becomes a reliable tool.”
The team said this research paved the way for interdisciplinary applications across archaeology, forensics, psychology, and human-computer interaction, while contributing new insights into the cultural and cognitive practices of early humans.
The study ‘Using digital archaeology and machine learning to determine sex in finger flutings’ has been published in Scientific Reports.
Journal
Scientific Reports
Method of Research
Experimental study
Subject of Research
People
Article Title
Training AI to identify ancient artists
Article Publication Date
16-Oct-2025
Image: Dr Andrea Jalandoni studies finger flutings at a cave site in Australia. Credit: Dr Andrea Jalandoni
Image: Tactile setup with finger flutings. Credit: Dr Andrea Jalandoni
Image: Dr Andrea Jalandoni, Digital Archaeologist. Credit: Dr Andrea Jalandoni
Large language models could transform clinical trials: New review highlights opportunities and challenges
Large language models could streamline multiple stages of clinical trials — from protocol design to outcome prediction — yet face hurdles in privacy, transparency, and regulatory compliance.
Image: Three stages of clinical trial design (establishing the research background and objectives, protocol development, and ethical approval with informed consent), alongside the tasks in each phase that large language models (LLMs) can help researchers optimize and accelerate. Credit: Anqi Lin, Zhihan Wang
Clinical trials are essential for advancing medical innovation, but they face growing challenges — from recruiting eligible participants to managing complex data. In a new review published in BMC Medicine, researchers detail how large language models (LLMs), a type of advanced artificial intelligence trained on vast amounts of text, could help streamline these processes.
By automatically extracting research elements from prior studies, refining eligibility criteria, and tailoring informed consent materials, LLMs could improve trial design quality and participant understanding. In trial operations, they show potential for faster, more accurate patient screening, standardized data collection, and real-time safety monitoring, including adverse event and drug–drug interaction detection. LLMs may also predict trial outcomes or simulate trial scenarios, enabling more informed decision-making.
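As an illustration of the patient-screening use case, the sketch below sends trial criteria and a de-identified patient note to a chat-style LLM. The client library, model name, prompt, and criteria are assumptions for illustration, not tools or trials evaluated in the review.

```python
# Illustrative sketch only: pre-screening a patient note against trial
# eligibility criteria with a chat-style LLM. The model name, prompt, and
# use of the OpenAI client are assumptions, not drawn from the review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

criteria = """Inclusion: adults 18-75 with type 2 diabetes, HbA1c 7.0-10.0%.
Exclusion: pregnancy; eGFR < 30 mL/min/1.73 m2; insulin use in past 90 days."""

note = "58-year-old woman, T2DM for 9 years, HbA1c 8.2%, eGFR 64, on metformin."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You screen patients for trial eligibility. Answer with "
                    "ELIGIBLE, INELIGIBLE, or UNCERTAIN, then cite each "
                    "criterion you checked."},
        {"role": "user",
         "content": f"Criteria:\n{criteria}\n\nPatient note:\n{note}"},
    ],
)
print(response.choices[0].message.content)
```

In practice such output would serve only as a pre-screen for human review, subject to the privacy and auditability safeguards the review calls for.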
The review notes that LLMs outperform traditional natural language processing models in context comprehension, adaptability, and multitask execution. However, risks remain, including the possibility of generating inaccurate information, sensitivity to prompt wording, and the inability to update knowledge without retraining. The authors emphasize that integrating LLMs into clinical trials will require strict data privacy safeguards, transparent model evaluation, and clear regulatory guidelines.
Journal
BMC Medicine
Method of Research
Literature review
Subject of Research
Not applicable
Article Title
Large Language Models in Clinical Trials: Applications, Technical Advances, and Future Directions
Article Publication Date
14-Oct-2025
SEOULTECH researchers develop VFF-Net, a revolutionary alternative to backpropagation that transforms AI training
The proposed algorithm applies the forward-forward concept to CNNs, moving away from traditional back-propagation and its limitations
Seoul National University of Science & Technology
Image: VFF-Net introduces three new methodologies: label-wise noise labelling (LWNL), cosine similarity-based contrastive loss (CSCL), and layer grouping (LG), addressing the challenges of applying a forward-forward network to train convolutional neural networks. Credit: Hyun Kim, Seoul National University of Science and Technology
Deep neural networks (DNNs), which power modern artificial intelligence (AI) models, are machine learning systems that learn hidden patterns from various types of data, be it images, audio or text, to make predictions or classifications. DNNs have transformed many fields with their remarkable prediction accuracy. Training DNNs typically relies on back-propagation (BP). While it has become indispensable for the success of DNNs, BP has several limitations, such as slow convergence, overfitting, high computational requirements, and its black box nature. Recently, forward-forward networks (FFN) have emerged as a promising alternative, where each layer is trained individually, bypassing BP. However, applying FFNs to convolutional neural networks (CNNs), which are widely used for image analysis, has proven difficult.
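For readers unfamiliar with the idea, the sketch below shows the generic forward-forward recipe that VFF-Net builds on: each layer is trained locally so its “goodness” (mean squared activation) is high for positive samples and low for negative ones, with no gradients flowing between layers. It is a simplified illustration of Hinton’s 2022 formulation, not the VFF-Net algorithm itself.

```python
# Minimal sketch of the generic forward-forward recipe (Hinton, 2022), not of
# VFF-Net itself: each layer is trained on its own local objective so its
# "goodness" is high for positive samples and low for negative ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=1e-3)

    def forward(self, x):
        # Length-normalise the input so only its direction carries
        # information forward, as in the original formulation.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)  # goodness, positives
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)  # goodness, negatives
        # Push positive goodness above the threshold and negative below it.
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach the outputs so no gradient flows between layers: removing
        # this cross-layer gradient is what removes back-propagation.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Stand-ins for label-embedded real samples (positive) and mislabelled or
# corrupted samples (negative); each layer trains on its own local loss.
x_pos, x_neg = torch.randn(32, 784), torch.randn(32, 784)
for layer in [FFLayer(784, 256), FFLayer(256, 256)]:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```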
To address this challenge, a research team led by Mr. Gilha Lee and Associate Professor Hyun Kim from the Department of Electrical and Information Engineering at Seoul National University of Science and Technology has developed a new training algorithm, called visual forward-forward network (VFF-Net). The team also included Mr. Jin Shin. Their study was made available online on June 16, 2025, and published in Volume 190 of the journal Neural Networks on October 01, 2025.
Explaining the challenge of applying FFNs to train CNNs, Mr. Lee says, “Directly applying FFNs for training CNNs can cause information loss in input images, reducing accuracy. Furthermore, for general-purpose CNNs with numerous convolutional layers, individually training each layer can cause performance issues. Our VFF-Net effectively addresses these issues.”
VFF-Net introduces three new methodologies: label-wise noise labelling (LWNL), cosine similarity-based contrastive loss (CSCL), and layer grouping (LG). In LWNL, the network is trained on three types of data: the original image without any noise, positive images with correct labels, and negative images with incorrect labels. This helps eliminate the loss of pixel information in the input images.
CSCL modifies the conventional goodness-based greedy algorithm, applying a contrastive loss function based on the cosine similarity between feature maps. Essentially, it compares the similarity between two feature representations based on the direction of the data patterns. This helps preserve the meaningful spatial information necessary for image classification. Finally, LG solves the problem of individual layer training by grouping layers with the same output characteristics and adding auxiliary layers, significantly improving performance.
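Read literally, that description suggests a loss of roughly the following shape. This is a plausible sketch of a cosine-similarity contrastive loss between feature maps, under stated assumptions; the paper’s exact CSCL formulation may differ.

```python
# A plausible sketch of a cosine-similarity-based contrastive loss between
# feature maps, per the description above; the paper's exact CSCL
# formulation, margin, and anchor definition may differ.
import torch
import torch.nn.functional as F

def cscl(anchor, feat_pos, feat_neg, margin=0.5):
    """anchor/feat_pos/feat_neg: (batch, C, H, W) feature maps."""
    # Flatten the maps so cosine similarity compares their direction,
    # i.e. the pattern of activations rather than their magnitude.
    a = anchor.flatten(1)
    p = feat_pos.flatten(1)
    n = feat_neg.flatten(1)
    sim_pos = F.cosine_similarity(a, p, dim=1)
    sim_neg = F.cosine_similarity(a, n, dim=1)
    # Pull correctly labelled features toward the anchor; push incorrectly
    # labelled ones until their similarity drops below (1 - margin).
    return (1 - sim_pos).mean() + F.relu(sim_neg - (1 - margin)).mean()

anchor = torch.randn(8, 64, 16, 16)    # e.g. features of the clean image
feat_pos = torch.randn(8, 64, 16, 16)  # features of correctly labelled input
feat_neg = torch.randn(8, 64, 16, 16)  # features of incorrectly labelled input
loss = cscl(anchor, feat_pos, feat_neg)
```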
Thanks to these innovations, VFF-Net significantly improves image classification performance compared to conventional FFNs. For a CNN model with four convolutional layers, test errors on the CIFAR-10 and CIFAR-100 datasets were reduced by 8.31% and 3.80%, respectively. Additionally, the fully connected layer-based VFF-Net achieved a test error of just 1.70% on the MNIST dataset.
“By moving away from BP, VFF-Net paves the way toward lighter and more brain-like training methods that do not need extensive computing resources,” says Dr. Kim. “This means powerful AI models could run directly on personal devices, medical devices, and household electronics, reducing reliance on energy-intensive data centres and making AI more sustainable.”
Overall, VFF-Net could make AI training faster and cheaper while enabling more natural, brain-like learning, paving the way for more trustworthy AI systems.
***
Reference
DOI: 10.1016/j.neunet.2025.107697
About the Institute Seoul National University of Science and Technology (SEOULTECH)
Seoul National University of Science and Technology, commonly known as 'SEOULTECH,' is a national university located in Nowon-gu, Seoul, South Korea. Founded in April 1910, SEOULTECH has grown into a large and comprehensive university with a campus of 504,922 m².
It comprises 10 undergraduate schools, 35 departments, 6 graduate schools, and has an enrollment of approximately 14,595 students.
Website: https://en.seoultech.ac.kr/
About Associate Professor Hyun Kim
Dr. Hyun Kim is an Associate Professor in the Department of Electrical and Information Engineering at Seoul National University of Science and Technology, South Korea, and leads the Intelligent Digital Systems Design Lab (IDSL). His research focuses on artificial intelligence and hardware-efficient computing architectures, emphasizing AI semiconductors, system-on-chip, and accelerator design. He also serves on editorial boards and program committees for leading international journals and conferences.
About Mr. Gilha Lee
Mr. Gilha Lee is a Ph.D. candidate in the Department of Electrical and Information Engineering at Seoul National University of Science and Technology, South Korea. His research focuses on deep learning algorithms, neural network optimization, and training methods beyond backpropagation, with broader interests in model compression, lightweighting, and FPGA-based acceleration for resource-constrained environments.
Journal
Neural Networks
Method of Research
Computational simulation/modeling
Subject of Research
Not applicable
Article Title
VFF-Net: Evolving forward–forward algorithms into convolutional neural networks for enhanced computational insights
Article Publication Date
17-Oct-2025