A.I.
Artificial intelligence helps unlock advances in wireless communications
UBC Okanagan researchers are clearing the way for the next generation of wireless technology
UNIVERSITY OF BRITISH COLUMBIA OKANAGAN CAMPUS
A new wave of communication technology is quickly approaching and researchers at UBC Okanagan are investigating ways to configure next-generation mobile networks.
Dr. Anas Chaaban works in the UBCO Communication Theory Lab where researchers are busy analyzing a theoretical wireless communication architecture that will be optimized to handle increasing data loads while sending and receiving data faster.
Next-generation mobile networks are expected to outperform 5G on many fronts such as reliability, coverage and intelligence, explains Dr. Chaaban, an Assistant Professor in UBCO’s School of Engineering.
And the benefits go far beyond speed. The next generation of technology is expected to be a fully integrated system that allows for instantaneous communications between devices, consumers and the surrounding environment, he says.
These new networks will call for intelligent architectures that support massive connectivity, ultra-low latency, ultra-high reliability, high-quality experience, energy efficiency and lower deployment costs.
“One way to meet these stringent requirements is to rethink traditional communication techniques by exploiting recent advances in artificial intelligence,” he says. “Traditionally, functions such as waveform design, channel estimation, interference mitigation and error detection and correction are developed based on theoretical models and assumptions. This traditional approach is not capable of adapting to new challenges introduced by emerging technologies.”
Using a technology called transformer masked autoencoders, the researchers are developing techniques that enhance efficiency, adaptability and robustness. Dr. Chaaban says that while there are many challenges in this research, it is expected to play an important role in next-generation communication networks.
“We are working on ways to take content like images or video files and break them down into smaller packets in order to transport them to a recipient,” he says. “The interesting thing is that we can throw away a number of packets and rely on AI to recover them at the recipient, which then links them back together to recreate the image or video.”
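The idea is easier to see with a toy example. The sketch below is a rough illustration only, not the team's architecture: it packetizes a synthetic image, throws roughly a quarter of the packets away, and fills the gaps at the receiver, with a naive neighbour average standing in for the trained transformer masked autoencoder that would do the real reconstruction.

# Toy illustration only: packetize a synthetic "image", drop packets in transit,
# and recover them at the receiver. A naive neighbour average stands in for the
# trained transformer masked autoencoder used in the actual research.
import numpy as np

rng = np.random.default_rng(0)

image = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in content
packets = image.reshape(16, 4, 64)                           # 16 packets of 4 rows each
drop_mask = rng.random(16) < 0.25                            # ~25% of packets discarded
received = [None if dropped else pkt for pkt, dropped in zip(packets, drop_mask)]

def recover(received_packets):
    """Fill in missing packets (placeholder for the masked-autoencoder decoder)."""
    filled = []
    for i, pkt in enumerate(received_packets):
        if pkt is not None:
            filled.append(pkt)
            continue
        prev = next((received_packets[j] for j in range(i - 1, -1, -1)
                     if received_packets[j] is not None), None)
        nxt = next((received_packets[j] for j in range(i + 1, len(received_packets))
                    if received_packets[j] is not None), None)
        neighbours = [p for p in (prev, nxt) if p is not None]
        filled.append(np.mean(neighbours, axis=0))            # crude stand-in reconstruction
    return np.concatenate(filled).reshape(image.shape)

reconstructed = recover(received)
print("packets dropped:", int(drop_mask.sum()),
      "| mean reconstruction error:", round(float(np.abs(reconstructed - image).mean()), 1))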
The experience, even today, is something users take for granted, but next-generation technology—where virtual reality will be a part of everyday communications, including cell phone calls—is positioned to improve wireless systems substantially, he adds. The potential is unparalleled.
“AI provides us with the power to develop complex architectures that propel communications technologies forward to cope with the proliferation of advanced technologies such as virtual reality,” says Chaaban. “By collectively tackling these intricacies, the next generation of wireless technology can usher in a new era of adaptive, efficient and secure communication networks.”
The research is published in the latest issue of IEEE Communications Magazine.
JOURNAL
IEEE Communications Magazine
METHOD OF RESEARCH
Meta-analysis
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Transformer Masked Autoencoders for Next-Generation Wireless Communications: Architecture and Opportunities
Aston University AI project aims to make international health data sharing easier
- Project to improve sharing data while complying with general data protection regulation (GDPR) guidelines
- Aston Institute of Photonic Technologies awarded almost £300k to work on European-wide project
- Will develop secure data sharing system to allow access to large sets of multi-source health data via tailor-made AI tools.
Aston University is to explore the use of AI to improve sharing health data internationally.
Dr Sergei Sokolovski of the University’s Aston Institute of Photonic Technologies has been awarded €317,500 to work on a European-wide project.
Called BETTER (Better real-world health data distributed analytics research platform), the project spans 16 academic, medical and industrial partners.
Although data-driven medicine is currently used to improve diagnosis, treatment and medical research, ethical, legal and privacy issues can prevent data from being shared and centralised for analysis.
Aston University’s involvement in the BETTER project aims to overcome these challenges so health data can be shared across national borders while fully complying with the general data protection regulation (GDPR) guidelines.
Dr Sergei Sokolovski will lead the development of a secure data sharing system which will allow access to large sets of multi-source health data via tailor-made AI tools.
Scientists and healthcare professionals will be able to compare, integrate and analyse data securely at a lower cost than current methods to improve people’s health.
The BETTER project will focus on three health conditions: childhood learning disabilities, inherited degenerative retinal diseases and autism, involving seven medical centres across the European Union and beyond.
Dr Sergei Sokolovski said: “Data protection regulations prohibit data centralisation for analysis purposes because of privacy risks like the accidental disclosure of personal data to third parties.
“Therefore, to enable health data sharing across national borders and to fully comply with GDPR guidelines this project proposes a robust decentralised infrastructure which will empower researchers, innovators and healthcare professionals to exploit the full potential of larger sets of multi-source health data.
“As healthcare continues to evolve in an increasingly data-driven world projects like BETTER offer promising solutions to the challenges of health data sharing, research collaboration, and ultimately, improving the well-being of citizens worldwide.
“The collaboration between multiple stakeholders, including medical centres, researchers, and innovators, highlights the importance of interdisciplinary efforts in addressing these complex issues.”
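The decentralised principle can be illustrated with a deliberately simplified sketch. In the toy example below (not the BETTER platform's actual software), three hypothetical medical centres each compute a local summary of their own patients, and only those summaries are combined into a pooled estimate, so no patient-level record ever leaves its centre.

# Illustrative sketch only (not the BETTER platform): each centre shares an
# aggregate, never patient-level records, and the aggregates are pooled.
import numpy as np

rng = np.random.default_rng(0)

# toy patient measurements that stay inside each of three hypothetical centres
local_cohorts = [rng.normal(loc=mu, scale=1.0, size=n)
                 for mu, n in [(5.0, 120), (5.4, 80), (4.8, 200)]]

def local_summary(values):
    """Only the statistics needed for a pooled mean leave the centre."""
    return {"n": len(values), "sum": float(values.sum())}

summaries = [local_summary(cohort) for cohort in local_cohorts]
pooled_mean = sum(s["sum"] for s in summaries) / sum(s["n"] for s in summaries)
print("pooled estimate computed without sharing raw records:", round(pooled_mean, 3))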
The research will last 42 months.
Novel AI platform matches cardiologists in detecting rheumatic heart disease
Pilot program to deploy highly portable technology to be rolled out later this year
WASHINGTON (Jan. 16, 2024) – Artificial intelligence (AI) has the potential to detect rheumatic heart disease (RHD) with the same accuracy as a cardiologist, according to new research demonstrating how sophisticated deep learning technology can be applied to this disease of inequity. The work could prevent hundreds of thousands of unnecessary deaths around the world annually.
Developed at Children’s National Hospital and detailed in the latest edition of the Journal of the American Heart Association, the new AI system combines the power of novel ultrasound probes with portable electronic devices installed with algorithms capable of diagnosing RHD on echocardiogram. Distributing these devices could allow healthcare workers, without specialized medical degrees, to carry technology that could detect RHD in regions where it remains endemic.
RHD is caused by the body’s reaction to repeated Strep A bacterial infections and can cause permanent heart damage. If detected early, the condition is treatable with penicillin, a widely available antibiotic. In the United States and other high-income nations, RHD has been almost entirely eradicated. However, in low- and middle-income countries, it impacts the lives of 40 million people, causing nearly 400,000 deaths a year.
“This technology has the potential to extend the reach of a cardiologist to anywhere in the world,” said Kelsey Brown, M.D., a cardiology fellow at Children’s National and co-lead author on the manuscript with Staff Scientist Pooneh Roshanitabrizi, Ph.D. “In one minute, anyone trained to use our system can screen a child to find out if their heart is demonstrating signs of RHD. This will lead them to more specialized care and a simple antibiotic to prevent this degenerative disease from critically damaging their hearts.”
Millions of citizens in impoverished countries have limited access to specialized care. Yet the gold standard for diagnosing RHD requires a highly trained cardiologist to read an echocardiogram—a non-invasive and widely distributed ultrasound imaging technology. Without access to a cardiologist, the condition may remain undetected and lead to complications, including advanced cardiac disease and even death.
According to the new research, the AI algorithm developed at Children’s National identified mitral regurgitation in up to 90% of children with RHD. This tell-tale sign of the disease causes the mitral valve flaps to close improperly, leading to backward blood flow in the heart.
Beginning in March, Craig Sable, M.D., interim division chief of Cardiology, and his partners on the project will implement a pilot program in Uganda incorporating AI into the echo screening process of children being checked for RHD. The team believes that a handheld ultrasound probe, a tablet and a laptop – installed with the sophisticated, new algorithm – could make all the difference in diagnosing these children early enough to change outcomes.
“One of the most effective ways to prevent rheumatic heart disease is to find the patients that are affected in the very early stages, give them monthly penicillin for pennies a day and prevent them from becoming one of the 400,000 people a year who die from this disease,” Dr. Sable said. “Once this technology is built and distributed at a scale to address the need, we are optimistic that it holds great promise to bring highly accurate care to economically disadvantaged countries and help eradicate RHD around the world.”
To devise the best approach, two Children’s National experts in AI – Dr. Roshanitabrizi and Marius George Linguraru, D.Phil., M.A., M.Sc., the Connor Family Professor in Research and Innovation and principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation – tested a variety of modalities in machine learning, which mimics human intelligence, and deep learning, which goes beyond the human capacity to learn. They combined the power of both approaches to optimize the novel algorithm, which is trained to interpret ultrasound images of the heart to detect RHD.
Already, the AI algorithm has analyzed 39 features of hearts with RHD that cardiologists cannot detect or measure with the naked eye. For example, cardiologists know that the heart’s size matters when diagnosing RHD. Current guidelines lay out diagnostic criteria using two weight categories – above or below 66 pounds – as a surrogate measure for the heart’s size. Yet the size of a child’s heart can vary widely in those two groupings.
“Our algorithm can see and make adjustments for the heart’s size as a continuously fluid variable,” Dr. Roshanitabrizi said. “In the hands of healthcare workers, we expect the technology to amplify human capabilities to make calculations far more quickly and precisely than the human eye and brain, saving countless lives.”
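To see why a continuous size adjustment can matter, consider the deliberately simplified sketch below. It uses entirely synthetic data and a made-up measurement ("jet_length"), not anything from the Children's National algorithm, but it illustrates how a classifier given body weight as a continuous feature can outperform one given only a binary 66-pound cutoff.

# Purely illustrative, with synthetic data and a made-up measurement; not the
# Children's National algorithm. It contrasts a binary 66-lb (~30 kg) cutoff with
# body weight treated as a continuous variable when screening for disease.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
weight_kg = rng.uniform(15, 60, n)                          # children of widely varying size
jet_length = 0.05 * weight_kg + rng.normal(0, 0.3, n)       # toy size-dependent echo measurement
rhd = (jet_length - 0.05 * weight_kg + rng.normal(0, 0.2, n)) > 0.2   # synthetic labels

X_binary = np.c_[jet_length, (weight_kg > 30).astype(float)]   # binary cutoff, as in guidelines
X_continuous = np.c_[jet_length, weight_kg]                    # weight as a continuous covariate

for name, X in [("binary cutoff    ", X_binary), ("continuous weight", X_continuous)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, rhd, cv=5).mean()
    print(f"{name}: cross-validated accuracy on synthetic data = {acc:.2f}")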
Among other challenges, the team had to design new ways to teach the AI to handle the inherent clinical differences found in ultrasound images, along with the complexities of evaluating color Doppler echocardiograms, which historically have required specialized human skill to evaluate.
“There is a true art to interpreting this kind of information, but we now know how to teach a machine to learn faster and possibly better than the human eye and brain,” Dr. Linguraru said. “Although we have been using this diagnostic and treatment approach since World War II, we haven’t been able to share this competency globally with low- and middle-income countries, where there are far fewer cardiologists. With the power of AI, we expect that we can, which will improve equity in medicine around the world.”
Editor’s note: Images and videos from the team’s trips to Uganda are available upon request.
Media Contact: Katie Shrader | kshrader@childrensnational.org
###
About Children’s National Hospital
Children’s National Hospital, based in Washington, D.C., was established in 1870 to help every child grow up stronger. Today, it is the No. 5 children’s hospital in the nation and ranked in all specialties evaluated by U.S. News & World Report. Children’s National is transforming pediatric medicine for all children. The Children’s National Research & Innovation Campus opened in 2021, a first-of-its-kind pediatric hub dedicated to developing new and better ways to care for kids. Children’s National has been designated three times in a row as a Magnet® hospital, demonstrating the highest standards of nursing and patient care delivery. This pediatric academic health system offers expert care through a convenient, community-based primary care network and specialty care locations in the D.C. metropolitan area, including Maryland and Virginia. Children’s National is home to the Children’s National Research Institute and Sheikh Zayed Institute for Pediatric Surgical Innovation. It is recognized for its expertise and innovation in pediatric care and as a strong voice for children through advocacy at the local, regional and national levels. As a nonprofit, Children's National relies on generous donors to help ensure that every child receives the care they need.
For more information, follow us on Facebook, Instagram, Twitter and LinkedIn.
JOURNAL
Journal of the American Heart Association
ARTICLE PUBLICATION DATE
16-Jan-2024
Machine learning method speeds up discovery of green energy materials
Researchers ditch “trial and error” and turn to machine learning to identify two uniquely structured proton-conducting oxides – key materials needed in hydrogen fuel cells.
IMAGE:
THE PROTON-CONDUCTING LAYER CURRENTLY FOUND IN SOLID OXIDE FUEL CELLS IS TYPICALLY MADE FROM A PEROVSKITE STRUCTURE (LEFT). USING MACHINE LEARNING, A RESEARCH TEAM LED BY KYUSHU UNIVERSITY HAS IDENTIFIED TWO NEW MATERIALS WITH DIFFERENT CRYSTAL STRUCTURES (CENTER AND RIGHT) THAT CAN ALSO CONDUCT PROTONS.
CREDIT: KYUSHU UNIVERSITY/YAMAZAKI LAB
Fukuoka, Japan – Researchers at Kyushu University, in collaboration with Osaka University and the Fine Ceramics Center, have developed a framework that uses machine learning to speed up the discovery of materials for green energy technology. Using the new approach, the researchers identified and successfully synthesized two new candidate materials for use in solid oxide fuel cells – devices that can generate energy using fuels like hydrogen, which don’t emit carbon dioxide. Their findings, reported in the journal Advanced Energy Materials, could also be used to accelerate the search for other innovative materials beyond the energy sector.
In response to a warming climate, researchers have been developing new ways to generate energy without using fossil fuels. “One path to carbon neutrality is by creating a hydrogen society. However, as well as optimizing how hydrogen is made, stored and transported, we also need to boost the power-generating efficiency of hydrogen fuel cells,” explains Professor Yoshihiro Yamazaki, of Kyushu University’s Department of Materials Science and Technology, Platform of Inter-/Transdisciplinary Energy Research (Q-PIT).
To generate an electric current, solid oxide fuel cells need to be able to efficiently conduct hydrogen ions (or protons) through a solid material, known as an electrolyte. Currently, research into new electrolyte materials has focused on oxides with very specific crystal arrangements of atoms, known as a perovskite structure.
“The first proton-conducting oxide discovered was in a perovskite structure, and new high-performing perovskites are continually being reported,” says Professor Yamazaki. “But we want to expand the discovery of solid electrolytes to non-perovskite oxides, which also have the capability of conducting protons very efficiently.”
However, discovering proton-conducting materials with alternative crystal structures via traditional “trial and error” methods has numerous limitations. For an electrolyte to gain the ability to conduct protons, small traces of another substance, known as a dopant, must be added to the base material. But with many promising base and dopant candidates – each with different atomic and electronic properties – finding the optimal combination that enhances proton conductivity becomes difficult and time-consuming.
Instead, the researchers calculated the properties of different oxides and dopants. They then used machine learning to analyze the data, identify the factors that impact the proton conductivity of a material, and predict potential combinations.
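In broad strokes, that screening loop can be sketched as follows. The toy code below uses made-up descriptors and synthetic conductivity values, not the authors' defect-chemistry features or data; it simply illustrates the pattern of fitting an interpretable model to known materials and ranking unexplored base-dopant combinations by predicted conductivity.

# Hedged sketch of the screening idea, not the authors' code or descriptors:
# fit an interpretable model on known base-oxide/dopant combinations, then rank
# unexplored combinations by predicted proton conductivity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# toy descriptors, e.g. ionic-radius mismatch, dopant valence, a formation-energy proxy
X_known = rng.normal(size=(200, 3))
y_known = 0.8 * X_known[:, 0] - 0.5 * X_known[:, 1] + rng.normal(0, 0.2, 200)  # synthetic "conductivity"

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_known, y_known)
print("descriptor importances:", np.round(model.feature_importances_, 2))

X_candidates = rng.normal(size=(1000, 3))                   # unexplored base/dopant pairs
ranking = np.argsort(model.predict(X_candidates))[::-1]
print("candidates to try synthesizing first:", ranking[:5])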
Guided by these factors, the researchers then synthesized two promising materials, each with unique crystal structures, and assessed how well they conducted protons. Remarkably, both materials demonstrated proton conductivity in just a single experiment.
One of the materials, the researchers highlighted, is the first-known proton conductor with a sillenite crystal structure. The other, which has a eulytite structure, has a high-speed proton conduction path that is distinct from the conduction paths seen in perovskites. Currently, the performance of these oxides as electrolytes is low, but with further exploration, the research team believes their conductivity can be improved.
“Our framework has the potential to greatly expand the search space for proton-conducting oxides, and therefore significantly accelerate advancements in solid oxide fuel cells. It’s a promising step forward to realizing a hydrogen society,” concludes Professor Yamazaki. “With minor modifications, this framework could also be adapted to other fields of materials science, and potentially accelerate the development of many innovative materials.”
###
For more information about this research, see "Discovery of Unconventional Proton-Conducting Inorganic Solids via Defect-Chemistry-Trained, Interpretable Machine Learning" Susumu Fujii, Yuta Shimizu, Junji Hyodo, Akihide Kuwabara, Yoshihiro Yamazaki, Advanced Energy Materials, https://doi.org/10.1002/aenm.202301892
About Kyushu University
Founded in 1911, Kyushu University is one of Japan's leading research-oriented institutes of higher education. Home to around 19,000 students and 8,000 faculty and staff, Kyushu U's world-class research centers cover a wide range of study areas and research fields, from the humanities and arts to engineering and medical sciences. Its multiple campuses—including one of the largest in Japan—are located around Fukuoka City, a coastal metropolis on the southwestern Japanese island of Kyushu that is frequently ranked among the world's most livable cities and historically known as Japan's gateway to Asia. Through its Vision 2030, Kyushu U will 'Drive Social Change with Integrative Knowledge.' Its synergistic application of knowledge will encompass all of academia and solve issues in society while innovating new systems for a better future.
JOURNAL
Advanced Energy Materials
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Discovery of Unconventional Proton-Conducting Inorganic Solids via Defect-Chemistry-Trained, Interpretable Machine Learning
AI-driven nutritional assessment of seed mixtures enhances sustainable farming practices
Cultivating seed mixtures for local pastures is an age-old method to produce cost-effective and balanced animal feed, enhancing agricultural autonomy and environmental friendliness in line with evolving European regulations and organic consumer demands. Despite its benefits, farmers face adoption challenges due to the asynchronous ripening of cereals and legumes and the difficulty in assessing the nutritional value of heterogeneous seeds. Current practices rely on informal, empirical methods, and a proposed solution is to develop a mobile app or online service, similar to Pl@ntNet, for automated nutritional evaluation of seed mixtures, encouraging farmer participation and database enrichment. However, this requires overcoming agricultural and computer-vision challenges, which, along with optimizing deep neural network models and loss functions, remains a critical research focus in making this sustainable agricultural practice more accessible and efficient.
In November 2023, Plant Phenomics published a research article titled “Estimating Compositions and Nutritional Values of Seed Mixes Based on Vision Transformers”.
This research presents a novel approach using Artificial Intelligence to estimate the nutritional value of harvested seed mixes, aiming to assist farmers in managing crop yields and promoting sustainable cultivation. A dataset of 4,749 images covering 11 seed varieties was created to train two deep learning models: Convolutional Neural Networks (CNN) and Vision Transformers (ViT). The results significantly favored the ViT-based BeiT model, which outperformed the CNN in all metrics, including a Mean Absolute Error (MAE) of only 0.0383 and a coefficient of determination (R2) of 0.91. Data augmentation techniques and model size variations further refined performance. Although larger models offered some improvements, the base version of BeiT proved most efficient in terms of balance between performance and computational resources. The study also explored loss functions, finding that the classical KLDiv loss outperformed the Sparsemax variant. Detailed analysis by seed type revealed distinct performance across categories, with models generally excelling in recognizing barley, lupine, rye, spelt, and wheat, while facing challenges with vetch and oats. Aggregating predictions from multiple images of the same mix significantly improved robustness and accuracy. The research culminated in the development of "ESTI'METEIL" (https://c4c.inria.fr/carpeso/), an open-access web component that allows users to estimate seed composition and nutritional value from images. This tool demonstrates the practical application and potential of the research for real-world farming scenarios.
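One of those reported steps, aggregating predictions from several photographs of the same mix, is easy to illustrate. The sketch below fakes the per-image composition estimates with random noise instead of running the BeiT model, but it shows why averaging over ten images of one mix reduces the error of the final composition estimate.

# Illustrative only: the per-image outputs of the model are faked with noise here;
# the point is how averaging estimates from several photos of one mix stabilises
# the final composition estimate.
import numpy as np

rng = np.random.default_rng(0)
species = ["barley", "lupine", "rye", "spelt", "wheat", "vetch", "oats"]
true_mix = np.array([0.30, 0.10, 0.15, 0.15, 0.20, 0.05, 0.05])

def predict_one_image(mix, noise=0.08):
    """Stand-in for the ViT model's per-image output (a composition vector)."""
    raw = np.clip(mix + rng.normal(0, noise, len(mix)), 1e-6, None)
    return raw / raw.sum()

per_image = np.stack([predict_one_image(true_mix) for _ in range(10)])
aggregated = per_image.mean(axis=0)

print("single-image MAE :", round(float(np.abs(per_image[0] - true_mix).mean()), 4))
print("aggregated MAE   :", round(float(np.abs(aggregated - true_mix).mean()), 4))
print("largest estimated component:", species[int(aggregated.argmax())])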
In conclusion, the study effectively applied advanced deep learning techniques, particularly the self-supervised BeiT model, to the agricultural challenge of estimating the composition of seed mixtures and their nutritional values. The research not only showed promising results with a high R2 score but also provided a practical tool for farmers, marking a significant step towards more sustainable and informed agricultural practices. Future work will aim to improve data balance and explore synthetic image generation to further improve model performance and practical applicability.
###
References
Authors
Shamprikta Mehreen1*, Hervé Goëau2, Pierre Bonnet2, Sophie Chau3, Julien Champ1, and Alexis Joly1
Affiliations
1Inria, LIRMM, University Montpellier, CNRS, Montpellier, France.
2CIRAD, UMR AMAP, Montpellier, Occitanie, France.
3Chambre d’Agriculture - Haute Vienne, Limoges, Nouvelle-Aquitaine, France.
JOURNAL
Plant Phenomics
METHOD OF RESEARCH
Experimental study
ARTICLE TITLE
Estimating Compositions and Nutritional Values of Seed Mixes Based on Vision Transformers
AI discovers that not every fingerprint is unique
Columbia engineers have built a new AI that shatters a long-held belief in forensics–that fingerprints from different fingers of the same person are unique. It turns out they are similar, only we’ve been comparing fingerprints the wrong way!
Peer-Reviewed Publication
New York, NY—January 12, 2024—From “Law and Order” to “CSI,” not to mention real life, investigators have used fingerprints as the gold standard for linking criminals to a crime. But if a perpetrator leaves prints from different fingers in two different crime scenes, these scenes are very difficult to link, and the trace can go cold.
It’s a well-accepted fact in the forensics community that fingerprints of different fingers of the same person – “intra-person fingerprints” – are unique, and therefore unmatchable.
Research led by Columbia Engineering undergraduate
A team led by Columbia Engineering undergraduate senior Gabe Guo challenged this widely held presumption. Guo, who had no prior knowledge of forensics, found a public U.S. government database of some 60,000 fingerprints and fed them in pairs into an artificial intelligence-based system known as a deep contrastive network. Sometimes the pairs belonged to the same person (but different fingers), and sometimes they belonged to different people.
AI has potential to greatly improve forensic accuracy
Over time, the AI system, which the team designed by modifying a state-of-the-art framework, got better at telling when seemingly unique fingerprints belonged to the same person and when they didn’t. The accuracy for a single pair reached 77%. When multiple pairs were presented, the accuracy shot significantly higher, potentially increasing current forensic efficiency by more than tenfold. The project, a collaboration between Hod Lipson’s Creative Machines lab at Columbia Engineering and Wenyao Xu’s Embedded Sensors and Computing lab at University at Buffalo, SUNY, was published today in Science Advances.
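The following toy sketch gives a flavour of the contrastive setup, without reproducing the published model. Embeddings are simulated here as a shared per-person component plus finger-specific noise, an assumption made purely for illustration, and a same-person decision is taken from average cosine similarity; averaging over several pairs, as the study reports, gives a steadier score.

# Toy sketch of the contrastive idea, not the published network: fingerprint
# embeddings are simulated as a shared per-person component plus finger-specific
# noise, and same-person calls are made from average cosine similarity.
import numpy as np

def embed(person_id, finger_id, dim=64):
    """Stand-in for the deep contrastive network's embedding of one fingerprint."""
    person_vec = np.random.default_rng(person_id).normal(size=dim)                 # shared per person
    finger_vec = np.random.default_rng([person_id, finger_id]).normal(scale=1.2, size=dim)
    v = person_vec + finger_vec
    return v / np.linalg.norm(v)

def same_person_score(pairs):
    """Average cosine similarity over one or more fingerprint pairs."""
    return float(np.mean([a @ b for a, b in pairs]))

one_pair = [(embed(7, 0), embed(7, 1))]
many_pairs = [(embed(7, i), embed(7, i + 5)) for i in range(5)]   # averaging steadies the score
impostor_pair = [(embed(7, 0), embed(8, 1))]

print("same person, 1 pair :", round(same_person_score(one_pair), 3))
print("same person, 5 pairs:", round(same_person_score(many_pairs), 3))
print("different people    :", round(same_person_score(impostor_pair), 3))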
Study findings challenge–and surprise–forensics community
Once the team verified their results, they quickly sent the findings to a well-established forensics journal, only to receive a rejection a few months later. The anonymous expert reviewer and editor concluded that “It is well known that every fingerprint is unique,” and therefore it would not be possible to detect similarities even if the fingerprints came from the same person.
The team did not give up. They doubled down on the lead, fed their AI system even more data, and the system kept improving. Aware of the forensics community's skepticism, the team opted to submit their manuscript to a more general audience. The paper was rejected again, but Lipson, who is the James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering and co-director of the Makerspace Facility, appealed. “I don’t normally argue editorial decisions, but this finding was too important to ignore,” he said. “If this information tips the balance, then I imagine that cold cases could be revived, and even that innocent people could be acquitted.”
While the system’s accuracy is not sufficient to officially decide a case, it can help prioritize leads in ambiguous situations. After more back and forth, the paper was finally accepted for publication by Science Advances.
Unveiled: a new kind of forensic marker to precisely capture fingerprints
One of the sticking points was the following question: What alternative information was the AI actually using that has evaded decades of forensic analysis? After careful visualizations of the AI system’s decision process, the team concluded that the AI was using a new kind of forensic marker.
“The AI was not using ‘minutiae,’ which are the branchings and endpoints in fingerprint ridges – the patterns used in traditional fingerprint comparison,” said Guo, who began the study as a first-year student at Columbia Engineering in 2021. “Instead, it was using something else, related to the angles and curvatures of the swirls and loops in the center of the fingerprint.”
Columbia Engineering senior Aniv Ray and PhD student Judah Goldfeder, who helped analyze the data, noted that their results are just the beginning. “Just imagine how well this will perform once it’s trained on millions, instead of thousands of fingerprints,” said Ray.
VIDEO: https://youtu.be/s5esfRbBc18
A need for broader datasets
The team is aware of potential biases in the data. The authors present evidence that indicates that the AI performs similarly across genders and races, where samples were available. However, they note, more careful validation needs to be done using datasets with broader coverage if this technique is to be used in practice.
Transformative potential of AI in a well-established field
This discovery is an example of more surprising things to come from AI, notes Lipson. “Many people think that AI cannot really make new discoveries–that it just regurgitates knowledge,” he said. “But this research is an example of how even a fairly simple AI, given a fairly plain dataset that the research community has had lying around for years, can provide insights that have eluded experts for decades.”
He added, “Even more exciting is the fact that an undergraduate student, with no background in forensics whatsoever, can use AI to successfully challenge a widely held belief of an entire field. We are about to experience an explosion of AI-led scientific discovery by non-experts, and the expert community, including academia, needs to get ready.”
###
About the Study
The paper is titled “Unveiling Intra-Person Fingerprint Similarity via Deep Contrastive Learning.”
Authors are: Gabe Guo, Aniv Ray, Judah Goldfeder, and Hod Lipson, Columbia Engineering; Miles Izydorczak, Tufts University; and Wenyao Xu, University at Buffalo, SUNY.
The work is part of a joint University of Washington, Columbia and Harvard NSF AI Institute for Dynamical Systems, aimed at accelerating scientific discovery using AI.
The study was supported by NSF AI Institute for Dynamical Systems 2112085, and NSF REU Site 2050910.
The authors declare no financial or other conflicts of interest.
Media contact:
Holly Evarts, Director of Strategic Communications and Media Relations
347-453-7408 (c) | 212-854-3206 (o) | holly.evarts@columbia.edu
###
LINKS:
Paper: https://www.science.org/doi/10.1126/sciadv.adi0329
VIDEO: https://youtu.be/s5esfRbBc18
PROJECT WEBSITE: https://creativemachineslab.com/fingerprints.html
###
JOURNAL
Science Advances
ARTICLE TITLE
Unveiling Intra-Person Fingerprint Similarity via Deep Contrastive Learning
ARTICLE PUBLICATION DATE
12-Jan-2024
Creating exam questions with ChatGPT
Hardly any difference between humans and AI
Peer-Reviewed Publication
For the study, the UKB researchers created two sets of 25 multiple-choice questions (MCQs), each with five possible answers, one of which was correct. The first set of questions was written by an experienced medical lecturer; the second set was created by ChatGPT. A total of 161 students answered all questions in random order. For each question, students also indicated whether they thought it was created by a human or by ChatGPT.
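For readers curious how such a comparison is scored, the short sketch below computes the two quantities the study focuses on, question difficulty per origin and how often the origin was identified correctly, from a synthetic response matrix (random stand-in data, not the UKB results).

# Sketch of the scoring, using a random stand-in response matrix rather than the
# UKB data: item difficulty per origin, and how often students spotted the origin.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 161, 50
is_ai_item = np.array([False] * 25 + [True] * 25)           # 25 human-written, 25 ChatGPT-written

answered_correctly = rng.random((n_students, n_items)) < 0.65   # synthetic answers
guessed_ai = rng.random((n_students, n_items)) < 0.5            # synthetic origin guesses

difficulty_human = 1 - answered_correctly[:, ~is_ai_item].mean()
difficulty_ai = 1 - answered_correctly[:, is_ai_item].mean()
origin_accuracy = (guessed_ai == is_ai_item).mean()

print(f"difficulty, human-written items: {difficulty_human:.2f}")
print(f"difficulty, ChatGPT items      : {difficulty_ai:.2f}")
print(f"origin identified correctly    : {origin_accuracy:.1%} (chance = 50%)")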
Matthias Laupichler, one of the study authors and research associate at the Institute for Medical Didactics at the UKB, explains: "We were surprised that the difficulty of human-generated and ChatGPT-generated questions was virtually identical. Even more surprising for us, however, was that the students were unable to correctly identify the origin of the question in almost half of the cases. Although the results obviously need to be replicated in further studies, the automated generation of exam questions using ChatGPT and similar tools appears to be a promising tool for medical studies."
His colleague and co-author of the study Johanna Rother adds: "Lecturers can use ChatGPT to generate ideas for exam questions, which are then checked and, if necessary, revised by the lecturers. In our opinion, however, students in particular benefit from the automated generation of medical practice questions, as it has long been known that self-testing one's own knowledge is very beneficial for learning."
Tobias Raupach, Director of the Institute of Medical Didactics, continues: "We knew from previous studies that language models such as ChatGPT can answer the questions in medical state examinations. We have now been able to show for the first time that the software can also be used to write new questions that hardly differ from those of experienced teachers."
Tizian Kaiser, who is studying human medicine in his seventh semester, comments: "When working on the mock exam, I was quite surprised at how difficult it was for me to tell the questions apart. My approach was to differentiate between the questions based on their length, the complexity of their sentence structure and the difficulty of their content. But to be honest, in some situations I simply had to guess, and the evaluation showed that I was barely able to differentiate between them. This leads me to the conviction that a meaningful assessment of knowledge, as in this exam, is also possible exclusively through questions posed by the AI."
He is convinced that ChatGPT has great potential for student learning, as it allows students to revisit what they have learned in different ways, again and again. "There is the option of being quizzed by the AI on predefined topics, having mock exams designed or simulating oral exams in writing. The repetition of the material is thus tailored to the exam concept and the training possibilities are endless," says the study participant, while also qualifying: "However, I would only use ChatGPT for this purpose and not beforehand in the learning process, in which the study topics have to be worked through and summarized. Because while ChatGPT is excellent for repetition, I fear that errors can occur when preparing learning content. I wouldn't notice these errors without a prior overview of the topic."
It is known from other studies that regular testing – even and especially without grading – helps students to remember learning content more sustainably. Such tests can now be created with little effort. However, the current study should first be transferred to other contexts (i.e. other subjects, semesters and countries) and it should be investigated whether ChatGPT can also write questions other than the multiple choice questions commonly used in medicine.
Original publication: https://journals.lww.com/academicmedicine/abstract/9900/large_language_models_in_medical_education_.719.aspx
DOI: 10.1097/ACM.0000000000005626
JOURNAL
Academic Medicine
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Large Language Models in Medical Education: Comparing ChatGPT- to Human-Generated Exam Questions
Study from ECNU Review of Education redevelops framework for teaching artificial intelligence and robotics
Researchers redefine the “five big ideas of AI” to guide the education of preschool kids in the most rapidly developing fields.
Just like computers, the Internet, and smartphones have become commonplace in our daily lives, artificial intelligence and robotics (AIR) are the next technologies in line set to drastically change how we interact with the world and among ourselves. Various AI-driven applications are already in widespread use, such as Siri, Google Assistant, and ChatGPT, and both industrial- and consumer-grade robots are becoming increasingly capable and accessible.
In our modern societies, where people rely more and more on AIR systems to perform tasks, it’s essential to prepare children and teenagers to understand and use these tools effectively. To this end, the AI4K12 initiative was developed, which comprised a set of guidelines for teaching AI within the context of K-12 education. Notably, AI4K12 outlines “five big ideas of AI” as foundational concepts or key principles that are deemed essential to grasp AI. However, these big ideas are too complex for children younger than six years old.
Against this backdrop, a research team comprising Dr. Weipeng Yang, an Assistant Professor at the Education University of Hong Kong and Ms. Jiahong Su from the University of Hong Kong decided to revise AI4K12’s framework and identify five big ideas of AI that are better suited for young children, especially preschoolers. Their study was published online in the journal ECNU Review of Education on December 10, 2023. Notably, these authors had published another study in this journal on April 19, 2023, in which they proposed a theoretical framework to guide the use of AI tools, such as ChatGPT, in education.
The first big idea addresses the concept of AIR perception. Children should understand that robots and computers can use a variety of sensors to perceive their surroundings and make decisions accordingly. One way to teach this concept is through demonstration, using either a simple robot with an exploratory task or by having children role-play themselves as wandering robots with limited or altered sensing capabilities.
The second big idea introduces the concepts of AI representation and reasoning. Dr. Yang explains: “AI systems work on algorithms and use codes to interpret information, which is different from our understanding and thought process. Young children need to understand that AI’s process of perceiving the world is different from that of humans. They should acknowledge the unique features of AI that complement human qualities.” A hands-on activity like shape-sorting alongside a robotic friend may properly illustrate this big idea in a way children can comprehend.
The third big idea is related to AI learning. Children should understand that AIR systems can process very large amounts of data to arrive at their proposed results or solutions. Moreover, they should be aware that AI can learn from new information to help humans solve tasks.
The fourth big idea revolves around the concept of natural interactions between AIR and humans. Children should understand that AIR systems are developed by humans and lack consciousness or self-awareness.
Finally, the fifth big idea addresses the societal impact of AIR. Children must be taught that AI will have (or has already had) a profound impact on human lives and the world. “Educating children on AI right from preschool will ensure effective application of AI tools by students,” highlights Dr. Yang.
The article also proposes several ways to engage young children in learning about the five big ideas of AI through the use of robotics. Specifically, the researchers emphasize the importance of interactive and memorable experiences, especially through acts of play and other hands-on opportunities to interact with AIR systems. “Our five big ideas of AI framework redeveloped from AI4K12 will help children better understand AI and its importance in the rapidly developing digital society,” concludes Dr. Yang.
Hopefully, children of all ages will soon be able to experience and understand AI in a healthy and responsible manner, leading to new applications and learning opportunities.
***
Reference
Authors: Jiahong Su1 and Weipeng Yang2
Title of original paper: Artificial Intelligence and Robotics for Young Children: Redeveloping the Five Big Ideas Framework
Journal: ECNU Review of Education
DOI: https://doi.org/10.1177/20965311231218013
Affiliations
1The University of Hong Kong
2The Education University of Hong Kong
About ECNU Review of Education
The ECNU Review of Education is an international peer-reviewed open access journal, established by the East China Normal University (eponymous ECNU). The journal publishes research in the field of education, with a focus on interdisciplinary perspectives and contextual sensitivity. It seeks to provide a platform for the pedagogical community to network, promote dialogue, advance knowledge, synthesize ideas, and contribute to meaningful change.
About Ms. Jiahong Su
Ms. Jiahong Su is currently a Ph.D. candidate in the Faculty of Education at the University of Hong Kong. Her areas of research include technology education, AI, and STEM in early childhood education. She has published many papers in the fields of artificial intelligence, coding, teacher education and computational thinking. She has also served as a reviewer for various journals, including Computers & Education, Education and Information Technologies, Early Child Development and Care, and Early Childhood Education Journal.
About Assistant Professor Weipeng Yang
Dr. Weipeng Yang is an Assistant Professor at the Department of Early Childhood Education in the Education University of Hong Kong. His research focuses on early childhood curriculum and pedagogy, with specialized interests in STEM education, technology integration, socio-emotional wellbeing, and culture. He holds multiple editorial positions, including Editor at Journal of Research in Childhood Education, Associate Editor at Journal for the Study of Education and Development, and Convenor of Curriculum, Assessment and Pedagogy SIG at British Educational Research Association, among others.
JOURNAL
ECNU Review of Education
METHOD OF RESEARCH
Systematic review
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Artificial Intelligence and Robotics for Young Children: Redeveloping the Five Big Ideas Framework
Train your brain to overcome tinnitus
An app can change the lives of those affected by tinnitus
Peer-Reviewed Publication
An international research team has shown that the debilitating impact of tinnitus can be effectively reduced in just weeks by a training course and sound therapy delivered via a smartphone app.
The team from Australian, New Zealand, French and Belgian universities report these findings today in Frontiers in Audiology and Otology.
It offers some hope for millions affected by tinnitus who:
- have been told that there is nothing they can do about it
- face long queues waiting for treatment, or
- can’t afford the costs of specialist support.
The initial trial worked with 30 sufferers, of whom almost two thirds experienced a ‘clinically significant improvement’. The team are now planning larger trials in the UK in collaboration with the University College London Hospital.
The app, MindEar, is available for individuals to trial for themselves on a smartphone.
Tinnitus is common, affecting up to one in four people. It is mostly experienced by older adults but can also appear in children. For some, it goes away without intervention. For others, it can be debilitating and life-changing, affecting hearing, mood, concentration and sleep and, in severe cases, causing anxiety or depression.
“About 1.5 million people in Australia, 4 million in the UK and 20 million in the USA have severe tinnitus,” says Dr Fabrice Bardy, an audiologist at Waipapa Taumata Rau, University of Auckland and lead author of the paper. Dr Bardy is also co-founder of MindEar, a company set up to commercialise the MindEar technology.
“One of the most common misconceptions about tinnitus is that there is nothing you can do about it; that you just have to live with it. This is simply not true. Professional help from those with expertise in tinnitus support can reduce the fear and anxiety attached to the sound patients experience,” he says.
“Cognitive behavioural therapy is known to help people with tinnitus, but it requires a trained psychologist. That’s expensive, and often difficult to access,” says Professor Suzanne Purdy, Professor of Psychology at Waipapa Taumata Rau, University of Auckland.
“MindEar uses a combination of cognitive behavioural therapy, mindfulness and relaxation exercises as well as sound therapy to help you train your brain’s reaction so that we can tune out tinnitus. The sound you perceive fades in the background and is much less bothersome,” she says.
“In our trial, two thirds of users of our chatbot saw improvement after 16 weeks. This was shortened to only 8 weeks when patients additionally had access to an online psychologist,” says Dr Bardy.
Why does it work?
Even before we are born, our brains learn to filter out sounds that we determine to be irrelevant, such as the surprisingly loud sound of blood rushing past our ears. As we grow, our brains further learn to filter out environmental noises such as a busy road, an air conditioner or sleeping partners.
Most alarms, such as those in smoke detectors, bypass this filter and trigger a sense of alert for people, even if they are asleep. This primes the fight-or-flight response, and is especially strong for sounds we associate with bad prior experiences.
Unlike an alarm, tinnitus occurs when a person hears a sound in the head or ears, when there is no external sound source or risk presented in the environment, and yet the mind responds with a similar alert response.
The sound is perceived as an unpleasant, irritating, or intrusive noise that can't be switched off. The brain focuses on it insistently, further training our mind to pay even more attention even though there is no risk. This also points to a way out for patients: by training themselves to actively give the tinnitus less attention, it becomes easier to tune it out.
MindEar aims to help people to practice focus through a training program, equipping the mind and body to suppress stress hormones and responses and thus reducing the brain’s focus on tinnitus.
Tinnitus is not a disease in itself but is usually a symptom of another underlying health condition, such as damage to the auditory system or tensions occurring in the head and neck.
Although there is no known cure for tinnitus, there are management strategies and techniques that help many sufferers find relief. With the evidence from this trial, the MindEar team are optimistic that a more accessible, rapidly available and effective tool is now within reach for the many people affected by tinnitus who are still awaiting support.
MindEar is based on the research work of an international multi-disciplinary team composed of audiologists (Dr Laure Jacquemin, Dr Michael Maslin), psychologists (Prof Suzanne Purdy and Dr Cara Wong) and ENTs (Prof Hung Thai Van) led by Dr Fabrice Bardy based at the University of Auckland.
The MindEar app is the world's first AI companion created to help with tinnitus
CREDIT
MindEar
Delivery of internet-based cognitive behavioral therapy combined with human-delivered telepsychology in tinnitus sufferers through a chatbot-based mobile app
Fabrice Bardy1,2*, Laure Jacquemin3,4, Cara L. Wong, Michael R. D. Maslin1,2,5, Suzanne C. Purdy1,2 and Hung Thai-Van6,7,8
1School of Psychology, Speech Science, The University of Auckland, Auckland, New Zealand
2Eisdell Moore Centre for Hearing and Balance Research, Auckland, New Zealand
3Department of Translational Neuroscience, Faculty of Medicine and Health Science, University of Antwerp, Antwerp, Belgium
4University Department of Otorhinolaryngology and Head and Neck Surgery, Antwerp University Hospital, Edegem, Belgium
5School of Psychology, Speech and Hearing, The University of Canterbury, Canterbury, New Zealand
6Service d’Audiologie & d’Explorations Otoneurologiques, Hospices Civils de Lyon, Lyon, France
7Université Claude Bernard Lyon 1, Villeurbanne, France
8Institut de l’Audition, Institut Pasteur, Inserm, Paris, France
Background: While there is no cure for tinnitus, research has shown that cognitive behavioral therapy (CBT) is effective in managing clinical sequelae. Although traditional CBT is labor-intensive and costly, new online consultations may improve accessibility. Moreover, there is promise in an engaging conversational agent, or a “chatbot,” delivering CBT in a conversation-like manner and allowing users to work through complex situations with the guidance of a virtual coach. Currently, there is little research examining a possible hybrid model using iCBT and teleconsultation with a psychologist.
Methods: A randomized, two-parallel-group trial was conducted to compare the clinical effectiveness of (1) iCBT delivered through a chatbot mobile app (i.e., Tinnibot-only group) and (2) Tinnibot combined with telepsychology (i.e., hybrid-intervention group). A total of 30 eligible adults with tinnitus were included. After an 8-week intervention period, participants were followed up for 2 months. The primary outcome measure, the Tinnitus Functional Index (TFI), and the secondary outcome measures, Hyperacusis Questionnaire (HQ), Generalized Anxiety Disorder 7-item (GAD-7), and Patient Health Questionnaire (PHQ-9), were assessed before treatment, post-treatment, and at follow-up.
Results: The TFI decreased significantly over time in both groups, with a trend for a larger improvement in the group that received telepsychology. At post-treatment, a clinically significant improvement was observed in 42% of the Tinnibot-only group and 64% of the hybrid-intervention group. At follow-up, this was 64% for both groups. The secondary outcome measures, PHQ-9 and GAD-7 improved significantly over time, but the HQ did not.
Discussion: Internet-based delivery of CBT is effective in decreasing tinnitus distress, and levels of anxiety and depression, which is more relevant today than ever in the context of a global pandemic that has challenged the delivery of face-to-face intervention. The addition of telepsychology might be beneficial, but not essential for the effectiveness of treatment. There is a need for further research to determine whether there is any relationship between the characteristics of tinnitus patients and the success of the different modes of delivery of therapy.
JOURNAL
Frontiers in Audiology and Otology
METHOD OF RESEARCH
Randomized controlled/clinical trial
The MindEar app includes training and education on tinnitus, helping patients better manage symptoms.
CREDIT
MindEar
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Delivery of internet-based cognitive behavioral therapy combined with human-delivered telepsychology in tinnitus sufferers through a chatbot-based mobile app
ARTICLE PUBLICATION DATE
9-Jan-2024
COI STATEMENT
FB is a co-founder of Odio Tech Pty Ltd., the company that developed the MindEar/Tinnibot app. Despite this potential conflict, the author asserts that all research was conducted in the most objective and transparent manner possible, adhering to rigorous standards for scientific research. The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision. The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This research was supported by the Eisdell Moore Centre project grant.
Computer scientists make noisy data that can improve treatments in health care
University of Copenhagen researchers have developed software able to disguise sensitive data such as those used for Machine Learning in health care applications. The method protects privacy while making datasets available for development of better treatments.
A key element in modern healthcare is collecting and analyzing data for a large group of patients to discover patterns. Which patients benefit from a given treatment? And which patients are likely to experience side-effects? Such data must be protected, or the privacy of individuals is violated. Furthermore, breaches will harm general trust, leading to fewer people giving their consent to take part. Researchers at the Department of Computer Science, University of Copenhagen, have found a clever solution.
“We have seen several cases in which data was anonymized and then released to the public, and yet researchers managed to retrieve the identities of participants. Since many other sources of information exist in the public domain, an adversary with a good computer will often be able to deduce the identities even without names or citizen codes. We have developed a practical and economical way to protect datasets when used to train Machine Learning models,” says PhD student Joel Daniel Andersson.
The level of interest in the new algorithm can be illustrated by the fact that Joel was invited to give a Google Tech Talk on it, one of the world’s most prestigious digital formats for computer science research. He also recently presented it at NeurIPS, one of the world’s leading conferences on Machine Learning, with more than 10,000 participants.
Deliberately polluting your output
The key idea is to mask your dataset by adding “noise” to any output derived from it. Unlike encryption, where noise is added and later removed, in this case the noise stays. Once the noise is added, it cannot be distinguished from the “true” output.
Obviously, the owner of a dataset might not be happy about noising outputs derived from it.
“A lower utility of the dataset is the necessary price you pay for ensuring the privacy of participants,” says Joel Daniel Andersson.
The key task is to add an amount of noise sufficient to hide the original data points, but still maintain the fundamental value of the dataset, he notes:
“If the output is sufficiently noisy, then it becomes impossible to infer the value of an individual data point in the input, even if you know every other data point. By noising the output, we are in effect adding safety rails to the interaction between the analyst and the dataset. The analysts never access the raw data; they only ask queries about it and get noisy answers. Thereby, they never learn any information about individuals in the dataset. This protects against information leaks, inadvertent or otherwise, stemming from analysis of the data.”
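A minimal example of this interaction is the classic Laplace mechanism on a counting query, sketched below. This is textbook differential privacy rather than the group's smooth binary mechanism, which targets the harder setting of continual observation, but it shows how the noisy answer hides any single individual's contribution.

# Minimal differential-privacy sketch: the Laplace mechanism on a counting query.
# The paper's smooth binary mechanism addresses the harder continual-observation
# setting; this toy version only shows the basic query-with-noise interaction.
import numpy as np

rng = np.random.default_rng(0)

def noisy_count(records, predicate, epsilon):
    """Answer 'how many records satisfy predicate?' with Laplace noise.

    Changing one person's record changes a count by at most 1 (sensitivity 1),
    so Laplace noise of scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

patient_ages = rng.integers(18, 90, size=10_000)            # toy dataset
for eps in (0.1, 1.0, 10.0):
    answer = noisy_count(patient_ages, lambda age: age >= 65, epsilon=eps)
    print(f"epsilon={eps:>4}: noisy count of patients aged 65+ = {answer:.1f}")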
Privacy comes with a price tag
There is no universal optimal trade-off, Joel Daniel Andersson underscores:
“You can pick the trade-off which fits your purpose. For applications where privacy is highly critical – for instance healthcare data – you can choose a very high level of privacy. This means adding a large amount of noise. Notably, this will sometimes imply that you will need to increase your number of data points – for instance, by including more persons in your survey – to maintain the value of your dataset. In applications where privacy is less critical, you can choose a lower level. Thereby, you will maintain the utility of your dataset and reduce the costs involved in providing privacy.”
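A back-of-envelope sketch, using standard differential-privacy reasoning rather than figures from the paper, shows how extra participants can offset stricter privacy: for a noisy mean over n bounded values, the noise added for a given epsilon shrinks as n grows.

# Standard differential-privacy arithmetic, not figures from the paper: for a
# noisy mean of n values bounded in [0, 1], the Laplace noise scale is 1/(n*eps),
# so a stricter epsilon can be offset by gathering more participants.
import numpy as np

rng = np.random.default_rng(0)

def private_mean(values, epsilon, value_range=1.0):
    """Mean of bounded values with Laplace noise (sensitivity = range / n)."""
    n = len(values)
    return values.mean() + rng.laplace(scale=value_range / (n * epsilon))

for n, eps in [(1_000, 1.0), (1_000, 0.1), (10_000, 0.1)]:
    data = rng.random(n)                                     # toy values in [0, 1]
    errors = [abs(private_mean(data, eps) - data.mean()) for _ in range(200)]
    print(f"n={n:>6}, epsilon={eps:<4}: typical error from privacy noise = {np.mean(errors):.4f}")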
Reducing costs is exactly the prime argument behind the method developed by the research group, he adds:
“The crux is how much noise you must add to achieve a given level of privacy, and this is where our smooth mechanism offers an improvement over existing methods. We manage to add less noise and do so with fewer computational resources. In short, we reduce the costs associated with providing privacy."
Huge interest from industry
Machine Learning involves large datasets. For instance, in many healthcare disciplines a computer can find patterns that human experts cannot see. This all starts with training the computer on a dataset with real patient cases. Such training sets must be protected.
“Many disciplines depend increasingly on Machine Learning. Further, we see Machine Learning spreading beyond professionals like medical doctors to various private applications. These developments open a wealth of new opportunities, but also increase the need for protecting the privacy of the participants who provided the original data,” explains Joel Daniel Andersson, noting that interest in the group’s new software is far from just academic:
“Besides the healthcare sector plus Google and other large tech companies, businesses such as consultancies, auditing firms and law firms need to be able to protect the privacy of their clients and participants in surveys.”
Public regulation is called for
The field is known as differential privacy. The term is derived from the fact that the privacy guarantee is for datasets differing in a single data point: output based on two datasets differing only in one data point will look similar. This makes it impossible for the analyst to identify the single data point.
The research group advocates for public bodies to take a larger interest in the field.
“Since better privacy protection comes with a higher price tag due to the loss of utility, it easily becomes a race to the bottom for market actors. Regulation should be in place, stating that a given sensitive application needs a certain minimum level of privacy. This is the real beauty of differential privacy. You can pick the level of privacy you need, and the framework will tell you exactly how much noise you will need to achieve that level,” says Joel Daniel Andersson. He hopes that differential privacy may serve to facilitate the use of Machine Learning:
“If we again take medical surveys as an example, they require patients giving consent to participate. For various reasons, you will always have some patients refusing – or just forgetting – to give consent, leading to a lower value of the dataset. But since it is possible to provide a strong probabilistic guarantee that the privacy of participants will not be violated, it could be morally defensible to not require consent and achieve 100 % participation to the benefit of the medical research. If the increase in participation is large enough, the loss in utility from providing privacy could be more than offset by the increased utility from the additional data. As such, differential privacy could become a win-win for society.”
The scientific article presenting the new method “A Smooth Binary Mechanism for Efficient Private Continual Observation” can be found here: [link]
ARTICLE TITLE
A Smooth Binary Mechanism for Efficient Private Continual Observation