The next evolution of AI begins with ours
Peer-Reviewed Publication
In a sense, each of us begins life ready for action. Many animals perform amazing feats soon after they’re born. Spiders spin webs. Whales swim. But where do these innate abilities come from? Obviously, the brain plays a key role as it contains the trillions of neural connections needed to control complex behaviors. However, the genome has space for only a small fraction of that information. This paradox has stumped scientists for decades. Now, Cold Spring Harbor Laboratory (CSHL) Professors Anthony Zador and Alexei Koulakov have devised a potential solution using artificial intelligence.
When Zador first encounters this problem, he puts a new spin on it. “What if the genome’s limited capacity is the very thing that makes us so smart?” he wonders. “What if it’s a feature, not a bug?” In other words, maybe we can act intelligently and learn quickly because the genome’s limits force us to adapt. This is a big, bold idea, and a tough one to demonstrate. After all, we can’t stretch lab experiments across billions of years of evolution. That’s where the genomic bottleneck algorithm comes in.
In AI, generations don’t span decades. New models are born with the push of a button. Zador, Koulakov, and CSHL postdocs Divyansha Lachi and Sergey Shuvaev set out to develop a computer algorithm that folds heaps of data into a neat package—much like our genome might compress the information needed to form functional brain circuits. They then test this algorithm against AI networks that undergo multiple training rounds. Amazingly, they find the new, untrained algorithm performs tasks like image recognition almost as effectively as state-of-the-art AI. Their algorithm even holds its own in video games like Space Invaders. It’s as if it innately understands how to play.
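The press release does not spell out the algorithm, but the underlying idea, a compact "genome" that regenerates the connection weights of a much larger network, can be sketched in a few lines. The snippet below is a rough illustration only, assuming PyTorch; the layer sizes, the binary index codes, and the small weight-generating network are hypothetical choices, not the architecture reported in the PNAS paper.

```python
# Rough sketch (hypothetical, not the authors' exact method): a tiny "genome"
# network generates the weights of a much larger layer from binary neuron
# index codes, so only the genome's parameters need to be stored.
import torch
import torch.nn as nn

N_PRE, N_POST, CODE = 512, 512, 10   # large layer; 10-bit neuron index codes

def binary_code(n, bits):
    """Encode neuron indices 0..n-1 as an (n, bits) tensor of 0s and 1s."""
    idx = torch.arange(n).unsqueeze(1)
    return ((idx >> torch.arange(bits)) & 1).float()

# The "genome": a small MLP mapping (pre-code, post-code) -> one synaptic weight.
genome = nn.Sequential(nn.Linear(2 * CODE, 64), nn.ReLU(), nn.Linear(64, 1))

def unfold_layer():
    """Expand the genome into the full N_PRE x N_POST weight matrix."""
    pre = binary_code(N_PRE, CODE)                         # (N_PRE, CODE)
    post = binary_code(N_POST, CODE)                       # (N_POST, CODE)
    pairs = torch.cat([pre.repeat_interleave(N_POST, 0),   # every (pre, post) pair
                       post.repeat(N_PRE, 1)], dim=1)
    return genome(pairs).view(N_PRE, N_POST)

W = unfold_layer()
full = N_PRE * N_POST                                      # ~262k weights, stored...
small = sum(p.numel() for p in genome.parameters())        # ...in ~1.4k parameters
print(f"compression ~{full / small:.0f}x")
```

In a setup like this, it is the small generator that gets trained, so the large network can already perform reasonably well the moment it is unfolded, which mirrors the "innate" behavior described above.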
Does this mean AI will soon replicate our natural abilities? “We haven’t reached that level,” says Koulakov. “The brain’s cortical architecture can fit about 280 terabytes of information—32 years of high-definition video. Our genomes accommodate about one hour. This implies a 400,000-fold compression that technology cannot yet match.”
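A quick back-of-envelope check, using only the figures quoted above, shows the scale of that gap; the precise 400,000-fold number presumably comes from the paper's own accounting, but the order of magnitude follows directly from the video comparison.

```python
# Scale check using only the figures quoted by Koulakov: ~32 years of HD video
# for the cortical wiring versus about one hour for the genome.
hours_cortex = 32 * 365.25 * 24     # roughly 280,000 hours
hours_genome = 1
print(f"~{hours_cortex / hours_genome:,.0f}-fold gap")  # same order of magnitude
                                                        # as the quoted 400,000x
```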
Nevertheless, the algorithm allows for compression levels thus far unseen in AI. That feature could have impressive uses in tech. Shuvaev, the study’s lead author, explains: “For example, if you wanted to run a large language model on a cell phone, one way [the algorithm] could be used is to unfold your model layer by layer on the hardware.”
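Shuvaev's phone example is about memory: rather than holding every layer's full weights at once, a compressed model could regenerate each layer on demand and discard it after use. A minimal sketch of that idea, under the same hypothetical setup as the earlier snippet, might look like this.

```python
# Hypothetical sketch of layer-by-layer unfolding on memory-limited hardware:
# keep only each layer's compact generator resident, expand its full weights
# just before use, and free them immediately afterwards.
import torch

def run_compressed_model(x, layer_genomes, unfold):
    """layer_genomes: compact per-layer generators; unfold(g) -> full weight matrix."""
    for g in layer_genomes:
        W = unfold(g)           # materialize one layer's weights on demand
        x = torch.relu(x @ W)   # apply the layer
        del W                   # peak memory stays near a single layer
    return x
```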
Such applications could mean more evolved AI with faster runtimes. And to think, it only took 3.5 billion years of evolution to get here.
Journal
Proceedings of the National Academy of Sciences
AI Safety Institute launched as Korea’s AI Research Hub
Located at the Pangyo Global R&D Center, the AISI begins full operations, led by a director and organized into three sections
National Research Council of Science & Technology
The Ministry of Science and ICT (MSIT), headed by Minister Yoo Sang-im, held the launch ceremony for the "AI Safety Institute" (AISI) on Wednesday, November 27, at the Pangyo Global R&D Center.
At the "AI Seoul Summit"last May, leaders from 10 countries recognized safety as a key component of responsible AI innovation and emphasized the importance of establishing AI safety institutes and fostering global collaboration for safe AI. President Yoon Suk Yeol also expressed his commitment, stating, "We will work towards establishing an AI safety institute in Korea and actively participate in a global network to enhance AI safety." After thorough preparations regarding the institute's organization, budget, personnel, and functions, the AI Safety Institute has now been officially launched.
The AISI is a dedicated organization established within ETRI to systematically and professionally address various AI risks, including technological limitations, human misuse, and potential loss of control over AI. As Korea's hub for AI safety research, the AISI will facilitate collaborative research and information sharing among industry, academia, and research institutes in the field of AI safety. Furthermore, as a member of the "International Network of AI Safety Institutes" (comprising 10 countries, launched on November 21), the AISI is committed to taking a responsible role in strengthening global collaboration for safe AI. Through these efforts, the AISI aims to develop competitive technologies, nurture skilled professionals in the AI safety sector, and develop and refine AI safety policies grounded in scientific research data.
The launch ceremony brought together key government officials, including Yoo Sang-im, Minister of Science and ICT; Yeom Jae-ho, Vice Chair of the National AI Committee; and Lee Kyung-woo, Presidential Secretary for AI and Digital. Over 40 prominent figures from the AI industry, academia, and research sectors also attended, such as Bae Kyung-hoon, Chief of LG AI Research; Oh Hye-yeon, Director of the KAIST AI Institute; Lee Eun-ju, Director of the Center for Trustworthy AI at Seoul National University; and Bang Seung-chan, President of the Electronics and Telecommunications Research Institute (ETRI).
At the event, Professor Yoshua Bengio, a globally renowned AI scholar and Global Advisor to the National AI Committee, congratulated the Korean government on establishing the AI Safety Institute in alignment with the Seoul Declaration. He emphasized the Institute's critical roles, including (1) researching and advancing risk assessment methodologies through industry collaboration, (2) supporting the development of AI safety requirements, and (3) fostering international cooperation to harmonize global AI safety standards. Additionally, the directors of AI safety institutes from the United States, the United Kingdom, and Japan delivered congratulatory speeches, stating, "We have high expectations for Korea’s AI Safety Institute" and emphasizing "the importance of global collaboration in AI safety."
Kim Myung-joo, the inaugural Director of the AISI, outlined the Institute's vision and operational plans during the ceremony. In his presentation, he stated, "The AISI will focus on evaluating potential risks that may arise from AI utilization, developing and disseminating policies and technologies to prevent and minimize these risks, and strengthening collaboration both domestically and internationally." Director Kim emphasized, "The AISI is not a regulatory body but a collaborative organization dedicated to supporting Korean AI companies by reducing risk factors that hinder their global competitiveness."
At the signing ceremony for the "Korea AI Safety Consortium" (hereinafter referred to as the "Consortium"), 24 leading Korean organizations from industry, academia, and research sectors signed a Memorandum of Understanding (MOU) to promote mutual cooperation in AI safety policy research, evaluation, and R&D. The AISI and Consortium member organizations will jointly focus on key initiatives, including the research, development, and validation of an AI safety framework (risk identification, evaluation, and mitigation), policy research to align with international AI safety norms, and technological collaboration on AI safety. Moving forward, they plan to refine the Consortium's detailed research topics and operational strategies. The member organizations also presented their expertise in AI safety research and outlined their plans for Consortium activities, affirming their strong commitment to active collaboration with the AISI.
< Participating Organizations in the "AI Safety Consortium" >
Industry | Naver (Future AI Center), KT (Responsible AI Center), Kakao (AI Safety), LG AI Research, SKT (AI Governance Task Force), Samsung Electronics, Konan Technology, Wrtn Technologies, ESTsoft, 42Maru, Crowdworks AI, Twelve Labs, Liner
Academia | Seoul National University (Center for Trustworthy AI), KAIST (AI Fairness Research Center), Korea University (School of Cybersecurity), Sungkyunkwan University (AI Reliability Research Center), Soongsil University (AI Safety Center), Yonsei University (AI Impact Research Center)
Research Institutes | Korea AISI, TTA (Center for Trustworthy AI), NIA (Department of AI Policy), KISDI (Department of Digital Society Strategy Research), IITP (AI·Digital Convergence Division), SPRi (AI Policy Research Lab)
Minister Yoo Sang-im of the MSIT emphasized, "AI safety is a prerequisite for sustainable AI development and one of the greatest challenges that all of us in the AI field must tackle together." He noted, "In the short span of just one year since the AI Safety Summit in November 2023 and the AI Seoul Summit in May 2024, major countries such as the United States, the United Kingdom, Japan, Singapore, and Canada have established AI safety institutes, creating an unprecedentedly swift and systematic framework for international AI safety cooperation." Minister Yoo further emphasized, "By bringing together the research capabilities of industry, academia, and research institutes through the AISI, we will rapidly secure the technological and policy expertise needed to take a leading role in the global AI safety alliance. We will actively support the AISI's growth into a research hub representing the Asia-Pacific region in AI safety."
###
About Electronics and Telecommunications Research Institute (ETRI)
ETRI is a non-profit, government-funded research institute. Since its foundation in 1976, ETRI has worked as a global ICT research institute to drive Korea's remarkable growth in the ICT industry, helping establish Korea as one of the world's leading ICT nations by continually developing world-first, world-class technologies.
New guidance for ensuring AI safety in clinical care published in JAMA by UTHealth Houston, Baylor College of Medicine researchers
University of Texas Health Science Center at Houston
As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, PhD, professor with McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine.
The guidance was published today, Nov. 27, 2024, in the Journal of the American Medical Association.
“We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings,” Sittig said. “It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked.”
Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.
“Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes,” Singh said. “All health care delivery organizations should check out these recommendations and start proactively preparing for AI now.”
Some of the recommended actions for health care organizations are listed below:
· Review guidance published in high-quality, peer-reviewed journals and conduct rigorous real-world testing to confirm AI’s safety and effectiveness.
· Establish dedicated committees with multidisciplinary experts to oversee AI system deployment and ensure adherence to safety protocols. Committee members should meet regularly to review requests for new AI applications, consider their safety and effectiveness before implementing them, and develop processes to monitor their performance.
· Formally train clinicians on AI usage and risk, but also be transparent with patients when AI is part of their care decisions. This transparency is key to building trust and confidence in AI’s role in health care.
· Maintain a detailed inventory of AI systems and regularly evaluate them to identify and mitigate any risks.
· Develop procedures to turn off AI systems should they malfunction, ensuring smooth transitions back to manual processes (a brief sketch of these last two points follows this list).
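The last two recommendations, an up-to-date inventory and a way to switch a system off, lend themselves to a concrete illustration. The sketch below is hypothetical and not part of the JAMA guidance; the registry fields, class names, and the sepsis-alert example are invented for illustration.

```python
# Hypothetical illustration (not from the JAMA article): a simple registry of
# deployed AI systems with a recorded manual fallback and an explicit off switch.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    vendor: str
    clinical_use: str
    last_safety_review: str   # e.g. "2024-11-01"
    manual_fallback: str      # process to revert to if the system is disabled
    enabled: bool = True

class AIInventory:
    def __init__(self) -> None:
        self._systems: dict[str, AISystem] = {}

    def register(self, system: AISystem) -> None:
        self._systems[system.name] = system

    def disable(self, name: str, reason: str) -> str:
        """Turn a malfunctioning system off and surface its manual fallback."""
        s = self._systems[name]
        s.enabled = False
        return f"{name} disabled ({reason}); revert to: {s.manual_fallback}"

# Example: register a made-up sepsis-alert model, then switch it off.
inventory = AIInventory()
inventory.register(AISystem("sepsis-alert", "ExampleVendor", "early sepsis warning",
                            "2024-11-01", "nurse-driven screening protocol"))
print(inventory.disable("sepsis-alert", "alert rate anomaly"))
```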
“Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients,” Sittig said. “By working together, we can build trust and promote the safe adoption of AI in health care.”
Also providing input to the article were Robert Murphy, MD, associate professor and associate dean, and Debora Simmons, PhD, RN, assistant professor, both from the Department of Clinical and Health Informatics at McWilliams School of Biomedical Informatics; and Trisha Flanagan, RN, MSN.
Journal
JAMA
Method of Research
Commentary/editorial
Subject of Research
People
Article Title
Recommendations to Ensure Safety of AI in Real-World Clinical Care
Article Publication Date
27-Nov-2024
Mount Sinai opens the Hamilton and Amabel James Center for Artificial Intelligence and Human Health to transform health care by spearheading the AI revolution
New state-of-the-art facility is among the first of its kind at a U.S. medical school
See accompanying video here: https://youtu.be/o-opCV6oe3o
New York, NY [November 25, 2024]—Today, the Mount Sinai Health System, one of New York City’s largest academic medical systems, announced the opening of the Hamilton and Amabel James Center for Artificial Intelligence and Human Health, which is dedicated to enhancing health care delivery through the research, development, and application of innovative artificial intelligence (AI) tools and technologies.
The state-of-the-art research center solidifies Mount Sinai Health System’s leadership in delivering patient care through groundbreaking innovation and technology. As one example, Mount Sinai was among the first academic medical centers in the United States to build and operate a supercomputer, named "Minerva," which went into service in 2013.
The interdisciplinary center will combine artificial intelligence with data science and genomics in a location at the center of the campus of The Mount Sinai Hospital in Manhattan. The facility will initially house approximately 40 Principal Investigators, alongside 250 graduate students, postdoctoral fellows, computer scientists, and support staff.
Supported by a generous gift from Hamilton Evans "Tony" James, Executive Vice Chairman of the Manhattan-based investment firm Blackstone, and his wife, Amabel, the 12-story, 65,000-square-foot facility will be housed in a repurposed Mount Sinai building at 3 East 101st Street.
“By integrating AI technology across genomics, imaging, pathology, electronic health records, and beyond, Mount Sinai is revolutionizing doctors’ capacity to diagnose and treat patients, reshaping the future of health care. Mount Sinai has been at the forefront of AI research and development in health care, and now we stand as one of the first medical schools to establish a dedicated AI research center,” says Eric J. Nestler, MD, PhD, Director of the Friedman Brain Institute, Dean for Academic and Scientific Affairs and Nash Family Professor in the Nash Family Department of Neuroscience at Icahn School of Medicine at Mount Sinai and Chief Scientific Officer at Mount Sinai Health System. “As AI technology is evolving rapidly, this moment is critical for maintaining leadership in digital health. The Hamilton and Amabel James Center for Artificial Intelligence and Human Health will cultivate an optimal environment for researchers to deepen their understanding, diagnosis, and treatment of human diseases—including the most debilitating—and to advance overall health and well-being.”
“If we want to use artificial intelligence for the greater good and make significant progress in health care, investing in AI research and development within academic institutions is essential,” says Dennis S. Charney, MD, Anne and Joel Ehrenkranz Dean at Icahn Mount Sinai and President for Academic Affairs of the Mount Sinai Health System. “While large tech companies possess substantial funding and resources to access high-performance equipment, they lack access to a health care system, limiting their progress in the field. This new AI research center at Icahn Mount Sinai will yield transformative discoveries in human health by integrating research and data, fostering collaboration across multiple programs under one roof.”
To construct the new AI center, Mount Sinai modernized an existing building to meet contemporary standards, including updating the facade to align with the aesthetic of other campus buildings. Within the 12 floors of the center, eight will be dedicated to Mount Sinai’s AI initiatives. These core facilities include:
- The Windreich Department of AI and Human Health, which focuses on creating an “AI Fabric” that will integrate machine learning and AI-driven decision-making throughout the Health System’s eight hospitals.
- The Hasso Plattner Institute for Digital Health at Mount Sinai (HPI•MS), formed in 2019 through a collaboration with the Hasso Plattner Institute for Digital Engineering in Germany, which aims to enhance capabilities in data science, biomedical and digital engineering, machine learning, AI, and wearable technology. In 2024, the Hasso Plattner Foundation renewed its generous support of HPI•MS for the next five years.
- The Institute for Genomic Health and Division of Medical Genetics, which leads the effort to harness the power of genomic discovery to develop new ways to prevent and treat diseases, including cancers, heart problems, and genetic disorders.
- The Biomedical Engineering and Imaging Institute, focused on the use of multimodality imaging for brain, heart, and cancer research, along with research in nanomedicine for precision imaging and drug delivery.
- The Institute for Personalized Medicine, which launched the human genome sequencing research project called the Mount Sinai Million Health Discoveries Program, which aims to enroll 1 million racially and ethnically diverse patients, advance precision medicine research, and improve patient care.
-####-
About Mount Sinai's Windreich Department of AI and Human Health
Mount Sinai's Windreich Department of AI and Human Health, the first such department in a U.S. medical school, is committed to advancing and optimizing artificial intelligence and human health. The department is dedicated to harnessing the power of leading-edge tools to revolutionize scientific research and discovery. This commitment is realized through the creation of an "intelligent fabric," seamlessly integrating machine learning and AI-driven decision-making throughout Mount Sinai’s entire health system. It includes the distinguished Icahn School of Medicine at Mount Sinai, serving as a central hub for innovative learning. This integration facilitates robust partnerships spanning all research institutes, academic departments, hospitals, and outpatient centers. Through this strategic approach, the Department is accelerating progress in disease prevention, treating severe illnesses, and enhancing the overall quality of life for all.
In 2024, the Department's innovative NutriScan AI application, designed to facilitate faster identification and treatment of malnutrition in hospitalized patients, earned Mount Sinai Health System the prestigious Hearst Health Prize. This machine learning tool improves malnutrition diagnosis rates and resource utilization, demonstrating the impactful application of AI in health care. For more information, visit ai.mssm.edu.
About the Icahn School of Medicine at Mount Sinai
The Icahn School of Medicine at Mount Sinai is internationally renowned for its outstanding research, educational, and clinical care programs. It is the sole academic partner for the eight member hospitals* of the Mount Sinai Health System, one of the largest academic health systems in the United States, providing care to a large and diverse patient population.
Ranked 11th nationwide in National Institutes of Health (NIH) funding and in the 99th percentile in research dollars per investigator according to the Association of American Medical Colleges, Icahn Mount Sinai has a talented, productive, and successful faculty. More than 4,560 full-time scientists, educators, and clinicians work within and across 45 academic departments and 38 multidisciplinary institutes, a structure that facilitates tremendous collaboration and synergy. Our emphasis on translational research and therapeutics is evident in such diverse areas as genomics/big data, virology, neuroscience, cardiology, geriatrics, as well as gastrointestinal and liver diseases.
Icahn Mount Sinai offers highly competitive MD, PhD, and Master’s degree programs, with current enrollment of more than 1,200 students. It has the largest graduate medical education program in the country, with more than 2,685 clinical residents and fellows training throughout the Health System. In addition, more than 560 postdoctoral research fellows are in training within the Health System.
A culture of innovation and discovery permeates every Icahn Mount Sinai program. Mount Sinai’s technology transfer office, one of the largest in the country, partners with faculty and trainees to pursue optimal commercialization of intellectual property to ensure that Mount Sinai discoveries and innovations translate into healthcare products and services that benefit the public.
Icahn Mount Sinai’s commitment to breakthrough science and clinical care is enhanced by academic affiliations that supplement and complement the School’s programs.
Through the Mount Sinai Innovation Partners (MSIP), the Health System facilitates the real-world application and commercialization of medical breakthroughs made at Mount Sinai. Additionally, MSIP develops research partnerships with industry leaders such as Merck & Co., AstraZeneca, Novo Nordisk, and others.
The Icahn School of Medicine at Mount Sinai is located in New York City on the border between the Upper East Side and East Harlem, and classroom teaching takes place on a campus facing Central Park. Icahn Mount Sinai’s location offers many opportunities to interact with and care for diverse communities. Learning extends well beyond the borders of our physical campus, to the eight hospitals of the Mount Sinai Health System, our academic affiliates, and globally.
-------------------------------------------------------
* Mount Sinai Health System member hospitals: The Mount Sinai Hospital; Mount Sinai Beth Israel; Mount Sinai Brooklyn; Mount Sinai Morningside; Mount Sinai Queens; Mount Sinai South Nassau; Mount Sinai West; and New York Eye and Ear Infirmary of Mount Sinai.
COI Statement
Dr Singh reported receiving grants from the Houston Veterans Administration (VA) Health Services Research and Development (HSR&D) Center for Innovations in Quality, Effectiveness, and Safety (CIN13–413), the VA National Center for Patient Safety, and the Agency for Healthcare Research and Quality (R01HS028595 and R18HS029347) and receiving personal fees from Informatics-Review LLC and Leapfrog Group. Dr Sittig reported receiving grants from the National Library of Medicine and the VA HSR&D Center and receiving personal fees from Informatics-Review LLC.