Tuesday, October 17, 2023

 

Thermosensation is critical for the survival of animals, but the mechanisms by which this is modulated by nutritional status remain unclear


Behavioral and live brain imaging studies reveal why food-sated fruit flies prefer to stay at relatively higher temperatures compared to hungry flies

Peer-Reviewed Publication

PLOS


IMAGE: Different internal feeding states alter the α´β´ mushroom body’s (green, upper panel) neuronal activity in the fruit fly brain, which contributes to moderate and strong hot-avoidance behaviors during satiety and hunger, respectively. The bottom panel shows the neuropil of a whole fly brain (magenta; brain stained with anti-Discs large antibody) in which the α´β´ mushroom body neurons are genetically labeled with green fluorescent protein (green).

CREDIT: Meng-Hsuan Chiang & Chia-Lin Wu, Chiang M-H et al., 2023, PLOS Biology, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)




Thermosensation is critical for the survival of animals, but the mechanisms by which this is modulated by nutritional status remain unclear; here, behavioral and live brain imaging studies reveal why food-sated fruit flies prefer to stay at relatively higher temperatures compared to hungry flies.

#####

In your coverage, please use this URL to provide access to the freely available paper in PLOS Biology: http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002332

Article Title: Independent insulin signaling modulators govern hot avoidance under different feeding states

Author Countries: Taiwan

Funding: This work was supported by grants from the National Science and Technology Council (112-2311-B-182-002-MY3 and 109-2326-B-182-001-MY3) to C-LW, Chang Gung Memorial Hospital (CMRPD1M0301-3, CMRPD1M0761-3, and BMRPC75) to C-LW. The funders had no role in the study design, data collection and analysis, decision to publish, or manuscript preparation.

 

Smart brain-wave cap recognizes stroke before the patient reaches the hospital


Amsterdam UMC designs brain-wave cap that can send patients to the correct hospital directly from the ambulance


Peer-Reviewed Publication

AMSTERDAM UNIVERSITY MEDICAL CENTERS


VIDEO: A special brain-wave cap can diagnose stroke in the ambulance, allowing the patient to receive appropriate treatment faster. Neurologist Jonathan Coutinho, technical physician Wouter Potters, and professor of radiology Henk Marquering, all from Amsterdam UMC, invented the brain-wave cap, which allows an EEG (brain wave test) to be carried out in the ambulance. The test shows whether there is an ischemic stroke and whether the blocked cerebral blood vessel is large or small. This distinction determines the treatment: in case of a small ischemic stroke, the patient receives a blood thinner; in case of a large ischemic stroke, the blood clot must be removed mechanically in a specialized hospital. The cap can ultimately save lives by routing these patients directly to the right hospital.

CREDIT: Amsterdam UMC




A special brain-wave cap can diagnose stroke in the ambulance, allowing the patient to receive appropriate treatment faster. Jonathan Coutinho, neurologist at Amsterdam UMC, is one of the inventors of the cap: "Our research shows that the brain-wave cap can recognize patients with a large ischemic stroke with great accuracy. This is very good news, because the cap can ultimately save lives by routing these patients directly to the right hospital." The research is published today in Neurology.

Every year, millions of people worldwide suffer an ischemic stroke, the most common type of stroke. An ischemic stroke occurs when a blood clot blocks a blood vessel of the brain, causing a part of the brain to receive no or insufficient blood. Prompt treatment is crucial to prevent permanent disability or death.  

Neurologist Jonathan Coutinho, Technical Physician Wouter Potters and professor of Radiology Henk Marquering, all from Amsterdam UMC, invented the brain-wave cap, which allows an EEG (brain wave test) to be carried out in the ambulance. This brain wave test shows whether there is an ischemic stroke and whether the blocked cerebral blood vessel is large or small. This distinction determines the treatment: in case of a small ischemic stroke, the patient receives a blood thinner, and in case of a large ischemic stroke, the blood clot must be removed mechanically in a specialized hospital. "When it comes to stroke, time is literally brain. The sooner we start the right treatment, the better the outcome. If the diagnosis is already clear in the ambulance, the patient can be routed directly to the right hospital, which saves valuable time," says Coutinho. 

Between 2018 and 2022, the smart brain-wave cap was tested in twelve Dutch ambulances, with data collected from almost 400 patients. The study shows that the brain-wave cap can recognize patients with a large ischemic stroke with great accuracy. "This study shows that the brain-wave cap performs well in an ambulance setting. For example, with the measurements of the cap, we can distinguish between a large or small ischemic stroke," adds Coutinho. 
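The release does not describe how the cap's measurements are classified. Purely for illustration: one EEG signature of cerebral ischemia that is well documented in the literature is an elevated delta-to-alpha power ratio. The sketch below uses that ratio, with an invented cutoff, to show in spirit how a single-channel recording might be turned into a triage decision; the study's actual features, model, and thresholds are not described here and almost certainly differ.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average power of `signal` in the [low, high] Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def delta_alpha_ratio(eeg, fs):
    """Delta (1-4 Hz) to alpha (8-13 Hz) power ratio; elevated values are a
    classic EEG signature of cerebral ischemia."""
    return band_power(eeg, fs, 1, 4) / band_power(eeg, fs, 8, 13)

def flag_large_stroke(eeg, fs, threshold=3.0):
    """Hypothetical triage rule: route to a thrombectomy-capable center when
    the ratio exceeds `threshold` (an invented cutoff, for illustration)."""
    return delta_alpha_ratio(eeg, fs) > threshold
```

In practice a device like this would combine many channels and a trained classifier, but the basic move (spectral features in, routing decision out) is the same.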

In order to develop the brain-wave cap into a product and bring it to the market, TrianecT, an Amsterdam UMC spin-off company, was founded in 2022. In addition, a follow-up study (AI-STROKE) is currently ongoing in which even more measurements are collected in order to develop an algorithm for improved recognition of a large ischemic stroke in the ambulance. The Dutch Heart Foundation has also recognised the importance of this research and has made 4 million euros available for large-scale research into faster treatment of ischemic stroke.

 


 

New specimen collection system enhances assisted reproductive technologies


Peer-Reviewed Publication

TEXAS TECH UNIVERSITY HEALTH SCIENCES CENTER


IMAGE: In their ongoing research, TTUHSC’s Samuel Prien, Ph.D., and Lindsay Penrose, Ph.D., completed a study that provided preliminary results of their newest collection technology system design for a one-step method for healthy sperm selection.

CREDIT: TTUHSC





Considered an experimental procedure during the late 1970s when it made headlines with each birth it produced, in vitro fertilization (IVF) has helped many couples overcome infertility issues for more than four decades. In several nations, IVF is responsible for up to 3% of the babies born. 

The procedure, now a cornerstone of infertility treatments, has expanded to include other assisted reproductive technologies (ARTs). One of those ARTs, known as intracytoplasmic sperm injection (ICSI), involves using a microscopic needle to inject a single sperm into an egg. 

Though viewed as an almost universal means for egg insemination, ICSI does have limitations. It requires expensive equipment and well-trained individuals. And while many other technologies for selecting viable eggs have been developed over the past few years, there remain few options for selecting healthy sperm beyond assessing their movement and morphology (structure). 

In an ongoing effort to improve healthy sperm selection, Samuel Prien, Ph.D., and Lindsay Penrose, Ph.D., from the Department of Obstetrics and Gynecology at the Texas Tech University Health Sciences Center (TTUHSC) School of Medicine have worked for years to develop a specimen collection cup that creates a more favorable environment for sperm.

In 2022, they received a patent (“Method and Apparatus for Collection of Fluid Samples”) to produce a second-generation collection cup that made important improvements to the first such patented device invented at TTUHSC several years prior by Prien and Dustie Johnson, Ph.D. That second-generation device, known as the DISC (Device for Improved Semen Collection), is marketed today by Reproductive Solutions under the brand name ProteX.

In their latest study (“A Simple One-Step System Enhances the Availability of High-Quality Sperm for Assisted Reproductive Procedures”), published Oct. 10 by the Open Journal of Obstetrics and Gynecology, Prien and Penrose provided preliminary results of their latest collection technology system design, which offers a simple, one-step method of sperm selection for ICSI and may also prove useful for conventional IVF and intrauterine insemination (IUI).

The system, a sperm isolation device that uses a barrier mesh between fluids, is known commercially as the NovaSort and is also marketed by Reproductive Solutions. The basket-like device was designed to work in any properly shaped holding vessel, but ideally in tandem with the ProteX collection cup.

“This new design allows us to process the sample and very simply recover mobile sperm,” Prien said. “In a one-step process, we put the little basket in, we wait the appropriate amount of time and then we have mobile sperm which have not been through the rigors of all the other ways we process sperm right now, such as with centrifuges and different kinds of gradients that can damage the DNA. We can isolate the sperm without ever having exposed them to anything that might damage them.”

Prien said the NovaSort is designed to prevent the mixing of seminal fluid and media while allowing the motile (moving) sperm to move out of the seminal fluid and into the media, keeping unwanted debris out. This provides a clean, highly motile population of sperm for use in ARTs.

John Smothers, co-founder and executive vice president of Operations and Technology for RSI, said the new system design helps address two issues clinics face in collecting viable samples: chain of custody and time.

“Any potential chain of custody issue goes away because the sample never leaves the collection cup,” Smothers said. “It also provides efficiency because the clinics can conduct the process in about 15 minutes, whereas before it was taking an hour or more. And they complete the process with much healthier sperm, which increases the chances of having healthy pregnancies.”

Prien, Penrose and TTUHSC have filed patent applications for the new system design, which was demonstrated live at the American Society for Reproductive Medicine Scientific Congress and Expo Oct. 14-18 in New Orleans. Penrose said she’s most excited to display the overall efficiency the new system provides to practitioners and the hope it provides patients.

“We're going to be able to help our lab colleagues be more efficient in their work, and we're going to be able to help more patients have healthy pregnancies,” Penrose added. “That’s always our goal.”

And while the new system is exciting for RSI from a commercial standpoint, Smothers said it also demonstrates how the successful collaboration between TTUHSC and RSI is helping patients. 

“We wouldn't be at this point if we didn't have partners like TTUHSC that would collaborate and work with us,” Smothers added. “It makes a huge difference for us, TTUHSC and patients.”

###

 

Q&A: Researchers aim to improve accessibility with augmented reality


Reports and Proceedings

UNIVERSITY OF WASHINGTON


IMAGE: RASSAR is an app that scans a home, highlights accessibility and safety issues, and lets users click on them to find out more.

CREDIT: Su et al./ASSETS ‘23




Big Tech’s race into augmented reality (AR) grows more competitive by the day. This month, Meta released the latest iteration of its headset, the Quest 3. Early next year, Apple plans to drop its first headset, the Vision Pro. The announcements for each platform emphasize games and entertainment that merge the virtual and physical worlds: a digital board game imposed on a coffee table, a movie screen projected above airplane seats.

Some researchers, though, are more curious about other uses for AR. The University of Washington’s Makeability Lab is applying these budding technologies to assist people with disabilities. This month, researchers from the lab will introduce multiple projects that deploy AR — through headsets and phone apps — to make the world more accessible.

Researchers from the lab will first present RASSAR, an app that can scan homes to highlight accessibility and safety issues, on Oct. 23 at the ASSETS ‘23 conference in New York.

Shortly after, on Oct. 30, other teams in the lab will present early research at the UIST ‘23 conference in San Francisco. One project lets headsets better understand natural language, and the other aims to make tennis and other ball sports accessible for low-vision users.

UW News spoke with the three studies’ lead authors, Xia Su and Jae (Jaewook) Lee, both UW doctoral students in the Paul G. Allen School of Computer Science & Engineering, about their work and the future of AR for accessibility.

What is AR and how is it typically used right now?

Jae Lee: I think one commonly accepted answer is that you use a wearable headset or a phone to superimpose virtual objects in a physical environment. A lot of people probably know AR from “Pokémon Go,” where you're superimposing these Pokémon into the physical world. Now Apple and Meta are introducing “mixed reality” or passthrough AR, which further blends the physical and virtual worlds through cameras.

Xia Su: Something I have also been observing lately is people are trying to expand the definition beyond goggles and phone screens. There could be AR audio, which is manipulating your hearing, or devices trying to manipulate your smell or touch.

A lot of people associate AR with virtual reality, and it gets wrapped up in discussion of the metaverse and gaming. How is it being applied for accessibility?

JL: AR as a concept has been around for several decades. But in Jon Froehlich’s lab, we’re combining AR with accessibility research. A headset or a phone can be capable of knowing how many people are in front of us, for example. For people who are blind or low vision, that information could be critical to how they perceive the world.

XS: There are really two different routes for AR accessibility research. The more prevalent one is trying to make AR devices more accessible to people. The other, less common approach is asking: How can we use AR or VR as tools to improve the accessibility of the real world? That’s what we're focused on.

JL: As AR glasses become less bulky and cheaper, and as AI and computer vision advance, this research will become increasingly important. But widespread AR, even for accessibility, brings up a lot of questions. How do you deal with bystander privacy? We, as a society, understand that vision technology can be beneficial to blind and low-vision people. But we also might not want to include facial recognition technology in apps for privacy reasons, even if that helps someone recognize their friends.

Let’s talk about the papers you have coming out. First, can you explain your app RASSAR?

XS: It's an app that people can use to scan their indoor spaces and help them detect possible accessibility safety issues in homes. It’s possible because some iPhones now have lidar (light detection and ranging) scanners that tell the depth of a space, so we can reconstruct the space in 3D. We combined this with computer vision models to highlight ways to improve safety and accessibility. To use it, someone — perhaps a parent who’s childproofing a home, or a caregiver — scans a room with their smartphone and RASSAR spots accessibility problems. For example, if a desk is too high, a red button will pop up on the desk. If the user clicks the button, there will be more information about why that desk’s height is an accessibility issue and possible fixes.

JL: Ten years ago, you would have needed to go through 60 pages of PDFs to fully check a house for accessibility. We boiled that information down into an app.
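The rule-checking half of that pipeline can be pictured in a few lines. In this toy sketch, the `ScannedObject` fields, the categories, and the height ranges are all invented for illustration (real guidelines vary by standard, and RASSAR's actual rules are not published in this interview); the point is just the pattern of comparing scan-recovered geometry against a table of guidelines.

```python
from dataclasses import dataclass

@dataclass
class ScannedObject:
    category: str
    height_m: float      # top-surface height recovered from the lidar scan
    position: tuple      # (x, y, z) in room coordinates, for placing a marker

# Acceptable top-surface height ranges in metres -- illustrative numbers
# loosely inspired by common accessibility guidance, not RASSAR's rules.
HEIGHT_RANGES = {
    "desk": (0.71, 0.86),
    "light_switch": (0.38, 1.22),
    "counter": (0.71, 0.86),
}

def find_issues(objects):
    """Return (object, message) pairs for anything outside its allowed range."""
    issues = []
    for obj in objects:
        lo, hi = HEIGHT_RANGES.get(obj.category, (0.0, float("inf")))
        if not lo <= obj.height_m <= hi:
            issues.append((obj, f"{obj.category} at {obj.height_m:.2f} m is "
                                f"outside the {lo:.2f}-{hi:.2f} m range"))
    return issues
```

Each flagged object's `position` is where an app like RASSAR would drop its red button in the AR view.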

And this is something that anyone will be able to download to their phones and use?

XS: That’s the eventual goal. We already have a demo. This version relies on lidar, which is only on certain iPhone models right now. But if you have such a device, it’s very straightforward.

JL: This is an example of these advancements in hardware and software that let us create apps quickly. Apple announced RoomPlan, which creates a 3D floor plan of a room, when they added the lidar sensor. We’re using that in RASSAR to understand the general layout. Being able to build on that lets us come up with a prototype very quickly.

So RASSAR is nearly deployable now. The other areas of research you’re presenting are earlier in their development. Can you tell me about GazePointAR?

JL:  It’s an app deployed on an AR headset to enable people to speak more naturally with voice assistants like Siri or Alexa. There are all these pronouns we use when we speak that are difficult for computers to understand without visual context. I can ask “Where'd you buy it from?” But what is “it”? A voice assistant has no idea what I’m talking about. With GazePointAR, the goggles are looking at the environment around the user and the app is tracking the user’s gaze and hand movements. The model then tries to make sense of all these inputs — the word, the hand movements, the user’s gaze. Then, using a large language model, GPT, it attempts to answer the question.
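The core move here, stripped of the headset, is rewriting an ambiguous query so the pronoun is replaced by whatever the gaze tracker reports before the text reaches the language model. The sketch below is a minimal stand-in with invented function names; the real system fuses gaze, hand tracking, and vision, which this ignores.

```python
import re

# Pronouns that need visual context to resolve.
PRONOUNS = re.compile(r"\b(it|this|that|these|those)\b", re.IGNORECASE)

def ground_query(query, gazed_object):
    """Rewrite the first ambiguous pronoun as the gazed-at object's label."""
    return PRONOUNS.sub(f"the {gazed_object}", query, count=1)

def answer(query, gazed_object, llm):
    """`llm` stands in for a call to a large language model such as GPT."""
    return llm(ground_query(query, gazed_object))
```

So "Where'd you buy it from?" while looking at a mug becomes a self-contained question the model can actually answer.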

How does it sense what the motions are?

JL: We’re using a headset called HoloLens 2 developed by Microsoft. It has a gaze tracker that’s watching your eyes and trying to guess what you’re looking at. It has hand tracking capability as well. In a paper that we submitted building on this, we noticed that we have a lot of problems with this. For example, people don't just use one pronoun at a time — we use multiple. We’ll say, “What's more expensive, this or this?” To answer that, we need information over time. But, again, you can run into privacy issues if you want to track someone's gaze or someone's visual field of view over time: What information are you storing and where is it being stored? As technology improves, we certainly need to watch out for these privacy concerns, especially in computer vision.

This is difficult even for humans, right? I can ask, “Can you explain that?” while pointing at several equations on a whiteboard and you won’t know which I’m referring to. What applications do you see for this?

JL: Being able to use natural language would be major. But if you expand this to accessibility, there’s the potential for a blind or low-vision person to use this to describe what’s around them. The question “Is anything dangerous in front of me?” is also ambiguous for a voice assistant. But with GazePointAR, ideally, the system could say, “There are possibly dangerous objects, such as knives and scissors.” Or low-vision people might make out a shape, point at it, then ask the system what “it” is more specifically.

And finally you’re working on a system called ARTennis. What is it and what prompted this research?

JL: This is going even more into the future than GazePointAR. ARTennis is a prototype that uses an AR headset to make tennis balls more salient for low-vision players. The ball in play is marked by a red dot and has a crosshair of green arrows around it. Professor Jon Froehlich has a family member who wants to play sports with his children but doesn't have the residual vision necessary to do so. We thought if it works for tennis, it's going to work for a lot of other sports, since tennis has a small ball that shrinks as it gets further away. If we can track a tennis ball in real time, we can do the same with a bigger, slower basketball.

One of the co-authors on the paper, who is low vision himself and plays a lot of squash, wanted to try this application and give us feedback. We did a lot of brainstorming sessions with him, and he tested the system. The red dot and green crosshairs are the design that he came up with to improve the sense of depth perception.

What’s keeping this from being something people can use right away?

JL: Well, like GazePointAR, it’s relying on a HoloLens 2 headset that’s $3,500. So that’s a different accessibility issue. It’s also running at roughly 25 frames per second and for humans to perceive in real time it needs to be about 30 frames per second. Sometimes we can’t capture the speed of the tennis ball. We're going to expand the paper and include basketball to see if there are different designs people prefer for different sports. The technology will certainly get faster. So our question is: What will the best design be for the people using it?

For more information, contact Jon Froehlich at jonf@cs.washington.edu, Lee at jaewook4@cs.washington.edu and Su at xiasu@cs.washington.edu.

 

New technique helps robots pack objects into a tight space


Researchers coaxed a family of generative AI models to work together to solve multistep robot manipulation problems


Reports and Proceedings

MASSACHUSETTS INSTITUTE OF TECHNOLOGY


IMAGE: MIT researchers are using generative AI models to help robots more efficiently solve complex object manipulation problems, such as packing a box with different objects.

CREDIT: Courtesy of Zhutian Yang et al.



CAMBRIDGE, Mass. -- Anyone who has ever tried to pack a family-sized amount of luggage into a sedan-sized trunk knows this is a hard problem. Robots struggle with dense packing tasks, too. 

For the robot, solving the packing problem involves satisfying many constraints, such as stacking luggage so suitcases don’t topple out of the trunk, heavy objects aren’t placed on top of lighter ones, and collisions between the robotic arm and the car’s bumper are avoided. 

Some traditional methods tackle this problem sequentially, guessing a partial solution that meets one constraint at a time and then checking to see if any other constraints were violated. With a long sequence of actions to take, and a pile of luggage to pack, this process can be impractically time-consuming.
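The sequential approach described above can be sketched as a depth-first search that places one object at a time and backtracks whenever the partial placement violates a constraint. This is a generic textbook pattern, not the specific baselines used in the paper; it shows why the search explodes as objects and constraints multiply.

```python
def solve_sequential(objects, candidate_poses, constraints, placed=None):
    """Depth-first search over discrete candidate poses.

    `candidate_poses(obj, placed)` yields poses to try for `obj`;
    `constraints` is a list of predicates over the partial placement.
    Returns a complete placement dict, or None if none exists.
    """
    placed = placed or {}
    if len(placed) == len(objects):
        return placed
    obj = objects[len(placed)]
    for pose in candidate_poses(obj, placed):
        trial = {**placed, obj: pose}
        # Check every constraint after each guess -- the step that makes
        # long packing sequences so expensive.
        if all(check(trial) for check in constraints):
            result = solve_sequential(objects, candidate_poses,
                                      constraints, trial)
            if result is not None:
                return result
    return None  # dead end: caller backtracks
```

Even in a toy 1D version (packing intervals into a bin), the number of trial placements grows combinatorially with the number of objects, which is the inefficiency the diffusion-based method targets.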

MIT researchers used a form of generative AI, called a diffusion model, to solve this problem more efficiently. Their method uses a collection of machine-learning models, each of which is trained to represent one specific type of constraint. These models are combined to generate global solutions to the packing problem, taking into account all constraints at once. 

Their method was able to generate effective solutions faster than other techniques, and it produced a greater number of successful solutions in the same amount of time. Importantly, their technique was also able to solve problems with novel combinations of constraints and larger numbers of objects that the models did not see during training.

Due to this generalizability, their technique can be used to teach robots how to understand and meet the overall constraints of packing problems, such as the importance of avoiding collisions or a desire for one object to be next to another object. Robots trained in this way could be applied to a wide array of complex tasks in diverse environments, from order fulfillment in a warehouse to organizing a bookshelf in someone’s home.

“My vision is to push robots to do more complicated tasks that have many geometric constraints and more continuous decisions that need to be made — these are the kinds of problems service robots face in our unstructured and diverse human environments. With the powerful tool of compositional diffusion models, we can now solve these more complex problems and get great generalization results,” says Zhutian Yang, an electrical engineering and computer science graduate student and lead author of a paper on this new machine-learning technique.

Her co-authors include MIT graduate students Jiayuan Mao and Yilun Du; Jiajun Wu, an assistant professor of computer science at Stanford University; Joshua B. Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Tomás Lozano-Pérez, an MIT professor of computer science and engineering and a member of CSAIL; and senior author Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT and a member of CSAIL. The research will be presented at the Conference on Robot Learning.

Constraint complications

Continuous constraint satisfaction problems are particularly challenging for robots. These problems appear in multistep robot manipulation tasks, like packing items into a box or setting a dinner table. They often involve achieving a number of constraints, including geometric constraints, such as avoiding collisions between the robot arm and the environment; physical constraints, such as stacking objects so they are stable; and qualitative constraints, such as placing a spoon to the right of a knife. 

There may be many constraints, and they vary across problems and environments depending on the geometry of objects and human-specified requirements.

To solve these problems efficiently, the MIT researchers developed a machine-learning technique called Diffusion-CCSP. Diffusion models learn to generate new data samples that resemble samples in a training dataset by iteratively refining their output.

To do this, diffusion models learn a procedure for making small improvements to a potential solution. Then, to solve a problem, they start with a random, very bad solution and then gradually improve it. 

For example, imagine randomly placing plates and utensils on a simulated table, allowing them to physically overlap. The collision-free constraints between objects will result in them nudging each other away, while qualitative constraints will drag the plate to the center, align the salad fork and dinner fork, etc.

Diffusion models are well-suited for this kind of continuous constraint-satisfaction problem because the influences from multiple models on the pose of one object can be composed to encourage the satisfaction of all constraints, Yang explains. By starting from a random initial guess each time, the models can obtain a diverse set of good solutions.
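The table-setting example above can be made concrete with a toy sketch: each constraint contributes a small corrective "nudge" to every object's 2D position, and the nudges from all constraints are simply summed at each iteration, which is the compositional step that lets separately defined models act jointly. Diffusion-CCSP learns these updates from data; here they are hand-coded analytic stand-ins.

```python
import numpy as np

def no_overlap_nudge(pos, min_dist=1.0):
    """Push apart any pair of objects closer than `min_dist`."""
    delta = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            diff = pos[i] - pos[j]
            d = np.linalg.norm(diff) + 1e-9
            if d < min_dist:
                push = 0.5 * (min_dist - d) * diff / d
                delta[i] += push
                delta[j] -= push
    return delta

def anchor_nudge(pos, index, target, strength=0.1):
    """Drag one object (e.g. the plate) toward a target location."""
    delta = np.zeros_like(pos)
    delta[index] = strength * (np.asarray(target) - pos[index])
    return delta

def refine(pos, steps=200):
    """Start from a random, bad layout and iteratively improve it by
    composing the corrections from every constraint at once."""
    pos = pos.copy()
    for _ in range(steps):
        pos += no_overlap_nudge(pos) + anchor_nudge(pos, 0, (0.0, 0.0))
    return pos
```

Starting from a different random layout each time yields a different valid arrangement, which mirrors how the models obtain a diverse set of good solutions.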

Working together

For Diffusion-CCSP, the researchers wanted to capture the interconnectedness of the constraints. In packing, for instance, one constraint might require a certain object to be next to another object, while a second constraint might specify where one of those objects must be located.

Diffusion-CCSP learns a family of diffusion models, with one for each type of constraint. The models are trained together, so they share some knowledge, like the geometry of the objects to be packed. 

The models then work together to find solutions, in this case locations for the objects to be placed, that jointly satisfy the constraints.

“We don’t always get to a solution at the first guess. But when you keep refining the solution and some violation happens, it should lead you to a better solution. You get guidance from getting something wrong,” she says.

Training individual models for each constraint type and then combining them to make predictions greatly reduces the amount of training data required, compared to other approaches.

However, training these models still requires a large amount of data that demonstrate solved problems. Humans would need to solve each problem with traditional slow methods, making the cost to generate such data prohibitive, Yang says.

Instead, the researchers reversed the process by coming up with solutions first. They used fast algorithms to generate segmented boxes and fit a diverse set of 3D objects into each segment, ensuring tight packing, stable poses, and collision-free solutions. 

“With this process, data generation is almost instantaneous in simulation. We can generate tens of thousands of environments where we know the problems are solvable,” she says. 
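One simple way to realize the "solutions first" idea the quote describes is to recursively split a container with guillotine cuts and treat each resulting segment as the known placement of one object: every instance built this way is solvable by construction. The sketch below illustrates that pattern in 2D; the paper's actual generation pipeline (3D objects, stability, collision checks) is richer than this.

```python
import random

def split_box(x, y, w, h, min_side=1.0, rng=random):
    """Recursively tile a box with guillotine cuts.

    Returns a list of (x, y, w, h) segments, each at least `min_side`
    on a side; each segment can then hold one object of the training
    instance, so the packing is solved by construction.
    """
    if w < 2 * min_side and h < 2 * min_side:
        return [(x, y, w, h)]          # too small to split further
    if w >= h:                          # cut across the longer axis
        cut = rng.uniform(min_side, w - min_side)
        return (split_box(x, y, cut, h, min_side, rng) +
                split_box(x + cut, y, w - cut, h, min_side, rng))
    cut = rng.uniform(min_side, h - min_side)
    return (split_box(x, y, w, cut, min_side, rng) +
            split_box(x, y + cut, w, h - cut, min_side, rng))
```

Because each call only subdivides, generation is essentially instantaneous, matching the "tens of thousands of environments" scale mentioned above.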

Trained using these data, the diffusion models work together to determine the locations where the robotic gripper should place objects to achieve the packing task while meeting all of the constraints.

They conducted feasibility studies, and then demonstrated Diffusion-CCSP with a real robot solving a number of difficult problems, including fitting 2D triangles into a box, packing 2D shapes with spatial relationship constraints, stacking 3D objects with stability constraints, and packing 3D objects with a robotic arm. 

Their method outperformed other techniques in many experiments, generating a greater number of effective solutions that were both stable and collision-free. 

In the future, Yang and her collaborators want to test Diffusion-CCSP in more complicated situations, such as with robots that can move around a room. They also want to enable Diffusion-CCSP to tackle problems in different domains without the need to be retrained on new data.

“Diffusion-CCSP is a machine-learning solution that builds on existing powerful generative models,” says Danfei Xu, an assistant professor in the School of Interactive Computing at the Georgia Institute of Technology and a Research Scientist at NVIDIA AI, who was not involved with this work. “It can quickly generate solutions that simultaneously satisfy multiple constraints by composing known individual constraint models. Although it’s still in the early phases of development, the ongoing advancements in this approach hold the promise of enabling more efficient, safe, and reliable autonomous systems in various applications.”

This research was funded, in part, by the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research, the MIT-IBM Watson AI Lab, the MIT Quest for Intelligence, the Center for Brains, Minds, and Machines, Boston Dynamics Artificial Intelligence Institute, the Stanford Institute for Human-Centered Artificial Intelligence, Analog Devices, JPMorgan Chase and Co., and Salesforce.

###

Written by Adam Zewe, MIT News

Paper: "Compositional Diffusion-Based Continuous Constraint Solvers"

https://arxiv.org/pdf/2309.00966.pdf

New computing hardware needs a theoretical basis


Peer-Reviewed Publication

UNIVERSITY OF GRONINGEN


IMAGE: This is Herbert Jaeger, Professor of Computing in Cognitive Materials at CogniGron, University of Groningen, the first author of the paper in Nature Communications.

CREDIT: Marleen Annema




There is an intense, worldwide search for novel materials for building computer microchips that are based not on classic transistors but on much more energy-saving, brain-like components. However, whereas the theoretical basis for classic transistor-based digital computers is solid, there are no real theoretical guidelines for the creation of brain-like computers. Such a theory would be absolutely necessary to put the efforts that go into engineering new kinds of microchips on solid ground, argues Herbert Jaeger, Professor of Computing in Cognitive Materials at the University of Groningen.

Computers have, so far, relied on stable switches that can be off or on, usually transistors. These digital computers are logical machines and their programming is also based on logical reasoning. For decades, computers have become more powerful by further miniaturization of the transistors, but this process is now approaching a physical limit. That is why scientists are working to find new materials to make more versatile switches, which could use more values than just the digital 0 or 1.

Dangerous pitfall

Jaeger is part of the Groningen Cognitive Systems and Materials Center (CogniGron), which aims to develop neuromorphic (i.e. brain-like) computers. CogniGron is bringing together scientists who have very different approaches: experimental materials scientists and theoretical modelers from fields as diverse as mathematics, computer science, and AI. Working closely with materials scientists has given Jaeger a good idea of the challenges that they face when trying to come up with new computational materials, while it has also made him aware of a dangerous pitfall: there is no established theory for the use of non-digital physical effects in computing systems.

Our brain is not a logical system. We can reason logically, but that is only a small part of what our brain does. Most of the time, it must work out how to bring a hand to a teacup or wave to a colleague when passing them in a corridor. ‘A lot of the information-processing that our brain does is this non-logical stuff, which is continuous and dynamic. It is difficult to formalize this in a digital computer,’ explains Jaeger. Furthermore, our brains keep working despite fluctuations in blood pressure, external temperature, hormone balance, and so on. How is it possible to create a computer that is equally versatile and robust? Jaeger is optimistic: ‘The simple answer is: the brain is proof of principle that it can be done.’

Neurons

The brain is, therefore, an inspiration for materials scientists. Jaeger: ‘They might produce something that is made from a few hundred atoms and that will oscillate, or something that will show bursts of activity. And they will say: “That looks like how neurons work, so let’s build a neural network.”’ But they are missing a vital bit of knowledge here. ‘Even neuroscientists don’t know exactly how the brain works. This is where the lack of a theory for neuromorphic computers is problematic. Yet, the field doesn’t appear to see this.’

In a paper published in Nature Communications on 16 August, Jaeger and his colleagues Beatriz Noheda (scientific director of CogniGron) and Wilfred G. van der Wiel (University of Twente) present a sketch of what a theory for non-digital computers might look like. They propose that instead of stable 0/1 switches, the theory should work with continuous, analogue signals. It should also accommodate the wealth of non-standard nanoscale physical effects that the materials scientists are investigating.
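The contrast between stable 0/1 switches and continuous, analogue signals can be illustrated with a toy model. The sketch below is purely illustrative and not taken from the paper: it compares a binary switch with a hypothetical analogue unit, a leaky integrator whose state is a graded value that decays and is driven continuously by input, loosely in the spirit of the dynamical, non-logical processing described above.

```python
import math

def digital_switch(state: bool) -> int:
    """A classic switch: exactly two stable values, 0 or 1."""
    return 1 if state else 0

def leaky_integrator(inputs, leak=0.3, gain=1.0):
    """A hypothetical analogue unit: the state x is a continuous value
    that decays toward zero and is nudged by each input via a smooth
    (tanh) nonlinearity, so it is never restricted to 0 or 1."""
    x = 0.0
    trace = []
    for u in inputs:
        # continuous update: blend the previous state with squashed input
        x = (1 - leak) * x + leak * math.tanh(gain * u + x)
        trace.append(x)
    return trace

trace = leaky_integrator([1.0, 1.0, 0.0, 0.0, -1.0])
# the states are graded, not binary: every value lies strictly
# between -1 and 1, and the unit carries a fading memory of its input
assert all(-1.0 < s < 1.0 for s in trace)
```

A theory of the kind the authors propose would have to describe computation in terms of such continuous trajectories, rather than sequences of discrete logical states.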

Sub-theories

Something else that Jaeger has learned from listening to materials scientists is that devices from these new materials are difficult to construct. Jaeger: ‘If you make a hundred of them, they will not all be identical.’ This is actually very brain-like, as our neurons are not all exactly identical either. Another possible issue is that the devices are often brittle and temperature-sensitive, continues Jaeger. ‘Any theory for neuromorphic computing should take such characteristics into account.’

Importantly, a theory underpinning neuromorphic computing will not be a single theory but will be constructed from many sub-theories. Jaeger: ‘This is in fact how digital computer theory works as well: it is a layered system of connected sub-theories.’ Creating such a theoretical description of neuromorphic computers will require close collaboration between experimental materials scientists and formal theoretical modellers. Jaeger: ‘Computer scientists must be aware of the physics of all these new materials, and materials scientists should be aware of the fundamental concepts in computing.’

Blind spots

Bridging this divide between materials science, neuroscience, computing science, and engineering is exactly why CogniGron was founded at the University of Groningen: it brings these different groups together. ‘We all have our blind spots,’ concludes Jaeger. ‘And the biggest gap in our knowledge is a foundational theory for neuromorphic computing. Our paper is a first attempt at pointing out how such a theory could be constructed and how we can create a common language.’

Reference: Herbert Jaeger, Beatriz Noheda & Wilfred G. van der Wiel: Toward a formal theory for computing machines made out of whatever physics offers. Nature Communications, 16 August 2023