Tuesday, October 17, 2023

 

Q&A: Researchers aim to improve accessibility with augmented reality


Reports and Proceedings

UNIVERSITY OF WASHINGTON

Image: RASSAR is an app that scans a home, highlights accessibility and safety issues, and lets users click on them to find out more. (Credit: Su et al./ASSETS ‘23)




Big Tech’s race into augmented reality (AR) grows more competitive by the day. This month, Meta released the latest iteration of its headset, the Quest 3. Early next year, Apple plans to drop its first headset, the Vision Pro. The announcements for each platform emphasize games and entertainment that merge the virtual and physical worlds: a digital board game imposed on a coffee table, a movie screen projected above airplane seats.

Some researchers, though, are more curious about other uses for AR. The University of Washington’s Makeability Lab is applying these budding technologies to assist people with disabilities. This month, researchers from the lab will introduce multiple projects that deploy AR — through headsets and phone apps — to make the world more accessible.

Researchers from the lab will first present RASSAR, an app that can scan homes to highlight accessibility and safety issues, on Oct. 23 at the ASSETS ‘23 conference in New York.

Shortly after, on Oct. 30, other teams in the lab will present early research at the UIST ‘23 conference in San Francisco. One project helps headsets better understand natural language; the other aims to make tennis and other ball sports accessible to low-vision players.

UW News spoke with the three studies’ lead authors, Xia Su and Jae (Jaewook) Lee, both UW doctoral students in the Paul G. Allen School of Computer Science & Engineering, about their work and the future of AR for accessibility.

What is AR and how is it typically used right now?

Jae Lee: I think one commonly accepted answer is that you use a wearable headset or a phone to superimpose virtual objects in a physical environment. A lot of people probably know AR from “Pokémon Go,” where you're superimposing these Pokémon into the physical world. Now Apple and Meta are introducing “mixed reality” or passthrough AR, which further blends the physical and virtual worlds through cameras.

Xia Su: Something I have also been observing lately is people are trying to expand the definition beyond goggles and phone screens. There could be AR audio, which is manipulating your hearing, or devices trying to manipulate your smell or touch.

A lot of people associate AR with virtual reality, and it gets wrapped up in discussion of the metaverse and gaming. How is it being applied for accessibility?

JL: AR as a concept has been around for several decades. But in Jon Froehlich’s lab, we’re combining AR with accessibility research. A headset or a phone can be capable of knowing how many people are in front of us, for example. For people who are blind or low vision, that information could be critical to how they perceive the world.

XS: There are really two different routes for AR accessibility research. The more prevalent one is trying to make AR devices more accessible to people. The other, less common approach is asking: How can we use AR or VR as tools to improve the accessibility of the real world? That’s what we're focused on.

JL: As AR glasses become less bulky and cheaper, and as AI and computer vision advance, this research will become increasingly important. But widespread AR, even for accessibility, brings up a lot of questions. How do you deal with bystander privacy? We, as a society, understand that vision technology can be beneficial to blind and low-vision people. But we also might not want to include facial recognition technology in apps for privacy reasons, even if that helps someone recognize their friends.

Let’s talk about the papers you have coming out. First, can you explain your app RASSAR?

XS: It's an app that people can use to scan their indoor spaces and detect possible accessibility and safety issues in their homes. This is possible because some iPhones now have lidar (light detection and ranging) scanners that sense the depth of a space, so we can reconstruct the space in 3D. We combined this with computer vision models to highlight ways to improve safety and accessibility. To use it, someone — perhaps a parent who’s childproofing a home, or a caregiver — scans a room with their smartphone, and RASSAR spots accessibility problems. For example, if a desk is too high, a red button will pop up on the desk. If the user clicks the button, there will be more information about why that desk’s height is an accessibility issue, along with possible fixes.
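The core of such a scan-then-flag pipeline can be sketched in a few lines. Everything below (object labels, threshold values, and the data layout) is an illustrative assumption, not RASSAR’s actual code:

```python
# Hypothetical sketch of a rule-based accessibility check of the kind RASSAR
# runs on objects detected in a lidar scan. Labels, thresholds, and the data
# layout are illustrative assumptions, not RASSAR's implementation.

# Guideline limits in centimeters (illustrative values)
MAX_DESK_HEIGHT_CM = 86      # work surfaces reachable from a wheelchair
MAX_LIGHT_SWITCH_CM = 122    # switches reachable from a wheelchair

GUIDELINES = {
    "desk": ("height_cm", MAX_DESK_HEIGHT_CM, "Desk may be too high for wheelchair users"),
    "light_switch": ("height_cm", MAX_LIGHT_SWITCH_CM, "Switch may be out of reach"),
}

def check_scene(detected_objects):
    """Return a list of (label, message) issues for a scanned room."""
    issues = []
    for obj in detected_objects:
        rule = GUIDELINES.get(obj["label"])
        if rule is None:
            continue  # no guideline for this kind of object
        attr, limit, message = rule
        if obj[attr] > limit:
            issues.append((obj["label"], message))
    return issues

scene = [
    {"label": "desk", "height_cm": 95},
    {"label": "light_switch", "height_cm": 110},
]
print(check_scene(scene))  # flags only the desk
```

Real detections would come from the lidar and computer vision pipeline; the point is that once objects and their dimensions are known, accessibility guidelines reduce to simple rule checks.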

JL: Ten years ago, you would have needed to go through 60 pages of PDFs to fully check a house for accessibility. We boiled that information down into an app.

And this is something that anyone will be able to download to their phones and use?

XS: That’s the eventual goal. We already have a demo. This version relies on lidar, which is only on certain iPhone models right now. But if you have such a device, it’s very straightforward.

JL: This is an example of these advancements in hardware and software that let us create apps quickly. Apple announced RoomPlan, which creates a 3D floor plan of a room, when they added the lidar sensor. We’re using that in RASSAR to understand the general layout. Being able to build on that lets us come up with a prototype very quickly.

So RASSAR is nearly deployable now. The other areas of research you’re presenting are earlier in their development. Can you tell me about GazePointAR?

JL: It’s an app deployed on an AR headset that enables people to speak more naturally with voice assistants like Siri or Alexa. There are all these pronouns we use when we speak that are difficult for computers to understand without visual context. I can ask “Where'd you buy it from?” But what is “it”? A voice assistant has no idea what I’m talking about. With GazePointAR, the goggles look at the environment around the user while the app tracks the user’s gaze and hand movements. The model then tries to make sense of all these inputs — the words, the hand movements, the user’s gaze. Then, using a large language model, GPT, it attempts to answer the question.
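A minimal sketch of this kind of pronoun grounding, with hypothetical function and field names (the actual GazePointAR pipeline feeds richer context to the language model):

```python
# Sketch of GazePointAR-style pronoun grounding: swap an ambiguous pronoun
# for whatever object the trackers say the user is attending to, then hand
# the grounded question to a language model. Names here are assumptions.
import re

def resolve_query(spoken_text, gaze_target, pointed_target=None):
    """Replace 'it'/'this'/'that' with the attended object, preferring pointing."""
    referent = pointed_target or gaze_target
    if referent is None:
        return spoken_text  # nothing to ground; pass the text through unchanged
    return re.sub(r"\b(it|this|that)\b", f"the {referent}",
                  spoken_text, flags=re.IGNORECASE)

print(resolve_query("Where'd you buy it from?", gaze_target="coffee mug"))
# → Where'd you buy the coffee mug from?
```

The grounded question can then be sent to the language model, which no longer has to guess what “it” refers to.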

How does it sense what the motions are?

JL: We’re using a headset called HoloLens 2, developed by Microsoft. It has a gaze tracker that watches your eyes and tries to guess what you’re looking at, and it has hand tracking capability as well. In a follow-up paper we submitted, we found that this approach has a lot of problems. For example, people don't just use one pronoun at a time — we use multiple. We’ll say, “What's more expensive, this or this?” To answer that, we need information over time. But, again, you can run into privacy issues if you want to track someone's gaze or someone's visual field of view over time: What information are you storing and where is it being stored? As technology improves, we certainly need to watch out for these privacy concerns, especially in computer vision.

This is difficult even for humans, right? I can ask, “Can you explain that?” while pointing at several equations on a whiteboard and you won’t know which I’m referring to. What applications do you see for this?

JL: Being able to use natural language would be major. But if you expand this to accessibility, there’s the potential for a blind or low-vision person to use this to describe what’s around them. The question “Is anything dangerous in front of me?” is also ambiguous for a voice assistant. But with GazePointAR, ideally, the system could say, “There are possibly dangerous objects, such as knives and scissors.” Or low-vision people might make out a shape, point at it, then ask the system what “it” is more specifically.

And finally you’re working on a system called ARTennis. What is it and what prompted this research?

JL: This is going even further into the future than GazePointAR. ARTennis is a prototype that uses an AR headset to make tennis balls more salient for low-vision players. The ball in play is marked by a red dot and surrounded by a crosshair of green arrows. Professor Jon Froehlich has a family member who wants to play sports with his children but doesn't have the residual vision necessary to do so. We thought that if it works for tennis, it will work for a lot of other sports, since tennis has a small ball that shrinks as it gets farther away. If we can track a tennis ball in real time, we can do the same with a bigger, slower basketball.

One of the co-authors on the paper is low vision himself and plays a lot of squash, and he wanted to try the application and give us feedback. We did a lot of brainstorming sessions with him, and he tested the system. The red dot and green crosshairs are the design he came up with to improve the sense of depth perception.

What’s keeping this from being something people can use right away?

JL: Well, like GazePointAR, it’s relying on a HoloLens 2 headset that’s $3,500. So that’s a different accessibility issue. It’s also running at roughly 25 frames per second and for humans to perceive in real time it needs to be about 30 frames per second. Sometimes we can’t capture the speed of the tennis ball. We're going to expand the paper and include basketball to see if there are different designs people prefer for different sports. The technology will certainly get faster. So our question is: What will the best design be for the people using it?
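The frame-rate gap is easier to see as a per-frame time budget; this is back-of-envelope arithmetic, not project code:

```python
# The frame-rate gap in time-budget terms: at 25 fps the whole pipeline
# (capture, ball detection, overlay rendering) gets 40 ms per frame, while
# a 30 fps target allows only ~33.3 ms.
def frame_budget_ms(fps):
    return 1000.0 / fps

current = frame_budget_ms(25)  # 40.0 ms per frame
target = frame_budget_ms(30)   # ~33.3 ms per frame
print(f"need to cut {current - target:.1f} ms per frame")  # need to cut 6.7 ms per frame
```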

For more information, contact Jon Froehlich at jonf@cs.washington.edu, Lee at jaewook4@cs.washington.edu and Su at xiasu@cs.washington.edu.

 

New technique helps robots pack objects into a tight space


Researchers coaxed a family of generative AI models to work together to solve multistep robot manipulation problems


Reports and Proceedings

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

Image: MIT researchers are using generative AI models to help robots more efficiently solve complex object manipulation problems, such as packing a box with different objects. (Credit: Courtesy of Zhutian Yang et al.)



CAMBRIDGE, Mass. -- Anyone who has ever tried to pack a family-sized amount of luggage into a sedan-sized trunk knows this is a hard problem. Robots struggle with dense packing tasks, too. 

For the robot, solving the packing problem involves satisfying many constraints, such as stacking luggage so suitcases don’t topple out of the trunk, heavy objects aren’t placed on top of lighter ones, and collisions between the robotic arm and the car’s bumper are avoided. 

Some traditional methods tackle this problem sequentially, guessing a partial solution that meets one constraint at a time and then checking to see whether any other constraints were violated. With a long sequence of actions to take, and a pile of luggage to pack, this process can be impractically time-consuming.
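A toy illustration of why sequential guess-and-check gets expensive: place items one at a time and restart whenever a constraint is violated. The dimensions and item counts below are made up for illustration; real planners are smarter, but the retry blow-up is real:

```python
# Naive sequential guess-and-check for 2D packing: sample one placement at a
# time and restart from scratch whenever the no-overlap constraint fails.
import random

def overlaps(a, b, size=1.0):
    """Two axis-aligned unit squares overlap if both coordinates are close."""
    return abs(a[0] - b[0]) < size and abs(a[1] - b[1]) < size

def attempts_to_pack(n_items, trunk=5.0, max_tries=100_000, seed=42):
    """Count how many full restarts the naive packer needs (None if it fails)."""
    rng = random.Random(seed)
    for attempt in range(1, max_tries + 1):
        placed = []
        for _ in range(n_items):
            spot = (rng.uniform(0, trunk), rng.uniform(0, trunk))
            if any(overlaps(spot, p) for p in placed):
                placed = None  # constraint violated: discard the partial solution
                break
            placed.append(spot)
        if placed is not None:
            return attempt
    return None

for n in (3, 6, 9):
    print(n, attempts_to_pack(n))  # restarts tend to grow quickly with n
```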

MIT researchers used a form of generative AI, called a diffusion model, to solve this problem more efficiently. Their method uses a collection of machine-learning models, each of which is trained to represent one specific type of constraint. These models are combined to generate global solutions to the packing problem, taking into account all constraints at once. 

Their method generated effective solutions faster than other techniques, and it produced a greater number of successful solutions in the same amount of time. Importantly, their technique was also able to solve problems with novel combinations of constraints and larger numbers of objects that the models did not see during training.

Due to this generalizability, their technique can be used to teach robots how to understand and meet the overall constraints of packing problems, such as the importance of avoiding collisions or a desire for one object to be next to another object. Robots trained in this way could be applied to a wide array of complex tasks in diverse environments, from order fulfillment in a warehouse to organizing a bookshelf in someone’s home.

“My vision is to push robots to do more complicated tasks that have many geometric constraints and more continuous decisions that need to be made — these are the kinds of problems service robots face in our unstructured and diverse human environments. With the powerful tool of compositional diffusion models, we can now solve these more complex problems and get great generalization results,” says Zhutian Yang, an electrical engineering and computer science graduate student and lead author of a paper on this new machine-learning technique.

Her co-authors include MIT graduate students Jiayuan Mao and Yilun Du; Jiajun Wu, an assistant professor of computer science at Stanford University; Joshua B. Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Tomás Lozano-Pérez, an MIT professor of computer science and engineering and a member of CSAIL; and senior author Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT and a member of CSAIL. The research will be presented at the Conference on Robot Learning.

Constraint complications

Continuous constraint satisfaction problems are particularly challenging for robots. These problems appear in multistep robot manipulation tasks, like packing items into a box or setting a dinner table. They often involve achieving a number of constraints, including geometric constraints, such as avoiding collisions between the robot arm and the environment; physical constraints, such as stacking objects so they are stable; and qualitative constraints, such as placing a spoon to the right of a knife. 

There may be many constraints, and they vary across problems and environments depending on the geometry of objects and human-specified requirements.

To solve these problems efficiently, the MIT researchers developed a machine-learning technique called Diffusion-CCSP. Diffusion models learn to generate new data samples that resemble samples in a training dataset by iteratively refining their output.

To do this, diffusion models learn a procedure for making small improvements to a potential solution. Then, to solve a problem, they start with a random, very bad solution and then gradually improve it. 

For example, imagine randomly placing plates and utensils on a simulated table, allowing them to physically overlap. Collision-avoidance constraints between objects will nudge them apart, while qualitative constraints will drag the plate to the center, align the salad fork and dinner fork, and so on.
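That refinement loop can be caricatured in a few lines: each constraint contributes a small nudge, the nudges are summed, and a random initial layout gradually becomes a valid one. This 1D sketch is only an analogy for the diffusion process, not the Diffusion-CCSP algorithm:

```python
# Toy composition of two constraint "models" on 1D object positions:
# a repulsive no-overlap nudge and an attractive pull toward the center.
import random

def non_overlap_push(xs, min_gap=1.0, strength=0.1):
    """Each too-close pair nudges both members apart by a small step."""
    deltas = [0.0] * len(xs)
    for i in range(len(xs)):
        for j in range(len(xs)):
            if i != j and abs(xs[i] - xs[j]) < min_gap:
                deltas[i] += strength if xs[i] >= xs[j] else -strength
    return deltas

def pull_to_center(xs, center=0.0, strength=0.05):
    """A qualitative constraint dragging every object toward a target spot."""
    return [strength * (center - x) for x in xs]

random.seed(0)
xs = [random.uniform(-0.5, 0.5) for _ in range(3)]  # a random, "very bad" start
for _ in range(200):  # iteratively refine by summing the constraint nudges
    xs = [x + p + q for x, p, q in zip(xs, non_overlap_push(xs), pull_to_center(xs))]

gaps = [abs(a - b) for i, a in enumerate(xs) for b in xs[i + 1:]]
print(min(gaps) > 0.8)  # → True: objects spread out, yet stay near the center
```

The key property this mimics is composition: the nudges from independent constraint models are simply added, and the combined update moves the layout toward satisfying all constraints at once.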

Diffusion models are well-suited for this kind of continuous constraint-satisfaction problem because the influences from multiple models on the pose of one object can be composed to encourage the satisfaction of all constraints, Yang explains. By starting from a random initial guess each time, the models can obtain a diverse set of good solutions.

Working together

For Diffusion-CCSP, the researchers wanted to capture the interconnectedness of the constraints. In packing, for instance, one constraint might require a certain object to be next to another object, while a second constraint might specify where one of those objects must be located.

Diffusion-CCSP learns a family of diffusion models, with one for each type of constraint. The models are trained together, so they share some knowledge, like the geometry of the objects to be packed. 

The models then work together to find solutions, in this case locations for the objects to be placed, that jointly satisfy the constraints.

“We don’t always get to a solution at the first guess. But when you keep refining the solution and some violation happens, it should lead you to a better solution. You get guidance from getting something wrong,” she says.

Training individual models for each constraint type and then combining them to make predictions greatly reduces the amount of training data required, compared to other approaches.

However, training these models still requires a large amount of data that demonstrate solved problems. Humans would need to solve each problem with traditional slow methods, making the cost to generate such data prohibitive, Yang says.

Instead, the researchers reversed the process by coming up with solutions first. They used fast algorithms to generate segmented boxes and fit a diverse set of 3D objects into each segment, ensuring tight packing, stable poses, and collision-free solutions. 
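The “solutions first” idea can be sketched as follows: partition a box into disjoint cells and treat each cell as a placed object, so every generated example is collision-free by construction. The splitting scheme here is an assumed simplification, not the paper’s exact generator:

```python
# "Solutions first" data generation: recursively split a 2D box into
# non-overlapping cells; each cell stands in for one placed object, giving
# a guaranteed-valid packing to use as training data.
import random

def split_box(x, y, w, h, depth=3):
    """Recursively partition a rectangle into disjoint cells."""
    if depth == 0 or w < 0.2 or h < 0.2:
        return [(x, y, w, h)]
    if w >= h:  # cut along the longer side at a random fraction
        cut = random.uniform(0.3, 0.7) * w
        return (split_box(x, y, cut, h, depth - 1)
                + split_box(x + cut, y, w - cut, h, depth - 1))
    cut = random.uniform(0.3, 0.7) * h
    return (split_box(x, y, w, cut, depth - 1)
            + split_box(x, y + cut, w, h - cut, depth - 1))

random.seed(1)
cells = split_box(0.0, 0.0, 1.0, 1.0)
total_area = sum(w * h for _, _, w, h in cells)
print(len(cells), round(total_area, 6))  # cells tile the box: areas sum to 1.0
```

Because the cells never overlap and exactly tile the box, every sampled layout is a solved packing problem, which is what makes generation “almost instantaneous.”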

“With this process, data generation is almost instantaneous in simulation. We can generate tens of thousands of environments where we know the problems are solvable,” she says. 

Trained using these data, the diffusion models work together to determine where the robotic gripper should place objects to achieve the packing task while meeting all of the constraints.

They conducted feasibility studies, and then demonstrated Diffusion-CCSP with a real robot solving a number of difficult problems, including fitting 2D triangles into a box, packing 2D shapes with spatial relationship constraints, stacking 3D objects with stability constraints, and packing 3D objects with a robotic arm. 

Their method outperformed other techniques in many experiments, generating a greater number of effective solutions that were both stable and collision-free. 

In the future, Yang and her collaborators want to test Diffusion-CCSP in more complicated situations, such as with robots that can move around a room. They also want to enable Diffusion-CCSP to tackle problems in different domains without the need to be retrained on new data.

“Diffusion-CCSP is a machine-learning solution that builds on existing powerful generative models,” says Danfei Xu, an assistant professor in the School of Interactive Computing at the Georgia Institute of Technology and a Research Scientist at NVIDIA AI, who was not involved with this work. “It can quickly generate solutions that simultaneously satisfy multiple constraints by composing known individual constraint models. Although it’s still in the early phases of development, the ongoing advancements in this approach hold the promise of enabling more efficient, safe, and reliable autonomous systems in various applications.”

This research was funded, in part, by the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research, the MIT-IBM Watson AI Lab, the MIT Quest for Intelligence, the Center for Brains, Minds, and Machines, Boston Dynamics Artificial Intelligence Institute, the Stanford Institute for Human-Centered Artificial Intelligence, Analog Devices, JPMorgan Chase and Co., and Salesforce.

###

Written by Adam Zewe, MIT News

Paper: "Compositional Diffusion-Based Continuous Constraint Solvers"

https://arxiv.org/pdf/2309.00966.pdf

New computing hardware needs a theoretical basis


Peer-Reviewed Publication

UNIVERSITY OF GRONINGEN

Image: Herbert Jaeger, Professor of Computing in Cognitive Materials at CogniGron, University of Groningen, and first author of the paper in Nature Communications. (Credit: Marleen Annema)




There is an intense, worldwide search for novel materials to build computer microchips with that are not based on classic transistors but on much more energy-saving, brain-like components. However, whereas the theoretical basis for classic transistor-based digital computers is solid, there are no real theoretical guidelines for the creation of brain-like computers. Such a theory would be absolutely necessary to put the efforts that go into engineering new kinds of microchips on solid ground, argues Herbert Jaeger, Professor of Computing in Cognitive Materials at the University of Groningen.

Computers have, so far, relied on stable switches that can be off or on, usually transistors. These digital computers are logical machines and their programming is also based on logical reasoning. For decades, computers have become more powerful through further miniaturization of the transistors, but this process is now approaching a physical limit. That is why scientists are working to find new materials to make more versatile switches, which could use more values than just the digital 0 or 1.

Dangerous pitfall

Jaeger is part of the Groningen Cognitive Systems and Materials Center (CogniGron), which aims to develop neuromorphic (i.e. brain-like) computers. CogniGron is bringing together scientists who have very different approaches: experimental materials scientists and theoretical modelers from fields as diverse as mathematics, computer science, and AI. Working closely with materials scientists has given Jaeger a good idea of the challenges that they face when trying to come up with new computational materials, while it has also made him aware of a dangerous pitfall: there is no established theory for the use of non-digital physical effects in computing systems.

Our brain is not a logical system. We can reason logically, but that is only a small part of what our brain does. Most of the time, it must work out how to bring a hand to a teacup or wave to a colleague when passing them in a corridor. ‘A lot of the information-processing that our brain does is this non-logical stuff, which is continuous and dynamic. It is difficult to formalize this in a digital computer,’ explains Jaeger. Furthermore, our brains keep working despite fluctuations in blood pressure, external temperature, hormone balance, and so on. How is it possible to create a computer that is as versatile and robust? Jaeger is optimistic: ‘The simple answer is: the brain is proof of principle that it can be done.’

Neurons

The brain is, therefore, an inspiration for materials scientists. Jaeger: ‘They might produce something that is made from a few hundred atoms and that will oscillate, or something that will show bursts of activity. And they will say: “That looks like how neurons work, so let’s build a neural network”.’ But they are missing a vital bit of knowledge here. ‘Even neuroscientists don’t know exactly how the brain works. This is where the lack of a theory for neuromorphic computers is problematic. Yet, the field doesn’t appear to see this.’

In a paper published in Nature Communications on 16 August, Jaeger and his colleagues Beatriz Noheda (scientific director of CogniGron) and Wilfred G. van der Wiel (University of Twente) present a sketch of what a theory for non-digital computers might look like. They propose that instead of stable 0/1 switches, the theory should work with continuous, analogue signals. It should also accommodate the wealth of non-standard nanoscale physical effects that the materials scientists are investigating.

Sub-theories

Something else that Jaeger has learned from listening to materials scientists is that devices from these new materials are difficult to construct. Jaeger: ‘If you make a hundred of them, they will not all be identical.’ This is actually very brain-like, as our neurons are not all exactly identical either. Another possible issue is that the devices are often brittle and temperature-sensitive, continues Jaeger. ‘Any theory for neuromorphic computing should take such characteristics into account.’

Importantly, a theory underpinning neuromorphic computing will not be a single theory but will be constructed from many sub-theories (see image below). Jaeger: ‘This is in fact how digital computer theory works as well: it is a layered system of connected sub-theories.’ Creating such a theoretical description of neuromorphic computers will require close collaboration between experimental materials scientists and formal theoretical modelers. Jaeger: ‘Computer scientists must be aware of the physics of all these new materials, and materials scientists should be aware of the fundamental concepts in computing.’

Blind spots

Bridging this divide between materials science, neuroscience, computing science, and engineering is exactly why CogniGron was founded at the University of Groningen: it brings these different groups together. ‘We all have our blind spots,’ concludes Jaeger. ‘And the biggest gap in our knowledge is a foundational theory for neuromorphic computing. Our paper is a first attempt at pointing out how such a theory could be constructed and how we can create a common language.’

Reference: Herbert Jaeger, Beatriz Noheda & Wilfred G. van der Wiel: Toward a formal theory for computing machines made out of whatever physics offers. Nature Communications, 16 August 2023 

 

Enlightening insects: Morpho butterfly nanostructure inspires technology for bright, balanced lighting


Researchers at Osaka University use randomly arranged, self-cleaning nanopatterns to realize a new type of diffraction-based optical diffuser, which might be useful in visual displays and energy-saving windows

Peer-Reviewed Publication

OSAKA UNIVERSITY

Image: Design and diffused light for the anisotropic (left) and isotropic (right) Morpho-type diffusers. The device combines high optical functionality with anti-fouling properties, which until now have not been realized in one device. (Credit: K. Yamashita, A. Saito)




Osaka, Japan – As you watch Morpho butterflies wobble in flight, shimmering in vivid blue color, you’re witnessing an uncommon form of structural color that researchers are only beginning to use in lighting technologies such as optical diffusers. Furthermore, imparting a self-cleaning capability to such diffusers would minimize soiling and staining and maximize practical utility.

Now, in a study recently published in Advanced Optical Materials, researchers at Osaka University have developed a water-repelling nanostructured light diffuser that surpasses the functionality of other common diffusers. This work might help solve common lighting dilemmas in modern technologies.

Standard lighting can eventually become tiring because it illuminates unevenly. Thus, many display technologies use optical diffusers to make the light output more uniform. However, conventional optical diffusers reduce the light output, don’t work well for all emitted colors, or require special effort to clean. Morpho butterflies are an inspiration for improved optical diffusers. Their randomly arranged multilayer architecture enables structural color: in this case, selective reflection of blue light over a ≥±40° angle from the direction of illumination. The goal of the present work is to use this inspiration from nature to design a simplified optical diffuser that has both high transmittance and wide angular spread, works for a range of colors without dispersion, cleans with a simple water rinse, and can be shaped with standard nanofabrication tools.

“We create two-dimensional nanopatterns—in common transparent polydimethylsiloxane elastomer—of binary height yet random width, and the two surfaces have different structural scales,” explains Kazuma Yamashita, lead author of the study. “Thus, we report an effective optical diffuser for short- and long-wavelength light.”
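The need for two structural scales follows from basic diffraction: the grating equation sin θ = λ/d ties the spread angle to the ratio of wavelength to pattern pitch, so a single pitch treats blue and red light differently. The pitch value below is an assumption for illustration, not a figure from the paper:

```python
# First-order diffraction angle from the grating equation sin(theta) = lambda/d.
# Shows why a pattern pitch tuned for one color spreads another differently.
import math

def first_order_angle_deg(wavelength_nm, pitch_nm):
    s = wavelength_nm / pitch_nm
    if s > 1:
        return None  # no propagating first diffraction order
    return math.degrees(math.asin(s))

pitch = 1500  # nm, illustrative pitch, not from the paper
for name, lam in [("blue", 450), ("red", 650)]:
    print(name, round(first_order_angle_deg(lam, pitch), 1))
```

Combining two surfaces with different structural scales lets one device cover both ends of the visible spectrum, which is the design goal the authors describe.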

The researchers tailored the patterns of the two diffuser surfaces to optimize performance for blue and red light, as well as their self-cleaning properties. The measured light transmittance was >93% over the entire visible spectrum, and the light diffusion was substantial and could be controlled into an anisotropic shape: 78° in the x-direction and 16° in the y-direction (similar to values calculated in simulations). Furthermore, both surfaces strongly repelled water in contact-angle and self-cleaning experiments.

“Applying protective cover glass layers on either side of the optical diffuser largely maintains the optical properties, yet protects against scratching,” says Akira Saito, senior author. “The glass minimizes the need for careful handling and indicates our technology’s utility for daylight-harvesting windows.”

This work emphasizes that studying the natural world can provide insights for improved everyday devices; in this case, lighting technologies for visual displays. The fact that the diffuser consists of a cheap material that essentially cleans itself and can be easily shaped with common tools might inspire other researchers to apply the results of this work to electronics and many other fields.

###

The article, “Development of a high-performance, anti-fouling optical diffuser inspired by Morpho butterfly's nanostructure,” was published in Advanced Optical Materials at DOI: https://doi.org/10.1002/adom.202301086

About Osaka University

Osaka University was founded in 1931 as one of the seven imperial universities of Japan and is now one of Japan's leading comprehensive universities with a broad disciplinary spectrum. This strength is coupled with a singular drive for innovation that extends throughout the scientific process, from fundamental research to the creation of applied technology with positive economic impacts. Its commitment to innovation has been recognized in Japan and around the world, being named Japan's most innovative university in 2015 (Reuters 2015 Top 100) and one of the most innovative institutions in the world in 2017 (Innovative Universities and the Nature Index Innovation 2017). Now, Osaka University is leveraging its role as a Designated National University Corporation selected by the Ministry of Education, Culture, Sports, Science and Technology to contribute to innovation for human welfare, sustainable development of society, and social transformation.

Website: https://resou.osaka-u.ac.jp/e


National Poll: Parents of elementary-aged children may engage in more helicopter parenting than they think


Report suggests gap between what parents say about fostering children’s independence and what tasks they actually let their kids do without them


Reports and Proceedings

MICHIGAN MEDICINE - UNIVERSITY OF MICHIGAN

Image: The report suggests a sizable gap between parent attitudes about promoting children’s independence and what they actually allow or encourage their children to do without supervision. (Credit: University of Michigan Health C.S. Mott Children’s Hospital National Poll on Children’s Health)




ANN ARBOR, Mich. –  As they grow, children start doing certain activities without their parents watching over them, including trick-or-treating with friends, staying home alone or biking to a friend’s house.

And while most parents agree that kids benefit from opportunities to be independent, they may be engaging in more “helicopter parenting” than they realize, suggests a new University of Michigan Health C.S. Mott Children’s Hospital National Poll on Children’s Health.

“There’s a sizable gap between parent attitudes about promoting children’s independence and what they actually allow or encourage their children to do without supervision,” said Mott Poll co-director Sarah Clark, M.P.H.

“This suggests some parents may be missing opportunities to guide their children in tasks of autonomy and unintentionally hindering kids’ development of independence and problem-solving skills.”

Four in five parents of children ages 9-11 agree that it’s good for children to have free time without adult supervision. But fewer report their child actually does certain things without an adult present, the poll suggests.

About three in five parents have let their tween-aged child stay home alone for 30-60 minutes, while half say their child has separated from them to find an item in another aisle of the store. Less than half say their child has waited in the car while the parent runs a quick errand, walked or biked to a friend’s house, or played at the park with a friend, and less than a sixth of parents have let their child trick-or-treat with friends.

The top reason behind parents’ hesitancy to promote such independent milestones was safety. Yet, while a little more than half worry that someone might scare or follow their child, just 17% of parents say their neighborhood is not safe for children to be alone.

“To some extent, worrying about your child is natural. But some parents are limiting their child’s independent activities due to highly publicized media reports, even if those outcomes are very unlikely to occur or cannot be prevented,” Clark said.

“Parents can ease in with small steps such as letting their child spend time with a friend at a familiar public place. Discussions before and after can help parents assess if their kids understand the importance of following safety rules.”

Other parents say they keep their children from taking on such tasks alone because they don’t believe their child is ready. Some believe state or local laws don’t allow children that age to be alone, or worry that someone might call the police. A little more than one in 10 parents also think others will consider them a bad parent if their child is seen unsupervised.

Over half of parents say that unsupervised children cause trouble; a quarter have criticized another parent for not adequately supervising their child, and 13% have been criticized themselves.

“Parents may be affected by ‘blame culture’ – the expectation that they will be criticized if something happens to their child,” Clark said.  

The poll report also suggests a disconnect between what parents of younger children ages 5-8 say and what they do in fostering independence.

Nearly three quarters say they make it a point to have their child do things themselves. But less than half of these parents say their child regularly engages in actions such as talking with the doctor or nurse at health visits, deciding how to spend allowance or gift money, speaking to unfamiliar adults in business situations, such as ordering at a restaurant, or preparing their own meal or snack.

Among the reasons were safety concerns, being stuck in habits, parents’ belief that their child doesn’t want to do things themselves or isn’t mature enough, and thinking a task will take too long or won’t be done the parent’s preferred way.

The elementary school years, Clark notes, are an important phase for developing independence with parental guidance.

“Becoming independent is a gradual process of allowing children increasing amounts of freedom, with parents there to teach skills and help the child understand the consequences of their choices,” Clark said.

“As children become more experienced and comfortable with tasks, they can assume responsibility for doing them regularly. Research shows encouraging independence fosters a child’s self-confidence, resilience, problem-solving ability, and mental health.”

The nationally representative poll is based on responses from 1,044 parents of children 5-11 years surveyed in August.

 

Treating high-risk drinking, alcohol use disorder: new Canadian guideline


CANADIAN MEDICAL ASSOCIATION JOURNAL




A new Canadian guideline for treating high-risk drinking and alcohol use disorder (AUD), with 15 evidence-based recommendations to reduce the harms associated with high-risk drinking and to support people’s treatment and recovery from AUD, is published in CMAJ (Canadian Medical Association Journal) https://www.cmaj.ca/lookup/doi/10.1503/cmaj.230715.

High-risk drinking, AUD and alcohol-related harms are common in Canada. Nearly 18% of people aged 15 years or older in Canada will meet the clinical criteria for an AUD in their lifetime, and over 50% of people in Canada aged 15 years or older currently drink more than the amount recommended in Canada’s Guidance on Alcohol and Health.

Despite the high prevalence of high-risk drinking and AUD, these conditions frequently go unrecognized and untreated in the health care system. Even when AUD is recognized, patients often do not receive evidence-based interventions: it is estimated that less than 2% of eligible patients receive evidence-based pharmacotherapies, likely owing to low awareness. Conversely, according to the guideline, many Canadian patients receive medications that may be ineffective and potentially harmful.

Guideline developed in partnership with Canadian Research Initiative on Substance Misuse

To address this health issue, Health Canada funded the Canadian Research Initiative on Substance Misuse (CRISM) and the BC Centre on Substance Use (BCCSU) to develop the “Canadian Guideline for the Clinical Management of High-Risk Drinking and Alcohol Use Disorder.” The guideline provides recommendations for the clinical management of high-risk drinking and AUD to support primary health care providers to implement evidence-based screening and treatment interventions.

The guideline, developed by a 36-member committee, is based on the latest evidence, expert consensus, and lived and living experience, as well as clinical experience from across Canada. It makes 15 recommendations for care providers about how to ask about alcohol, diagnose AUD, manage alcohol withdrawal, and create treatment plans based on the individual’s goals. These treatment plans can include medications, counselling, harm reduction or a combination.

“High-risk drinking and alcohol use disorder frequently go unrecognized and untreated in our health care system, leaving individuals without access to effective treatments that can improve their health and well-being,” says Dr. Jürgen Rehm, co-chair of the guideline writing committee and senior scientist in the Institute for Mental Health Policy Research at the Centre for Addiction and Mental Health (CAMH), Toronto, Ontario. “These guidelines give primary care providers the tools to support early detection and treatment, and connect patients and families with specialized care services and recovery-oriented supports in their communities.”

The website Helpwithdrinking.ca will be available to raise awareness of resources and treatments available to people in Canada based on the new guidelines.

Practice article highlights potential harms of prescribing medications not recommended in guideline

A related practice article https://www.cmaj.ca/lookup/doi/10.1503/cmaj.231015 highlights the complexity of providing treatment to patients with AUD and the possible negative effects of selective serotonin reuptake inhibitor (SSRI) therapy, which can worsen the disease in some people.

“Although the initiation of an SSRI appeared to be a likely explanation for the escalation in this patient’s alcohol use, other factors may also have played an important role,” writes Dr. Nikki Bozinoff, associate scientist at CAMH, with co-authors. “This case illustrates that although it may be common practice to prescribe SSRIs for people with AUD, SSRIs may not be effective for depressive symptoms in people with concurrent active AUD, and may worsen alcohol use in some.”

The guideline recommends against SSRI antidepressants in patients with AUD, or AUD and concurrent anxiety or depression.

“Despite the burden of illness, there remains a tremendous gap between what we know is effective treatment and the care Canadians are actually receiving,” says Dr. Evan Wood, co-chair of the guideline writing committee and an addiction medicine specialist. “Unfortunately, in the absence of effective care, people are being routinely prescribed potentially harmful medications that can, unknown to most prescribers, actually increase alcohol use in some patients. These guidelines seek to close that gap and ensure Canadians are accessing the safest and most effective treatments that meet their needs.”

 

Local retail outlets for legal marijuana may be associated with alcohol co-use among high school students: Study


Peer-Reviewed Publication

JOURNAL OF STUDIES ON ALCOHOL AND DRUGS

IMAGE: 

RETAIL MARIJUANA OUTLET


CREDIT: JOURNAL OF STUDIES ON ALCOHOL AND DRUGS




PISCATAWAY, NJ—Given the increasing trend toward legalizing marijuana in many states, there is growing concern that underage youth may find the drug easier to access. In fact, a recent study reported in the Journal of Studies on Alcohol and Drugs suggests that in areas with local retail availability of legalized marijuana, high school students are more likely to use marijuana and alcohol together, as well as alcohol alone.

“Greater retail availability may ‘normalize’ marijuana use for young people, even if they are unable to purchase marijuana directly from retail businesses, and retail sales may introduce greater access through social sources,” says study lead author Sharon O’Hara, Dr.P.H., lecturer at the University of California Berkeley School of Public Health and associate research scientist at the Pacific Institute for Research and Evaluation.

For their research, O’Hara and colleagues used data from the 2010-2011 and 2018-2019 California Healthy Kids Surveys of 9th and 11th graders in 554 public high schools in 38 California cities. Students were asked how often they had used marijuana and alcohol over the previous 30 days.

The researchers also calculated the density of marijuana retail outlets in each city (the number of outlets per square mile within the city limits).

Among the full sample, O’Hara and colleagues found a significant interaction between recreational marijuana legalization and marijuana outlet density: after legalization, the likelihood of alcohol use and of alcohol-marijuana co-use rose more in cities with higher retail availability of cannabis. A positive association between recreational marijuana legalization and marijuana use was found in cities at all levels of marijuana outlet density.

That outcome was expected, but a closer look at the data found some surprising results.

“We were most surprised by the effects of recreational marijuana legalization on the co-use of alcohol and marijuana by subgroups of alcohol users versus cannabis users,” says O’Hara. “We found significant positive associations between recreational marijuana legalization and co-use for past-30-day drinkers but significant inverse associations between recreational marijuana legalization and co-use among past-30-day marijuana users.”

The researchers hypothesize that, since its legalization, marijuana use has been increasing in the general population of California adolescents, while alcohol use continues to decrease.

Given that, among the full sample of high school students, the effect of recreational marijuana legalization was strongest in the cities with relatively high marijuana outlet density, attention should be paid to policies that limit the retail availability of marijuana, says O’Hara.

“Regulatory policies can be considered at the state level and in local jurisdictions with zoning authority over retail marijuana businesses,” she says. “So, even if your state legalizes recreational marijuana, you may have the ability to regulate the number and location of retail marijuana businesses using local land use authority.”

Should marijuana outlets and retail sales become as commonplace as alcohol stores, O’Hara and her co-researchers are concerned about the effects on high school students.

The researchers hope their findings help to inform future research on the possible effects of recreational marijuana legalization and marijuana retail outlet density on alcohol and marijuana co-use and guide investigations into the mechanisms underlying these associations.

-----

O’Hara, S. E., Paschall, M. J., & Grube, J. W. (2023). Recreational marijuana legalization, local retail availability, and alcohol and marijuana use and co-use among California high school students. Journal of Studies on Alcohol and Drugs, 84, 734–743. doi: 10.15288/jsad.22-00277

-----

To arrange an interview with Sharon O’Hara, Dr.P.H., please contact her at sohara@prev.org.

-----

The Journal of Studies on Alcohol and Drugs (jsad.com) is published by the Center of Alcohol & Substance Use Studies (alcoholstudies.rutgers.edu) at Rutgers, The State University of New Jersey. It is the oldest substance-related journal published in the United States.

-----

The Journal of Studies on Alcohol and Drugs considers this press release to be in the public domain. Editors may publish this press release in print or electronic form without legal restriction. Please include a byline and citation.
