Thursday, May 23, 2024

 A.I.

AI poised to usher in new level of concierge services to the public


Researchers explore how intelligent systems can upgrade hospitality sector



OHIO STATE UNIVERSITY





COLUMBUS, Ohio – Concierge services built on artificial intelligence have the potential to improve how hotels and other service businesses interact with customers, a new paper suggests. 

In the first work to introduce the concept, researchers have outlined the role an AI concierge, a technologically advanced assistant, may play in various areas of the service sector as well as the different forms such a helper might embody. 

Their paper envisions a virtual caretaker that, by combining natural language processing, behavioral data and predictive analytics, would anticipate a customer’s needs, suggest certain actions, and automate routine tasks without having to be explicitly commanded to do so. 

Though such a skilled assistant is still years away, Stephanie Liu, lead author of the paper and an associate professor of hospitality management at The Ohio State University, and her colleagues drew insight from several contemporary fields, including service management, psychology, human-computer interaction and ethics research, to detail what opportunities and challenges might arise from having an AI concierge manage human encounters.  

“The traditional service industry uses concierges for high-end clients, meaning that only a few people have access to them,” Liu said. “Now with the assistance of AI technology, everybody can have access to a concierge providing superior experiences.”

On that premise, the benefits of incorporating AI into customer service are twofold: It would allow companies to offer around-the-clock availability and consistency in their operations as well as improve how individuals engage with professional service organizations, she said. 

Moreover, as the younger workforce gravitates to more tech-oriented jobs and global travel becomes more common, generative AI could be an apt solution to deal with the escalating demands of evolving hospitality trends, said Liu. 

“The development of AI technology for hotels, restaurants, health care, retail and tourism has a lot of potential,” she said. 

The paper was published recently in the Journal of Service Management.

Despite the social and economic benefits associated with implementing such machines, how effective AI concierges may be at completing a task is dependent on both the specific situation and the type of interface consumers use, said Liu. 

There are four primary forms a smart aide might take, each with distinctive attributes that would provide consumers with different levels of convenience, according to Liu. 

The first type is a dialogue interface that uses only text or speech to communicate, such as ChatGPT, a conversational agent often used to make inquiries and obtain real-time assistance. Many of these interactive systems are already used in hotels and medical buildings for contactless booking or to connect consumers with other services and resources. 

The second is a virtual avatar that employs a vivid digital appearance and a fully formed persona to foster a deeper emotional connection with the consumer. This method is often utilized for telehealth consultations and online learning programs.  

The third iteration is a holographic projection wherein a simulated 3D image is brought into the physical world. According to the paper, this is ideally suited for scenarios where visual impact is desired but physical assistance is not necessary. 

The paper rounds out the list by suggesting an AI concierge that would present as a tangible, touchable robot. This form would offer the most human-like sensory experiences and would likely be able to execute multiple physical tasks, like transporting heavy luggage. 

Some international companies have already developed these cutting-edge tools for use in a limited capacity. One robotic concierge, known as Sam, was designed to aid those in senior living communities by helping them check in, make fall risk assessments and support staff with non-medical tasks. Another deployed at South Korea’s Incheon International Airport helped consumers navigate paths to their destination and offered premier shopping and dining recommendations. 

Yet as advanced computing algorithms become more intertwined with our daily lives, industry experts will likely have to consider consumer privacy concerns when deciding when and where to implement these AI systems. One way to deal with these issues would be to create the AI concierge with limited memory or other safeguards to protect stored personal data, such as identity and financial information, said Liu.  

“Different companies are at different stages with this technology,” said Liu. “Some have robots that can detect customers’ emotions or take biometric inputs and others have really basic ones. It opens up a totally different level of service that we have to think critically about.”

What’s more, the paper notes that having a diversity of concierge options available for consumers to choose from is also advantageous from a mental health standpoint.

Because AI is viewed as having less agency than its human counterparts, it might help mitigate psychologically uncomfortable service situations that could arise from how consumers feel they might be perceived by a human concierge. This reduced apprehension about the opinion of a machine may encourage heightened comfort levels and result in more favorable responses about the success of the AI concierge, said Liu. 

Ultimately, there’s still much multidisciplinary testing to be done to ensure these technologies can be applied in a widespread and equitable manner. Liu adds that future research should seek to determine how certain design elements, such as the perceived gender, ethnicity or voice of these robotic assistants, would impact overall consumer satisfaction. 

#

Contact: Stephanie Liu, Liu.6225@osu.edu

Written by: Tatyana Woodall, Woodall.52@osu.edu

Artificial intelligence resolves conflicts impeding animal behavior research



Algorithm automates research and reconciles differing results that often arise between various studies.



UNIVERSITY OF WASHINGTON SCHOOL OF MEDICINE/UW MEDICINE

Neurobehavior lab 

IMAGE: 

NEUROBIOLOGY RESEARCHERS SAM GOLDEN AND NASTACIA GOODWIN REVIEW LIGHT SHEET FLUORESCENT MICROSCOPY BRAIN IMAGES REVEALING THE ACTIVITY OF INDIVIDUAL NEURONS DURING DIFFERENT BEHAVIORS. THEY ARE IN A RESEARCH LABORATORY IN THE DEPARTMENT OF BIOLOGICAL STRUCTURE AT THE UNIVERSITY OF WASHINGTON SCHOOL OF MEDICINE IN SEATTLE. 


CREDIT: MICHAEL MCCARTHY/UW MEDICINE




Artificial intelligence software has been developed to rapidly analyze animal behavior so that behaviors can be more precisely linked to the activity of individual brain circuits and neurons, researchers in Seattle report.

The program promises not only to speed research into the neurobiology of behavior, but also to enable comparison and reconciliation of results that disagree due to differences in how individual laboratories observe, analyze and classify behaviors, said Sam Golden, assistant professor of biological structure at the University of Washington School of Medicine. 

“The approach allows labs to develop behavioral procedures however they want and makes it possible to draw general comparisons between the results of studies that use different behavioral approaches,” he said.

A paper describing the program appears in the journal Nature Neuroscience. Golden and Simon Nilsson, a postdoctoral fellow in the Golden lab, are the paper’s senior authors. The first author is Nastacia Goodwin, a graduate student in the lab.

The study of the neural activity behind animal behavior has led to major advances in the understanding and treatment of such human disorders as addiction, anxiety and depression. 

Much of this work is based on observations painstakingly recorded by individual researchers who watch animals in the lab and note their physical responses to different situations, then correlate that behavior with changes in brain activity. 

For example, to study the neurobiology of aggression, researchers might place two mice in an enclosed space and record signs of aggression. These would typically include observations of the animals’ physical proximity to one another, their posture, and physical displays such as rapid twitching, or rattling, of the tail. 

Annotating and classifying such behaviors is an exacting, protracted task. It can be difficult to accurately recognize and chronicle important details, Golden said. “Social behavior is very complicated, happens very fast and often is nuanced, so a lot of its components can be lost when an individual is observing it.” 

To automate this process, researchers have developed AI-based systems to track components of an animal’s behavior and automatically classify the behavior, for example, as aggressive or submissive. 

Because these programs can also record details more rapidly than a human, it is much more likely that an action can be closely correlated with neural activity, which typically occurs in milliseconds.

One such program, developed by Nilsson and Goodwin, is called SimBA, for Simple Behavioral Analysis. The open-source program features an easy-to-use graphical interface and requires no special computer skills to use. It has been widely adopted by behavioral scientists. 

“Although we built SimBA for a rodent lab, we immediately started getting emails from all kinds of labs: wasp labs, moth labs, zebrafish labs,” Goodwin said.

But as more labs used these programs, the researchers found that similar experiments were yielding vastly different results.

 “It became apparent that how any one lab or any one person defines behavior is pretty subjective, even when attempting to replicate well-known procedures,” Golden said.

Moreover, accounting for these differences was difficult because it is often unclear how AI systems arrive at their results, their calculations occurring in what is often characterized as “a black box.”

Hoping to explain these differences, Goodwin and Nilsson incorporated into SimBA a machine-learning explainability approach that produces what is called the SHapley Additive exPlanations (SHAP) score. 

Essentially, this explainability approach determines how removing one feature used to classify a behavior, say tail rattling, changes the probability of an accurate prediction by the computer. 

By removing different features from thousands of different combinations, SHAP can determine how much predictive strength is provided by any individual feature used in the algorithm that is classifying the behavior. The combination of these SHAP values then quantitatively defines the behavior, removing the subjectivity in behavioral descriptions.
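The averaging over feature combinations described above is the core of the Shapley value from cooperative game theory, which SHAP builds on. Below is a minimal, stdlib-only sketch of that computation. The scoring function, feature names and weights here are invented for illustration; they are not from the paper, and a real SimBA workflow would apply SHAP to a trained classifier rather than this toy linear model.

```python
from itertools import combinations
from math import factorial

FEATURES = ["tail_rattling", "proximity", "posture"]

def predict(present):
    """Toy 'aggression score' from the set of features made available to
    the model. Weights are made up for illustration only."""
    weights = {"tail_rattling": 0.5, "proximity": 0.3, "posture": 0.1}
    base = 0.05  # model output when no features are available
    return base + sum(weights[f] for f in present)

def shapley_value(feature):
    """Average marginal contribution of `feature` across every possible
    coalition of the remaining features, weighted as in the Shapley formula."""
    others = [f for f in FEATURES if f != feature]
    n = len(FEATURES)
    total = 0.0
    for r in range(len(others) + 1):
        for coalition in combinations(others, r):
            s = len(coalition)
            weight = factorial(s) * factorial(n - s - 1) / factorial(n)
            # How much does adding this feature change the prediction?
            total += weight * (predict(set(coalition) | {feature})
                               - predict(set(coalition)))
    return total

for f in FEATURES:
    print(f, round(shapley_value(f), 3))
```

For this linear toy model, each feature's Shapley value works out to exactly its weight, and the values sum to the gap between the full-feature and no-feature predictions; the resulting per-feature scores are the kind of quantitative behavioral "fingerprint" that lets two labs compare their definitions objectively.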

“Now we can compare (different labs’) respective behavioral protocols using SimBA and see whether we’re looking, objectively, at the same or different behavior,” Golden said.

“This approach allows labs to design experiments however they like, but because you can now directly compare behavioral results from labs that are using different behavioral definitions, you can draw clearer conclusions between their results. Previously, inconsistent neural data could have been attributed to many confounds, and now we can cleanly rule out behavioral differences as we strive for cross-lab reproducibility and interpretability,” Golden said.

This research was supported by grants from the National Institutes of Health (K08MH123791), the National Institute on Drug Abuse (R00DA045662, R01DA059374, P30DA048736), National Institute of Mental Health (1F31MH125587, F31AA025827, F32MH125634), National Institute of General Medical Sciences (R35GM146751), Brain & Behavior Research Foundation, Burroughs Wellcome Fund, Simons Foundation, and Washington Research Foundation.

 

A video frame of two mice whose behavior is being analyzed by SimBA. The dots represent the body parts being tracked by the program.

CREDIT: 

Nastacia Goodwin
