Wednesday, January 31, 2024

 

Robot trained to read braille at twice the speed of humans


Peer-Reviewed Publication

UNIVERSITY OF CAMBRIDGE

Image: Researchers have developed a robotic sensor that incorporates artificial intelligence techniques to read braille at speeds roughly double that of most human readers. (Credit: University of Cambridge)




Researchers have developed a robotic sensor that incorporates artificial intelligence techniques to read braille at speeds roughly double that of most human readers.

The research team, from the University of Cambridge, used machine learning algorithms to teach a robotic sensor to quickly slide over lines of braille text. The robot was able to read the braille at 315 words per minute at close to 90% accuracy.

Although the robot braille reader was not developed as an assistive technology, the researchers say the high sensitivity required to read braille makes it an ideal test in the development of robot hands or prosthetics with comparable sensitivity to human fingertips. The results are reported in the journal IEEE Robotics and Automation Letters.

Human fingertips are remarkably sensitive and help us gather information about the world around us. Our fingertips can detect tiny changes in the texture of a material or help us know how much force to use when grasping an object: for example, picking up an egg without breaking it or a bowling ball without dropping it.

Reproducing that level of sensitivity in a robotic hand, in an energy-efficient way, is a big engineering challenge. In Professor Fumiya Iida’s lab in Cambridge’s Department of Engineering, researchers are developing solutions to this and other skills that humans find easy, but robots find difficult.

“The softness of human fingertips is one of the reasons we’re able to grip things with the right amount of pressure,” said Parth Potdar from Cambridge’s Department of Engineering and an undergraduate at Pembroke College, the paper’s first author. “For robotics, softness is a useful characteristic, but you also need lots of sensor information, and it’s tricky to have both at once, especially when dealing with flexible or deformable surfaces.”

Braille is an ideal test for a robot ‘fingertip’ as reading it requires high sensitivity, since the dots in each representative letter pattern are so close together. The researchers used an off-the-shelf sensor to develop a robotic braille reader that more accurately replicates human reading behaviour.

“There are existing robotic braille readers, but they only read one letter at a time, which is not how humans read,” said co-author David Hardman, also from the Department of Engineering. “Existing robotic braille readers work in a static way: they touch one letter pattern, read it, pull up from the surface, move over, lower onto the next letter pattern, and so on. We want something that’s more realistic and far more efficient.”

The robotic sensor the researchers used has a camera in its ‘fingertip’, and reads by using a combination of the information from the camera and the sensors. “This is a hard problem for roboticists as there’s a lot of image processing that needs to be done to remove motion blur, which is time and energy-consuming,” said Potdar.

The team developed machine learning algorithms so the robotic reader would be able to ‘deblur’ the images before the sensor attempted to recognise the letters. They trained the algorithm on a set of sharp images of braille with fake blur applied. After the algorithm had learned to deblur the letters, they used a computer vision model to detect and classify each character.
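The training data described above can be illustrated with a minimal sketch: take a sharp image and apply synthetic horizontal motion blur, producing (blurred, sharp) pairs for a deblurring network to learn from. This is an assumption-laden toy, not the team's actual pipeline; the dot positions, image size, and kernel length are arbitrary, and a simple averaging kernel stands in for real motion blur.

```python
import numpy as np

def motion_blur_kernel(length: int) -> np.ndarray:
    """1-D horizontal averaging kernel, a crude stand-in for the blur
    produced by sliding a camera-tipped sensor along a line of text."""
    return np.ones(length) / length

def apply_horizontal_blur(image: np.ndarray, length: int) -> np.ndarray:
    """Convolve each row with the blur kernel ('same'-size output)."""
    kernel = motion_blur_kernel(length)
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), axis=1, arr=image
    )

# Toy "sharp" braille-like image: dark background with bright dots
# (dot positions are arbitrary, chosen only for illustration).
sharp = np.zeros((12, 24))
for r, c in [(3, 4), (3, 10), (7, 4), (9, 16)]:
    sharp[r, c] = 1.0

blurred = apply_horizontal_blur(sharp, length=5)
# Pairs like (blurred, sharp) would form the supervised training set:
# the network sees `blurred` and learns to predict `sharp`.
```

In the paper's approach, a network trained on such fake-blur pairs is applied to real sensor frames before a separate computer-vision model classifies the characters.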

Once the algorithms were incorporated, the researchers tested their reader by sliding it quickly along rows of braille characters. The robotic braille reader could read at 315 words per minute with 87% accuracy, which is twice as fast and about as accurate as a human braille reader.

“Considering that we used fake blur to train the algorithm, it was surprising how accurate it was at reading braille,” said Hardman. “We found a nice trade-off between speed and accuracy, which is also the case with human readers.”

“Braille reading speed is a great way to measure the dynamic performance of tactile sensing systems, so our findings could be applicable beyond braille, for applications like detecting surface textures or slippage in robotic manipulation,” said Potdar.

In future, the researchers are hoping to scale the technology to the size of a humanoid hand or skin. The research was supported in part by the Samsung Global Research Outreach Program.

VIDEO: https://www.eurekalert.org/multimedia/1013206

Researchers have developed a robotic sensor that incorporates artificial intelligence techniques to read braille at speeds roughly double that of most human readers. (Note: this video is at 2x speed. Credit: University of Cambridge)

 

Drexel researchers propose AI-guided system for robotic inspection of buildings, roads and bridges

Peer-Reviewed Publication

DREXEL UNIVERSITY

Multi-scale approach for robotic crack measurement in concrete structures

Image: Researchers from Drexel University have developed a machine learning and computer vision system that can identify cracks in concrete infrastructure and guide a robotic scanner to create a model for assessment and monitoring. (Credit: Drexel University)




Our built environment is aging and failing faster than we can maintain it. Recent building collapses and structural failures of roads and bridges are indicators of a problem that’s likely to get worse, according to experts, because it’s just not possible to inspect every crack, creak and crumble to parse dangerous signs of failure from normal wear and tear. In hopes of playing catch-up, researchers in Drexel University’s College of Engineering are trying to give robotic assistants the tools to help inspectors with the job.

Augmenting visual inspection technologies — that have offered partial solutions to speed damage assessment in recent years — with a new machine learning approach, the researchers have created a system that they believe could enable efficient identification and inspection of problem areas by autonomous robots. Reported in the journal Automation in Construction, their multi-scale system combines computer vision with a deep-learning algorithm to pinpoint problem areas of cracking before directing a series of laser scans of the regions to create a “digital twin” computer model that can be used to assess and monitor the damage.

The system represents a strategy that would significantly reduce the overall inspection workload and enable the focused consideration and care needed to prevent structural failures.

“Cracks can be regarded as a patient’s medical symptoms that should be screened in the early stages,” the authors, Arvin Ebrahimkhanlou, PhD, an assistant professor, and Ali Ghadimzadeh Alamdari, a research assistant, both in Drexel’s College of Engineering, wrote. “Consequently, early and accurate detection and measurement of cracks are essential for timely diagnosis, maintenance, and repair efforts, preventing further deterioration and mitigating potential hazards.”

But right now, they note, so many of the nation’s buildings, bridges, tunnels and dams are among the walking wounded that the first priority should be setting up a triage system. Before the Bipartisan Infrastructure Law, the American Society of Civil Engineers estimated a backlog of $786 billion in repairs to roads and bridges. Adding to the challenge is a growing shortage of skilled infrastructure workers — including inspectors and those who would repair aging structures.

“Civil infrastructures include large-scale structures and bridges, but their defects are often small in scale,” Ebrahimkhanlou said. “We believe taking a multi-scale robotic approach will enable efficient pre-screening of problem areas via computer vision and precise robotic scanning of defects using nondestructive, laser-based scans.”

Instead of a physical measurement interpreted subjectively by human eyes, the system feeds high-resolution stereo-depth camera footage of the structure into a deep-learning program called a convolutional neural network. These programs, already used for facial recognition, drug development and deepfake detection, are gaining attention for their ability to spot the finest patterns and discrepancies in massive volumes of data.

Training the algorithms on datasets of concrete structure images turns them into crack-spotters.

“The neural network has been trained on a dataset of sample cracks, and it can identify crack-like patterns in the images that the robotic system collects from the surface of a concrete structure. We call regions containing such patterns ‘regions of interest,’” said Ebrahimkhanlou, who leads research on robotic and artificial-intelligence-based assessment of infrastructure, mechanical and aerospace structures in Drexel’s Department of Civil, Architectural, and Environmental Engineering.
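The pre-screening step can be sketched as a patch-wise scan over the surface image, where each patch gets a crack-likeness score and high-scoring patches become regions of interest. In this toy version, a simple local-contrast heuristic stands in for the trained neural network (the patch size, threshold, and scoring function are all illustrative assumptions, not the authors' method).

```python
import numpy as np

PATCH = 8  # patch size in pixels (hypothetical)

def crack_score(patch: np.ndarray) -> float:
    """Stand-in for the trained CNN: cracks appear as thin dark streaks
    on a uniform surface, so local contrast serves as a crude proxy."""
    return float(patch.std())

def regions_of_interest(image: np.ndarray, threshold: float):
    """Slide a non-overlapping patch grid over the image and return the
    (row, col) offsets of patches whose score exceeds the threshold."""
    rois = []
    h, w = image.shape
    for r in range(0, h - PATCH + 1, PATCH):
        for c in range(0, w - PATCH + 1, PATCH):
            if crack_score(image[r:r + PATCH, c:c + PATCH]) > threshold:
                rois.append((r, c))
    return rois

# Toy concrete surface: uniform grey with one dark "crack" streak.
surface = np.full((32, 32), 0.6)
surface[10:12, :] = 0.1             # horizontal crack through one patch row
rois = regions_of_interest(surface, threshold=0.05)
# Only the patches the crack passes through are flagged; the rest of
# the surface is skipped, which is the point of the pre-screening step.
```

Only the flagged regions would then receive the slower, high-precision laser scans, which is how the multi-scale design cuts the inspection workload.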

Once the “region of interest” — the cracked or damaged area — is identified, the program directs a robotic arm to scan over it with a laser line scanner, which creates a three-dimensional image of the damaged area. At the same time, a LiDAR (Light Detection and Ranging) camera scans the structure surrounding the crack. Stitching both plots together creates a digital model of the area that shows the width and dimensions of the crack and allows tracking changes between inspections.
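The width measurement from a single laser line scan can be illustrated with a minimal sketch: the crack shows up as a contiguous run of samples deeper than the surrounding surface, and its width is that run's length times the sample spacing. The threshold, spacing, and profile values here are hypothetical, not the authors' calibration.

```python
import numpy as np

def crack_width_mm(profile: np.ndarray, spacing_mm: float,
                   depth_threshold: float) -> float:
    """Estimate crack width from one laser line-scan depth profile:
    find the samples deeper than the threshold and multiply the span
    they cover by the spacing between samples."""
    deep = np.where(profile > depth_threshold)[0]
    if deep.size == 0:
        return 0.0
    return (deep[-1] - deep[0] + 1) * spacing_mm

# Toy profile: flat surface (depth ~0 mm) with a 4-sample-wide crack.
profile = np.zeros(20)
profile[8:12] = 2.5                 # crack region, 2.5 mm deep
width = crack_width_mm(profile, spacing_mm=0.05, depth_threshold=0.5)
# 4 samples * 0.05 mm spacing = 0.2 mm estimated width
```

Repeating such measurements across inspections is what lets the digital twin track crack growth over time, as described above.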

“Tracking crack growth is one of the advantages of producing a digital twin model,” Alamdari said. “In addition, it allows bridge owners to have a better understanding of the condition of their bridge, and plan maintenance and repair.”

The team tested the system in the lab on a concrete slab with a variety of cracks and deterioration. In a test of its ability to detect and measure small cracks, the system was sensitive enough to pinpoint and accurately size up the smallest of fissures — less than a hundredth of a millimeter wide — outperforming top-of-the-line cameras, scanners and fiber optic sensors by a respectable margin.

While human inspectors would still make the final call on when and how to repair the damages, the robotic assistants could greatly reduce their workload, according to the researchers. In addition, an automated inspection process would reduce oversights and subjective judgement errors that can happen when overworked human inspectors take the first look.

“This approach significantly reduces unnecessary data collection from areas that are in good structural condition while still providing comprehensive and reliable data necessary for condition assessment,” they wrote.

The researchers envision incorporating the multi-scale monitoring system as part of a larger autonomous monitoring framework including drones and other autonomous vehicles — like the one proposed by the Federal Highway Administration’s Nondestructive Evaluation Laboratory, which would use an array of tools and sensing technologies to autonomously monitor and repair infrastructure.

“Moving forward, we aim to integrate this work with an unmanned ground vehicle, enhancing the system's ability to autonomously detect, analyze, and monitor cracks,” Alamdari said. “The goal is to create a more comprehensive, intelligent and efficient system for maintaining structural integrity across various types of infrastructure. Additionally, real-world testing and collaboration with industry and regulatory bodies will be critical for practical application and continuous improvement of the technology.”

Robotic concrete crack scanning [VIDEO]

 

Machine sentience and you: what happens when machine learning goes too far


Peer-Reviewed Publication

TSINGHUA UNIVERSITY PRESS



There’s always some truth in fiction, and now is the time to get a step ahead of sci-fi dystopias and determine what risks machine sentience could pose to humans.

 

Although people have long pondered the future of intelligent machinery, such questions have become all the more pressing with the rise of artificial intelligence (AI) and machine learning. These machines mimic human interaction: they can help solve problems, create content, and even carry on conversations. For fans of science fiction and dystopian novels, a looming issue could be on the horizon: what if these machines develop a sense of consciousness?

 

Researchers published their results in the Journal of Social Computing on December 31, 2023.

 

While there is no quantifiable data presented in this discussion on artificial sentience (AS) in machines, there are many parallels drawn between human language development and the factors needed for machines to develop language in a meaningful way.

 

“Many of the people concerned with the possibility of machine sentience developing worry about the ethics of our use of these machines, or whether machines, being rational calculators, would attack humans to ensure their own survival,” said John Levi Martin, author and researcher. “We here are worried about them catching a form of self-estrangement by transitioning to a specifically linguistic form of sentience.”

 

The main characteristics making such a transition possible appear to be: unstructured deep learning, such as in neural networks (computer analysis of data and training examples to provide better feedback); interaction with both humans and other machines; and a wide range of actions to continue self-driven learning. An example of this would be self-driving cars. Many forms of AI check these boxes already, leading to concern about what the next step in their “evolution” might be.

 

This discussion states that it’s not enough to be concerned with just the development of AS in machines, but raises the question of whether we’re fully prepared for a type of consciousness to emerge in our machinery. Right now, with AI that can generate blog posts, diagnose an illness, create recipes, predict diseases or tell stories perfectly tailored to its inputs, it’s not far off to imagine having what feels like a real connection with a machine that has learned of its state of being. However, researchers of this study warn, that is exactly the point at which we need to be wary of the outputs we receive.

 

“Becoming a linguistic being is more about orienting to the strategic control of information, and introduces a loss of wholeness and integrity…not something we want in devices we make responsible for our security,” said Martin. As we’ve already put AI in charge of so much of our information, essentially relying on it to learn much the way a human brain does, entrusting it with so much vital information in an almost reckless way has become a dangerous game.

 

Mimicking human responses and strategically controlling information are two very separate things. A “linguistic being” can have the capacity to be duplicitous and calculated in its responses. An important question follows: at what point do we find out we’re being played by the machine?

 

What’s to come is in the hands of computer scientists to develop strategies or protocols to test machines for linguistic sentience. The ethics behind using machines that have developed a linguistic form of sentience or sense of “self” are yet to be fully established, but one can imagine it would become a social hot topic. The relationship between a self-realized person and a sentient machine is sure to be complex, and the uncharted waters of this type of kinship would surely bring about many concepts regarding ethics, morality and the continued use of this “self-aware” technology.

 

Maurice Bokanga, Alessandra Lembo and John Levi Martin of the Department of Sociology at the University of Chicago contributed to this research.

 


About Journal of Social Computing

Journal of Social Computing (JSC) is an open access, peer-reviewed scholarly journal which aims to publish high-quality, original research that pushes the boundaries of thinking, findings, and designs at the dynamic interface of social interaction and computation. This will include research in (1) computational social science — the use of computation to learn from the explosion of social data becoming available today; (2) complex social systems, or the analysis of how dynamic, evolving social collectives constitute emergent computers to solve their own problems; and (3) human computer interaction, whereby machines and persons recursively combine to generate unique knowledge and collective intelligence — or the intersection of these areas. The editorial board welcomes research from fields ranging across the social sciences, computer and information sciences, physics and ecology, communications and linguistics, and, indeed, any field or approach that can challenge and advance our understanding of the interface and integration of computation and social life. We seek to take risks, avoid boredom and court failure on the path to transformative new paradigms, insights, and possibilities. The journal is open to a diversity of theoretic paradigms, methodologies and applications.

 

About SciOpen 

SciOpen is a professional open access resource for discovery of scientific and technical content published by the Tsinghua University Press and its publishing partners, providing the scholarly publishing community with innovative technology and market-leading capabilities. SciOpen provides end-to-end services across manuscript submission, peer review, content hosting, analytics, and identity management, and offers expert advice to support each journal’s development through a range of options across all functions, such as Journal Layout, Production Services, Editorial Services, Marketing and Promotions, and Online Functionality. By digitalizing the publishing process, SciOpen widens the reach, deepens the impact, and accelerates the exchange of ideas.

Speaking in a local accent might make social robots seem more trustworthy and competent


People who speak the Berlin dialect trust social robots speaking the same dialect more than social robots speaking standard German


Peer-Reviewed Publication

FRONTIERS

Image: The robot used for the experiments. (Credit: the authors.)




Social robots can help us with many things: teaching, learning, caring. Because they’re designed to interact with humans, they’re designed to make us comfortable — and that includes the way they talk. But how should they talk? Some research suggests that people like robots to use a familiar accent or dialect, while other research suggests the opposite. 

“Surprisingly, people have mixed feelings about robots speaking in a dialect — some like it, while others prefer standard language,” said Katharina Kühne of the University of Potsdam, lead author of the study in Frontiers in Robotics and AI. “This made us think: maybe it's not just the robot, but also the people involved that shape these preferences.”

Talking the talk

Many factors affect people’s comfort levels with social robots. The robots work best when they appear more trustworthy and competent, and a human-like speaking voice contributes to this. But whether that speaking voice uses a dialect or a standard form of a language could impact the perception of its trustworthiness or competence. Standard language use is often viewed as more intelligent, but speaking in a dialect which is considered friendly or familiar can be comforting. 

“Imagine a robot that can switch to a dialect,” said Kühne. “Now, consider what's more critical in your interaction with a robot: feeling a connection (think of a friendly chat in an elderly home) or perceiving it as competent (like in a service setting where standard language matters).”

Ich bin ein Berliner

To test the impact of dialect use on robot acceptance, the scientists recruited 120 people living in Berlin or Brandenburg to take an online survey. They asked participants to watch videos in which a robot using a male human voice spoke in either standard German or the Berlin dialect, which is considered working-class and is sometimes used by media to give an informal, friendly impression.

“The Berlin dialect is generally understandable to most German speakers, including those who are not native German speakers but are fluent in the language,” explained Kühne. 

The scientists asked participants to rate the robot’s trustworthiness and competence, and to fill out a demographic questionnaire including age, gender, how long they’d lived in Berlin, how well they spoke the Berlin dialect, and how often they used it. The survey automatically recorded the type of device that participants used to view the videos — a phone, a tablet, or a computer. 

Speaking the same language

There was a clear link between trustworthiness and competence, with higher perceived competence predicting higher perceived trustworthiness. In general, the respondents preferred a robot speaking standard German. However, respondents who were more comfortable with the Berlin dialect preferred the robot speaking dialect. 

“If you're good at speaking a dialect, you're more likely to trust a robot that talks the same way,” said Kühne. “It seems people trust the robot more because they find a similarity.”

Respondents who were using a phone or tablet rather than a computer to view the videos also tended to give lower ratings to the robot speaking standard German. The scientists speculate that this may be because small, portable devices meant the respondents had more distractions from the videos and a higher cognitive load, so the trust signal of the standard German had less of an impact.

“This leaves us without clear evidence for or against the idea that people facing challenges might find more comfort in social robots speaking in a familiar dialect,” said Kühne. “But if a robot is using the standard language and it's essential for people to perceive it as competent in the interaction, it might be beneficial to minimize cognitive load. We plan to dive deeper by testing cognitive load during conversations.”

The scientists pointed out that speaking or understanding a dialect can be part of an in-group identity, allowing the robots to take advantage of in-group bias: people tend to prefer robots that are somehow like them. However, the prestige of a dialect may affect how it’s received by people hearing it. 

“Context matters a lot in our conversations, and that's why we're planning to conduct more studies in real-life situations,” said Kühne.


BERLIN ACCENT

Video of the robot speaking the Berlin accent [VIDEO]

STANDARD GERMAN

Video of the robot speaking standard German [VIDEO]