Robots are coming for the lawyers — which may be great for anyone in need of cheap legal assistance
Imagine what a lawyer does on a given day: researching cases, drafting briefs, advising clients. While technology has been nibbling around the edges of the legal profession for some time, it's hard to imagine those complex tasks being done by a robot.
And it is those complicated, personalized tasks that have led technologists to include lawyers in a broader category of jobs that are considered pretty safe from a future of advanced robotics and artificial intelligence.
But, as we discovered in a recent research collaboration to analyze legal briefs using a branch of artificial intelligence known as machine learning, lawyers' jobs are a lot less safe than we thought. It turns out that you don't need to completely automate a job to fundamentally change it. All you need to do is automate part of it.
While this may be bad news for tomorrow's lawyers, it could be great for their future clients—particularly those who have trouble affording legal assistance.
Technology can be unpredictable
Our research project—in which we collaborated with computer scientists and linguists at MITRE, a federally funded nonprofit devoted to research and development—was not meant to be about automation. As law professors, we were trying to identify the text features of successful versus unsuccessful legal briefs.
We gathered a small cache of legal briefs and judges' opinions and processed the text for analysis.
One of the first things we learned is that it can be hard to predict which tasks are easily automated. For example, citations in a brief—such as "Brown v. Board of Education 347 U.S. 483 (1954)"—are very easy for a human to pick out and separate from the rest of the text. Not so for machine learning software, which got tripped up in the blizzard of punctuation inside and outside the citation.
It was like those "Captcha" boxes you are asked to complete on websites to prove you're not a robot—a human can easily spot a telephone pole, but a robot will get confused by all the background noise in the image.
A tech shortcut
Once we figured out how to identify the citations, we inadvertently stumbled on a methodology to automate one of the most challenging and time-consuming aspects of legal practice: legal research.
The scientists at MITRE used a methodology called "graph analysis" to create visual networks of legal citations. The graph analysis enabled us to predict whether a brief would "win" based on how well other briefs performed when they included a particular citation.
Later, however, we realized the process could be reversed. If you were a lawyer responding to the other side's brief, normally you would have to search laboriously for the right cases to cite using an expensive database. But our research suggested that we could build a database with software that would just tell lawyers the best cases to cite. All you would need to do is feed the other side's brief into the machine.
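To make the shortcut concrete, here is a minimal sketch of the idea in Python. It is not the system MITRE built: the toy data, the scoring rule (a simple per-citation win rate rather than the full graph analysis), and the function names are our own illustrative assumptions.

```python
from collections import defaultdict

# Toy corpus: each past brief is a set of citations plus whether that side won.
# (A real system would need thousands of briefs and judicial outcomes.)
past_briefs = [
    {"citations": {"Brown v. Board", "Marbury v. Madison"}, "won": True},
    {"citations": {"Brown v. Board", "Plessy v. Ferguson"}, "won": False},
    {"citations": {"Marbury v. Madison", "Gideon v. Wainwright"}, "won": True},
]

def citation_win_rates(briefs):
    """Fraction of briefs containing each citation that went on to win."""
    wins, totals = defaultdict(int), defaultdict(int)
    for brief in briefs:
        for cite in brief["citations"]:
            totals[cite] += 1
            wins[cite] += brief["won"]
    return {cite: wins[cite] / totals[cite] for cite in totals}

def suggest_citations(opposing_citations, briefs, top_n=3):
    """Rank cases that appeared alongside the opponent's citations in winning briefs."""
    rates = citation_win_rates(briefs)
    candidates = defaultdict(float)
    for brief in briefs:
        if brief["won"] and brief["citations"] & opposing_citations:
            for cite in brief["citations"] - opposing_citations:
                candidates[cite] = max(candidates[cite], rates[cite])
    return sorted(candidates, key=candidates.get, reverse=True)[:top_n]

# Feed in the other side's citations and get back candidate cases to cite.
print(suggest_citations({"Brown v. Board", "Plessy v. Ferguson"}, past_briefs))
```

A real tool would swap the toy data for a large corpus and a proper citation network, but the shape of the shortcut is the same: the expensive research step becomes a lookup.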
Now we didn't actually construct our research-shortcut machine. We would need a mountain of lawyers' briefs and judicial opinions to make something useful. And researchers like us do not have free access to data of that sort—even the government-run database known as PACER charges by the page.
But it does show how technology can turn any task that is extremely time-consuming for humans into one where the heavy lifting can be done at the click of a button.
A history of partial automation
Automating the hard parts of a job can make a big difference both for those performing the job and the consumers on the other side of the transaction.
Take, for example, a hydraulic crane or a power forklift. While today people think of operating a crane as manual work, these powered machines were considered labor-saving devices when they were first introduced because they supplanted the human power involved in moving heavy objects around.
Forklifts and cranes, of course, didn't replace people. But like automating the grind of legal research, power machines multiplied the amount of work one person could accomplish within a unit of time.
Partial automation of sewing machines in the early 20th century offers another example. By the 1910s, women working in textile mills were no longer sewing on a single machine—as you might today on a home sewing machine—but wrangling an industrial-grade machine with 12 needles sewing 4,000 stitches per minute. These machines could automatically perform all the fussy work of hemming, sewing seams and even stitching the "embroidery trimming of white underwear." Like an airline pilot flying on autopilot, the workers weren't sewing so much as monitoring the machine for problems.
Was the transition bad for workers? Maybe somewhat, but it was a boon for consumers. In 1912, women perusing the Sears mail order catalog had a choice between "drawers" with premium hand-embroidered trimming, and a much cheaper machine-embroidered option.
Likewise, automation could help reduce the cost of legal services, making them more accessible for the many people who can't afford a lawyer.
DIY lawyering
Indeed, in other sectors of the economy, technological developments in recent decades have enabled companies to shift work from paid workers to customers.
Touchscreen technology, for example, enabled airlines to install check-in kiosks. Similar kiosks are almost everywhere—in parking lots, gas stations, grocery stores and even fast-food restaurants.
At one level these kiosks are displacing paid labor by employees with unpaid labor by consumers. But that argument assumes that everyone could access the product or service back when it was performed by an employee.
In the context of legal services, the many consumers who can't afford a lawyer are already forgoing their day in court altogether or handling legal claims on their own—often with bad results. If partial automation means an overwhelmed legal aid lawyer now has time to take more clients' cases or clients can now afford to hire a lawyer, everyone will be better off.
In addition, tech-enabled legal services can help consumers do a better job of representing themselves. For example, the federal district court in Missouri now offers a platform to help individuals filing for bankruptcy prepare their forms—either on their own or with a free 30-minute meeting with a lawyer. Because the platform provides a head start, both the lawyer and consumer can make better use of the 30-minute time slot.
More help for consumers may be on the way—there is a bumper crop of tech startups jostling to automate various types of legal work. So while our research-shortcut machine hasn't been built, powerful tools like it may not be far off.
And the lawyers themselves? Like factory and textile workers armed with new power tools, they may be expected to do more work in the time they have. But it should be less of a grind. It might even free them up to meet with clients.
The pandemic is forcing a rethink in the food services industry. Again.
The big picture: Robotics and automation are more affordable now than they once were, and with more people questioning whether they want to work in the food services industry, given its unstable schedules and relatively low wages, there might be no better time than the present for restaurant operators to further test the waters.
For better or for worse, the pandemic has impacted nearly every facet of modern life. Nowhere is that more evident than in the food services industry.
In just over a year and a half, virtually everything has changed as it relates to food. Online grocery shopping has finally started to gain traction. Many fast food joints closed their lobbies, turning instead to takeout or delivery to keep the lights on. Others that weren’t able or willing to adapt often went out of business.
All of the turmoil has wreaked havoc on the job market, and as The Wall Street Journal reports, it’s forcing yet another rethink by restaurants and executives.
White Castle last year started testing a robotic fry cooker at select locations. “Flippy,” from Miso Robotics, operates 23 hours a day at the Merrillville, Indiana, White Castle – it gets one hour of downtime a day for cleaning. The company is so happy with the bot’s performance that it is planning to bring it to 10 additional restaurants across the country.
According to the US Bureau of Labor Statistics, there were more than 1.4 million job openings in the accommodation and food services sector as of May 2021. That’s more than double the number from a year earlier.
Indeed, with so many unfilled jobs and the pandemic still looming, even more restaurants and fast food establishments are willing to experiment with replacing human labor with robots. And the cost isn’t nearly as much of a factor as it once was.
“The 17-year-old fry cook isn’t expensive labor, but the 17-year-old becomes expensive labor if he or she doesn’t show up for work,” said Ruth Cowan, an expert in kitchen automation.
Image courtesy of IdentiFlight.
By David Waterworth
I expect that when the large oil companies transition into energy companies and buy up all the renewable generation — thus continuing their oligopoly — the FUD around wind turbines will cease. Very few people cared that birds died from the pollution caused by coal-fired power stations, but one sparrow gets hit by a turbine and the deal’s off.
A wind farm in Tasmania has pioneered a novel solution. It probably would have been simpler and less expensive to build the wind farm somewhere else. However, what they have come up with could be a world first, and it might prove useful in many countries around the globe.
The Cattle Hill Wind Farm in Tasmania's Central Highlands is trialling a way to mitigate its impact on the endangered Tasmanian wedge-tailed eagle. Tower-mounted optical units detect eagles, analyze their flight paths and shut down any turbines in their way. There has been only one fatality so far, and that was likely due to human error.
One common argument against the use of wind turbines is the belief that they cause a lot of bird deaths. (Trump said so!) There is not a lot of empirical data on this issue, but what there is suggests that far more bird deaths are caused by the pollution from coal- and gas-fired power generation, and by cats. "Wind farms killed approximately seven thousand birds in the United States in 2006 but nuclear plants killed about 327,000 and fossil-fuelled power plants 14.5 million," according to a 2019 study.
A Canadian study found that the vast majority of bird deaths were from feral cats, followed by domestic cats. In all, 60% of all bird deaths are caused by cats. The Royal Society for the Protection of Birds in the UK has put in a wind turbine at its headquarters to make the point that more birds will die from climate change than from this form of renewable energy generation.
Back in Tasmania, the IdentiFlight system is being fine-tuned. As a side effect, it is collecting massive amounts of data about the flights of eagles, with over 3 million images taken and over 130,000 flight paths tracked. As for the economics: each shutdown lasts about 2 minutes, adding up to roughly 14 hours of curtailment per day across the 48 turbines. The eagles don't fly at night, which is when the wind blows the most.
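To put rough numbers on that claim, here is a back-of-the-envelope calculation. It assumes the reported 14 hours is the combined curtailment across the whole 48-turbine farm, which is one plausible reading of the figures rather than a confirmed one.

```python
# Rough curtailment math, assuming 14 hours/day is summed across all 48 turbines.
turbines = 48
total_curtailment_hours_per_day = 14
shutdown_minutes = 2

minutes_per_turbine = total_curtailment_hours_per_day * 60 / turbines  # ~17.5 min per turbine per day
shutdowns_per_turbine = minutes_per_turbine / shutdown_minutes         # ~9 shutdowns per turbine per day
share_of_day = minutes_per_turbine / (24 * 60)                         # ~1.2% of each turbine's day

print(f"{minutes_per_turbine:.1f} min/turbine/day, "
      f"{shutdowns_per_turbine:.0f} shutdowns/turbine/day, "
      f"{share_of_day:.1%} of the day curtailed")
```

On that reading, the lost generation is on the order of 1% of output, and less in practice, since the eagles fly by day while much of the wind blows at night.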
This good corporate behavior is generating a better social license for the wind power industry, an example that could be followed in other parts of Australia and around the world.
Related story: IdentiFlight AI System Hugely Reduces Bird Fatalities At Wind Farms
Robots who goof: Can we trust them again?
When robots make mistakes—and they do from time to time—reestablishing trust with human co-workers depends on how the machines own up to the errors and how human-like they appear, according to University of Michigan research.
In a study that examined multiple trust repair strategies—apologies, denials, explanations or promises—the researchers found that certain approaches directed at human co-workers are better than others and often are impacted by how the robots look.
"Robots are definitely a technology but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers," said Lionel Robert, associate professor at the U-M School of Information.
"Robots will make mistakes when working with humans, decreasing humans' trust in them. Therefore, we must develop ways to repair trust between humans and robots. Specific trust repair strategies are more effective than others and their effectiveness can depend on how human the robot appears."
For their study, published in the Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication, Robert and doctoral student Connor Esterwood examined how the repair strategies—including a new strategy of explanations—impact the elements that drive trust: ability (competency), integrity (honesty) and benevolence (concern for the trustor).
The researchers recruited 164 participants to work with a robot in a virtual environment, loading boxes onto a conveyor belt. The human was the quality assurance person, working alongside a robot tasked with reading serial numbers and loading 10 specific boxes. One robot was anthropomorphic or more humanlike, the other more mechanical in appearance.
The robots were programmed to intentionally pick up a few wrong boxes and to make one of the following trust repair statements: "I'm sorry I got the wrong box" (apology), "I picked the correct box so something else must have gone wrong" (denial), "I see that was the wrong serial number" (explanation), or "I'll do better next time and get the right box" (promise).
Previous studies have examined apologies, denials and promises as factors in trust or trustworthiness but this is the first to look at explanations as a repair strategy, and it had the highest impact on integrity, regardless of the robot's appearance.
When the robot was more humanlike, trust was even easier to restore for integrity when explanations were given and for benevolence when apologies, denials and explanations were offered.
As in the previous research, apologies from robots produced higher integrity and benevolence than denials. Promises outpaced apologies and denials when it came to measures of benevolence and integrity.
Esterwood said the research is ongoing, with future studies examining other combinations of trust repair strategies in different contexts and with other kinds of violations.
"In doing this we can further extend this research and examine more realistic scenarios like one might see in everyday life," Esterwood said. "For example, does a barista robot's explanation of what went wrong and a promise to do better in the future repair trust more or less than a construction robot?
When searching for signs of life on other planets, scientists say caves are a crucial place to look. But how can a team on Earth effectively explore intricate, dark, unfamiliar landscapes on another world?
NASA and Boston Dynamics have found an answer: Fully autonomous robots.
Caves are one of the most likely places to find signs of both current and past life on other planets because they are capable of protecting life from cosmic rays and extreme temperature fluctuations around our solar system. A NASA project called BRAILLE is now working on exploring Mars-like caves that already exist on Earth in order to hone key technologies for future missions.
According to researchers, the project has enabled the first-ever fully autonomous robotic exploration of these types of caves, which are several hundred meters long and limit communication with the surface. As the robots explore, with no prior information about the environment, a team of researchers outside the cave simultaneously performs actions that scientists on Earth would be executing during a real Martian mission.
The research, which project lead Ali Agha said could "fundamentally change how we think about future missions," is now in year three of four in its quest to journey to the moon, the red planet and beyond.
But researchers are interested in exploring caves for another reason beyond finding signs of life: caves provide obvious natural shelters for future astronauts exploring Mars or the moon.
"Future potential human exploration missions can benefit from robots in many different ways," Agha told CBS News. "Particularly, robots can be sent in precursor missions to provide more information about the destination before humans land on those destinations. In addition, robots can accompany astronauts during the missions to help with scouting certain terrains or with logistics and many tasks that can make astronauts' missions safer and more efficient."
So, how is designing a Mars robot different from designing an Earth robot? They are similar in a lot of ways, Agha said, especially when it comes to the AI robot brain, called NeBula, and its ability to process information and make decisions when they don't have contact with scientists on Earth.
But when it comes to the robot body, that's where things get more complicated. Scientists need to consider temperature management, shielding the robots from radiation, and the severe power and energy constraints that come with trekking to a far-away world — aspects that don't need to be considered for robots designed for Earth.
Boston Dynamics' Spot robot has proven an extremely viable body for NeBula.
"SPOT is one of the most capable robots that we have and it is amazing to see how it successfully reacts to high-level decisions and commands coming from the robot brain and how it can maintain stability over rough and extreme terrains," Agha said. "In addition to our capable traditional wheeled rovers, the ability to "walk" is a huge asset when dealing with uneven terrains with no roads and no flat surfaces."
There are three main factors for the robot's success:
It needs to be able to carry enough payload for its eyes, ears and brain to be able to traverse the challenging Martian or lunar terrain.
It needs to carry a meaningful amount of science instruments.
It must prove it can maintain a "reasonable" level of stability, speed and endurance on another world.
"We have these multiple mobile robots that can carry different instruments, as opposed to one big robot that's going to have trouble traversing its terrain," said deputy project lead Benjamin Morrell, referring to past Martian rovers.
"Boston Dynamics Spot robot is one of the few robots that satisfy these constraints simultaneously," Agha said. "So integrating our robot brain, NeBula, on Spot has been pushing the boundaries of what was possible in exploring unknown Mars-like environments."
The robots typically cannot communicate from inside the cave, so scientists eagerly await their return to the surface for data, which could include a 3D map of the cave's interior, information on science targets or general findings about the environment.
Some of the robots are also equipped with arms to bring back small samples from the cave walls for analysis. Researchers hope that these robots will be able to autonomously carry out parts of future missions in space, after humans have built up a certain level of trust with them.
"The next-generation robot bodies and mechanical locomotion capabilities would enable new types of missions over terrains that were otherwise inaccessible by traditional rovers," Agha said. "Also, due to the increased speed and traversal capabilities, future missions can target destinations that are traditionally considered to be too far from landable regions on Mars."
First published on August 9, 2021.
Sophie Lewis is a social media producer and trending writer for CBS News, focusing on space and climate change.
Really leaning into the Robocop vibe
By James Vincent Aug 10, 2021
Chinese tech giant Xiaomi has unveiled a quadrupedal robot named CyberDog: an experimental, open-source machine that the firm says “holds unforetold possibilities.”
CyberDog is the latest example of tech companies embracing the quadrupedal form factor in robotics. The most notable example of the trend is Spot, a machine built by US firm Boston Dynamics. Spot went on sale last year for $74,500 and has been put to a range of uses, from surveying dangerous mines to helping doctors connect with patients remotely. It’s also been tested by both law enforcement and the military, though not as a weapon.
It’s not clear what purpose Xiaomi envisions for CyberDog. In a press release, the company stressed the open-source nature of the machine’s design and that it would release only 1,000 units initially for “Xiaomi Fans, engineers, and robotic enthusiasts.” The company says it hopes these first users will “propel the development and potential of quadruped robots” and is pricing the robot to sell. The first 1,000 units will cost just 9,999 Yuan, or roughly $1,540 (though it’s not clear if this price will be the same for any future releases).
The same press release highlights CyberDog’s “pet-like nature,” including its ability to respond to voice commands and follow its owner like a real dog. Looking at pictures of CyberDog, though, it’s clear Xiaomi isn’t pitching the machine as a rival to Aibo, Sony’s own robot canine. While Aibo is small and cute, CyberDog is sleek and futuristic — even a little menacing. Renders of the machine make it look like the protagonist in a sci-fi TV show, pacing up stairs and appearing silhouetted in doorways. Inevitable comparisons to Black Mirror’s “Metalhead” episode will be made, as they always are.
Xiaomi says CyberDog is nimble enough to perform backflips, can trot along at speeds of 3.2m/s (compared to Spot’s 3.9m/s), and weighs 3kg (compared to Spot’s 5.2kg). CyberDog is powered by Nvidia’s Jetson Xavier AI platform and is equipped with an array of cameras and sensors. These include touch sensors, a GPS module, an ultra-wide-angle fisheye lens, and Intel’s RealSense D450 camera for depth-sensing. These components enable the robot to navigate semi-autonomously.
“CyberDog can analyze its surroundings in real-time, create navigational maps, plot its destination, and avoid obstacles. Coupled with human posture and face recognition tracking, CyberDog is capable of following its owner and darting around obstructions,” says Xiaomi. The machine can also respond to voice commands, including recognizing wake words and instructions, or it can be controlled using a connected smartphone app.
CyberDog also has three USB-C ports and one HDMI port, which Xiaomi says can be used to customize its hardware. The company suggests lidar sensors, panoramic cameras, and searchlights could all be added to the robot.
The release of CyberDog by Xiaomi is very interesting, though not necessarily as a product in its own right. Instead, it says a lot about the current robotics landscape and the accessibility of this tech.
Boston Dynamics popularized the quadrupedal format for robots, and companies around the world are now exploring exactly how and where such machines can be deployed effectively. The price of this hardware has been falling, though, allowing new use cases to be explored. Earlier this year, Chinese robotics firm Unitree released a quadrupedal bot that cost just $2,700, and Xiaomi’s own CyberDog undercuts that again. Obviously, the capabilities of these machines will not be identical, but broader access to the technology will show whether it’s worth pursuing at all.