Monday, February 24, 2020

Protesters block rail crossing in East Vancouver in support of Wet'suwet'en for hours Sunday

CBC February 23, 2020

A group of demonstrators who had set up a rail blockade at a major train crossing in East Vancouver on Sunday has ended its protest, hours after it began.

Approximately 40 people gathered just before noon at the CN rail lines near Glen Dr. and Venables St., violating an injunction the rail company was granted earlier this month.

Protest organizer Natalie Knight said the blockade was in solidarity with Wet'suwet'en hereditary chiefs opposing the Coastal GasLink pipeline project, as well as the Mohawk barricading a rail line near Tyendinaga in Ontario.

"We will be here as long as we can," said Knight.

Vancouver Police said they were monitoring the protesters and had informed them they are in violation of the injunction.

"The protestors are off to the side of the tracks and at this time are not blocking the rail lines," said Sergeant Aaron Roed with the Vancouver Police Department.

Police told Knight there would be no arrests if protesters remained on the sidewalk. Despite what police said, however, Knight said some protesters were on the rail tracks.

She said protesters would stand down once the demands of the Wet'suwet'en hereditary chiefs were met.

"We've gotten a lot of honks of support, a lot of fists in the air from drivers showing support for our action," she said.

CN said it is monitoring the protest closely.

"Trespassing on railway property and/or tampering with railway equipment is not only illegal, but also exceedingly dangerous," said Jonathan Abecassis, a media spokesperson with CN.

Hereditary chiefs from the Wet'suwet'en First Nation were expected to return to B.C. on Sunday after visiting Mohawk communities in Eastern Canada, with no signs that blockades crippling the country's rail network will come down.

Prime Minister Trudeau said Friday that while the government is ready to talk, blockades that began two weeks ago must come down and that the situation is "unacceptable and untenable."

Hereditary chiefs have said they are ready for discussions with the B.C. and federal governments after the RCMP and Coastal GasLink leave their traditional territory.
Coronavirus email hoax led to violent protests in Ukraine

Protesters blocked the arrival of evacuees from China

By Jay Peters@jaypeters Feb 21, 2020
Photo by MAKSYM MYKHAILYK/AFP via Getty Images

An email that appeared to come from Ukraine’s ministry of health containing false information about coronavirus cases in the country led to a number of violent protests and standoffs with police, reports BuzzFeed News.

The email originated from outside Ukraine, according to a government statement, and it falsely claimed there were five cases of coronavirus in the country. In reality, there have been zero reported cases of the virus in Ukraine. But the email was sent the same day evacuees from China landed in the country, and some Ukrainian residents protested the evacuees’ arrival by blocking roads that led to medical facilities and, in some cases, by smashing the windows of the buses carrying those evacuees.

UKRAINIAN AUTHORITIES RELEASED A STATEMENT SAYING THE REPORTS WERE NOT TRUE

To try to calm citizens, Ukraine’s Center for Public Health released a statement saying reports of five cases of coronavirus were false, and Ukraine President Volodymyr Zelensky published a Facebook post saying the evacuees were all healthy and that they would be quarantined for two weeks out of extra caution. Zelensky also urged citizens not to block their arrival.

There have only been two confirmed cases of Ukrainians with the coronavirus; both were among the many people infected on the cruise ship that was docked off the coast of Japan. They have fully recovered, reports BuzzFeed News.

With so many people searching for information online about coronavirus, there is continued risk that people may come across misinformation and hoaxes about the disease, especially on social networks like Facebook and Twitter that are ill-equipped to handle fast-changing global news events and the flood of user-generated posts that accompany them. Recode has put together a good summary of some of the most pervasive coronavirus hoaxes that have been widely shared. And we also have a guide about how to investigate information online and determine if it’s false or misleading.
Donald Trump ads will take over YouTube for Election Day

SINCE YOUTUBE IS GLOBAL, DO WE ALL GET A VOTE FOR THE SO-CALLED GREAT LEADER OF THE FREE WORLD?

It’s not the first time he’s made a major ad buy on YouTube

By Makena Kelly@kellymakena Feb 22, 2020
Illustration by Alex Castro / The Verge

YouTube’s homepage will feature ads in support of Donald Trump’s reelection campaign on Election Day 2020, Bloomberg reported on Thursday.

Trump’s reelection campaign bought out YouTube’s masthead ad space for “early November,” including Election Day, November 3, according to Bloomberg. By purchasing this highly trafficked ad space, the Trump campaign is betting that the visibility could aid his candidacy in some of the most important days leading up to the election.

This isn’t the first time that the Trump campaign purchased the coveted masthead ads ahead of important election events. Last summer, Trump’s reelection campaign spent between $500,000 and $1 million on the ad space ahead of the first Democratic presidential debate in June. The debate was livestreamed on YouTube, and the ads blasted Trump’s face and message to potential future Democratic voters. An unusually high number of candidates qualified for that debate, so it was split into two days.

Bloomberg reported Thursday that the YouTube banner is “more akin to a Super Bowl ad” than a traditional targeted ad buy. It’s not clear how much Trump’s reelection campaign spent on the ad space, but it can cost campaigns more than $1 million a day.

In October 2019, YouTube introduced a specialized tool called Instant Reserve that allowed campaigns to reserve ad space in particular regions on particular days, with an eye toward state primary contests. The following month, YouTube expanded that initial release, giving candidates the option to buy dates through the end of 2020.
KETTLE CALLS POT BLACK

President and COO of AT&T, a huge tech company, worries about tech companies’ power

He says Facebook needs to consider editorial integrity
Photo by John Lamparski / Getty Images for Advertising Week New York

John Stankey, president and COO of AT&T, a mega-corporation forged from multiple big acquisitions, is worried about tech companies’ power. In an interview with Yahoo Finance’s Influencers with Andy Serwer, Stankey said he’s “really concerned about the concentration of economic power” in big tech companies and how they approach their “platforms’ influence on society.”

Stankey’s concern about concentrated economic power is particularly funny given that AT&T now owns Time Warner, which controls HBO, Turner, and Warner Bros. It already operated DirecTV. It, too, has a lot of concentrated economic power.

Stankey also took the interview as an opportunity to jab Facebook. He called on the social networking company to exercise “editorial integrity” or make sure news on the platform is legitimate and not fake. “If that’s where people are consuming facts and information ... then you probably need to think about what the editorial integrity of your platform is,” he said.

Facebook has routinely said it doesn’t consider itself to be a media company. “News and media are not the primary things people do on Facebook,” Mark Zuckerberg wrote in a 2016 Facebook post, “so I find it odd when people insist we call ourselves a news or media company in order to acknowledge its importance.”

Stankey clearly wants Facebook to behave more like a media company, which AT&T now is, but worrying about “economic power” is slightly hypocritical given AT&T’s own dominance in the telecom and media space. The company has often bought its way into markets and industries and is now a massive entity.

With its Time Warner acquisition, AT&T now competes for advertising dollars and people’s attention, meaning it competes with Facebook more than ever.

More and more older adults are using marijuana

It’s been a trend for about a decade

By Nicole Wetsman Feb 24, 2020
Photo by Katie Falkenberg / Los Angeles Times via Getty Images

More and more older adults are using some form of marijuana, according to new survey data, and their doctors aren’t prepared to talk with them about it.

The percentage of adults over the age of 65 who said they’d used some form of cannabis in the past year was 75 percent higher in 2018 — when 4.2 percent said they’d used it — than it was in 2015, according to new data published in JAMA Internal Medicine. Cannabis use in this group has been increasing dramatically since around 2006 when less than 0.5 percent of older adults said they’d used the drug.
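As a quick sanity check on those figures (a back-of-the-envelope sketch; the 2015 baseline below is inferred from the stated numbers, not quoted from the study):

```python
# Back-of-the-envelope check of the survey figures quoted above.
# The 2018 rate (4.2%) is stated; the 2015 baseline is inferred here.
rate_2018 = 4.2            # percent of adults 65+ reporting past-year use, 2018
relative_increase = 0.75   # "75 percent higher" than 2015

implied_2015 = rate_2018 / (1 + relative_increase)
print(f"Implied 2015 rate: {implied_2015:.1f}%")  # ≈ 2.4%
```

That implied 2015 rate of roughly 2.4 percent is consistent with the trend the study describes, up from less than 0.5 percent around 2006.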

The trend mirrors what study author and geriatrician Benjamin Han says he sees from patients. “Ten years ago, no one asked me about cannabis use ever. Now, it’s a very common question when I’m in the clinic,” says Han, an assistant professor in the division of geriatric medicine and palliative care at the New York University School of Medicine. “I probably get asked about once a week. There’s a lot of interest.”

The study used results from the National Survey on Drug Use and Health and included answers from over 14,000 people. The survey didn’t ask the older adults who used marijuana why they chose to use it, Han says. “But generally, a lot of it is probably the legalization of medical and recreational cannabis, and destigmatization. And, that we have more and more information about the use of cannabis for chronic illnesses.”

According to the survey data, people with diabetes had particularly large jumps in their reported cannabis use. That might be because some evidence shows that cannabis may help with nerve pain, which chronic diabetes can trigger, Han says. Use in patients with cancer also went up.

But the survey didn’t ask people if they had conditions like arthritis, Parkinson’s disease, or chronic pain, all of which are qualifying conditions for medical marijuana in many states and tend to be conditions people are interested in treating with the drug. A study from 2019 that held focus groups with older adults found that many used cannabis for pain management. That’s a limitation of this study, Han says. “It doesn’t capture many of the chronic conditions that affect older adults.”

Despite increasing cannabis use in this group, many doctors aren’t comfortable talking with their patients about it, Han says. That’s partly because there’s still limited evidence on its medical use, which is particularly true for older adults who are often excluded from clinical studies more generally. But it may be particularly important for this age group to talk to a doctor before they use cannabis. “They’re more vulnerable to anything with psychoactive properties, like alcohol,” Han says. “These substances can also interact negatively with other prescribed drugs, and older patients with more chronic conditions more likely take more medications.”

Health care providers might not traditionally think of people over 65 as drug users, but this study data highlights the need to revisit that assumption. “We do a very poor job screening and talking to older patients about drug and alcohol use,” Han says. “That may have negative health effects.”
Tesla Autopilot’s role in fatal 2018 crash will be decided this week

Here’s what to expect as the NTSB wraps its investigation

By Sean O'Kane@sokane1 Feb 24, 2020
 
Illustration by Alex Castro / The Verge


Tesla’s Autopilot is about to be in the government’s spotlight once again. On February 25th in Washington, DC, investigators from the National Transportation Safety Board (NTSB) will present the findings of its nearly two-year probe into the fatal crash of Wei “Walter” Huang. Huang died on March 23rd, 2018, when his Tesla Model X hit the barrier between a left exit and the HOV lane on US-101, just outside of Mountain View, California. He was using Tesla’s advanced driver assistance feature, Autopilot, at the time of the crash.

It will be the second time the NTSB has held a public meeting about an investigation into an Autopilot-related crash; in 2017, the NTSB found that a lack of “safeguards” helped contribute to the death of 40-year-old Joshua Brown in Florida. Tomorrow’s event also comes just a few months after the NTSB held a similar meeting where it said Uber was partially at fault for the death of Elaine Herzberg, the pedestrian who was killed in 2018 after being hit by one of the company’s self-driving test vehicles.

After investigators lay out their findings, the board’s members will then vote on any recommendations proposed and issue a final ruling on the probable cause of the crash. While the NTSB doesn’t have the legal authority to implement or enforce those recommendations, they can be adopted by regulators. The entire meeting will be live-streamed on the NTSB’s website starting at 9:30AM ET.

THE MEETING STARTS AT 9:30AM ET, AND IT WILL BE LIVE-STREAMED

In the days before the meeting, the NTSB opened the public docket for the investigation, putting the factual information collected by the NTSB investigators on display. (Among the findings: Huang had experienced problems with Autopilot in the same spot where he crashed, and he was possibly playing a mobile game before the crash.) The NTSB also released a preliminary report back in June 2018 that spelled out some of its earliest findings, including that Huang’s car steered itself toward the barrier and sped up before impact.

Tesla admitted shortly after Huang’s death that Autopilot was engaged during the crash, but it pointed out that he “had received several visual and one audible hands-on warning earlier in the drive,” and claimed that Huang’s hands “were not detected on the wheel for six seconds prior to the collision,” which is why “no action was taken” to avoid the barrier.

That announcement led to the NTSB removing Tesla from the investigation for releasing information “before it was vetted and confirmed by” the board — a process that all parties have to agree to when they sign onto NTSB investigations.

The newly released documents show that a constellation of factors likely contributed to Huang’s death, and Autopilot was just one among them. Determining what role Tesla’s advanced driver assistance feature played is likely to be just one part of what the investigators and the board will discuss. But since this is just the second time the NTSB is wrapping up an investigation into a crash that involves Autopilot, its conclusions could carry weight.

With all that said, here’s what we’re expecting from tomorrow’s meeting. 
Image: NTSB


THE CRASH

One of the first things that will happen after NTSB chairman Robert Sumwalt opens tomorrow’s meeting (and introduces the people who are there) is that the lead investigator will run through an overview of the crash.

Most of the details of the crash are well-established, especially after the preliminary report was released in 2018. But the documents released last week paint a fuller picture.

At 8:53AM PT on March 23rd, 2018, Huang dropped his son off at preschool in Foster City, California, just like he did most days. Huang then drove his 2017 Tesla Model X to US-101 and began the 40-minute drive south to his job at Apple in Mountain View, California.
HUANG WAS VERY INTERESTED IN HOW AUTOPILOT WORKED AND USED IT FREQUENTLY, INVESTIGATORS FOUND

Along the way, he engaged Tesla’s driver assistance system, Autopilot. Huang had used Autopilot a lot since he bought the Model X in late 2017, investigators found. His wife told them that he became “very familiar” with the feature, as investigators put it, and he even watched YouTube videos about it. He talked to co-workers about Autopilot, too, and his supervisor said Huang — who was a software engineer — was fascinated by the software behind Autopilot.

Huang turned Autopilot on four times during that drive to work. The last time he activated it, he left it on for 18 minutes, right up until the crash.

Tesla instructs owners (in its cars’ owners manuals and also via the cars’ infotainment screens) to keep their hands on the wheel whenever they’re using Autopilot. When Autopilot is active, the car also constantly monitors whether a driver is applying torque to the steering wheel in an effort to make sure their hands are on the wheel. (Tesla CEO Elon Musk rejected more complex driver monitoring systems because he said they were “ineffective.”)

If the car doesn’t measure enough torque input on the wheel, it will flash an increasing series of visual, and then audible, warnings at the driver.

During that final 18-minute Autopilot engagement, Huang received a number of those warnings, according to data collected by investigators. Less than two minutes after he activated Autopilot for the final time, the system issued a visual and then an audible warning for him to put his hands on the steering wheel, which he did. One minute later, he got another visual warning. He didn’t receive any more warnings during the final 13 to 14 minutes before the crash. But the data shows the car didn’t measure any steering input for about 34.4 percent of that final 18-minute Autopilot session.
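The escalating warning scheme described above can be sketched roughly as follows. This is a simplified illustration only, not Tesla's actual implementation; the thresholds and timings are invented for the example.

```python
# Simplified sketch of torque-based hands-on-wheel monitoring with
# escalating warnings. All thresholds are invented for illustration;
# Tesla's real timing and logic are not public in this form.

def warning_level(seconds_without_torque: float) -> str:
    """Map time since last measured steering torque to a warning stage."""
    if seconds_without_torque < 10:
        return "none"
    elif seconds_without_torque < 25:
        return "visual"   # flashing on-screen reminder first
    else:
        return "audible"  # escalate to a chime if the visual cue is ignored

# The NTSB figure cited above: no measured steering input for about
# 34.4% of the final 18-minute engagement, i.e. roughly 6.2 minutes.
hands_off_minutes = 0.344 * 18
print(f"{hands_off_minutes:.1f} minutes without measured torque")
```

Note that the real system measures torque on the wheel as a proxy for hands-on driving, which is why a driver resting hands lightly on the wheel can still trigger warnings.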

Autopilot was still engaged when Huang approached a section of US-101 south where a left exit lane allows cars to join State Route 85. As that exit lane moves farther to the left, a “gore area” develops between it and the HOV lane. Eventually, a concrete median rises up to act as a barrier between the two lanes.

Huang was driving in the HOV lane, thanks to the clean air sticker afforded by owning an electric vehicle. Five seconds before the crash, as the exit lane split off to the left, Huang’s Model X “began following the lines” into the gore area between the HOV lane and the exit lane. Investigators found that Autopilot initially lost sight of the HOV lane lines, then quickly picked up the lines of the gore area as if it were its own highway lane.
HUANG’S TESLA STEERED HIM TOWARD THE BARRIER AND SPED UP BEFORE THE CRASH

Huang had set his cruise control to 75 miles per hour, but he was following cars that were traveling closer to 62 miles per hour. As his Model X aimed him toward the barrier, it also no longer registered any cars ahead of it and started speeding back up to 75 miles per hour.

Huang crashed into the barrier a few seconds later. The large metal crash attenuator in front of the barrier, which is supposed to help deflect some of the kinetic energy of a moving car, had been fully crushed 11 days earlier in a different crash. Investigators found that California’s transportation department hadn’t fixed the attenuator, despite an estimated repair time of “15 to 30 minutes” and an average cost of “less than $100.” This meant Huang’s Model X essentially crashed into the concrete median behind the attenuator with most of his car’s kinetic energy intact. 
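To get a feel for how much energy a working attenuator is meant to absorb, here is a rough estimate using the standard kinetic-energy formula. The mass and speed are illustrative assumptions for the sketch, not figures from the NTSB docket.

```python
# Rough kinetic-energy estimate for a crash like the one described above.
# Mass and speed are illustrative assumptions, not NTSB figures.
MPH_TO_MS = 0.44704

mass_kg = 2500             # approximate curb weight of a large SUV
speed_ms = 70 * MPH_TO_MS  # assume roughly 70 mph at impact

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"~{kinetic_energy_j / 1e6:.2f} MJ of kinetic energy")
```

On these assumptions the car carries on the order of 1.2 megajoules into the barrier; with the attenuator already crushed, almost none of that was dissipated before the concrete median.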
Image: NTSB

Despite the violent hit, Huang initially survived the crash. (He was also hit by one car from behind.) A number of cars stopped on the highway. Multiple people called 911. A few drivers and one motorcyclist approached Huang’s car and helped pull him out because they noticed the Model X’s batteries were starting to sizzle and pop. After struggling to free him from his jacket, they were able to pull Huang to relative safety before the Model X’s battery caught fire.

Paramedics performed CPR on Huang and gave him a blood transfusion as they brought him to a nearby hospital. He was treated for cardiac arrest and blunt trauma to the pelvis, but he died a few hours later.

POSSIBLE CONTRIBUTING FACTORS

One of the new details that emerged in the documents released last week is that Huang may have been playing a mobile game called Three Kingdoms while driving to work that day.

Investigators obtained Huang’s cellphone records from AT&T, and because he primarily used a development model iPhone provided by Apple, they were also able to pull diagnostic data from his phone with help from the company. Looking at this data, investigators say they were able to determine that there was a “pattern of active gameplay” every morning around the same time in the week leading up to Huang’s death, though they point out that the data didn’t provide enough information to “ascertain whether [Huang] was holding the phone or how interactive he was with the game at the time of the crash.”

Another possible contributing factor to Huang’s death is the crash attenuator itself and the fact that it had gone 11 days without being repaired. The NTSB even issued an early recommendation to California officials back in September 2019, telling them to move faster when it comes to repairing attenuators.

THE NTSB WILL LIKELY POINT OUT A NUMBER OF CONTRIBUTING FACTORS TO HUANG’S DEATH

The design of the left exit and the gore area in front of the attenuator is also something that the board is likely to find contributed to Huang’s death. In fact, Huang struggled with Autopilot at this same section of the highway a number of times before his death, as new information made public last week shows.

Huang’s family has said since his death that he previously complained about how Autopilot would pull him left at the spot where he eventually crashed. Investigators found two examples of this in the month before his death in the data.

On February 27th, 2018, data shows that Autopilot turned Huang’s wheel 6 degrees to the left, aiming him toward the gore area between the HOV lane and the left exit. Huang’s hands were on the wheel, though, and two seconds later, he turned the wheel and kept himself in the HOV lane. On March 19th, 2018 — the Monday before he died — Autopilot turned Huang’s wheel by 5.1 degrees and steered him toward that same gore area. Huang turned the car back into the HOV lane one second later.

Huang had complained about this problem to one of his friends who was a fellow Tesla owner. The two were commiserating about a new software update from Tesla five days before the crash when Huang told the friend how Autopilot “almost led me to hit the median again this morning.”
AUTOPILOT’S ROLE

Whether Autopilot played a role in Huang’s death (and if so, to what extent) is likely to get a lot of attention during Tuesday’s meeting.

The NTSB has already completed one investigation into a fatal crash that involved the use of Autopilot, and it issued recommendations based on that case. But the circumstances of that crash were very different from Huang’s. In 2016, Joshua Brown was using Autopilot on a divided highway in Florida when a tractor-trailer crossed in front of him. Autopilot was not able to recognize the broad side of the trailer before Brown crashed into it, and Brown did not take evasive action. 
 
Image: NTSB

The design of Autopilot “permitted the car driver’s overreliance on the automation,” the NTSB wrote in its findings in 2017. (Tesla has said that overconfidence in Autopilot is at the root of many crashes that happen while the feature is engaged, though it continues to profess that driving with Autopilot reduces the chance of a crash.) The board wrote that Autopilot “allowed prolonged disengagement from the driving task and enabled the driver to use it in ways inconsistent with manufacturer guidance and warnings.”

In turn, the NTSB recommended that Tesla (and any other automaker working on similar advanced driver assistance systems) should add new safeguards that limit the misuse of features like Autopilot. The NTSB also recommended that companies like Tesla should develop better ways to sense a driver’s level of engagement while using features like Autopilot.

TESLA HAS INCREASED THE FREQUENCY OF WARNINGS SINCE HUANG’S DEATH

Since Huang’s death, Tesla has increased the frequency and reduced the lag time of the warnings to drivers who don’t appear to have their hands on the wheel while Autopilot is active. Whether the company has gone far enough is likely to be discussed on Tuesday.
WHY TESLA WAS KICKED OFF THE PROBE

On March 30th, 2018, one week after Huang died, Tesla announced that Autopilot was engaged during the crash. The company also claimed that Huang did not have his hands on the steering wheel and that he had received multiple warnings in the minutes leading up to the crash.

The NTSB was not happy that Tesla shared this information while the investigation was still ongoing. Sumwalt called Musk on April 6th, 2018, to tell him this was a violation of the agreement Tesla signed in order to be a party to the investigation. Tesla then issued another statement to the press on April 10th, which the NTSB considered to be “incomplete, analytical in nature, and [speculative] as to the cause” of the crash. So Sumwalt called Musk again and told him Tesla was being removed from the investigation.

Tesla claimed it withdrew from the probe because, as The Wall Street Journal put it at the time, the company felt that “restrictions on disclosures could jeopardize public safety.” Whether this comes up in tomorrow’s meeting will be another thing to watch for.
WHAT COMES NEXT?

One thing that won’t be resolved on Tuesday is the lawsuit Huang’s family filed against Tesla in 2019. The family’s lawyer argued last year that Huang died because “Tesla is beta testing its Autopilot software on live drivers.” That case is still ongoing.

The NTSB is likely to make a set of recommendations at the end of tomorrow’s hearings based on the findings of the probe. If it feels the need, it can label a recommendation as “urgent,” too. It’s possible that the board will comment on whether it thinks Tesla has made progress on the recommendations it laid out in the 2017 meeting about Brown’s fatal crash. Those recommendations included:

Crash data should be “captured and available in standard formats on new vehicles equipped with automated vehicle control systems”

Manufacturers should “incorporate system safeguards to limit the use of automated control systems to conditions for which they are designed” (and that there should be a standard method to verify those safeguards)

Automakers should develop ways to “more effectively sense a driver’s level of engagement and alert when engagement is lacking”

Automakers should “report incidents, crashes, and exposure numbers involving vehicles equipped with automated vehicle control systems”

While Tesla helped the NTSB recover and process the data from Huang’s car, the company is still far more protective of its crash data than other manufacturers. In fact, some owners have sued to gain access to that data. And while Tesla increased the frequency of Autopilot alerts after Huang’s crash, it largely hasn’t changed how it monitors drivers who use the feature. (Other companies, like Cadillac, use methods like eye-tracking tech to ensure drivers are paying attention to the road while using driver assistance features.)

The NTSB’s recommendations could hone this original guidance or even go beyond it. While it won’t change the fact that Walter Huang died in 2018, the agency’s actions on Tuesday could help further shape the experience of Autopilot moving forward. The NTSB also recently opened the docket of another investigation into an Autopilot-related death, and Autopilot is starting to face scrutiny from lawmakers. So whatever comes from Tuesday’s meeting, it seems the spotlight on Autopilot is only going to get brighter from here on out.
Snow and Ice Pose a Vexing Obstacle for Self-Driving Cars

Most testing of autonomous vehicles until now has been in sunny, dry climates. That will have to change before the technology will be useful everywhere.

When a Canadian professor used a standard data set to test a self-driving car, it became a calamity on wheels. PHOTOGRAPH: R. TSUBIN/GETTY IMAGES

In late 2018, Krzysztof Czarnecki, a professor at Canada’s University of Waterloo, built a self-driving car and trained it to navigate surrounding neighborhoods with an annotated driving data set from researchers in Germany.

The vehicle worked well enough to begin with, recognizing Canadian cars and pedestrians just as well as German ones. But then Czarnecki took the autonomous car for a spin in heavy Ontarian snow. It quickly became a calamity on wheels, with the safety driver forced to grab the wheel repeatedly to avert disaster.

The incident highlights a gap in the development of self-driving cars: maneuvering in bad weather. To address the problem, Czarnecki and Steven Waslander, a professor at the University of Toronto, compiled a data set of images from snowy and rainy Canadian roads. It includes footage of foggy camera views, blizzard conditions, and cars sliding around, captured over two winters. The individual frames are annotated so that a machine can interpret what the scene conveys. Autonomous driving systems typically use annotated images to inform algorithms that track a car's position and plan its route.
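A single annotated frame in a data set like this is typically structured along the following lines. This is a generic sketch; the field names are invented for illustration and do not match the actual Canadian data set's schema.

```python
# Generic sketch of one annotated frame in a driving data set.
# Field names are invented for illustration; the real Canadian data
# set's schema differs.
frame = {
    "timestamp": "2019-02-11T08:14:03Z",
    "weather": "heavy_snow",
    "camera_visibility": "partially_obscured",
    "objects": [
        {"label": "car", "bbox": [412, 188, 540, 262], "snow_covered": True},
        {"label": "pedestrian", "bbox": [101, 150, 133, 240]},
    ],
}

# A training pipeline can then filter or weight examples by condition,
# e.g. selecting snow-covered objects for targeted training:
snowy = [o for o in frame["objects"] if o.get("snow_covered")]
print(len(snowy))  # 1
```

Annotations like the condition tags are what let researchers measure how much a perception model degrades as visibility worsens, rather than only reporting one aggregate accuracy number.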

The Canadian data should help researchers develop and test algorithms against challenging conditions. But the team also hopes it will prompt carmakers and startups to think more about bad-weather driving. “It’s in the interest of everyone to think about this,” Czarnecki says.


The Canadian driving data set includes lane markings and vehicles that are covered with snow. COURTESY OF JEFF HILNBRAND

A few companies, including Alphabet’s Waymo and Argo, backed by Ford and Volkswagen, are testing self-driving cars in winter conditions. But as a whole, the industry is far more focused on demonstrating and deploying vehicles in fair-weather locations such as California, Arizona, Texas, and Florida.

As even optimists about self-driving cars temper forecasts on their arrival, testing in sunnier climates is seen as a way to move the technology out of first gear. But the warm-weather bias could limit where autonomous vehicles can be deployed, or cause problems if they are rolled out in colder climates too quickly.

“It’s a very noticeable blind spot,” says Alexandr Wang, CEO of Scale AI, which annotated Czarnecki’s data and works with other autonomous-driving companies. “Deploying autonomous vehicles in bad conditions is not really tackled, or really talked about.”


Self-driving cars will need to identify obstacles even when their sensors are impaired by snow or rain. COURTESY OF JEFF HILNBRAND

Inclement conditions are challenging for autonomous vehicles for several reasons. Snow and rain can obscure and confuse sensors, hide markings on the road, and make a car perform differently. Beyond this, bad weather represents a difficult test for artificial intelligence algorithms. Programs trained to pick out cars and pedestrians in bright sunshine will struggle to make sense of vehicles topped with piles of snow and people bundled up under layers of clothing.

“Your AI will be erratic,” Czarnecki says of the typical self-driving car faced with snow. “It’s going to see things that aren’t there and also miss things.”

Matthew Johnson-Roberson, a professor at the University of Michigan who is developing a delivery robot optimized for difficult weather conditions, believes tackling bad weather may offer a way to gain a competitive edge. Troubling conditions are a major source of accidents, he says, so they arguably should be a priority.



A busy intersection from the poor-weather data set. COURTESY OF JEFF HILNBRAND

“The really big players are not focused on this,” says Johnson-Roberson. “There’s still a lot of work to be done on self-driving cars in general, but [driving in bad weather] is going to be a big differentiator, and also important to scaling this.”

Waymo declined to comment, but a spokesperson pointed out that the company’s newest sensors and software are better suited to challenging weather. A spokesperson for Argo said initial deployments of the company’s technology should be able to handle light rain, but “for heavier rains and snow, there still needs to be advancements in both hardware and software.”

When industry players decide to tackle bad weather, they’ll gather lots of training data of their own. But in the meantime, Czarnecki’s data should help the field advance.

“Adverse weather presents tremendous challenges for automated driving technology, and I applaud these researchers for releasing a challenging-weather data set,” says John Leonard, a professor at MIT who is also affiliated with the Toyota Research Institute. “Publicly available data sets can have a huge positive impact on research in the field.”

“The complexity of winter weather is going to take an incredible amount of work for automation technology to tackle,” says Bryan Reimer, a research scientist at MIT specializing in autonomous driving. “Ice-weather conditions are incredibly difficult.”

As for tackling the worst conditions on, say, Interstate 70 through the Rocky Mountains, where 24-hour snow crews are required, Reimer thinks self-driving cars won’t be there for a while. “The only way you’re going to drive that autonomously is to heat the road,” he says.

Chess champion Garry Kasparov, who was replaced by AI, says most US jobs are next

‘Humans still have the monopoly on evil’

OF COURSE WE DO, WE INVENTED IT

By Thomas Ricker@Trixxy Feb 24, 2020
Garry Kasparov struggling with Deep Blue in 1997.
 Image: STAN HONDA/AFP via Getty Images

Garry Kasparov dominated chess until he was beaten by an IBM supercomputer called Deep Blue in 1997. The event made “man loses to computer” headlines the world over. Kasparov recently returned to the ballroom of the New York hotel where he was defeated for a debate with AI experts. Wired’s Will Knight was there for a revealing interview with perhaps the greatest human chess player the world has ever known.

“I was the first knowledge worker whose job was threatened by a machine,” says Kasparov, a fate he foresees coming for us all.

“Every technology destroys jobs before creating jobs. When you look at the statistics, only 4 percent of jobs in the US require human creativity. That means 96 percent of jobs, I call them zombie jobs. They’re dead, they just don’t know it. For several decades we have been training people to act like computers, and now we are complaining that these jobs are in danger. Of course they are.”

Experts say only about 14 percent of US jobs are at risk of replacement by AI and robots. Nevertheless, Kasparov has some advice for us zombies looking to re-skill.

“There are different machines, and it is the role of a human [to] understand exactly what this machine will need to do its best. ... I describe the human role as being shepherds.”

Kasparov, for example, helps Alphabet’s DeepMind division understand potential weaknesses with AlphaZero’s chess play.

The interview also yielded this gem of a quote from Kasparov:

“People say, oh, we need to make ethical AI. What nonsense. Humans still have the monopoly on evil. The problem is not AI. The problem is humans using new technologies to harm other humans.”

It’s a fascinating read and one that should be done in its entirety, if only to find out why Kasparov thinks AI is making chess more interesting, even though humanity doesn’t stand a chance of beating it.

WILL KNIGHT BUSINESS 02.21.2020 Defeated Chess Champ Garry Kasparov Has Made Peace With AI
Twenty-three years after he lost to Deep Blue, Kasparov says people need to work with machines. You have to “nudge the flock of intelligent algorithms.”

Garry Kasparov is perhaps the greatest chess player in history. For almost two decades after becoming world champion in 1985, he dominated the game with a ferocious style of play and an equally ferocious swagger.

Outside the chess world, however, Kasparov is best known for losing to a machine. In 1997, at the height of his powers, Kasparov was crushed and cowed by an IBM supercomputer called Deep Blue. The loss sent shock waves across the world, and seemed to herald a new era of machine mastery over man.

The years since have put things into perspective. Personal computers have grown vastly more powerful, with smartphones now capable of running chess engines as powerful as Deep Blue alongside other apps. More significantly, thanks to recent progress in artificial intelligence, machines are learning and exploring the game for themselves.

Deep Blue followed hand-coded rules for playing chess. By contrast, AlphaZero, a program revealed by the Alphabet subsidiary DeepMind in 2017, taught itself to play the game at a grandmaster level simply by practicing over and over. Most remarkably, AlphaZero uncovered new approaches to the game that dazzled chess experts.

Last week, Kasparov returned to the scene of his famous Deep Blue defeat—the ballroom of a New York hotel—for a debate with AI experts organized by the Association for the Advancement of Artificial Intelligence. He met with WIRED senior writer Will Knight there to discuss chess, AI, and a strategy for staying a step ahead of machines. An edited transcript follows:


WIRED: What was it like to return to the venue where you lost to Deep Blue?

Garry Kasparov: I’ve made my peace with it. At the end of the day, the match was not a curse but a blessing, because I was a part of something very important. Twenty-two years ago, I would have thought differently. But things happen. We all make mistakes. We lose. What’s important is how we deal with our mistakes, with negative experience.

1997 was an unpleasant experience, but it helped me understand the future of human-machine collaboration. We thought we were unbeatable, at chess, Go, shogi. All these games, they have been gradually pushed to the side [by increasingly powerful AI programs]. But it doesn't mean that life is over. We have to find out how we can turn it to our advantage.

I always say I was the first knowledge worker whose job was threatened by a machine. But that helps me to communicate a message back to the public. Because, you know, nobody can suspect me of being pro-computers.

What message do you want to give people about the impact of AI?

I think it's important that people recognize the element of inevitability. When I hear outcry that AI is rushing in and destroying our lives, that it's so fast, I say no, no, it's too slow.

"I always say I was the first knowledge worker whose job was threatened by a machine."

GARRY KASPAROV

Every technology destroys jobs before creating jobs. When you look at the statistics, only 4 percent of jobs in the US require human creativity. That means 96 percent of jobs, I call them zombie jobs. They're dead, they just don’t know it.

For several decades we have been training people to act like computers, and now we are complaining that these jobs are in danger. Of course they are. We have to look for opportunities to create jobs that will emphasize our strengths. Technology is the main reason why so many of us are still alive to complain about technology. It's a coin with two sides. I think it's important that, instead of complaining, we look at how we can move forward faster.


When these jobs start disappearing, we need new industries, we need to build foundations that will help. Maybe it’s universal basic income, but we need to create a financial cushion for those who are left behind. Right now it's a very defensive reaction, whether it comes from the general public or from big CEOs who are looking at AI and saying it can improve the bottom line but it’s a black box. I think we're still struggling to understand how AI will fit in.

A lot of people will have to contend with AI taking over some part of their jobs. What advice do you have for them?

There are different machines, and it is the role of a human [to] understand exactly what this machine will need to do its best. At the end of the day it's about combination. For instance, look at radiology. If you have a powerful AI system, I’d rather have an experienced nurse than a top-notch professor [use it]. A person with decent knowledge will understand that he or she must add only a little bit. But a big star in medicine will like to challenge the machines, and that destroys the communication.

People ask me, “What can you do to assist another chess engine against AlphaZero?” I can look at AlphaZero’s games and understand the potential weaknesses. And I believe it has made some inaccurate evaluations, which is natural. For example, it values bishop over knight. It sees over 60 million games that statistically, you know, the bishop was dominant in many more games. So I think it added too much advantage to bishop in terms of numbers. So what you should do, you should try to get your engine to a position where AlphaZero will make inevitable mistakes [based on this inaccuracy].

"Technology is the main reason why so many of us are still alive to complain about technology."

GARRY KASPAROV

I often use this example. Imagine you have a very powerful gun, a rifle that can shoot a target 1 mile from where you are. Now a 1-millimeter change in the direction could end up with a 10-meter difference a mile away. Because the gun is so powerful, a tiny shift can actually make a big difference. And that's the future of human-machine collaboration.

With AlphaZero and future machines, I describe the human role as being shepherds. You just have to nudge the flock of intelligent algorithms. Just basically push them in one direction or another, and they will do the rest of the job. You put the right machine in the right space to do the right task.

How much progress do you think we’ve made toward human-level AI?

We don't know exactly what intelligence is. Even the best computer experts, the people on the cutting edge of computer science, they still have doubts about exactly what we're doing.


What we understand today is AI is still a tool. We are comfortable with machines making us faster and stronger, but smarter? It’s some sort of human fear. At the same time, what's the difference? We have always invented machines that help us to augment different qualities. And I think AI is just a great tool to achieve something that was impossible 10, 20 years ago.

How it will develop I don't know. But I don't believe in AGI [artificial general intelligence]. I don't believe that machines are capable of transferring knowledge from one open-ended system to another. So machines will be dominant in the closed systems, whether it's games, or any other world designed by humans.

David Silver [the creator of AlphaZero] hasn’t answered my question about whether machines can set up their own goals. He talks about subgoals, but that’s not the same. That’s a certain gap in his definition of intelligence. We set up goals and look for ways to achieve them. A machine can only do the second part.

So far, we see very little evidence that machines can actually operate outside of these terms, which is clearly a sign of human intelligence. Let's say you accumulated knowledge in one game. Can it transfer this knowledge to another game, which might be similar but not the same? Humans can. With computers, in most cases you have to start from scratch.

Let’s talk about the ethics of AI. What do you think of the way the technology is being used for surveillance or weapons?

We know from history that progress cannot be stopped. So we have certain things we cannot prevent. If you [completely] restrict it in Europe, or America, it will just give an advantage to the Chinese. [But] I think we do need to exercise more public control over Facebook, Google, and other companies that generate so much […]

People say, oh, we need to make ethical AI. What nonsense. Humans still have the monopoly on evil. The problem is not AI. The problem is humans using new technologies to harm other humans.

AlphaZero "values bishop over knight. I think it added too much advantage to bishop in terms of numbers."

GARRY KASPAROV

AI is like a mirror, it amplifies both good and bad. We have to actually look and just understand how we can fix it, not say “Oh, we can create AI that will be better than us.” We are somehow stuck between two extremes. It's not a magic wand or Terminator. It's not a harbinger of utopia or dystopia. It's a tool. Yes, it's a unique tool because it can augment our minds, but it's a tool. And unfortunately we have enough political problems, both inside and outside the free world, that could be made much worse by the wrong use of AI.


Returning to chess, what do you make of AlphaZero’s style of play?

I looked at its games, and I wrote about them in an article that mentioned chess as the “drosophila of reasoning.” Every computer player is now too strong for humans. But we actually could learn more about our games. I can see how the millions of games played by AlphaGo during practice can generate certain knowledge that’s useful.

It was a mistake to think that if we develop very powerful chess machines, the game would be dull, that there will be many draws, maneuvers, or a game will be 1,800, 1,900 moves and nobody can break through. AlphaZero is totally the opposite. For me it was complementary, because it played more like Kasparov than Karpov! It found that it could actually sacrifice material for aggressive action. It’s not creative, it just sees the pattern, the odds. But this actually makes chess more aggressive, more attractive.

Magnus Carlsen [the current World Chess Champion] has said that he studied AlphaZero games, and he discovered certain elements of the game, certain connections. He could have thought about a move, but never dared to actually consider it; now we all know it works.

When you lost to Deep Blue, some people thought chess would no longer be interesting. Why do you think people are still interested in Carlsen?

You answered the question. We are still interested in people. Cars move faster than humans, but so what? The element of human competition is still there, because we want to know that our team, our guy, he or she is the best in the world.

The fact is that you have computers that dominate the game. It creates a sense of uneasiness, but on the other hand, it has expanded interest in chess. It’s not like 30 years ago, when Kasparov plays Karpov, and nobody dared criticize us even if we made a blunder. Now you can look at the screen and the machine tells you what's happening. So somehow machines brought many people into the game. They can follow, it's not a language they don't understand. AI is like an interface, an interpreter.
Exclusive Survey Reveals Discrimination Against Visa Workers at Tech’s Biggest Companies

Illustrations: Shira Inbar

OneZero conducted a 10,000-person poll to shed light on the plight of H-1B workers


INTO THE VALLEY
Sarah Emerson Feb 24 ·
This article is part of Into the Valley, a feature series from OneZero about Silicon Valley, the people who live there, and the technology they create.
In 2010, Alex left a “pretty small country in Asia” to study computer science at one of America’s elite universities. Alex, who requested anonymity due to the sensitive nature of this story, eventually caught the eye of Microsoft, which sponsored their H-1B visa. Now legally a guest worker in the United States, Alex was one of thousands of foreign employees in a labor pipeline stretching between Silicon Valley and countries like India, China, and South Korea.

Every year, technology giants compete over H-1B visas, and the opportunity to sponsor foreign workers. For these employees, the visa can represent a path to residency, and can mean employment at some of the world’s most high-profile companies. But foreign laborers who enter the program also report feeling like an underclass, with stressful working conditions and discrimination due to their visa status.

Alex successfully petitioned for a green card, earning their permanent residency in 2019. Yet for thousands of workers on H-1B visas, conditions are challenging, and can feel as if they’re designed to keep them silent.

In late 2019, OneZero commissioned a survey in partnership with Blind, an anonymous social networking app that’s widely used by technology employees, to understand the working conditions that H-1B recipients face. The survey ran for two weeks and drew responses from more than 11,500 workers from some of tech’s most notable companies. (Blind verifies the identities of its users based on their company email addresses.) Employees at Amazon, Microsoft, Apple, Google, Uber, and Facebook accounted for a quarter of all feedback. OneZero also distributed a Google questionnaire, completed by more than 180 H-1B workers, that asked about salary, demographics, and the respondents’ opinions of the current administration’s immigration policies and rhetoric.
Graphics: Matthew Conlen

And OneZero spoke to several technology workers who described their H-1B experiences. All of these workers feared the risks that being identified could pose to their personal lives and asked to remain anonymous.

In early 2020, OneZero ran a second poll in order to gauge the sentiment of non-H-1B workers toward those on H-1B visas, asking questions about President Trump’s immigration policies and the visa’s effects on the technology industry. Many of the respondents worked at companies like Amazon, Google, and Microsoft.

The responses are not a scientific representation of the technology industry as a whole. Rather, they lend a voice to, and shine a light on, a workforce that, according to OneZero data and interviews, can feel muzzled by the conditional nature of their employer-backed visas. They suggest that at some companies, H-1B holders are perceived as outsiders who, while benefiting from the industry’s high salaries and generous perks, are simultaneously hired at lower wages than their U.S. counterparts and are indentured to some of the world’s biggest technology titans.

The anonymous feedback indicates that H-1B workers hold overwhelmingly negative views of Trump’s immigration policies. One-third said they plan to remain in the U.S. after their visa expires. Some also believe they earn less than their peers, though many report having six-figure salaries. Between half and nearly 80% of H-1B workers at Amazon, Uber, Facebook, WeWork, and eBay indicated that they feel pressure to outperform their American colleagues. Meanwhile, tech workers not on H-1B visas express largely positive views about workers on H-1B visas.

These responses paint a complicated picture of a system that’s become central to Silicon Valley’s labor pipeline.
The H-1B visa system, created in 1990, allows U.S. companies to sponsor 85,000 foreign workers to temporarily perform specialty jobs every year — labor that’s considered “highly skilled” or demands technical expertise. While the H-1B program accounts for less than one percent of the U.S. workforce, it has come under intense scrutiny from the Trump administration, whose “Buy American and Hire American” policy, designed by White House adviser Stephen Miller, has targeted immigrants and deliberately slowed the use of H-1B visas.

Opponents of the H-1B program say it robs opportunities from qualified U.S. workers, often citing the outsourcing industry’s preference for — and exploitation of — South Asian H-1B employees. IT firms have gobbled up visa quotas in recent years, hiring thousands of foreign workers at below-market wages to perform services on the cheap for clients such as Apple, Comcast, and other companies that replace permanent (and more expensive) jobs with H-1B contractors. In 2014, 13 of the top 20 companies requesting H-1Bs were outsourcing firms.

The outsourcing industry has been hit with several class action lawsuits alleging anti-white discrimination. Former employees at Cognizant, one of the largest of these companies, claimed in a suit last year that they were pushed out and replaced with “less qualified” H-1B workers from India.

As guest workers, H-1B employees lack many of the safeguards afforded to their colleagues, like the freedom to change jobs without having to leave the country or find another sponsor. Recipients may live in the U.S. for three years with the option of a three-year visa renewal, as long as they remain tethered to their employer. Visas are awarded on a first-come, first-served basis, or through a lottery based on the number of applications submitted to USCIS. Twenty thousand H-1B visas are reserved for people with a master’s degree or higher. Employers may also sponsor H-1B workers for a green card, but for natives of certain countries, wait times can last decades.
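The allocation described above (a visa cap filled by lottery when oversubscribed, with 20,000 visas reserved for advanced-degree holders on top of the 65,000 regular cap) can be sketched in a few lines of Python. This is a simplified illustration under assumed applicant counts and round ordering, not USCIS's actual selection procedure.

```python
import random

# Simplified sketch of the H-1B lottery described above: a 65,000 regular
# cap open to everyone, then a 20,000 "master's cap" drawn from the
# unselected advanced-degree applicants. Numbers of applicants and the
# two-round ordering are illustrative assumptions.
REGULAR_CAP = 65_000
MASTERS_CAP = 20_000

def run_lottery(applicants, seed=0):
    """applicants: list of (applicant_id, has_advanced_degree) pairs."""
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)
    # Regular-cap draw: every applicant is eligible.
    selected = pool[:REGULAR_CAP]
    # Master's-cap draw: only unselected advanced-degree holders remain.
    leftover = [a for a in pool[REGULAR_CAP:] if a[1]]
    selected += leftover[:MASTERS_CAP]
    return selected

# With 200,000 hypothetical applications (about a third from
# advanced-degree holders), at most 85,000 are selected.
applications = [(i, i % 3 == 0) for i in range(200_000)]
print(len(run_lottery(applications)))  # 85000
```

The sketch makes the article's point concrete: with roughly 200,000 applications chasing 85,000 slots, most applicants lose the draw regardless of qualifications.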


“I believe that H-1B employees tend to tolerate more bullshit from managers because they cannot just rage-quit.”


Politically, the H-1B program catches flak from both sides. A frequent target of anti-immigrant sentiment, the program also draws criticism from labor leaders. “The technology world has specifically used H-1B visas as a way to hollow out careers into short-term contingent jobs,” said Michael Wasser, legislative director at the Department for Professional Employees, a coalition of 24 national unions that includes technology workers. “It doesn’t matter who’s doing the work. If someone can pick up where the next person left off, labor costs stay the same but output continues.”

The program does have one group of steadfast supporters: large tech companies. In 2008, Microsoft founder Bill Gates asked the House Committee on Science and Technology to re-raise the H-1B cap from 65,000 to 195,000. Mark Zuckerberg has similarly lobbied for more permissive immigration standards.

“Ever since Bill Gates testified before Congress, technology leaders have been talking about using the H-1B program to increase employment,” said William Kerr, a professor at Harvard Business School and author of The Gift of Global Talent: How Migration Shapes Business, Economy & Society. “But my take is that, in many ways, it is a crude visa.” The lottery system is ineffective, Kerr says, and too often the program is used to recruit lower-skilled workers at a discount.
OneZero’s survey confirms that for some workers on these visas, conditions can be stressful, precarious, and degrading. The acute pressures of working on an H-1B visa were brought front and center at a protest held outside Facebook’s headquarters following the 2019 suicide of Facebook engineer Qin Chen. Though the circumstances around Chen’s death remain unknown, his former coworker, who led the protest, told Motherboard that he was standing up for Chinese Facebook employees who felt silenced by their immigration statuses.


H-1B workers are hyper-aware of their temporary, employer-dependent livelihoods. OneZero asked Blind users on H-1B visas if they “experience additional pressure to perform at work” because of their status. Seventeen companies stood out for having a significant proportion — at least 50% — of employees responding “Very often” or “Often” to this question. They include Amazon, Facebook, WeWork, Uber, PayPal, eBay, Samsung, Cisco, Capital One, SAP, Symantec, Splunk, Goldman Sachs, Visa, VMware, Intuit, and Deloitte.

Conversely, at Airbnb, Nvidia, Adobe, and LinkedIn, at least 50% of people responded “Rarely” and “No” to this question.

A burgeoning labor movement swept through Silicon Valley last year, led by employees, contractors, and gig workers. H-1B recipients were absent from this narrative. When visa access is controlled by an employer, even a single poor review can feel like an existential threat. OneZero was told by several H-1B employees that speaking out in the workplace is intimidating for visa holders, so many don’t.

“I believe that H-1B employees tend to tolerate more bullshit from managers because they cannot move to another company that easily, and they cannot just rage-quit,” said Alex. “This is possibly the key reason why managers like H-1Bs — lower turnover rate and employees who will take more shit.”

These employees “cannot afford to disagree with their managers,” said Chris, an H-1B visa holder and manager at Cognizant. “Most of the time there is a consequence for the worker.”


At Amazon, where hundreds of employees recently protested the company’s inaction on climate change, many H-1B workers reported extreme job pressures that could, in theory, deter them from participating in organizing efforts.

“There is always a feeling of, ‘I cannot speak up because that may lead to me losing my job and my right to live in this country,’” said Sam, who worked on an H-1B visa at several technology companies and is now employed by Slack. “I felt that pressure more at a smaller [startup] company like Bright and Flexport than at Facebook and Slack.” Smaller companies tend to have higher rates of friction because employees are taking on more responsibilities.

Workers on H-1B visas face unique forms of discrimination, though they may not be readily apparent. The industry’s affinity for these visas has undeniably hurt American workers laid off and supplanted by H-1B workers, but it has also pitted foreign and U.S. workforces against one another. Oracle, for example, was found guilty last year of showing preference for H-1B workers over Black and women employees.

The responses of H-1B workers to the OneZero survey and questionnaire suggest the issue of workplace discrimination is especially complex, and isn’t confined to racist behavior. “People in general tend to ignore our recommendations, and will later force us to clean up the mess, as that is really our job,” said Chris. “I have seen instances where mistakes done by full-time employees get ignored as ‘moments of learning,’ while if the same was done by a contractor [on an H-1B visa], it would turn out to be a job termination situation.”


When OneZero asked Blind users if they have “ever perceived discrimination at work” because of their H-1B visa status, most participants said they had not. Of the 35 companies represented by the survey, only one, Capital One, had more than 50% of its respondents reply “Very often” or “Often” to the question. At Apple, Salesforce, Lyft, Airbnb, Samsung, Intuit, Bloomberg, Symantec, and Goldman Sachs, between 20% and 40% of respondents replied “Sometimes.” The poll didn’t specify forms of discrimination, but it can manifest as racism, pay inequality, and other prejudicial treatment.

At Uber, a company found guilty of gender discrimination by a federal probe last year, two-thirds of H-1B employees who responded to the survey said they had experienced some degree of workplace discrimination. That same year, Uber laid off nearly 400 employees as it ramped up H-1B hires.

Others indicated they hadn’t experienced any discrimination tied to their status. “I never felt discriminated against,” said Sam. “Facebook has a policy of giving out large signing bonuses of $100,000 for [H-1B recipients who graduate from American universities].”


Forty-two percent of participants said they believe they earn less than their coworkers because of their visa status.

According to the questionnaire, 13% claimed an annual salary range of $50,000 to $99,999; 68% claimed $100,000 to $249,999; 19% claimed $250,000 to $749,999; and one person reported earning $750,000 or higher.

The Department of Labor states that employers must pay H-1B workers no less than the “prevailing wage” set for their role in their geographic area of employment. But the federal law still technically allows employers to misclassify H-1B recipients as entry-level employees, meaning they can be paid less. In Silicon Valley, entry-level computer programmers earn $52,229 while the average for American programmers is roughly $90,000.

“The most damaging discrimination comes from employers,” says Patricia Campos-Medina, an immigrant rights advocate and co-director of the New York State AFL-CIO/Cornell Union Leadership Institute. “Unless you’re able to disentangle the visa from the employer, you have to accept whatever working conditions they offer. Workers can file a complaint, but are they going to do it and lose their visa? Most of them have to accept lower salaries.”
Recent political shifts in the United States, combined with the current administration’s openly hostile posture toward immigrants, have caused some H-1B applicants to think twice about life in America. Sixty-four percent of respondents to OneZero’s Google questionnaire said they do not support the Trump administration’s policies to aggressively deny H-1B applications. Nineteen percent said they do, and the rest were undecided.

“Many prospective H-1B employees actually cheered that Trump would get rid of consultants’ eligibility to apply for H-1Bs,” said Alex. Still, “extending an H-1B used to almost always be approved, now it’s being treated as a new application. Transferring an H-1B to another company now has rejection risks, or [can result in USCIS sending the applicant] a Request for Evidence, which can take months.”

“Politicians need to show something to their support groups, and companies try to avoid becoming a poster child for taking U.S. jobs,” said Chris.

To get a sense of how technology workers not on H-1B visas regard coworkers who are, OneZero ran a second Blind poll in early 2020. The survey ran for 10 days and collected more than 2,600 responses from non-H-1B employees at 42 companies such as Microsoft, Amazon, Facebook, and Google.


Of these workers, nearly 60% oppose policies that would limit the visa program. All of this provides an alternate perspective to stories of friction between American and H-1B workers.

But companies like eBay stood out for having a large percentage of workers who expressed support for curbing the H-1B program. Forty-eight percent of respondents from eBay said they “Very much” support the Trump administration’s efforts to curb the H-1B program, while only 28% said they do not. In contrast, employees at Facebook, Lyft, and Uber overwhelmingly opposed efforts to curb the H-1B program. At Facebook, a largely liberal workplace, 69% of respondents said they do “not at all” support the current administration’s efforts to rein in H-1B visas, while just 16% said they do.

The survey also polled the sentiment of non-H-1B workers about the “overall impact of the H-1B program on the technology industry.” Half of respondents said that the program has a positive impact on the technology sector. A large proportion of workers at Facebook, Google, Nvidia, and Walmart say the program has been “Very positive.” Forty-three percent of workers at Google, another company known for progressive corporate values, responded this way, while only 13% say they feel the H-1B visa’s effect has been “Very negative.” At Amazon, which boasted one of the largest shares of new H-1B visas in 2017, 34% of respondents felt very positive about the program, while 22% felt its impact has been very negative.


More than anything, OneZero’s findings underscore the complexities of the H-1B program and the mixed sentiment among both workers on H-1B visas and those who work alongside them. Meanwhile, the program continues to come under fire from the Trump administration.

The White House’s “Buy American and Hire American” policy correlates to a record number of visa rejections. The denial rate for first-time H-1B applications jumped from 10% in 2016 to 24% in 2019, according to an investigation by Reveal and Mother Jones. The change has mostly impacted IT outsourcing firms; technology companies like Amazon, Microsoft, and Google received some of the highest numbers of H-1B visa approvals in 2017.


“Politicians need to show something to their support groups, and companies try to avoid becoming a poster child for taking U.S. jobs.”


Labor and immigrant rights advocates are calling for a federal overhaul of the H-1B system to make it more equitable. For example, when the number of H-1B applications exceeds the number of available visas — which has been the case for the last seven years — the process is left to a random lottery. A rule that allocates visas based on higher wage levels could instead incentivize employers to pay their workers fairly. Democratic lawmakers have also introduced new legislation that would eliminate per-country caps on employment-based visas, something that, if passed, could drastically reduce the wait time for permanent residency. Right now, no more than 7% of green cards can be issued to citizens of any one country, meaning Indian H-1B workers, for instance, are disproportionately faced with decades — even centuries — long backlogs.

“What’s needed is to lift standards for all workers,” said Wasser. The narrative around H-1B holders is “often mischaracterized as pitting one worker against another, which is categorically wrong,” he added. “Programs need to be reformed since right now the system only benefits employers, not H-1B or U.S. workers.”


LINKED FROM https://plawiuk.blogspot.com/2020/02/go-read-this-survey-about-tech-workers.html

This tweet reminded me that driverless cars can still blow our minds

‘Renzo, faster! I need to get to the front seat!’


By Andrew J. Hawkins (@andyjayhawk), Feb 24, 2020


GIF: Waymo

I saw a tweet today that jolted me out of my self-imposed cynical journalist mindset as it relates to self-driving cars.

It was a short video of two guys trying to catch up to a fully driverless Waymo vehicle in Chandler, a town outside Phoenix, Arizona. “There’s no one in this car!” one guy yells exuberantly, before urging his companion to drive “faster. I need to get to the front seat.” They accelerate and, lo and behold, there is indeed no one in this car. The clip ends with the two guys screaming in unison.

Waymo has been increasing the number of its fully driverless vehicles in the Phoenix area lately, so it makes sense that we would see a corresponding uptick in reactions on social media. But I have yet to come across anyone with such an enthusiastic reaction as these guys. It was a helpful reminder of how this technology will change a lot of people’s basic assumptions about driving and transportation.


THERE’S NO ONE IN THAT WAYMO. @Waymo @GalanDeNovelas pic.twitter.com/LiKJJs4adY — ThatOneGuy (@MrBandito33) February 18, 2020

I’ve been writing about self-driving cars for over five years now, and in the process, I’ve become somewhat inured to the technology. I’ve ridden in a dozen or so test cars, seen the technology up close, heard the big proclamations, and I think it’s safe to say the thrill is gone. (Poor me, right?)

I’ve also learned to take with a big grain of salt many of the bold predictions about safety and road-readiness from autonomous vehicle developers. Many of their self-imposed deadlines to launch a robot taxi service have come and gone unmet, and the switch from human-driven to autonomous transport appears further out than ever. The industry is said to be in a “trough of disillusionment.”

Try telling that to the guy who shot the Waymo video. His name is Gavin Vandine, and he says that his excitement was genuine, mostly because he has more than a passing familiarity with the complex systems that are needed to make a car drive itself.

“All three of my friends in that car went to school for computer systems engineering and our embedded systems course was basically an autonomous driving class,” he said. “We modded an RC car to act on remote coordinate input and through ultrasonic sensors, LIDAR, and softwalls, it worked its way to a destination. I only say all this because I think that’s what added to the excitement.”

He added, “We learned about this, have spoken about this before, and we happened to be together when we all saw it for the first time. It was a cool moment.”

The excitement from Gavin and his friends was a helpful reminder that the vast majority of people in the US and around the world are still largely in the dark when it comes to autonomous vehicles. The number of self-driving cars on the road today is a fraction of a fraction of the total number of personally owned vehicles. We’re only just now seeing some form of a commercial business emerge.

It makes sense that most people are skeptical, or even fearful, about the idea of self-driving cars. It also makes sense that you’d totally freak out if you saw a ghost car just driving down the street.

It’s early days. It’s good to be skeptical about what these companies are selling. It’s also okay to be excited.