Monday, December 28, 2020

UPDATED
Saudi court hands prison sentence to women's rights activist - local media

By Reuters Staff


FILE PHOTO: Saudi women's rights activist Loujain al-Hathloul is seen in this undated handout picture. Marieke Wijntjes/Handout via REUTERS

DUBAI (Reuters) - A Saudi court on Monday sentenced prominent women’s rights activist Loujain al-Hathloul to five years and eight months in prison, local media reported, in a trial that has drawn international condemnation and as Riyadh faces new U.S. scrutiny.

Hathloul, 31, has been held since 2018 following her arrest along with at least a dozen other women’s rights activists.

The verdict, reported by Sabq and al-Shark al-Awsat newspapers, poses an early challenge to Crown Prince Mohammed bin Salman’s relationship with U.S. President-elect Joe Biden, who has described Riyadh as a “pariah” for its human rights record.

Hathloul was charged with seeking to change the Saudi political system and harming national security, local media said. The court suspended two years and 10 months of her sentence, or time served since Hathloul was arrested on May 15, 2018, the newspapers said.

United Nations human rights experts have called the charges against her spurious, and along with leading rights groups and lawmakers in the United States and Europe have called for her release.

The detentions of women activists occurred shortly before and after the kingdom lifted a ban on women driving, which many of the activists had long championed. The lifting of the ban was part of reforms introduced by Crown Prince Mohammed bin Salman that were also accompanied by a crackdown on dissent and an anti-corruption purge.

Hathloul’s sentencing came nearly three weeks after a Riyadh court jailed U.S.-Saudi physician Walid al-Fitaihi for six years, despite U.S. pressure to release him, in a case rights groups have called politically motivated.

Reporting by Aziz El Yaakoubi and Raya Jalabi; writing by Raya Jalabi; Editing by Gareth Jones and Angus MacSwan




MIDDLE EAST
Saudi women’s rights activist sentenced to nearly 6 years


By Aya Batrawy, The Associated Press
Mon., Dec. 28, 2020

DUBAI, United Arab Emirates - One of Saudi Arabia’s most prominent women’s rights activists was sentenced Monday to nearly six years in prison, according to state-linked media, under a vague and broadly worded counterterrorism law. The ruling nearly brings to a close a case that has drawn international criticism and the ire of U.S. lawmakers.

Loujain al-Hathloul has been held in pre-trial detention since May 2018 and has endured several stretches of solitary confinement. Her continued imprisonment was likely to be a point of contention in relations between the kingdom and the incoming presidency of Joe Biden, whose inauguration takes place in January — around two months before what is now expected to be al-Hathloul’s release date.

Rights group “Prisoners of Conscience,” which focuses on Saudi political detainees, said al-Hathloul could be released in March 2021 based on time served. She has been imprisoned since May 2018, and 34 months of her sentence will be suspended.


Her family said in a statement she will be barred from leaving the kingdom for five years and required to serve three years of probation after her release.

Biden has vowed to review the U.S.-Saudi relationship and take into greater consideration human rights and democratic principles. He has also vowed to reverse President Donald Trump’s policy of giving Saudi Arabia “a blank check to pursue a disastrous set of policies,” including the targeting of female activists.


Al-Hathloul was found guilty and sentenced to five years and eight months by the kingdom’s anti-terrorism court on charges of agitating for change, pursuing a foreign agenda, using the internet to harm public order and co-operating with individuals and entities that have committed crimes under anti-terror laws, according to state-linked Saudi news site Sabq. The charges all come under the country’s broadly worded counterterrorism law.

She has 30 days to appeal the verdict.

“She was charged, tried and convicted using counter-terrorism laws,” her sister, Lina al-Hathloul, said in a statement. “My sister is not a terrorist, she is an activist. To be sentenced for her activism for the very reforms that MBS and the Saudi kingdom so proudly tout is the ultimate hypocrisy,” she said, referring to the Saudi crown prince by his initials.


Sabq, which said its reporter was allowed inside the courtroom, reported that the judge said the defendant had confessed to committing the crimes and that her confessions were made voluntarily and without coercion. The report said the verdict was issued in the presence of the prosecutor, the defendant, a representative from the government’s Human Rights Commission and a handful of select local media representatives.

The 31-year-old Saudi activist has long been defiantly outspoken about human rights in Saudi Arabia, even from behind bars. She launched hunger strikes to protest her imprisonment and joined other female activists in telling Saudi judges that she was tortured and sexually assaulted by masked men during interrogations. The women say they were caned, electrocuted and waterboarded. Some say they were forcibly groped and threatened with rape.

Al-Hathloul rejected an offer to rescind her allegations of torture in exchange for early release, according to her family. A court recently dismissed her allegations, citing a lack of evidence.

Among the other allegations was that one of the masked interrogators was Saud al-Qahtani, at the time a close confidant and advisor to Crown Prince Mohammed bin Salman. Al-Qahtani was later sanctioned by the U.S. for his alleged role in the murder of Saudi writer Jamal Khashoggi in the kingdom’s consulate in Turkey.

While more than a dozen other Saudi women’s rights activists face trial, have spent time in prison or remain jailed, al-Hathloul’s case stood out in part because she was the only female rights activist to be referred to the Specialized Criminal Court, which tries terrorism cases.

In many ways, her case came to symbolize Prince Mohammed’s dual approach of ushering in sweeping social reforms while simultaneously cracking down on the activists who had long pushed for change.

While some activists and their families have been pressured into silence, al-Hathloul’s siblings, who reside in the U.S. and Europe, consistently spoke out against the state prosecutor’s case and launched campaigns calling for her release.

The prosecutor had called for the maximum sentence of 20 years, citing evidence such as al-Hathloul’s tweets in support of lifting a decades-long ban on women driving and speaking out against male guardianship laws that had led to multiple instances of Saudi women fleeing abusive families for refuge abroad. Al-Hathloul’s family said the prosecutor’s evidence also included her contacts with rights group Amnesty International and speaking to European diplomats about human rights in Saudi Arabia.

The longtime activist was first detained in 2014 under the previous monarch, King Abdullah, and held for more than 70 days after she attempted to livestream herself driving from the United Arab Emirates to Saudi Arabia to protest the ban on women driving.

She’s also spoken out against guardianship laws that barred women from travelling abroad without the consent of a male relative, such as a father, husband or brother. The kingdom eased guardianship laws last year, allowing women to apply for a passport and travel freely.

Her activism landed her multiple human rights awards and spreads in magazines like Vanity Fair, including a photo shoot next to Meghan Markle, who would later become the Duchess of Sussex. She was also a Nobel Peace Prize nominee.

Al-Hathloul’s family says that in 2018, shortly after she attended a U.N.-related meeting in Geneva about the situation of women’s rights in Saudi Arabia, she was kidnapped by Emirati security forces in Abu Dhabi, where she had been residing and pursuing a master’s degree. She was then forced onto a plane to Saudi Arabia, where she was barred from travelling and later arrested.

Al-Hathloul was among three female activists targeted that year by state-linked media, which circulated her picture online and dubbed her a traitor.








Saudi activist jail sentence paves way for release within months: family

BY ANUJ CHOPRA (AFP) 

A Saudi court on Monday sentenced prominent activist Loujain al-Hathloul to five years and eight months in prison for terrorism-related crimes, but she is expected to be released within months, her family said.

Hathloul, 31, was arrested in May 2018 with about a dozen other women activists just weeks before the historic lifting of a decades-long ban on female drivers, a reform they had long campaigned for, sparking a torrent of international criticism.

The women's rights activist was convicted of "various activities prohibited by the anti-terrorism law", the pro-government online outlet Sabq and other media allowed to attend her trial cited the court as saying.

The court handed down a prison term of five years and eight months, but suspended two years and 10 months of the sentence "if she does not commit any crime" within the next three years, they added.

"A suspension of 2 years and 10 months in addition to the time already served (since May 2018) would see her (released) in approximately two months," Lina al-Hathloul, the activist's sister, wrote on Twitter.

Another source close to her family and the London-based campaign group ALQST said she would be released by March next year.

The court also banned the activist from leaving the kingdom for five years, her sister said.

This verdict was a "face saving exit strategy" for the Saudi government after coming under severe international pressure for her release, the source told AFP.

A motion to appeal can be filed within 30 days, local media reported.


Hathloul was initially tried in Riyadh's criminal court, but her trial was transferred last month to the Specialised Criminal Court, or the anti-terrorism court, which campaigners say is notorious for issuing long jail terms and is used to silence critical voices under the cover of fighting terrorism.

Earlier this month, Foreign Minister Prince Faisal bin Farhan told AFP that Hathloul was accused of contacting "unfriendly" states and providing classified information, but her family said no evidence to support the allegations had been put forward.

While some detained women activists have been provisionally released, Hathloul and others remain imprisoned on what rights groups describe as opaque charges.

The pro-government Saudi media has branded them as "traitors", and Hathloul's family alleges she experienced sexual harassment and torture in detention. Saudi authorities deny the allegations.

- Spotlight on human rights -


Saudi Arabia, an absolute monarchy, has faced growing international criticism for its human rights record.

But the kingdom appears to be doubling down on its crackdown on dissent, even as US President-elect Joe Biden's incoming administration could intensify scrutiny of its human rights failings.

Aside from a host of international campaigners and celebrities, the United States Senate Committee on Foreign Relations has demanded the "immediate and unconditional release" of Hathloul.

The detention of women activists has cast a spotlight on the human rights record of the kingdom, which has also faced intense criticism over the 2018 murder of journalist Jamal Khashoggi in its Istanbul consulate.

Hathloul began a hunger strike in prison on October 26 to demand regular contact with her family, but felt compelled to end it two weeks later, her siblings said.

"She was being woken up by the guards every two hours, day and night, as a brutal tactic to break her," Amnesty said last month, citing the activist's family.

"Yet, she is far from broken."

The Specialised Criminal Court was established in 2008 to handle terrorism-related cases, but has been widely used to try political dissidents and human rights activists.

In a report earlier this year, Amnesty International said the secretive court was being used to silence critical voices under the cover of fighting terrorism.


Read more: http://www.digitaljournal.com/news/world/saudi-activist-loujain-al-hathloul-jailed-for-5-years-8-months/article/583128



Essential Science: Immuno-antibiotics, the answer to resistance?
BY TIM SANDLE IN SCIENCE
The search for new antibiotics and other antimicrobials continues to be a pressing need in humanity's battle against bacterial infection. A new class of antibiotics, called immuno-antibiotics, may present the answer.

Scientists from The Wistar Institute have developed a new class of dual-acting immuno-antibiotics. These drugs appear capable of blocking an essential pathway in bacteria while activating the human body's adaptive immune response.

The new research represents a double-pronged strategy: developing new molecules that can eliminate difficult-to-treat bacterial infections while, at the same time, enhancing the natural host immune response to the infection. The rationale is that by attacking bacterial infections from two different sides, it becomes far harder for the organisms of concern to develop resistance.


The issue of resistance is an increasingly pressing one for humanity.

The problem of antimicrobial resistance

For the past 70 years, antimicrobial drugs, such as antibiotics, have been successfully used to treat patients with bacterial and infectious diseases. Over time, however, many infectious organisms have adapted to the drugs designed to kill them, making the products less effective. A growing number of disease-causing organisms (pathogens) are resistant to one or more antimicrobial drugs used for treatment.


E. coli magnified 10,000 times using an electron microscope.
Brian0918

Over the past several years, bacterial resistance has increased at an alarming rate. Antimicrobial resistance is a complex and multifaceted problem that is driven by many factors, including:

Bacterial population density in health care facilities, which allows transfer of bacteria within a community and enables resistance to emerge;

Inadequate adherence to proven hospital hygiene measures;

An increasing number of high risk populations, including chemotherapy, dialysis, and transplant patients as well as those in long-term care facilities;

Overuse of antibiotics in agriculture;

Global travel and trade, which can lead to transfer of resistant infections and resistance genes;

Poor sanitation in certain areas, which can contaminate water systems and spread resistant bacteria in sewage;

Inappropriate use of antibiotics in human medicine (e.g., for viral infections);

Overprescribing of broad-spectrum drugs, which can exert selective pressure on commensal bacteria and predispose to secondary infection; and

Lack of rapid diagnostics to help guide appropriate use of antibiotics.

The rise of multi-drug resistance

Of particular concern are multi-drug resistant organisms (MDROs). These are bacteria that are resistant to one or more of the antibiotics used to treat them. MDROs can develop when antibiotics are not used appropriately. Factors that can contribute to the development of MDROs include taking antibiotics for the incorrect duration or using antibiotics when they are not needed (such as for viral infections).


Microbes: Staphylococcus is a common bacteria which can cause anything from a simple boil to horrible flesh-eating infections
Vano Shlamov, AFP/File

MDROs can cause a variety of infections in the body, including but not limited to infections of the skin, lungs, urinary tract, bloodstream, wounds and surgical sites.

MDROs can be spread from person to person through direct contact. Sometimes they can also be spread by sharing personal items, such as towels or razors. In the hospital, MDROs can spread through equipment that is contaminated or improperly reused. The mechanism of spread can depend on the type of organism.

The problems around drug development

Even as antimicrobial resistance has accelerated, antibiotic discovery and development efforts have declined, with many major pharmaceutical companies discontinuing their antibiotic development programs over the past decade.

This decline is due to a number of factors, including the low return on investment of antibacterials compared with other therapeutics, difficulty in identifying new compounds via traditional discovery methods, and regulatory requirements necessitating large and complex clinical trials for approval.



A laboratory technician works on coronavirus samples at "Fire Eye" laboratory in Wuhan
STR, STR, AFP


The research that is taking place is largely centered in academia. New technologies have helped with the quest for new agents. For example, opportunities are now available to examine biological systems (i.e., metabolic, immunologic, signaling, and regulatory pathways) beyond their individual components. These holistic approaches offer new research strategies to understand the functional molecular networks generated by the interactions of the host with the pathogen in response to therapeutic treatment.

New research

In approaching the new development, researchers examined a metabolic pathway required by almost all bacteria but absent in humans. This made the pathway - methyl-D-erythritol phosphate - an ideal drug target. The pathway plays a role in the biosynthesis of isoprenoids; these are molecules required for bacterial cell survival.


This inoculated MacConkey agar culture plate cultivated colonial growth of Gram-negative, small rod-shaped and facultatively anaerobic Klebsiella pneumoniae bacteria.
CDC

The study targeted the IspH enzyme, which is necessary for isoprenoid biosynthesis. Inhibiting IspH blocks the methyl-D-erythritol phosphate pathway and hence kills pathogenic bacteria. What makes the pathway and the IspH enzyme especially attractive as a target is that most bacteria require them in order to survive.

In order to identify the necessary drug active substance, the scientists deployed computer modeling in order to screen several million commercially available compounds. The focus was on finding a compound capable of binding to the enzyme. Those compounds found to be capable of inhibiting IspH function became the launch points for drug discovery.
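The screening step described above can be pictured as a simple rank-and-filter loop. The sketch below is a toy illustration only: the compound names and the scoring function are hypothetical stand-ins for a real docking or machine-learning score, not the Wistar team's actual pipeline.

```python
import random

# Toy virtual-screening sketch: rank a compound library by a (hypothetical)
# predicted binding score against a target enzyme and keep the best hits.
# Real screens use docking software over millions of commercial compounds.
random.seed(42)
library = [f"compound_{i}" for i in range(100_000)]

def predicted_binding_score(compound):
    # Stand-in for a docking/ML score: lower means tighter predicted binding.
    return random.random()

scored = ((predicted_binding_score(c), c) for c in library)
hits = sorted(scored)[:100]   # the best-scoring compounds become launch points
```

The compounds that survive this kind of filter are only candidates; they still have to be confirmed as genuine inhibitors in the laboratory.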


Later studies showed the selected IspH inhibitors outperformed most established antibiotics in killing bacterial cells. Tests were conducted against several multidrug-resistant bacteria, including those from the genera Acinetobacter, Pseudomonas, Klebsiella, Enterobacter, Vibrio, Shigella, Salmonella, Yersinia, Mycobacterium and Bacillus.


A Bacillus species bacterium growing on a Petri-dish (from Tim Sandle's laboratory)
Tim Sandle

The compounds were also shown to be non-toxic to human cells, thus providing the basis for the development of a safe medication.

Research paper

The new research has been published in the journal Nature. The research paper is titled "IspH inhibitors kill Gram-negative bacteria and mobilize immune clearance."

Essential Science

This article forms part of Digital Journal’s long-running Essential Science series, where new research items relating to wider science stories of interest are presented by Tim Sandle on a weekly basis.


Read more: http://www.digitaljournal.com/tech-and-science/science/essential-science-immuno-antibiotics-the-answer-to-resistance/article/583114

The fate of Boston Dynamics
By Ben Dickson
-December 15, 2020


This article is part of our series that explores the business of artificial intelligence

Last week, Hyundai officially announced the much-anticipated deal to acquire a controlling interest in famous robotics company Boston Dynamics. According to a joint press release by the two companies, Hyundai will buy an 80-percent stake in Boston Dynamics, and SoftBank, the previous owner of the robotics company, will retain 20 percent ownership after the transaction is completed in June 2021. While details have yet to be revealed, the deal puts Boston Dynamics at a $1.1 billion valuation.

The mere fact that Boston Dynamics has managed to survive so long in an industry that has been marked by failures and shuttered companies is commendable. But what will the acquisition mean for the future of the company that made its fame with YouTube videos of robots performing impressive feats? The press release and the history of the company provide some hints. And depending on what you expect from Boston Dynamics, the outcome can be both good and bad.



What is the value of Boston Dynamics?


Boston Dynamics was founded in 1992 as a spinoff from the Massachusetts Institute of Technology, working on robotics projects largely funded by the military. In 2013, as advances and interest in deep learning began to pick up pace, Google’s moonshot subsidiary Google X bought the robotics company for an undisclosed amount.

But Google did not succeed in turning Boston Dynamics—or any of its other robotic ventures—into a profitable business. It eventually shut down Google X Robotics and sold Boston Dynamics to Japanese investment giant SoftBank for a reported $165 million in 2017.

Even though changing hands three times in a decade is not a good sign for any company, the $1.1 billion valuation shows that Boston Dynamics’ value has increased, and SoftBank is making a lot of money out of the deal.

“Boston Dynamics is at the heart of smart robotics,” Masayoshi Son, the chairman and CEO of SoftBank said, according to Friday’s release. “We are thrilled to partner with Hyundai, one of the world’s leading global mobility companies, to accelerate the company’s path to commercialization. Boston Dynamics has a very bright future and we remain invested in the company’s success.”

This raises the question: If Boston Dynamics has “a very bright future,” then why sell your majority stake? The likely answer is that its “present” is not very bright.


Running a robot company is hard


The past few years have seen the fall of several robotics companies. After running out of money to support its hardware and software business, Anki, a startup that raised $200 million to create cute home robots, shut down in 2019 and later sold its assets to edtech startup Digital Dream Labs. Mayfield Robotics, the maker of the Kuri home robot, also ceased operations in 2018. Rethink Robotics, the manufacturer of the famous Baxter and Sawyer robots, also closed shop in 2018. Its assets were later acquired by German automation firm HAHN Group.

Meanwhile, under the SoftBank ownership, Boston Dynamics tripled its staff, bought new headquarters, and finally started leasing the quadruped robot Spot for an undisclosed price in 2019. Earlier this year, Boston Dynamics started selling the robot at a hefty $74,500.

Friday’s press release states that after the launch of Spot, Boston Dynamics has sold “hundreds of robots in a variety of industries, such as power utilities, construction, manufacturing, oil and gas, and mining.”

According to a Bloomberg report, the real sales figure is close to 400 units, which amounts to about $30 million. But Boston Dynamics’ operations cost SoftBank more than $150 million, which means the company is still far from being profitable.

Also, so far, Spot’s biggest use case has been navigating and inspecting complex environments. In June, Zack Jackowski, Boston Dynamics’ lead robotics engineer, told The Verge, “We mostly sell the robot to industrial and commercial customers who have a sensor they want to take somewhere they don’t want a person to go. Usually because it’s dangerous or because they need to do it so often that it would drive someone mad. Like carrying a camera around a factory 40 times a day and taking the same pictures each time.”

But inspection is a task that drones can already accomplish with much less difficulty and at lower costs. In fact, several companies already provide drone inspection solutions for the industries mentioned in Boston Dynamics’ press release.

This means that at present, Boston Dynamics faces a tough challenge growing in a market that has already been largely conquered by drones equipped with advanced computer vision technology.

So why would Hyundai pay such a huge sum for a company that will not be profitable for the time being? This brings us back to Son’s remark about the “bright future” of Boston Dynamics.



Boston Dynamics under Hyundai

The true benefit of Spot and the other robots Boston Dynamics is creating is their ability to interact with and manipulate their environment. In fact, one of the advertised features of Spot was its ability to attach and use add-ons such as a mechanical arm that can open doors and pick up objects. But the technology is still in its early stages, and dexterous manipulation of objects is a hot area of artificial intelligence research.

That is the future of Boston Dynamics. To get there, the company will need more time and money than SoftBank could provide.

This is where Hyundai enters the picture. From the press release: “Hyundai Motor Group will provide Boston Dynamics a strategic partner affording access to Hyundai Motor Group’s in-house manufacturing capability and cost benefits stemming from efficiencies of scale (emphasis mine).”

With the backing of Hyundai, Boston Dynamics will be able to reduce manufacturing costs, sell Spot and its future robots at much more competitive prices, and move more units.

Beyond economies of scale, Boston Dynamics will benefit from becoming a subsidiary of one of the world’s leaders in robotics. Hyundai is already heavily invested in robotics research and production. It is engaged in several projects that, like Boston Dynamics’ robots, are focused on solving mobility problems. The integration will provide Boston Dynamics with the right tools to speed up its research in a cost-efficient way and develop robots that can do more than just inspect their environment.

While I don’t have enough information to provide an exact estimate, I believe that under Hyundai, Boston Dynamics finally has the potential to develop a profitable business model. In this light, it makes sense for SoftBank to relinquish majority ownership, knowing that its 20-percent stake will become much more valuable if Boston Dynamics has access to the right manufacturing infrastructure and facilities.

The long-term impact on Hyundai and Boston Dynamics

“Over time, Hyundai Motor Group plans to expand its presence into the humanoid robot market with the aim of developing humanoid robots for sophisticated services such as caregiving for patients at hospitals,” according to the release.

This is an important statement, I believe, for two reasons. First, it shows that Hyundai shares Boston Dynamics’ vision in biped (and quadruped) robots. And second, Hyundai also acknowledges that this is an area that requires long-term investment.

So, as long as Hyundai doesn’t give up on its dreams of creating human-like robots, Boston Dynamics is in good hands even if it isn’t profitable.

But long-term dreams tend to change. While Hyundai’s vision for caregiving robots is commendable, it is also a very complicated problem, one that cannot be solved with today’s AI technologies and has no clear solution in sight (prominent roboticist Rodney Brooks has a series of posts that discuss this challenge).

Hyundai is a publicly traded company that is expected to turn in profits every year. Should the company see that its robotics efforts will not yield results in a timely fashion, it might have a change of heart. What will happen to Boston Dynamics then?

As we’ve seen with DeepMind and OpenAI, when an AI research lab becomes too enmeshed with commercial entities, it gradually undergoes a transformation, drifting from pushing the limits of science to developing products that turn in short-term return on investment.

Boston Dynamics might claim to be a commercial company. But at heart, it is still an AI and robotics research lab. It has built its fame on its advanced research and a continuous stream of videos showing robots doing things that were previously thought impossible. The reality, however, is that real-world applications seldom use cutting-edge AI and robotics technology. Today’s businesses don’t have much use for dancing and backflipping robots. What they need are stable solutions that can integrate with their current software and hardware ecosystems, boost their operations, and cut costs.

As Boston Dynamics’ vice president of business development Michael Perry told The Verge in June, “[A] lot of the most interesting stuff from a business perspective are things that people would find boring, like enabling the robot to read analogue gauges in an industrial facility. That’s not something that will set the internet on fire, but it’s transformative for a lot of businesses.”

So, the good is that under Hyundai, Boston Dynamics has a better chance to survive. The bad: It might have to shed some of this coolness and become a bit boring. You can’t build a profitable robotics company on viral YouTube videos.


Ben Dickson

Ben is a software engineer and the founder of TechTalks. He writes about technology, business and politics.

Machine learning adversarial attacks are a ticking time bomb
By Ben Dickson
-December 16, 2020



If you’ve been following news about artificial intelligence, you’ve probably heard of or seen modified images of pandas and turtles and stop signs that look ordinary to the human eye but cause AI systems to behave erratically. Known as adversarial examples or adversarial attacks, these images—and their audio and textual counterparts—have become a source of growing interest and concern for the machine learning community.

But despite the growing body of research on adversarial machine learning, the numbers show that there has been little progress in tackling adversarial attacks in real-world applications.

The fast-expanding adoption of machine learning makes it paramount that the tech community trace a roadmap for securing AI systems against adversarial attacks. Otherwise, adversarial machine learning could be a disaster in the making.
AI researchers discovered that by adding small black and white stickers to stop signs, they could make them invisible to computer vision algorithms (Source: arxiv.org)

What makes adversarial attacks different?

Every type of software has its own unique security vulnerabilities, and with new trends in software, new threats emerge. For instance, as web applications with database backends started replacing static websites, SQL injection attacks became prevalent. The widespread adoption of browser-side scripting languages gave rise to cross-site scripting attacks. Buffer overflow attacks overwrite critical variables and execute malicious code on target computers by taking advantage of the way programming languages such as C handle memory allocation. Deserialization attacks exploit flaws in the way programming languages such as Java and Python transfer information between applications and processes. And more recently, we’ve seen a surge in prototype pollution attacks, which use peculiarities in the JavaScript language to cause erratic behavior on NodeJS servers.

In this regard, adversarial attacks are no different than other cyberthreats. As machine learning becomes an important component of many applications, bad actors will look for ways to plant and trigger malicious behavior in AI models.

What makes adversarial attacks different, however, is their nature and the possible countermeasures. For most security vulnerabilities, the boundaries are very clear. Once a bug is found, security analysts can precisely document the conditions under which it occurs and find the part of the source code that is causing it. The response is also straightforward. For instance, SQL injection vulnerabilities are the result of not sanitizing user input. Buffer overflow bugs happen when you copy string arrays without setting limits on the number of bytes copied from the source to the destination.
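To make the contrast with conventional vulnerabilities concrete, here is a minimal Python sketch of the SQL injection flaw mentioned above and its standard fix, a parameterized query. It uses the standard-library sqlite3 module and a hypothetical users table invented for the example:

```python
import sqlite3

# A throwaway in-memory database with one (hypothetical) table and row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "x' OR '1'='1"   # classic injection payload

# Vulnerable: string interpolation lets the input rewrite the query itself,
# turning the WHERE clause into a condition that is always true.
rows_bad = conn.execute(
    f"SELECT * FROM users WHERE name = '{malicious}'"
).fetchall()

# Safe: a parameterized query treats the input purely as data.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()

print(len(rows_bad), len(rows_good))   # the injected query matches every row
```

This is exactly the clear-cut situation the paragraph describes: the bug has a precise cause (unsanitized input reaching the query string) and a precise, well-known patch.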

In most cases, adversarial attacks exploit peculiarities in the learned parameters of machine learning models. An attacker probes a target model by meticulously making changes to its input until it produces the desired behavior. For instance, by making gradual changes to the pixel values of an image, an attacker can cause a convolutional neural network to change its prediction from, say, “turtle” to “rifle.” The adversarial perturbation is usually a layer of noise that is imperceptible to the human eye.
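The mechanics can be sketched with a toy model. The snippet below is an illustration under strong assumptions, not an attack on a real CNN: it uses a hypothetical linear classifier in NumPy, where the gradient of the score with respect to the input is simply the weight vector, to show how a small signed perturbation flips a prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)            # the toy model's learned weights
x = rng.normal(size=100)            # a clean input

def predict(v):
    return 1 if w @ v > 0 else 0    # the model's hard decision

score = w @ x
# For a linear score w.x, the input gradient is just w, so a gradient-sign
# (FGSM-style) step perturbs each feature by eps in the direction sign(w).
# Choose eps barely large enough to push the score across the boundary:
eps = abs(score) / np.abs(w).sum() * 1.01
x_adv = x - eps * np.sign(w) * np.sign(score)

assert predict(x) != predict(x_adv)  # tiny per-feature change, new label
```

Against a deep network the gradient is obtained by backpropagation rather than read off the weights, but the attacker's procedure of nudging the input along it is the same.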

(Note: in some cases, such as data poisoning, adversarial attacks are made possible through vulnerabilities in other components of the machine learning pipeline, such as a tampered training data set.) 
A neural network thinks this is a picture of a rifle. The human vision system would never make this mistake (source: LabSix)

The statistical nature of machine learning makes it difficult to find and patch adversarial vulnerabilities. An adversarial attack that works under some conditions might fail in others, such as a change of angle or lighting conditions. Also, you can’t point to a line of code that is causing the vulnerability, because it is spread across the thousands or millions of parameters that constitute the model.

Defenses against adversarial attacks are also a bit fuzzy. Just as you can’t pinpoint a location in an AI model that is causing an adversarial vulnerability, you also can’t find a precise patch for the bug. Adversarial defenses usually involve statistical adjustments or general changes to the architecture of the machine learning model.

For instance, one popular method is adversarial training, where researchers probe a model to produce adversarial examples and then retrain the model on those examples and their correct labels. Adversarial training readjusts all the parameters of the model to make it robust against the types of examples it has been trained on. But with enough rigor, an attacker can find other noise patterns to create adversarial examples.
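A rough sketch of that loop, with a toy NumPy logistic-regression model standing in for a deep network (all names, sizes, and the eps value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = (X[:, 0] > 0).astype(float)         # toy labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w, lr=0.1, steps=200):
    # Plain gradient descent on the logistic loss.
    for _ in range(steps):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

w = train(X, y, np.zeros(20))                 # 1. train on clean data
grad_x = (sigmoid(X @ w) - y)[:, None] * w    # 2. input gradient per sample
X_adv = X + 0.5 * np.sign(grad_x)             # 3. craft adversarial examples
w = train(np.vstack([X, X_adv]),              # 4. retrain on clean + adversarial
          np.concatenate([y, y]), w)
```

Retraining readjusts every parameter at once, which is why the defense is statistical: it hardens the model against this one noise pattern while leaving others open.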

The plain truth is, we are still learning how to cope with adversarial machine learning. Security researchers are used to perusing code for vulnerabilities. Now they must learn to find security holes in machine learning models composed of millions of numerical parameters.
Growing interest in adversarial machine learning

Recent years have seen a surge in the number of papers on adversarial attacks. To track the trend, I searched the arXiv preprint server for papers that mention “adversarial attacks” or “adversarial examples” in their abstracts. In 2014, there were zero papers on adversarial machine learning. In 2020, around 1,100 papers on adversarial examples and attacks were submitted to arXiv.
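For anyone who wants to reproduce such a count, the sketch below builds a query against the public arXiv API; the exact search_query syntax, including the submittedDate filter, is my assumption and may need adjusting against the API docs:

```python
from urllib.parse import urlencode

def arxiv_count_url(phrase, year):
    # Ask for zero results; the Atom response's <opensearch:totalResults>
    # element carries the total number of matching papers.
    params = {
        "search_query": (
            f'abs:"{phrase}" AND '
            f"submittedDate:[{year}01010000 TO {year}12312359]"
        ),
        "max_results": 0,
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)

url = arxiv_count_url("adversarial examples", 2020)
# urllib.request.urlopen(url) would fetch the feed for parsing.
```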
Papers on adversarial machine learning submitted to arXiv.org grew from zero in 2014 to around 1,100 in 2020.

Adversarial attacks and defense methods have also become a key highlight of prominent AI conferences such as NeurIPS and ICLR. Even cybersecurity conferences such as DEF CON, Black Hat, and Usenix have started featuring workshops and presentations on adversarial attacks.

The research presented at these conferences shows tremendous progress in detecting adversarial vulnerabilities and developing defense methods that can make machine learning models more robust. For instance, researchers have found new ways to protect machine learning models against adversarial attacks using random switching mechanisms and insights from neuroscience.

It is worth noting, however, that AI and security conferences focus on cutting-edge research. And there’s a sizeable gap between the work presented at AI conferences and the practical work done at organizations every day.
The lackluster response to adversarial attacks

Alarmingly, despite growing interest in and louder warnings on the threat of adversarial attacks, there’s very little activity around tracking adversarial vulnerabilities in real-world applications.

I referred to several sources that track bugs, vulnerabilities, and bug bounties. For instance, out of more than 145,000 records in the NIST National Vulnerability Database, there are no entries on adversarial attacks or adversarial examples. A search for “machine learning” returns five results. Most of them are cross-site scripting (XSS) and XML external entity (XXE) vulnerabilities in systems that contain machine learning components. One of them concerns a vulnerability that allows an attacker to create a copycat version of a machine learning model and gain insights, which could be a window to adversarial attacks. But there are no direct reports on adversarial vulnerabilities. A search for “deep learning” shows a single critical flaw filed in November 2017. But again, it’s not an adversarial vulnerability, rather a flaw in another component of a deep learning system.
The National Vulnerability Database contains very little information on adversarial attacks

I also checked GitHub’s Advisory database, which tracks security and bug fixes for projects hosted on GitHub. Searches for “adversarial attacks,” “adversarial examples,” “machine learning,” and “deep learning” yielded no results. A search for “TensorFlow” yielded 41 records, but they were mostly bug reports on the TensorFlow codebase. There was nothing about adversarial attacks or hidden vulnerabilities in the parameters of TensorFlow models.

This is noteworthy because GitHub already hosts many deep learning models and pretrained neural networks.
GitHub Advisory contains no records on adversarial attacks.

Finally, I checked HackerOne, the platform many companies use to run bug bounty programs. Here too, none of the reports contained any mention of adversarial attacks.

While this might not be a very precise assessment, the fact that none of these sources have anything on adversarial attacks is very telling.
The growing threat of adversarial attacks
Adversarial vulnerabilities are deeply embedded in the many parameters of machine learning models, which makes it hard to detect them with traditional security tools.

Automated defense is another area worth discussing. When it comes to code-based vulnerabilities, developers have a large set of defensive tools at their disposal.

Static analysis tools can help developers find vulnerabilities in their code. Dynamic testing tools examine an application at runtime for vulnerable patterns of behavior. Compilers already use many of these techniques to track and patch vulnerabilities. Today, even your browser is equipped with tools to find and block possibly malicious code in client-side scripts.
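As a toy illustration of what a static analyzer does, the sketch below walks a Python program's syntax tree with the standard ast module and flags calls to eval(), a classic code-injection sink; real tools track vastly more patterns:

```python
import ast

def find_eval_calls(source):
    """Return the line numbers of direct eval() calls in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

snippet = "x = input()\nresult = eval(x)\n"
print(find_eval_calls(snippet))  # [2]
```

This kind of rule works because the vulnerable construct is visible in the code itself, which is precisely what a model's learned parameters do not offer.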

At the same time, organizations have learned to combine these tools with the right policies to enforce secure coding practices. Many companies have adopted procedures and practices to rigorously test applications for known and potential vulnerabilities before making them available to the public. For instance, GitHub, Google, and Apple make use of these and other tools to vet the millions of applications and projects uploaded on their platforms.

But the tools and procedures for defending machine learning systems against adversarial attacks are still in the preliminary stages. This is partly why we’re seeing very few reports and advisories on adversarial attacks.

Meanwhile, another worrying trend is the growing use of deep learning models by developers of all levels. Ten years ago, only people with a solid understanding of machine learning and deep learning algorithms could use them in their applications. You had to know how to set up a neural network, tune its hyperparameters through intuition and experimentation, and you needed access to compute resources powerful enough to train the model.

But today, integrating a pre-trained neural network into an application is very easy.

For instance, PyTorch, which is one of the leading Python deep learning platforms, has a tool that enables machine learning engineers to publish pretrained neural networks on GitHub and make them accessible to developers. If you want to integrate an image classifier deep learning model into your application, you only need a rudimentary knowledge of deep learning and PyTorch.
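The workflow looks roughly like this; it is guarded here because it needs the torch package and a network connection, and the repo tag shown is one of torchvision's published Hub tags:

```python
# Load a pretrained image classifier from PyTorch Hub.
repo, model_name = "pytorch/vision:v0.10.0", "resnet18"
try:
    import torch
    model = torch.hub.load(repo, model_name, pretrained=True)
    model.eval()   # ready for inference in a handful of lines
except Exception:  # torch missing or no network in this environment
    model = None
```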

Since GitHub has no procedure to detect and block adversarial vulnerabilities, a malicious actor could easily use these kinds of tools to publish deep learning models that have hidden backdoors and exploit them after thousands of developers integrate those models into their applications.

How to address the threat of adversarial attacks


Understandably, given the statistical nature of adversarial attacks, it’s difficult to address them with the same methods used against code-based vulnerabilities. But fortunately, there have been some positive developments that can guide future steps.

The Adversarial ML Threat Matrix, published last month by researchers at Microsoft, IBM, Nvidia, MITRE, and other security and AI companies, provides security researchers with a framework to find weak spots and potential adversarial vulnerabilities in software ecosystems that include machine learning components. The Adversarial ML Threat Matrix follows the ATT&CK framework, a known and trusted format among security researchers.

Another useful project is IBM’s Adversarial Robustness Toolbox, an open-source Python library that provides tools to evaluate machine learning models for adversarial vulnerabilities and help developers harden their AI systems.

These and other adversarial defense tools that will be developed in the future need to be backed by the right policies to make sure machine learning models are safe. Software platforms such as GitHub and Google Play must establish procedures and integrate some of these tools into the vetting process of applications that include machine learning models. Bug bounties for adversarial vulnerabilities can also be a good measure to make sure the machine learning systems used by millions of users are robust.

New regulations for the security of machine learning systems might also be necessary. Just as the software that handles sensitive operations and information is expected to conform to a set of standards, machine learning algorithms used in critical applications such as biometric authentication and medical imaging must be audited for robustness against adversarial attacks.

As the adoption of machine learning continues to expand, the threat of adversarial attacks is becoming more imminent. Adversarial vulnerabilities are a ticking timebomb. Only a systematic response can defuse it.


Ben Dickson

Ben is a software engineer and the founder of TechTalks. He writes about technology, business and politics.


DeepMind’s annual report: Why it’s hard to run a commercial AI lab

deepmind google logos

This article is part of our series that explores the business of artificial intelligence.

Last week, on the heels of DeepMind’s breakthrough in using artificial intelligence to predict protein folding, came the news that the UK-based AI company is still costing its parent company Alphabet Inc. hundreds of millions of dollars in losses each year.

A tech company losing money is nothing new. The tech industry is replete with examples of companies that burned investor money long before becoming profitable. But DeepMind is not a normal company seeking to grab a share of a specific market. It is an AI research lab that has had to repurpose itself into a semi-commercial outfit to ensure its survival.

And while its owner, which is also the parent company of Google, is currently happy to foot the bill for DeepMind’s expensive AI research, there is no guarantee that it will continue to do so forever.

DeepMind’s profits and losses

DeepMind’s AlphaFold project used artificial intelligence to help advance the complicated challenge of protein folding.

According to its annual report filed with the UK’s Companies House register, DeepMind has more than doubled its revenue, raking in £266 million in 2019, up from £103 million in 2018. But the company’s expenses continue to grow as well, increasing from £568 million in 2018 to £717 million in 2019. The overall losses of the company grew from £470 million in 2018 to £477 million in 2019.

At first glance, this isn’t bad news. Compared to the previous years, DeepMind’s revenue growth is accelerating while its losses are plateauing.
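That reading can be checked directly from the report's figures:

```python
# Year-over-year changes computed from the annual-report figures above
# (all amounts in millions of pounds).
revenue = {2018: 103, 2019: 266}
losses = {2018: 470, 2019: 477}

revenue_growth = (revenue[2019] - revenue[2018]) / revenue[2018]
loss_growth = (losses[2019] - losses[2018]) / losses[2018]

print(f"revenue {revenue_growth:+.0%}, losses {loss_growth:+.1%}")
# revenue +158%, losses +1.5%
```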

DeepMind’s revenue and losses from 2016 to 2019

But the report contains a few more significant facts. The document mentions “Turnover research and development remuneration from other group undertakings.” This means DeepMind’s main customer is its owner. Alphabet is paying DeepMind to apply its AI research and talent to Google’s services and infrastructure. In the past, Google has used DeepMind’s services for tasks such as managing the power grid of its data centers and improving the AI of its voice assistant.

What this also means is that there isn’t yet a market for DeepMind’s AI, and if there is, it is only accessible through Google.

The document also mentions that the growth of costs “mainly relates to a rise in technical infrastructure, staff costs, and other related charges.”

This is an important point. DeepMind’s “technical infrastructure” runs mainly on Google’s huge cloud services and its special AI processors, the Tensor Processing Unit (TPU). DeepMind’s main area of research is deep reinforcement learning, which requires access to very expensive compute resources. Some of the company’s projects in 2019 included work on an AI system that played StarCraft 2 and another that played Quake 3, both of which cost millions of dollars in training.

A spokesperson for DeepMind told the media that the costs mentioned in the document also included work on AlphaFold, the company’s celebrated protein-folding AI, another very expensive project.

There are no public details on how much Google charges DeepMind for access to its cloud AI services, but it is most likely renting its TPUs at a discount. This means that without the support and backing of Google, the company’s expenses would have been much higher.

Staff costs are another important issue. While participation in machine learning courses has increased in the past few years, scientists who can engage in the kind of cutting-edge AI research DeepMind is involved in are very scarce. And by some accounts, top AI talent commands seven-figure salaries.

The growing interest in deep learning and its applicability to commercial settings has created an arms race between tech companies to acquire top AI talent. Most of the industry’s top AI scientists and pioneers are working either full- or half-time at large companies such as Google, Facebook, Amazon, and Microsoft. The fierce competition for snatching top AI talent has had two consequences. First, like every other field where supply doesn’t meet demand, it has resulted in a steep rise in the salaries of AI scientists. And second, it has driven many AI scientists from academic institutions that can’t afford stellar salaries to wealthy tech companies that can. Some scientists continue to stay in academia for the sake of continuing scientific research, but they are too few and far between.

And without the backing of a large tech company like Google, research labs like DeepMind can’t afford to hire new researchers for their projects.

So, while DeepMind shows signs of slowly turning around its losses, its growth has made it even more dependent on Google’s financial resources and large cloud infrastructure.

Google is still satisfied with DeepMind

DeepMind developed an AI system called AlphaStar that can beat the best players at the real-time strategy game StarCraft 2.

According to DeepMind’s annual report, Google Ireland Holdings Unlimited, one of the investment branches of Alphabet, “waived the repayment of intercompany loans and all accrued interest amounting to £1.1 billion.”

DeepMind has also received written assurances from Google that it will “continue to provide adequate financial support” to the AI firm for “a period of at least twelve months.”

For the time being, Google seems to be satisfied with the progress DeepMind has made, which is also reflected in remarks made by Google and Alphabet executives.

In July’s quarterly earnings call with investors and analysts, Alphabet CEO Sundar Pichai said, “I’m very happy with the pace at which our R&D on AI is progressing. And for me, it’s important that we are state-of-the-art as a company, and we are leading. And to me, I’m excited at the pace at which our engineering and R&D teams are working both across Google and DeepMind.”

But the corporate world and scientific research move at different paces.

Scientific research is measured in decades. Much of the AI technology used today in commercial applications has been in the making since the 1970s and 1980s. Likewise, a lot of the cutting-edge research and techniques presented at AI conferences today will probably not find their way into the mass market in the coming years. DeepMind’s ultimate goal, developing artificial general intelligence (AGI), is by the most optimistic estimates at least decades away.

On the other hand, the patience of shareholders and investors is measured in months and years. Companies that can’t turn a profit within a few years, or at least show hopeful signs of growth, fall afoul of investors. DeepMind currently has neither: it doesn’t have measurable growth, because its only client is Google itself, and it’s not clear when, if ever, some of its technology will be ready for commercialization.

Google CEO Sundar Pichai is satisfied with the pace of AI research and development at DeepMind

And here’s where DeepMind’s dilemma lies. At heart, it is a research lab that wants to push the limits of science and make sure advances in AI benefit all humans. Its owner’s goal, however, is to build products that solve specific problems and turn a profit. The two goals are diametrically opposed, pulling DeepMind in two different directions: maintaining its scientific nature or transforming into a product-making AI company. The company has already had trouble balancing scientific research and product development in the past.

And DeepMind is not alone. OpenAI, DeepMind’s implicit rival, has been facing a similar identity crisis, transforming from an AI research lab to a Microsoft-backed for-profit company that rents its deep learning models.

So while DeepMind doesn’t need to worry about its unprofitable research yet, as it becomes more and more enmeshed in the corporate dynamics of its owner, it should think deeply about its future and the future of scientific AI research.

