Monday, December 28, 2020

Essential Science: Immuno-antibiotics, the answer to resistance?
BY TIM SANDLE 8 HOURS AGO IN SCIENCE
The search for new antibiotics and other antimicrobials continues to be a pressing need in humanity's battle against bacterial infection. A new class of antibiotics, called immuno-antibiotics, may present the answer.

Scientists from The Wistar Institute have developed a new class of dual-acting immuno-antibiotics. These drugs appear capable of blocking an essential pathway in bacteria while activating the human body's adaptive immune response.

The new research represents a double-pronged strategy. It is about developing new molecules that can eliminate difficult-to-treat bacterial infections while, at the same time, enhancing the natural host immune response to the infection. The rationale is that by tackling bacterial infections from two different sides, it becomes far harder for the organisms of concern to develop resistance.


The issue of resistance is an increasingly pressing one for humanity.

The problem of antimicrobial resistance

For the past 70 years, antimicrobial drugs, such as antibiotics, have been successfully used to treat patients with bacterial and infectious diseases. Over time, however, many infectious organisms have adapted to the drugs designed to kill them, making the products less effective. A growing number of disease-causing organisms (pathogens) are resistant to one or more of the antimicrobial drugs used for treatment.


E. coli magnified 10,000 times using an electron microscope.
Brian0918

Over the past several years, bacterial resistance has increased at an alarming rate. Antimicrobial resistance is a complex and multifaceted problem that is driven by many factors, including:

Bacterial population density in health care facilities, which allows transfer of bacteria within a community and enables resistance to emerge;

Inadequate adherence to proven hospital hygiene measures;

An increasing number of high risk populations, including chemotherapy, dialysis, and transplant patients as well as those in long-term care facilities;

Overuse of antibiotics in agriculture;

Global travel and trade, which can lead to transfer of resistant infections and resistance genes;

Poor sanitation in certain areas, which can contaminate water systems and spread resistant bacteria in sewage;

Inappropriate use of antibiotics in human medicine (e.g., for viral infections);

Overprescribing of broad-spectrum drugs, which can exert selective pressure on commensal bacteria and predispose to secondary infection; and

Lack of rapid diagnostics to help guide appropriate use of antibiotics.

The rise of multi-drug resistance

Of particular concern are multi-drug resistant organisms (MDROs). These are bacteria that have become resistant to multiple antibiotics used to treat them. MDROs can develop when antibiotics are not used appropriately. Factors that can contribute to the development of MDROs include taking antibiotics for the incorrect duration or using antibiotics when they are not needed (such as for viral infections).


Microbes: Staphylococcus is a common bacteria which can cause anything from a simple boil to horrible flesh-eating infections
Vano Shlamov, AFP/File

MDROs can cause a variety of infections in the body, including but not limited to:
Skin,
Lung,
Urinary tract,
Bloodstream,
Wound,
Surgical site.

MDROs can be spread from person to person through direct contact. Sometimes they can also be spread by sharing personal items, such as towels or razors. In the hospital, MDROs can spread through equipment that is contaminated or improperly reused. The mechanism of spread can depend on the type of organism.

The problems around drug development

Even as antimicrobial resistance has accelerated, antibiotic discovery and development efforts have declined, with many major pharmaceutical companies discontinuing their antibiotic development programs over the past decade.

This decline is due to a number of factors, including the low return on investment of antibacterials compared with other therapeutics, difficulty in identifying new compounds via traditional discovery methods, and regulatory requirements necessitating large and complex clinical trials for approval.



A laboratory technician works on coronavirus samples at "Fire Eye" laboratory in Wuhan
STR, AFP


The research that is taking place is largely centered in academia. New technologies have helped with the quest for new agents. For example, opportunities are now available to examine biological systems (i.e., metabolic, immunologic, signaling, and regulatory pathways) beyond their individual components. These holistic approaches offer new research strategies to understand the functional molecular networks generated by the interactions of the host with the pathogen in response to therapeutic treatment.

New research

In approaching the new development, researchers examined a metabolic pathway required by almost all bacteria but absent in humans. This made the pathway - methyl-D-erythritol phosphate (MEP) - an ideal drug target. The pathway plays a role in the biosynthesis of isoprenoids; these are molecules required for bacterial cell survival.


This inoculated MacConkey agar culture plate cultivated colonial growth of Gram-negative, small rod-shaped and facultatively anaerobic Klebsiella pneumoniae bacteria.
CDC

The study targeted the IspH enzyme, which is necessary for isoprenoid biosynthesis. This proved to be the mechanism for blocking the methyl-D-erythritol phosphate pathway and hence killing pathogenic bacteria. What makes the pathway and the IspH enzyme especially interesting is that most bacteria require them in order to survive.

In order to identify the necessary drug active substance, the scientists deployed computer modeling in order to screen several million commercially available compounds. The focus was on finding a compound capable of binding to the enzyme. Those compounds found to be capable of inhibiting IspH function became the launch points for drug discovery.


Later studies showed the selected IspH inhibitors outperformed most established antibiotics in killing bacterial cells. Tests were conducted against several multidrug-resistant bacteria, including those from the genera Acinetobacter, Pseudomonas, Klebsiella, Enterobacter, Vibrio, Shigella, Salmonella, Yersinia, Mycobacterium and Bacillus.


A Bacillus species bacterium growing on a Petri-dish (from Tim Sandle's laboratory)
Tim Sandle

The compounds were also shown to be non-toxic to human cells, thus providing the basis for the development of a safe medication.

Research paper

The new research has been published in the journal Nature. The research paper is titled "IspH inhibitors kill Gram-negative bacteria and mobilize immune clearance."

Essential Science

This article forms part of Digital Journal’s long-running Essential Science series, where new research items relating to wider science stories of interest are presented by Tim Sandle on a weekly basis.


The fate of Boston Dynamics
By Ben Dickson
-December 15, 2020


This article is part of our series that explores the business of artificial intelligence

Last week, Hyundai officially announced the much-anticipated deal to acquire a controlling interest in famous robotics company Boston Dynamics. According to a joint press release by the two companies, Hyundai will buy an 80-percent stake in Boston Dynamics, and SoftBank, the previous owner of the robotics company, will retain 20 percent ownership after the transaction is completed in June 2021. While details have yet to be revealed, the deal puts Boston Dynamics at a $1.1 billion valuation.

The mere fact that Boston Dynamics has managed to survive so long in an industry that has been marked with failures and shuttered companies is commendable. But what will the acquisition mean for the future of the company that made its fame with YouTube videos of robots performing impressive feats? The press release and the history of the company provide some hints. And depending on what you expect from Boston Dynamics, the outcome can be both good and bad.



What is the value of Boston Dynamics?


Boston Dynamics was founded in 1992 as a spinoff from the Massachusetts Institute of Technology, working on robotics projects largely funded by the military. In 2013, as advances and interest in deep learning began to pick up pace, Google’s moonshot subsidiary Google X bought the robotics company for an undisclosed amount.

But Google did not succeed in turning Boston Dynamics—or any of its other robotic ventures—into a profitable business. It eventually shut down Google X Robotics and sold Boston Dynamics to Japanese investment giant SoftBank for a reported $165 million in 2017.

Even though changing hands three times in a decade is not a good look for any company, the $1.1 billion valuation shows that Boston Dynamics’ value has increased, and SoftBank is making a lot of money from the deal.

“Boston Dynamics is at the heart of smart robotics,” Masayoshi Son, the chairman and CEO of SoftBank said, according to Friday’s release. “We are thrilled to partner with Hyundai, one of the world’s leading global mobility companies, to accelerate the company’s path to commercialization. Boston Dynamics has a very bright future and we remain invested in the company’s success.”

This raises the question: If Boston Dynamics has “a very bright future,” then why sell your major stake? This probably means that its “present” is not very bright.


Running a robot company is hard


The past few years have seen the fall of several robotics companies. After running out of money to support its hardware and software business, Anki, a startup that raised $200 million to create cute home robots, shut down in 2019 and later sold its assets to edtech startup Digital Dream Labs. Mayfield Robotics, the maker of the Kuri home robot, also ceased operations in 2018. Rethink Robotics, the manufacturer of the famous Baxter and Sawyer robots, also closed shop in 2018. Its assets were later acquired by German automation firm HAHN Group.

Meanwhile, under the SoftBank ownership, Boston Dynamics tripled its staff, bought new headquarters, and finally started leasing the quadruped robot Spot for an undisclosed price in 2019. Earlier this year, Boston Dynamics started selling the robot at a hefty $74,500.

Friday’s press release states that after the launch of Spot, Boston Dynamics has sold “hundreds of robots in a variety of industries, such as power utilities, construction, manufacturing, oil and gas, and mining.”

According to a Bloomberg report, the real sales figure is close to 400 units, which amounts to about $30 million. But Boston Dynamics’ operations cost SoftBank more than $150 million, which means the company is still far from being profitable.
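That figure is easy to sanity-check with the numbers in the article:

```python
# Sanity check of the reported Spot sales figure (numbers from the article).
units_sold = 400        # approximate units per the Bloomberg report
unit_price = 74_500     # Spot's list price in USD
revenue = units_sold * unit_price
print(f"${revenue:,}")  # → $29,800,000, i.e. about $30 million
```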

Also, so far, Spot’s biggest use case has been navigating and inspecting complex environments. In June, Zack Jackowski, Boston Dynamics’ lead robotics engineer, told The Verge, “We mostly sell the robot to industrial and commercial customers who have a sensor they want to take somewhere they don’t want a person to go. Usually because it’s dangerous or because they need to do it so often that it would drive someone mad. Like carrying a camera around a factory 40 times a day and taking the same pictures each time.”

But inspection is a task that drones can already accomplish with much less difficulty and at lower costs. In fact, several companies already provide drone inspection solutions for the industries mentioned in Boston Dynamics’ press release.

This means that at present, Boston Dynamics faces a tough challenge growing in a market that has already been largely conquered by drones equipped with advanced computer vision technology.

So why would Hyundai pay such a huge sum for a company that will not be profitable for the time being? This brings us back to Son’s remark about the “bright future” of Boston Dynamics.



Boston Dynamics under Hyundai

The true benefit of Spot and the other robots Boston Dynamics is creating is their ability to interact with and manipulate their environment. In fact, one of the advertised features of Spot was its ability to attach and use props such as a mechanical arm that can open doors and pick up objects. But the technology is still in its early stages, and dexterous manipulation of objects remains a hot area of artificial intelligence research.

That is the future of Boston Dynamics. To get there, the company will need more time and money than SoftBank could provide.

This is where Hyundai enters the picture. From the press release: “Hyundai Motor Group will provide Boston Dynamics a strategic partner affording access to Hyundai Motor Group’s in-house manufacturing capability and cost benefits stemming from efficiencies of scale (emphasis mine).”

With the backing of Hyundai, Boston Dynamics will be able to reduce manufacturing costs, sell Spot and its future robots at much more competitive prices, and move more units.

Beyond economies of scale, Boston Dynamics will benefit from becoming a subsidiary of one of the world leaders in robotics. Hyundai is already heavily invested in robotics research and production. It is engaged in several projects that, like Boston Dynamics’ robots, are focused on solving mobility problems. The integration will provide Boston Dynamics with the right tools to speed up its research in a cost-efficient way and develop robots that can do more than just inspect their environment.

While I don’t have enough information to provide an exact estimate, I believe that under Hyundai, Boston Dynamics finally has the potential to develop a profitable business model. In this light, it makes sense for SoftBank to relinquish major ownership, knowing that its 20-percent stake will become much more valuable if Boston Dynamics has access to the right manufacturing infrastructure and facilities.
The long-term impact on Hyundai and Boston Dynamics

“Over time, Hyundai Motor Group plans to expand its presence into the humanoid robot market with the aim of developing humanoid robots for sophisticated services such as caregiving for patients at hospitals,” according to the release.

This is an important statement, I believe, for two reasons. First, it shows that Hyundai shares Boston Dynamics’ vision in biped (and quadruped) robots. And second, Hyundai also acknowledges that this is an area that requires long-term investment.

So, as long as Hyundai doesn’t give up on its dreams of creating human-like robots, Boston Dynamics is in good hands even if it isn’t profitable.

But long-term dreams tend to change. While Hyundai’s vision for caregiving robots is commendable, it is also a very complicated problem, one that cannot be solved with today’s AI technologies and has no clear solution in sight (prominent roboticist Rodney Brooks has a series of posts that discuss this challenge).

Hyundai is a publicly traded company that is expected to turn in profits every year. Should the company see that its robotics efforts will not yield results in a timely fashion, it might have a change of heart. What will happen to Boston Dynamics then?

As we’ve seen with DeepMind and OpenAI, when an AI research lab becomes too enmeshed with commercial entities, it gradually undergoes a transformation, drifting from pushing the limits of science to developing products that turn in short-term return on investment.

Boston Dynamics might claim to be a commercial company. But at heart, it is still an AI and robotics research lab. It has built its fame on its advanced research and a continuous stream of videos showing robots doing things that were previously thought impossible. The reality, however, is that real-world applications seldom use cutting-edge AI and robotics technology. Today’s businesses don’t have much use for dancing and backflipping robots. What they need are stable solutions that can integrate with their current software and hardware ecosystems, boost their operations, and cut costs.

As Boston Dynamics’ vice president of business development Michael Perry told The Verge in June, “[A] lot of the most interesting stuff from a business perspective are things that people would find boring, like enabling the robot to read analogue gauges in an industrial facility. That’s not something that will set the internet on fire, but it’s transformative for a lot of businesses.”

So, the good is that under Hyundai, Boston Dynamics has a better chance to survive. The bad: It might have to shed some of this coolness and become a bit boring. You can’t build a profitable robotics company on viral YouTube videos.


Ben Dickson

Ben is a software engineer and the founder of TechTalks. He writes about technology, business and politics.
Machine learning adversarial attacks are a ticking time bomb
By Ben Dickson
-December 16, 2020



If you’ve been following news about artificial intelligence, you’ve probably heard of or seen modified images of pandas and turtles and stop signs that look ordinary to the human eye but cause AI systems to behave erratically. Known as adversarial examples or adversarial attacks, these images—and their audio and textual counterparts—have become a source of growing interest and concern for the machine learning community.

But despite the growing body of research on adversarial machine learning, the numbers show that there has been little progress in tackling adversarial attacks in real-world applications.

The fast-expanding adoption of machine learning makes it paramount that the tech community traces a roadmap to secure AI systems against adversarial attacks. Otherwise, adversarial machine learning can be a disaster in the making.
AI researchers discovered that by adding small black and white stickers to stop signs, they could make them invisible to computer vision algorithms (Source: arxiv.org)

What makes adversarial attacks different?

Every type of software has its own unique security vulnerabilities, and with new trends in software, new threats emerge. For instance, as web applications with database backends started replacing static websites, SQL injection attacks became prevalent. The widespread adoption of browser-side scripting languages gave rise to cross-site scripting attacks. Buffer overflow attacks overwrite critical variables and execute malicious code on target computers by taking advantage of the way programming languages such as C handle memory allocation. Deserialization attacks exploit flaws in the way programming languages such as Java and Python transfer information between applications and processes. And more recently, we’ve seen a surge in prototype pollution attacks, which use peculiarities in the JavaScript language to cause erratic behavior on NodeJS servers.

In this regard, adversarial attacks are no different than other cyberthreats. As machine learning becomes an important component of many applications, bad actors will look for ways to plant and trigger malicious behavior in AI models.

What makes adversarial attacks different, however, is their nature and the possible countermeasures. For most security vulnerabilities, the boundaries are very clear. Once a bug is found, security analysts can precisely document the conditions under which it occurs and find the part of the source code that is causing it. The response is also straightforward. For instance, SQL injection vulnerabilities are the result of not sanitizing user input. Buffer overflow bugs happen when you copy string arrays without setting limits on the number of bytes copied from the source to the destination.
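The SQL injection case is concrete enough to show in code. A minimal sketch using Python's built-in sqlite3 module (the table and the injection payload are hypothetical, for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: user input is spliced directly into the SQL string,
# so the OR clause matches every row in the table.
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(safe))  # → 1 0
```

The parameterized form passes the input as data rather than executable SQL, which is exactly the "sanitizing" that closes this class of bug.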

In most cases, adversarial attacks exploit peculiarities in the learned parameters of machine learning models. An attacker probes a target model by meticulously making changes to its input until it produces the desired behavior. For instance, by making gradual changes to the pixel values of an image, an attacker can cause the convolutional neural network to change its prediction from, say, “turtle” to “rifle.” The adversarial perturbation is usually a layer of noise that is imperceptible to the human eye.
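This probing process can be sketched with the fast gradient sign method (FGSM), one well-known way of computing such perturbations. The toy linear "model," weights, and pixel values below are invented for illustration; real attacks target deep networks, but the mechanics are the same:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": a logistic classifier over 4 pixel values (made-up weights).
w = np.array([1.5, -2.0, 0.5, 1.0])
b = -0.25
x = np.array([0.2, 0.1, 0.7, 0.4])  # clean input, true label y = 1
y = 1.0

p_clean = sigmoid(w @ x + b)

# FGSM: nudge each pixel by epsilon in the direction that increases the
# loss. For logistic loss, the input gradient is (p - y) * w.
epsilon = 0.3
grad_x = (p_clean - y) * w
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

p_adv = sigmoid(w @ x_adv + b)
print(p_clean > 0.5, p_adv > 0.5)  # → True False: the prediction flips
```

With a small epsilon and millions of pixels, the same trick produces the visually imperceptible noise layers described above.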

(Note: in some cases, such as data poisoning, adversarial attacks are made possible through vulnerabilities in other components of the machine learning pipeline, such as a tampered training data set.) 
A neural network thinks this is a picture of a rifle. The human vision system would never make this mistake (source: LabSix)

The statistical nature of machine learning makes it difficult to find and patch adversarial attacks. An adversarial attack that works under some conditions might fail in others, such as a change of angle or lighting conditions. Also, you can’t point to a line of code that is causing the vulnerability, because it is spread across the thousands or millions of parameters that constitute the model.

Defenses against adversarial attacks are also a bit fuzzy. Just as you can’t pinpoint a location in an AI model that is causing an adversarial vulnerability, you also can’t find a precise patch for the bug. Adversarial defenses usually involve statistical adjustments or general changes to the architecture of the machine learning model.

For instance, one popular method is adversarial training, where researchers probe a model to produce adversarial examples and then retrain the model on those examples and their correct labels. Adversarial training readjusts all the parameters of the model to make it robust against the types of examples it has been trained on. But with enough rigor, an attacker can find other noise patterns to create adversarial examples.
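A minimal sketch of that loop, using a toy logistic model and FGSM-style perturbations (the dataset, step sizes, and all numbers are illustrative, not from any of the papers discussed):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Tiny synthetic dataset: 2 "pixels", label = 1 if their sum exceeds 1.
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w, b = np.zeros(2), 0.0
lr, epsilon = 0.5, 0.1

for _ in range(200):
    # Craft FGSM adversarial examples against the current model.
    p = sigmoid(X @ w + b)
    grad_X = (p - y)[:, None] * w[None, :]
    X_adv = np.clip(X + epsilon * np.sign(grad_X), 0, 1)
    # Retrain on clean AND adversarial batches with the correct labels.
    for Xb in (X, X_adv):
        p = sigmoid(Xb @ w + b)
        w -= lr * ((p - y) @ Xb) / len(y)
        b -= lr * (p - y).mean()

# The hardened model should still classify the clean data well.
acc = ((sigmoid(X @ w + b) > 0.5) == (y > 0.5)).mean()
print(f"clean accuracy: {acc:.2f}")
```

Note that the defense only covers the perturbation style generated in the loop; as the article says, an attacker with enough rigor can look for noise patterns outside that distribution.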

The plain truth is, we are still learning how to cope with adversarial machine learning. Security researchers are used to perusing code for vulnerabilities. Now they must learn to find security holes in machine learning that are composed of millions of numerical parameters.
Growing interest in adversarial machine learning

Recent years have seen a surge in the number of papers on adversarial attacks. To track the trend, I searched the arXiv preprint server for papers that mention “adversarial attacks” or “adversarial examples” in the abstract. In 2014, there were zero papers on adversarial machine learning. In 2020, around 1,100 papers on adversarial examples and attacks were submitted to arXiv.
From 2014 to 2020, arXiv went from zero papers on adversarial machine learning to around 1,100 papers in a single year.
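A count like this can be reproduced against arXiv's public export API. A hedged sketch that only builds the query URL (the `abs:` field prefix restricts matching to abstracts; the actual request is left out to keep the example offline):

```python
from urllib.parse import urlencode

# arXiv's export API takes a search_query with fielded terms;
# abs: restricts matching to the abstract.
base = "http://export.arxiv.org/api/query"
params = {
    "search_query": 'abs:"adversarial attacks" OR abs:"adversarial examples"',
    "start": 0,
    "max_results": 0,  # we only need the feed's total-results count
}
url = f"{base}?{urlencode(params)}"
print(url)
# Fetching this URL (e.g. with urllib.request) returns an Atom feed whose
# <opensearch:totalResults> element holds the matching-paper count;
# per-year figures require date-ranged queries or paging through results.
```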

Adversarial attacks and defense methods have also become a key highlight of prominent AI conferences such as NeurIPS and ICLR. Even cybersecurity conferences such as DEF CON, Black Hat, and Usenix have started featuring workshops and presentations on adversarial attacks.

The research presented at these conferences shows tremendous progress in detecting adversarial vulnerabilities and developing defense methods that can make machine learning models more robust. For instance, researchers have found new ways to protect machine learning models against adversarial attacks using random switching mechanisms and insights from neuroscience.

It is worth noting, however, that AI and security conferences focus on cutting edge research. And there’s a sizeable gap between the work presented at AI conferences and the practical work done at organizations every day.
The lackluster response to adversarial attacks

Alarmingly, despite growing interest in and louder warnings on the threat of adversarial attacks, there’s very little activity around tracking adversarial vulnerabilities in real-world applications.

I referred to several sources that track bugs, vulnerabilities, and bug bounties. For instance, out of more than 145,000 records in the NIST National Vulnerability Database, there are no entries on adversarial attacks or adversarial examples. A search for “machine learning” returns five results. Most of them are cross-site scripting (XSS) and XML external entity (XXE) vulnerabilities in systems that contain machine learning components. One of them regards a vulnerability that allows an attacker to create a copy-cat version of a machine learning model and gain insights, which could be a window to adversarial attacks. But there are no direct reports on adversarial vulnerabilities. A search for “deep learning” shows a single critical flaw filed in November 2017. But again, it’s not an adversarial vulnerability but rather a flaw in another component of a deep learning system.
The National Vulnerability Database contains very little information on adversarial attacks

I also checked GitHub’s Advisory database, which tracks security and bug fixes on projects hosted on GitHub. Searches for “adversarial attacks,” “adversarial examples,” “machine learning,” and “deep learning” yielded no results. A search for “TensorFlow” yields 41 records, but they are mostly bug reports on the TensorFlow codebase. There is nothing about adversarial attacks or hidden vulnerabilities in the parameters of TensorFlow models.

This is noteworthy because GitHub already hosts many deep learning models and pretrained neural networks.
GitHub Advisory contains no records on adversarial attacks.

Finally, I checked HackerOne, the platform many companies use to run bug bounty programs. Here too, none of the reports contained any mention of adversarial attacks.

While this might not be a very precise assessment, the fact that none of these sources have anything on adversarial attacks is very telling.
The growing threat of adversarial attacks
Adversarial vulnerabilities are deeply embedded in the many parameters of machine learning models, which makes it hard to detect them with traditional security tools.

Automated defense is another area that is worth discussing. When it comes to code-based vulnerabilities, developers have a large set of defensive tools at their disposal.

Static analysis tools can help developers find vulnerabilities in their code. Dynamic testing tools examine an application at runtime for vulnerable patterns of behavior. Compilers already use many of these techniques to track and patch vulnerabilities. Today, even your browser is equipped with tools to find and block possibly malicious code in client-side script.

At the same time, organizations have learned to combine these tools with the right policies to enforce secure coding practices. Many companies have adopted procedures and practices to rigorously test applications for known and potential vulnerabilities before making them available to the public. For instance, GitHub, Google, and Apple make use of these and other tools to vet the millions of applications and projects uploaded on their platforms.

But the tools and procedures for defending machine learning systems against adversarial attacks are still in the preliminary stages. This is partly why we’re seeing very few reports and advisories on adversarial attacks.

Meanwhile, another worrying trend is the growing use of deep learning models by developers of all levels. Ten years ago, only people who had a full understanding of machine learning and deep learning algorithms could use them in their applications. You had to know how to set up a neural network, tune the hyperparameters through intuition and experimentation, and you also needed access to the compute resources that could train the model.

But today, integrating a pre-trained neural network into an application is very easy.

For instance, PyTorch, which is one of the leading Python deep learning platforms, has a tool that enables machine learning engineers to publish pretrained neural networks on GitHub and make them accessible to developers. If you want to integrate an image classifier deep learning model into your application, you only need a rudimentary knowledge of deep learning and PyTorch.

Since GitHub has no procedure to detect and block adversarial vulnerabilities, a malicious actor could easily use these kinds of tools to publish deep learning models that have hidden backdoors and exploit them after thousands of developers integrate them in their applications.

How to address the threat of adversarial attacks


Understandably, given the statistical nature of adversarial attacks, it’s difficult to address them with the same methods used against code-based vulnerabilities. But fortunately, there have been some positive developments that can guide future steps.

The Adversarial ML Threat Matrix, published last month by researchers at Microsoft, IBM, Nvidia, MITRE, and other security and AI companies, provides security researchers with a framework to find weak spots and potential adversarial vulnerabilities in software ecosystems that include machine learning components. The Adversarial ML Threat Matrix follows the ATT&CK framework, a known and trusted format among security researchers.

Another useful project is IBM’s Adversarial Robustness Toolbox, an open-source Python library that provides tools to evaluate machine learning models for adversarial vulnerabilities and help developers harden their AI systems.

These and other adversarial defense tools that will be developed in the future need to be backed by the right policies to make sure machine learning models are safe. Software platforms such as GitHub and Google Play must establish procedures and integrate some of these tools into the vetting process of applications that include machine learning models. Bug bounties for adversarial vulnerabilities can also be a good measure to make sure the machine learning systems used by millions of users are robust.

New regulations for the security of machine learning systems might also be necessary. Just as the software that handles sensitive operations and information is expected to conform to a set of standards, machine learning algorithms used in critical applications such as biometric authentication and medical imaging must be audited for robustness against adversarial attacks.

As the adoption of machine learning continues to expand, the threat of adversarial attacks is becoming more imminent. Adversarial vulnerabilities are a ticking timebomb. Only a systematic response can defuse it.


Ben Dickson

Ben is a software engineer and the founder of TechTalks. He writes about technology, business and politics.


DeepMind’s annual report: Why it’s hard to run a commercial AI lab


This article is part of our series that explores the business of artificial intelligence

Last week, on the heels of DeepMind’s breakthrough in using artificial intelligence to predict protein folding came the news that the UK-based AI company is still costing its parent company Alphabet Inc hundreds of millions of dollars in losses each year.

A tech company losing money is nothing new. The tech industry is replete with examples of companies that burned investor money long before becoming profitable. But DeepMind is not a normal company seeking to grab a share of a specific market. It is an AI research lab that has had to repurpose itself into a semi-commercial outfit to ensure its survival.

And while its owner, which is also the parent company of Google, is currently happy with footing the bill for DeepMind’s expensive AI research, it is not guaranteed that it will continue to do so forever.

DeepMind’s profits and losses

DeepMind’s AlphaFold project used artificial intelligence to help advance the complicated challenge of protein folding.

According to its annual report filed with the UK’s Companies House register, DeepMind has more than doubled its revenue, raking in £266 million in 2019, up from £103 million in 2018. But the company’s expenses continue to grow as well, increasing from £568 million in 2018 to £717 million in 2019. The overall losses of the company grew from £470 million in 2018 to £477 million in 2019.

At first glance, this isn’t bad news. Compared to the previous years, DeepMind’s revenue growth is accelerating while its losses are plateauing.
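A quick back-of-the-envelope check of the figures cited above bears this out. The snippet below (a simple sketch, using only the revenue and loss numbers reported in the filing) computes the year-over-year ratios:

```python
# Figures from DeepMind's Companies House filings, in millions of GBP,
# as cited in the paragraph above.
revenue = {2018: 103, 2019: 266}
losses = {2018: 470, 2019: 477}

# Revenue more than doubled year over year...
revenue_growth = revenue[2019] / revenue[2018]

# ...while losses grew only marginally, i.e. plateaued.
loss_growth = losses[2019] / losses[2018]

print(f"Revenue grew {revenue_growth:.2f}x year over year")
print(f"Losses grew {loss_growth:.2f}x year over year")
```

Revenue grew roughly 2.6-fold while losses grew by less than 2 percent, which is the sense in which the trajectory looks encouraging at first glance.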

DeepMind’s revenue and losses from 2016 to 2019

But the report contains a few more significant facts. The document mentions “Turnover research and development remuneration from other group undertakings.” This means DeepMind’s main customer is its owner. Alphabet is paying DeepMind to apply its AI research and talent to Google’s services and infrastructure. In the past, Google has used DeepMind’s services for tasks such as managing the power grid of its data centers and improving the AI of its voice assistant.

What this also means is that there isn't yet a market for DeepMind's AI, and if there is one, it will only be available through Google.

The document also mentions that the growth of costs “mainly relates to a rise in technical infrastructure, staff costs, and other related charges.”

This is an important point. DeepMind's "technical infrastructure" runs mainly on Google's huge cloud services and its custom AI processors, Tensor Processing Units (TPUs). DeepMind's main area of research is deep reinforcement learning, which requires access to very expensive compute resources. Some of the company's projects in 2019 included work on an AI system that played StarCraft 2 and another that played Quake 3, both of which cost millions of dollars in training.

A spokesperson for DeepMind told the media that the costs mentioned in the document also included work on AlphaFold, the company's celebrated protein-folding AI, another very expensive project.

There are no public details on how much Google charges DeepMind for access to its cloud AI services, but it is most likely renting its TPUs at a discount. This means that without the support and backing of Google, the company’s expenses would have been much higher.

Staff costs are another important issue. While participation in machine learning courses has increased in the past few years, scientists who can engage in the kind of cutting-edge AI research DeepMind is involved in are very scarce. And by some accounts, top AI talent commands seven-figure salaries.

The growing interest in deep learning and its applicability to commercial settings has created an arms race between tech companies to acquire top AI talent. Most of the industry's top AI scientists and pioneers work either full- or part-time at large companies such as Google, Facebook, Amazon, and Microsoft. The fierce competition to snatch top AI talent has had two consequences. First, as in every field where supply doesn't meet demand, it has driven a steep rise in the salaries of AI scientists. And second, it has drawn many AI scientists away from academic institutions that can't afford stellar salaries and toward wealthy tech companies that can. Some scientists stay in academia for the sake of continuing scientific research, but they are few and far between.

And without the backing of a large tech company like Google, research labs like DeepMind can’t afford to hire new researchers for their projects.

So, while DeepMind shows signs of slowly turning around its losses, its growth has made it even more dependent on Google’s financial resources and large cloud infrastructure.

Google is still satisfied with DeepMind

DeepMind developed an AI system called AlphaStar that can beat the best players at the real-time strategy game StarCraft 2

According to DeepMind’s annual report, Google Ireland Holdings Unlimited, one of the investment branches of Alphabet, “waived the repayment of intercompany loans and all accrued interest amounting to £1.1 billion.”

DeepMind has also received written assurances from Google that it will “continue to provide adequate financial support” to the AI firm for “a period of at least twelve months.”

For the time being, Google seems to be satisfied with the progress DeepMind has made, which is also reflected in remarks made by Google and Alphabet executives.

In July’s quarterly earnings call with investors and analysts, Alphabet CEO Sundar Pichai said, “I’m very happy with the pace at which our R&D on AI is progressing. And for me, it’s important that we are state-of-the-art as a company, and we are leading. And to me, I’m excited at the pace at which our engineering and R&D teams are working both across Google and DeepMind.”

But the corporate world and scientific research move at different paces.

Scientific research is measured in decades. Much of the AI technology used today in commercial applications has been in the making since the 1970s and 1980s. Likewise, a lot of the cutting-edge research and techniques presented at AI conferences today will probably not find their way into the mass market in the coming years. DeepMind’s ultimate goal, developing artificial general intelligence (AGI), is by the most optimistic estimates at least decades away.

On the other hand, the patience of shareholders and investors is measured in months and years. Companies that can't turn a profit within years, or at least show hopeful signs of growth, fall afoul of investors. DeepMind currently has neither. It doesn't have measurable growth, because its only client is Google itself. And it's not clear when, if ever, some of its technology will be ready for commercialization.

Google CEO Sundar Pichai is satisfied with the pace of AI research and development at DeepMind

And here's where DeepMind's dilemma lies. At heart, it is a research lab that wants to push the limits of science and make sure advances in AI are beneficial to all humans. Its owner's goal, however, is to build products that solve specific problems and turn a profit. The two goals are diametrically opposed, pulling DeepMind in two different directions: maintaining its scientific nature or transforming into a product-making AI company. The company has already had trouble balancing scientific research and product development in the past.

And DeepMind is not alone. OpenAI, DeepMind’s implicit rival, has been facing a similar identity crisis, transforming from an AI research lab to a Microsoft-backed for-profit company that rents its deep learning models.

Therefore, while DeepMind doesn't need to worry about its unprofitable research yet, as it becomes more and more enmeshed in the corporate dynamics of its owner, it should think deeply about its future and the future of scientific AI research.


Philippines troops, ministers get COVID-19 vaccine before approval
FILE PHOTO: Philippines President Rodrigo Duterte reviews military cadets during change of command ceremonies of the Armed Forces of the Philippines (AFP) at Camp Aguinaldo in Quezon City, metro Manila, Philippines October 26, 2017. REUTERS/Dondi Tawatao

28 Dec 2020 

MANILA: Some Philippine soldiers and Cabinet ministers have already received COVID-19 vaccine injections, officials said on Monday (Dec 28), despite an absence of regulatory approval that the country's health ministry said was vital to ensure safety.

Interior minister Eduardo Ano said some Cabinet members have already received COVID-19 vaccines and army chief Lieutenant General Cirilito Sobejana said some troops have been vaccinated but the number was not large. Neither said what brand of vaccine was administered.

The health ministry in a statement said all vaccines must first be evaluated by experts, and "only vaccines which have been approved and found to be safe should be administered".

Food and Drug Administration head Rolando Enrique Domingo said Philippine regulators have yet to approve any COVID-19 vaccine, making any importation, distribution and sale of one illegal.

Domingo warned the public that unapproved vaccines exposed them to "all sorts of dangers" and told CNN Philippines that side effects were possible "especially if you don't know how these things have been handled".

So far only Pfizer has applied for emergency use approval of its COVID-19 vaccine in the Philippines, while Sinovac, Gamaleya, Johnson & Johnson's Janssen and Clover's late-stage trial applications have yet to be approved.

Health Undersecretary Maria Rosario Vergeire said the ministry had no information about the soldiers' vaccination and military spokesman Colonel Edgard Arevalo said there had been no inoculation sanctioned by the armed forces leadership.

The Presidential Security Group (PSG), which is tasked with protecting President Rodrigo Duterte, said some of its personnel have already been inoculated.

"The PSG administered COVID-19 vaccine to its personnel performing close-in security operations to the president," unit chief Brigadier General Jesus Durante said in a statement, without specifying how many got the drug.

Duterte has not been vaccinated, according to his spokesman, Harry Roque, who said he had no problem with soldiers being given the shots and protecting themselves.

During a televised meeting with health officials on Saturday, Duterte said "almost all" soldiers have already been inoculated.

He said "many", without identifying who, in the Philippines had received a COVID-19 vaccine developed by China National Pharmaceutical Group (Sinopharm).

Sinopharm could not be immediately reached for comment.

Asked if the soldiers' vaccination was authorised by the president's office, Sobejana said: "Well of course, our president is our commander-in-chief."

Roque said on Monday the Sinopharm drug was given to the soldiers, confirming Duterte's comments at the weekend that "a select few" had been inoculated with the Chinese vaccine.

He played down concerns about the safety of the Sinopharm drug, saying it was meant to send a message of hope to Filipinos.

"The news is that the vaccine is already here and if we cannot be given Western vaccines, our friend and neighbour China is willing to give us vaccines," Roque said.

"It's not prohibited under the law to get inoculated with an unregistered (vaccine). What is illegal is the distribution and selling."
Thai protest demands help for shrimp sellers after COVID-19 outbreak


Anti-government protesters sell shrimp in front of Government House as people now fear eating shrimp due to the coronavirus disease (COVID-19) outbreak in Bangkok, Thailand, Dec 26, 2020. (Photo: REUTERS/Soe Zeya Tun)
26 Dec 2020 



BANGKOK: Thai protesters demonstrated on Saturday (Dec 26) to demand more action to help seafood sellers hit by a COVID-19 outbreak as the government urged people to eat more shellfish.

Thailand's worst outbreak of the new coronavirus was reported just over a week ago, with more than 1,500 infections now linked to a shrimp market outside Bangkok. Most of those infected have been migrant workers from Myanmar.

Seafood sellers say business has fallen in a country whose economy had already been badly hit by a collapse in tourism.

"We want the government to create confidence in shrimp consumption," said Piyarat Chongthep, among the scores of protesters at Government House, some of whom briefly scuffled with police.





People line up to buy shrimp from anti-government protesters selling shrimp in front of Government House amid the coronavirus disease (COVID-19) outbreak in Bangkok, Thailand, Dec 26, 2020. (Photo: REUTERS/Soe Zeya Tun)



Police stand guard as anti-government protesters try to sell shrimp in front of Government House amid the coronavirus disease (COVID-19) outbreak in Bangkok, Thailand, Dec 26, 2020. (Photo: REUTERS/Soe Zeya Tun)


The issue is the latest seized on by protesters who for months have been demanding the removal of Prime Minister Prayut Chan-ocha, a new constitution and reforms of the monarchy.


At a seafood-eating event in a nearby province, government ministers said they were trying to promote seafood.

"We are building confidence that you can have seafood without getting infected," Anucha Nakasai, minister for the prime minister's office, told reporters.

A major shrimp exporter, Thailand sold 36 billion baht (US$1.2 billion) worth in the first 10 months of 2020, industry association data showed.

"The problem now is there is no market," said one shrimp seller at Government House.

COVID-19 task force spokesman Taweesin Wisanuyothin reported 110 new coronavirus infections, 94 of which were connected to the seafood market.

Thailand has a total of 6,020 confirmed cases and 60 deaths, low rates for a country of 70 million people.