Monday, December 28, 2020

Machine learning adversarial attacks are a ticking time bomb
By Ben Dickson
December 16, 2020



If you’ve been following news about artificial intelligence, you’ve probably heard of or seen modified images of pandas and turtles and stop signs that look ordinary to the human eye but cause AI systems to behave erratically. Known as adversarial examples or adversarial attacks, these images—and their audio and textual counterparts—have become a source of growing interest and concern for the machine learning community.

But despite the growing body of research on adversarial machine learning, the numbers show that there has been little progress in tackling adversarial attacks in real-world applications.

The fast-expanding adoption of machine learning makes it paramount that the tech community trace a roadmap for securing AI systems against adversarial attacks. Otherwise, adversarial machine learning could become a disaster in the making.
AI researchers discovered that by adding small black and white stickers to stop signs, they could make them invisible to computer vision algorithms (Source: arxiv.org)

What makes adversarial attacks different?

Every type of software has its own unique security vulnerabilities, and with new trends in software, new threats emerge. For instance, as web applications with database backends started replacing static websites, SQL injection attacks became prevalent. The widespread adoption of browser-side scripting languages gave rise to cross-site scripting attacks. Buffer overflow attacks overwrite critical variables and execute malicious code on target computers by taking advantage of the way programming languages such as C handle memory allocation. Deserialization attacks exploit flaws in the way programming languages such as Java and Python transfer information between applications and processes. And more recently, we’ve seen a surge in prototype pollution attacks, which use peculiarities in the JavaScript language to cause erratic behavior on NodeJS servers.

In this regard, adversarial attacks are no different than other cyberthreats. As machine learning becomes an important component of many applications, bad actors will look for ways to plant and trigger malicious behavior in AI models.

What makes adversarial attacks different, however, is their nature and the possible countermeasures. For most security vulnerabilities, the boundaries are very clear. Once a bug is found, security analysts can precisely document the conditions under which it occurs and find the part of the source code that is causing it. The response is also straightforward. For instance, SQL injection vulnerabilities are the result of not sanitizing user input. Buffer overflow bugs happen when you copy string arrays without setting limits on the number of bytes copied from the source to the destination.
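
To make the contrast concrete, here is a minimal Python sketch of the SQL injection case (an illustration, not taken from any specific incident). The vulnerability is a single unsanitized string; the fix is a single parameterized query.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: user input is spliced directly into the SQL string,
# so the payload rewrites the query logic and matches every row.
query = "SELECT * FROM users WHERE name = '%s'" % user_input
print(conn.execute(query).fetchall())  # returns all rows

# Safe: a parameterized query treats the input as data, not SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns []

Once such a bug is found, the patch really is this mechanical: the triggering conditions are reproducible and the offending line of code is identifiable.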

In most cases, adversarial attacks exploit peculiarities in the learned parameters of machine learning models. An attacker probes a target model by meticulously making changes to its input until it produces the desired behavior. For instance, by making gradual changes to the pixel values of an image, an attacker can cause a convolutional neural network to change its prediction from, say, “turtle” to “rifle.” The adversarial perturbation is usually a layer of noise that is imperceptible to the human eye.
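
One widely cited technique of this kind is the fast gradient sign method (FGSM), which computes the gradient of the model’s loss with respect to the input and nudges every pixel slightly in the direction that increases the loss. Here is a minimal PyTorch sketch; the model, the dummy tensors, and the epsilon value are all illustrative.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm_attack(image, label, epsilon=0.03):
    # Compute the loss gradient with respect to the input pixels.
    image = image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(image), label).backward()
    # Step each pixel by epsilon in the direction that raises the loss.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixel values in a valid range

# Hypothetical usage: x is an image tensor scaled to [0, 1], y its label.
x, y = torch.rand(1, 3, 224, 224), torch.tensor([207])
x_adv = fgsm_attack(x, y)
print(model(x_adv).argmax(dim=1))  # on a real image, often no longer y

At small epsilon values the perturbation is invisible to a human, yet on real images it is often enough to flip the model’s prediction.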

(Note: in some cases, such as data poisoning, adversarial attacks are made possible through vulnerabilities in other components of the machine learning pipeline, such as a tampered training data set.) 
A neural network thinks this is a picture of a rifle. The human vision system would never make this mistake (source: LabSix)

The statistical nature of machine learning makes it difficult to find and patch adversarial vulnerabilities. An adversarial attack that works under some conditions might fail in others, such as a change of angle or lighting. Also, you can’t point to a line of code that is causing the vulnerability, because it is spread across the thousands or millions of parameters that constitute the model.

Defenses against adversarial attacks are also a bit fuzzy. Just as you can’t pinpoint a location in an AI model that is causing an adversarial vulnerability, you also can’t find a precise patch for the bug. Adversarial defenses usually involve statistical adjustments or general changes to the architecture of the machine learning model.

For instance, one popular method is adversarial training, where researchers probe a model to produce adversarial examples and then retrain the model on those examples and their correct labels. Adversarial training readjusts all the parameters of the model to make it robust against the types of examples it has been trained on. But with enough rigor, an attacker can find other noise patterns to create adversarial examples.
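
Below is a minimal sketch of one adversarial training step in PyTorch, reusing an FGSM-style perturbation to generate the examples. It is simplified: in practice, training usually mixes clean and adversarial batches and uses stronger, iterative attacks.

import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # 1. Probe the current model for adversarial examples (FGSM-style).
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + epsilon * x_pert.grad.sign()).clamp(0, 1).detach()

    # 2. Retrain on the adversarial examples with their correct labels,
    #    readjusting all of the model's parameters at once.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Note that the “patch” touches every parameter of the model rather than a single line of code, which is exactly why its coverage is so hard to reason about.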

The plain truth is, we are still learning how to cope with adversarial machine learning. Security researchers are used to perusing code for vulnerabilities. Now they must learn to find security holes in machine learning models composed of millions of numerical parameters.
Growing interest in adversarial machine learning

Recent years have seen a surge in the number of papers on adversarial attacks. To track the trend, I searched the arXiv preprint server for papers that mention “adversarial attacks” or “adversarial examples” in the abstract. In 2014, there were zero papers on adversarial machine learning. In 2020, around 1,100 papers on adversarial examples and attacks were submitted to arXiv.
From 2014 to 2020, arXiv.org has gone from zero papers on adversarial machine learning to 1,100 papers in one year.
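
The count can be reproduced, approximately, against arXiv’s public Atom API; the exact number shifts with the query and date filters used. A sketch:

import urllib.request
import xml.etree.ElementTree as ET

# Ask arXiv's export API how many papers match the phrase; with
# max_results=0 the feed carries only the total count. The exact
# query string (including any date filter) is illustrative.
url = ('http://export.arxiv.org/api/query?'
       'search_query=abs:%22adversarial+examples%22&max_results=0')
feed = urllib.request.urlopen(url).read()
ns = {'opensearch': 'http://a9.com/-/spec/opensearch/1.1/'}
print(ET.fromstring(feed).find('opensearch:totalResults', ns).text)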

Adversarial attacks and defense methods have also become a key highlight of prominent AI conferences such as NeurIPS and ICLR. Even cybersecurity conferences such as DEF CON, Black Hat, and Usenix have started featuring workshops and presentations on adversarial attacks.

The research presented at these conferences shows tremendous progress in detecting adversarial vulnerabilities and developing defense methods that can make machine learning models more robust. For instance, researchers have found new ways to protect machine learning models against adversarial attacks using random switching mechanisms and insights from neuroscience.

It is worth noting, however, that AI and security conferences focus on cutting-edge research, and there’s a sizeable gap between the work presented at these conferences and the practical work done at organizations every day.
The lackluster response to adversarial attacks

Alarmingly, despite growing interest in and louder warnings on the threat of adversarial attacks, there’s very little activity around tracking adversarial vulnerabilities in real-world applications.

I referred to several sources that track bugs, vulnerabilities, and bug bounties. For instance, out of more than 145,000 records in the NIST National Vulnerability Database, there are no entries on adversarial attacks or adversarial examples. A search for “machine learning” returns five results, most of them cross-site scripting (XSS) and XML external entity (XXE) vulnerabilities in systems that contain machine learning components. One of them concerns a vulnerability that allows an attacker to create a copycat version of a machine learning model and gain insights from it, which could be a window to adversarial attacks. But there are no direct reports of adversarial vulnerabilities. A search for “deep learning” shows a single critical flaw filed in November 2017, but again, it’s not an adversarial vulnerability; it’s a flaw in another component of a deep learning system.
The National Vulnerability Database contains very little information on adversarial attacks
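
For reference, the same keyword search can be scripted against the NVD’s REST API. This is a sketch; the endpoint and parameter names follow the current public version of the API, and the web interface supports the same search.

import json
import urllib.request

# Keyword search against the NVD's JSON API; 'totalResults' is the
# number of matching CVE records.
url = ('https://services.nvd.nist.gov/rest/json/cves/2.0'
       '?keywordSearch=adversarial%20example')
with urllib.request.urlopen(url) as resp:
    print(json.load(resp)['totalResults'])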

I also checked GitHub’s Advisory Database, which tracks security and bug fixes in projects hosted on GitHub. Searches for “adversarial attacks,” “adversarial examples,” “machine learning,” and “deep learning” yielded no results. A search for “TensorFlow” yielded 41 records, but they were mostly bug reports on the TensorFlow codebase. There was nothing about adversarial attacks or hidden vulnerabilities in the parameters of TensorFlow models.

This is noteworthy because GitHub already hosts many deep learning models and pretrained neural networks.
GitHub Advisory contains no records on adversarial attacks.

Finally, I checked HackerOne, the platform many companies use to run bug bounty programs. Here too, none of the reports contained any mention of adversarial attacks.

While this might not be a very precise assessment, the fact that none of these sources have anything on adversarial attacks is very telling.
The growing threat of adversarial attacks
Adversarial vulnerabilities are deeply embedded in the many parameters of machine learning models, which makes it hard to detect them with traditional security tools.

Automated defense is another area worth discussing. When it comes to code-based vulnerabilities, developers have a large set of defensive tools at their disposal.

Static analysis tools can help developers find vulnerabilities in their code. Dynamic testing tools examine an application at runtime for vulnerable patterns of behavior. Compilers already use many of these techniques to track and patch vulnerabilities. Today, even your browser is equipped with tools to find and block possibly malicious code in client-side script.

At the same time, organizations have learned to combine these tools with the right policies to enforce secure coding practices. Many companies have adopted procedures and practices to rigorously test applications for known and potential vulnerabilities before making them available to the public. For instance, GitHub, Google, and Apple make use of these and other tools to vet the millions of applications and projects uploaded on their platforms.

But the tools and procedures for defending machine learning systems against adversarial attacks are still in the preliminary stages. This is partly why we’re seeing very few reports and advisories on adversarial attacks.

Meanwhile, another worrying trend is the growing use of deep learning models by developers of all levels. Ten years ago, only people who had a full understanding of machine learning and deep learning algorithms could use them in their applications. You had to know how to set up a neural network, tune its hyperparameters through intuition and experimentation, and have access to the compute resources needed to train the model.

But today, integrating a pre-trained neural network into an application is very easy.

For instance, PyTorch, which is one of the leading Python deep learning platforms, has a tool that enables machine learning engineers to publish pretrained neural networks on GitHub and make them accessible to developers. If you want to integrate an image classifier deep learning model into your application, you only need a rudimentary knowledge of deep learning and PyTorch.
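
That tool is PyTorch Hub. Pulling a published classifier down from GitHub takes a few lines; the repository tag and model name below are the standard torchvision example.

import torch

# Download a pretrained ResNet-18 published on GitHub via PyTorch Hub.
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True)
model.eval()

# Classify an input tensor scaled to [0, 1] (a dummy image here).
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    print(model(x).argmax(dim=1))

Nothing in this workflow inspects the downloaded weights; the developer trusts whatever parameters the publisher uploaded.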

Since GitHub has no procedure to detect and block adversarial vulnerabilities, a malicious actor could easily use these kinds of tools to publish deep learning models that have hidden backdoors and exploit them after thousands of developers integrate them into their applications.

How to address the threat of adversarial attacks


Understandably, given the statistical nature of adversarial attacks, it’s difficult to address them with the same methods used against code-based vulnerabilities. But fortunately, there have been some positive developments that can guide future steps.

The Adversarial ML Threat Matrix, published last month by researchers at Microsoft, IBM, Nvidia, MITRE, and other security and AI companies, provides security researchers with a framework to find weak spots and potential adversarial vulnerabilities in software ecosystems that include machine learning components. The Adversarial ML Threat Matrix follows the ATT&CK framework, a known and trusted format among security researchers.

Another useful project is IBM’s Adversarial Robustness Toolbox, an open-source Python library that provides tools to evaluate machine learning models for adversarial vulnerabilities and help developers harden their AI systems.
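
A rough sketch of how such a toolbox is typically used to probe a PyTorch model for evasion attacks (API details simplified from the library’s documentation; the input data is a stand-in):

import numpy as np
import torch
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier
from torchvision import models

# Wrap an ordinary PyTorch model in an ART classifier.
classifier = PyTorchClassifier(
    model=models.resnet18(pretrained=True),
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(3, 224, 224),
    nb_classes=1000,
    clip_values=(0.0, 1.0),
)

# Generate adversarial examples and measure how many predictions flip.
x = np.random.rand(8, 3, 224, 224).astype(np.float32)  # stand-in images
attack = FastGradientMethod(estimator=classifier, eps=0.03)
x_adv = attack.generate(x=x)
flipped = classifier.predict(x).argmax(1) != classifier.predict(x_adv).argmax(1)
print(f'{flipped.mean():.0%} of predictions flipped')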

These and other adversarial defense tools that will be developed in the future need to be backed by the right policies to make sure machine learning models are safe. Software platforms such as GitHub and Google Play must establish procedures and integrate some of these tools into the vetting process of applications that include machine learning models. Bug bounties for adversarial vulnerabilities can also be a good measure to make sure the machine learning systems used by millions of users are robust.

New regulations for the security of machine learning systems might also be necessary. Just as the software that handles sensitive operations and information is expected to conform to a set of standards, machine learning algorithms used in critical applications such as biometric authentication and medical imaging must be audited for robustness against adversarial attacks.

As the adoption of machine learning continues to expand, the threat of adversarial attacks is becoming more imminent. Adversarial vulnerabilities are a ticking time bomb. Only a systematic response can defuse it.


Ben Dickson

Ben is a software engineer and the founder of TechTalks. He writes about technology, business and politics.


DeepMind’s annual report: Why it’s hard to run a commercial AI lab


This article is part of our series that explores the business of artificial intelligence.

Last week, on the heels of DeepMind’s breakthrough in using artificial intelligence to predict protein folding came the news that the UK-based AI company is still costing its parent company Alphabet Inc hundreds of millions of dollars in losses each year.

A tech company losing money is nothing new. The tech industry is replete with examples of companies that burned investor money long before becoming profitable. But DeepMind is not a normal company seeking to grab a share of a specific market. It is an AI research lab that has had to repurpose itself into a semi-commercial outfit to ensure its survival.

And while its owner, which is also the parent company of Google, is currently happy with footing the bill for DeepMind’s expensive AI research, it is not guaranteed that it will continue to do so forever.

DeepMind’s profits and losses

DeepMind’s AlphaFold project used artificial intelligence to help advance the complicated challenge of protein folding.

According to its annual report filed with the UK’s Companies House register, DeepMind has more than doubled its revenue, raking in £266 million in 2019, up from £103 million in 2018. But the company’s expenses continue to grow as well, increasing from £568 million in 2018 to £717 million in 2019. The overall losses of the company grew from £470 million in 2018 to £477 million in 2019.

At first glance, this isn’t bad news. Compared to the previous years, DeepMind’s revenue growth is accelerating while its losses are plateauing.

DeepMind’s revenue and losses from 2016 to 2019

But the report contains a few more significant facts. The document mentions “Turnover research and development remuneration from other group undertakings.” This means DeepMind’s main customer is its owner. Alphabet is paying DeepMind to apply its AI research and talent to Google’s services and infrastructure. In the past, Google has used DeepMind’s services for tasks such as managing the power grid of its data centers and improving the AI of its voice assistant.

What this also means is that there isn’t yet a market for DeepMind’s AI, and if one emerges, it will only be available through Google.

The document also mentions that the growth of costs “mainly relates to a rise in technical infrastructure, staff costs, and other related charges.”

This is an important point. DeepMind’s “technical infrastructure” runs mainly on Google’s huge cloud services and its specialized AI processors, the Tensor Processing Units (TPUs). DeepMind’s main area of research is deep reinforcement learning, which requires access to very expensive compute resources. Some of the company’s projects in 2019 included work on an AI system that played StarCraft 2 and another that played Quake 3, both of which cost millions of dollars to train.

A spokesperson for DeepMind told the media that the costs mentioned in the document also included work on AlphaFold, the company’s celebrated protein-folding AI and another very expensive project.

There are no public details on how much Google charges DeepMind for access to its cloud AI services, but Google is most likely renting out its TPUs at a discount. This means that without the support and backing of Google, the company’s expenses would have been much higher.

Staff costs are another important issue. While participation in machine learning courses has increased in the past few years, scientists who can engage in the kind of cutting-edge AI research DeepMind is involved in are very scarce. And by some accounts, top AI talent commands seven-figure salaries.

The growing interest in deep learning and its applicability to commercial settings has created an arms race between tech companies to acquire top AI talent. Most of the industry’s top AI scientists and pioneers work either full- or part-time at large companies such as Google, Facebook, Amazon, and Microsoft. The fierce competition for top AI talent has had two consequences. First, as in every field where supply doesn’t meet demand, it has resulted in a steep rise in the salaries of AI scientists. And second, it has driven many AI scientists from academic institutions that can’t afford stellar salaries to wealthy tech companies that can. Some scientists stay in academia for the sake of continuing scientific research, but they are few and far between.

And without the backing of a large tech company like Google, research labs like DeepMind can’t afford to hire new researchers for their projects.

So, while DeepMind shows signs of slowly turning around its losses, its growth has made it even more dependent on Google’s financial resources and large cloud infrastructure.

Google is still satisfied with DeepMind

DeepMind developed an AI system called AlphaStar that can beat the best players at the real-time strategy game StarCraft 2

According to DeepMind’s annual report, Google Ireland Holdings Unlimited, one of the investment branches of Alphabet, “waived the repayment of intercompany loans and all accrued interest amounting to £1.1 billion.”

DeepMind has also received written assurances from Google that it will “continue to provide adequate financial support” to the AI firm for “a period of at least twelve months.”

For the time being, Google seems to be satisfied with the progress DeepMind has made, which is also reflected in remarks made by Google and Alphabet executives.

In July’s quarterly earnings call with investors and analysts, Alphabet CEO Sundar Pichai said, “I’m very happy with the pace at which our R&D on AI is progressing. And for me, it’s important that we are state-of-the-art as a company, and we are leading. And to me, I’m excited at the pace at which our engineering and R&D teams are working both across Google and DeepMind.”

But the corporate world and scientific research move at different paces.

Scientific research is measured in decades. Much of the AI technology used today in commercial applications has been in the making since the 1970s and 1980s. Likewise, a lot of the cutting-edge research and techniques presented at AI conferences today will probably not find their way into the mass market in the coming years. DeepMind’s ultimate goal, developing artificial general intelligence (AGI), is by the most optimistic estimates at least decades away.

On the other hand, the patience of shareholders and investors is measured in months and years. Companies that can’t turn a profit for years, or at least show hopeful signs of growth, fall afoul of investors. DeepMind currently has neither: it doesn’t have measurable growth, because its only client is Google itself, and it’s not clear when, if ever, some of its technology will be ready for commercialization.

Google CEO Sundar Pichai is satisfied with the pace of AI research and development at DeepMind

And here’s where DeepMind’s dilemma lies. At heart, it is a research lab that wants to push the limits of science and make sure advances in AI are beneficial to all humans. Its owner’s goal, however, is to build products that solve specific problems and turn a profit. The two goals are diametrically opposed, pulling DeepMind in two different directions: maintaining its scientific nature or transforming into a product-making AI company. The company has already had trouble balancing scientific research and product development in the past.

And DeepMind is not alone. OpenAI, DeepMind’s implicit rival, has been facing a similar identity crisis, transforming from an AI research lab to a Microsoft-backed for-profit company that rents its deep learning models.

So while DeepMind doesn’t need to worry about its unprofitable research yet, as it becomes more and more enmeshed in the corporate dynamics of its owner, it should think deeply about its future and the future of scientific AI research.


Ben Dickson

Ben is a software engineer and the founder of TechTalks. He writes about technology, business and politics.
Philippines troops, ministers get COVID-19 vaccine before approval

FILE PHOTO: Philippines President Rodrigo Duterte reviews military cadets during change of command ceremonies of the Armed Forces of the Philippines (AFP) at Camp Aguinaldo in Quezon City, metro Manila, Philippines October 26, 2017. REUTERS/Dondi Tawatao

28 Dec 2020 

MANILA: Some Philippine soldiers and Cabinet ministers have already received COVID-19 vaccine injections, officials said on Monday (Dec 28), despite an absence of regulatory approval that the country's health ministry said was vital to ensure safety.

Interior minister Eduardo Ano said some Cabinet members have already received COVID-19 vaccines and army chief Lieutenant General Cirilito Sobejana said some troops have been vaccinated but the number was not large. Neither said what brand of vaccine was administered.

The health ministry in a statement said all vaccines must first be evaluated by experts, and "only vaccines which have been approved and found to be safe should be administered".

Food and Drug Administration head Rolando Enrique Domingo said Philippine regulators have yet to approve any COVID-19 vaccine, making any importation, distribution and sale of one illegal.

Domingo warned the public that unapproved vaccines exposed them to "all sorts of dangers" and told CNN Philippines that side effects were possible "especially if you don't know how these things have been handled".

So far only Pfizer has applied for emergency use approval of its COVID-19 vaccine in the Philippines, while late-stage trial applications from Sinovac, Gamaleya, Johnson & Johnson's Janssen and Clover have yet to be approved.

Health Undersecretary Maria Rosario Vergeire said the ministry had no information about the soldiers' vaccination and military spokesman Colonel Edgard Arevalo said there had been no inoculation sanctioned by the armed forces leadership.

The Presidential Security Group (PSG), which is tasked with protecting Duterte, said some of its personnel have already been inoculated.

"The PSG administered COVID-19 vaccine to its personnel performing close-in security operations to the president," unit chief Brigadier General Jesus Durante said in a statement, without specifying how many got the drug.

Duterte has not been vaccinated, according to his spokesman, Harry Roque, who said he had no problem with soldiers being given the shots and protecting themselves.

During a televised meeting with health officials on Saturday, Duterte said "almost all" soldiers have already been inoculated.

He said "many", without identifying who, in the Philippines had received a COVID-19 vaccine developed by China National Pharmaceutical Group (Sinopharm).

Sinopharm could not be immediately reached for comment.

Asked if the soldiers' vaccination was authorised by the president's office, Sobejana said: "Well of course, our president is our commander-in-chief."

Roque said on Monday the Sinopharm drug was given to the soldiers, confirming Duterte's comments at the weekend that "a select few" had been inoculated with the Chinese vaccine.

He played down concerns about the safety of the Sinopharm drug, saying it was meant to send a message of hope to Filipinos.

"The news is that the vaccine is already here and if we cannot be given Western vaccines, our friend and neighbour China is willing to give us vaccines," Roque said.

"It's not prohibited under the law to get inoculated with an unregistered (vaccine). What is illegal is the distribution and selling."
Thai protest demands help for shrimp sellers after COVID-19 outbreak


Anti-government protesters sell shrimps in front of Government House as people now fear eating shrimps due to the coronavirus disease (COVID-19) outbreak in Bangkok, Thailand, Dec 26, 2020. (Photo: REUTERS/Soe Zeya Tun)
26 Dec 2020 



BANGKOK: Thai protesters demonstrated on Saturday (Dec 26) to demand more action to help seafood sellers hit by a COVID-19 outbreak as the government urged people to eat more shellfish.

Thailand's worst outbreak of the new coronavirus was reported just over a week ago, with more than 1,500 infections now linked to a shrimp market outside Bangkok. Most of those infected have been migrant workers from Myanmar.

Seafood sellers say business has fallen in a country whose economy had already been badly hit by a collapse in tourism.

"We want the government to create confidence in shrimp consumption," said Piyarat Chongthep, among the scores of protesters at Government House, some of whom briefly scuffled with police.





People line up to buy shrimps from anti-government protesters in front of Government House amid the coronavirus disease (COVID-19) outbreak in Bangkok, Thailand, Dec 26, 2020. (Photo: REUTERS/Soe Zeya Tun)



Police stand guard as anti-government protesters try to sell shrimps in front of Government House amid the coronavirus disease (COVID-19) outbreak in Bangkok, Thailand, Dec 26, 2020. (Photo: REUTERS/Soe Zeya Tun)


The issue is the latest seized on by protesters who for months have been demanding the removal of Prime Minister Prayut Chan-ocha, a new constitution and reforms of the monarchy.


At a seafood-eating event in a nearby province, government ministers said they were trying to promote seafood.

"We are building confidence that you can have seafood without getting infected," Anucha Nakasai, minister for the prime minister's office, told reporters.

A major shrimp exporter, Thailand sold 36 billion baht (US$1.2 billion) worth in the first 10 months of 2020, industry association data showed.

"The problem now is there is no market," said one shrimp seller at Government House.

COVID-19 task force spokesman Taweesin Wisanuyothin reported 110 new coronavirus infections, 94 of which were connected to the seafood market.

Thailand has a total of 6,020 confirmed cases and 60 deaths, low rates for a country of 70 million people.



China passes law to protect Yangtze River

The country’s ‘mother river’ has suffered a series of environmental problems and this year a 10-year fishing ban was instituted to conserve stocks

New legislation comes into force in March and is designed to strengthen sustainable development

Alice Yan in Shanghai
Published: 27 Dec, 2020

The country’s “mother river” has suffered a series of environmental problems and this year a 10-year fishing ban was instituted to conserve stocks. Photo: Simon Song

China has passed a law to protect the Yangtze, which has been described as the country’s “mother river”.

The Yangtze River Protection Law will come into force on March 1 after being approved by the National People’s Congress Standing Committee, the country’s top legislative body, on Saturday.

It is the first law to protect a particular waterway in China.

The 6,300km (3,900-mile) Yangtze is the longest river in Asia and provides a vital lifeline for hundreds of millions of people.

Its valley covers an area of 1.8 million sq km, about a fifth of the national total, while the Yangtze River Economic Zone covers 11 provinces and cities, accounting for 40 per cent of the total population and GDP.

It provides a third of the country’s fresh water resources and three fifths of its hydroenergy reserves, but has suffered a series of environmental problems in recent years, including heavy pollution.

It is the site of 40 per cent of the country’s wastewater discharges and has high levels of chemicals such as ammonia nitrate, sulphur dioxide and nitrogen oxide of up to twice the national average.

China imposed a 10-year fishing ban at the beginning of this year to conserve its dwindling fish stocks.




China’s Yangtze fishing communities struggle amid 10-year fishing ban



The ban, initially covering 332 stretches of the river, will be extended to the whole waterway and its major tributaries next year.

Figures from the agriculture ministry earlier this month show that around 231,000 fishermen had relinquished their rights along the Yangtze.

Vice-premier Han Zheng said earlier this week that more assistance should be given to these fishermen to help them find new jobs and places to live.




He also called for stronger efforts to prevent illegal fishing and urged the public to support the ban.

State news agency Xinhua said the new law was designed to strengthen environmental protections, use resources efficiently and ensure sustainable development.

There are nine chapters in the law, covering areas such as design and management, resource protection, anti-pollution measures, green development and legal responsibilities.


It will also require stricter supervision and establish a coordination mechanism to direct the protection work undertaken by different provinces.

The new legislation confirms the fishing ban and has clauses restricting sand excavation and chemical production along its length.
Bangladesh set to move second batch of Rohingya refugees to remote island: Officials

Rohingya refugees are seen aboard a ship as they are moved to Bhasan Char island in Chattogram, Bangladesh, Dec 4, 2020. (Photo: Reuters/Mohammad Ponir Hossain)

27 Dec 2020 01:01PM (Updated: 27 Dec 2020 01:10PM)

DHAKA: Bangladesh is set to move a second batch of Rohingya refugees from neighbouring Myanmar to the remote island of Bhasan Char in the Bay of Bengal this month, officials said on Sunday (Dec 27), despite calls by rights groups not to carry out further relocations.

About 1,000 Rohingya refugees, members of a Muslim minority who have fled Myanmar, will be moved to the island in the next few days after Bangladesh relocated more than 1,600 earlier this month, two officials with direct knowledge of the matter said.


"They will be moved to Chittagong first and then to Bhasan Char, depending on the high tide," one of the officials said. The officials declined to be named as the issue had not been made public.

Mohammed Shamsud Douza, the deputy Bangladesh government official in charge of refugees, said the relocation was voluntary. "They will not be sent against their will."

The United Nations has said it has not been allowed to carry out a technical and safety assessment of Bhasan Char, a flood-prone island in the Bay of Bengal, and was not involved in the transfer of refugees there.

Bangladesh says it is transferring only people who are willing to go and the move will ease chronic overcrowding in camps that are home to more than 1 million Rohingya.

But refugees and humanitarian workers say some of the Rohingya have been coerced into going to the island, which emerged from the sea 20 years ago.

Bangladesh Foreign Minister Abdul Momen told Reuters earlier this month the United Nations should first assess and verify how conducive the environment in Myanmar's Rakhine state was for repatriating the refugees, before carrying out an assessment of Bhasan Char.

Several attempts to kickstart repatriation of Rohingya to Myanmar have failed after refugees said they were too fearful of further violence to return.

Source: Reuters/zl

New studies suggest vaping could cloud your thoughts

UNIVERSITY OF ROCHESTER MEDICAL CENTER

Research News

Two new studies from the University of Rochester Medical Center (URMC) have uncovered an association between vaping and mental fog. Both adults and kids who vape were more likely to report difficulty concentrating, remembering, or making decisions than their non-vaping, non-smoking peers. It also appeared that kids were more likely to experience mental fog if they started vaping before the age of 14.

While other studies have found an association between vaping and mental impairment in animals, the URMC team is the first to draw this connection in people. Led by Dongmei Li, Ph.D., associate professor in the Clinical and Translational Science Institute at URMC, the team mined data from two major national surveys.

"Our studies add to growing evidence that vaping should not be considered a safe alternative to tobacco smoking," said study author Li.

The studies, published in the journals Tobacco Induced Diseases and PLOS ONE, analyzed over 18,000 responses from middle and high school students to the National Youth Tobacco Survey and more than 886,000 responses from U.S. adults to the Behavioral Risk Factor Surveillance System phone survey. Both surveys ask similar questions about smoking and vaping habits as well as issues with memory, attention and mental function.

Both studies show that people who smoke and vape - regardless of age - are most likely to report struggling with mental function. Behind that group, people who only vape or only smoke reported mental fog at similar rates, which were significantly higher than those reported by people who don't smoke or vape.

The youth study also found that students who reported starting to vape early - between eight and 13 years of age - were more likely to report difficulty concentrating, remembering, or making decisions than those who started vaping at 14 or older.

"With the recent rise in teen vaping, this is very concerning and suggests that we need to intervene even earlier," said Li. "Prevention programs that start in middle or high school might actually be too late."

Adolescence is a critical period for brain development, especially for higher-order mental function, which means tweens and teens may be more susceptible to nicotine-induced brain changes. While e-cigarettes lack many of the dangerous compounds found in tobacco cigarettes, they deliver the same amount or even more nicotine.

While the URMC studies clearly show an association between vaping and mental function, it's not clear which causes which. It is possible that nicotine exposure through vaping causes difficulty with mental function. But it is equally possible that people who report mental fog are simply more likely to smoke or vape - possibly to self-medicate.

Li and her team say that further studies that follow kids and adults over time are needed to parse the cause and effect of vaping and mental fog.

###

In addition to Li, authors of the youth study include Catherine Xie and Zidian Xie, Ph.D. For the adult study, Li was joined by co-authors Zidian Xie, Ph.D., Deborah J. Ossip, Ph.D., Irfan Rahman, Ph.D., and Richard J. O'Connor, Ph.D. Both studies were funded by the National Cancer Institute and the U.S. Food and Drug Administration's Center for Tobacco Products.

One psychedelic experience may lessen trauma of racial injustice

Lower stress, depression recalled after using drug, study finds

OHIO STATE UNIVERSITY

Research News

COLUMBUS, Ohio - A single positive experience on a psychedelic drug may help reduce stress, depression and anxiety symptoms in Black, Indigenous and people of color whose encounters with racism have had lasting harm, a new study suggests.

The participants in the retrospective study reported that their trauma-related symptoms linked to racist acts were lowered in the 30 days after an experience with either psilocybin (Magic Mushrooms), LSD or MDMA (Ecstasy).

"Their experience with psychedelic drugs was so powerful that they could recall and report on changes in symptoms from racial trauma that they had experienced in their lives, and they remembered it having a significant reduction in their mental health problems afterward," said Alan Davis, co-lead author of the study and an assistant professor of social work at The Ohio State University.

Overall, the study also showed that the more intensely spiritual and insightful the psychedelic experience was, the more significant the recalled decreases in trauma-related symptoms were.

A growing body of research has suggested psychedelics have a place in therapy, especially when administered in a controlled setting. What previous mental health research has generally lacked, Davis noted, is a focus on people of color and on treatment that could specifically address the trauma of chronic exposure to racism.

Davis partnered with co-lead author Monnica Williams, Canada Research Chair in Mental Health Disparities at the University of Ottawa, to conduct the research.

"Currently, there are no empirically supported treatments specifically for racial trauma. This study shows that psychedelics can be an important avenue for healing," Williams said.

The study is published online in the journal Drugs: Education, Prevention and Policy.

The researchers recruited participants in the United States and Canada using Qualtrics survey research panels, assembling a sample of 313 people who reported they had taken a dose of a psychedelic drug in the past that they believed contributed to "relief from the challenging effects of racial discrimination." The sample comprised adults who identified as Black, Asian, Hispanic, Native American/Indigenous Canadian, Native Hawaiian and Pacific Islander.

Once enrolled, participants completed questionnaires collecting information on their past experiences with racial trauma, psychedelic use and mental health symptoms, and were asked to recall a memorable psychedelic experience and its short-term and enduring effects. Those experiences had occurred as recently as a few months before the study and as long ago as at least 10 years earlier.

The discrimination they had encountered included unfair treatment by neighbors, teachers and bosses, false accusations of unethical behavior and physical violence. The most commonly reported issues involved feelings of severe anger about being subjected to a racist act and wanting to "tell someone off" for racist behavior, but saying nothing instead.

Researchers asked participants to recall the severity of symptoms of anxiety, depression and stress linked to exposure to racial injustice in the 30 days before and 30 days after the experience with psychedelic drugs. Considering the probability that being subjected to racism is a lifelong problem rather than a single event, the researchers also assessed symptoms characteristic of people suffering from discrimination-related post-traumatic stress disorder (PTSD).

"Not everybody experiences every form of racial trauma, but certainly people of color are experiencing a lot of these different types of discrimination on a regular basis," said Davis, who also is an adjunct faculty member in the Johns Hopkins University Center for Psychedelic and Consciousness Research. "So in addition to depression and anxiety, we were asking whether participants had symptoms of race-based PTSD."

Participants were also asked to report on the intensity of three common kinds of experiences people have while under the influence of psychedelic drugs: mystical, insightful or challenging experiences. A mystical experience can feel like a spiritual connection to the divine, an insightful experience increases people's awareness and understanding of themselves, and a challenging experience relates to emotional and physical reactions such as anxiety or difficulty breathing.

All participants recalled that their anxiety, depression and stress symptoms after the memorable psychedelic experience were lower than they had been before the drug use. The magnitude of the positive effects of the psychedelics influenced the reduction in their symptoms.

"What this analysis showed is that a more intense mystical experience and insightful experience, and a less intense challenging experience, is what was related to mental health benefits," Davis said.

The researchers noted in the paper that the study had limitations because the findings were based on participant recall and the entire sample of recruited research volunteers had reported benefits they associated with their psychedelic experience - meaning it cannot be assumed that psychedelics will help all people of color with racial trauma. Davis and Williams are working on proposals for clinical trials to further investigate the effects of psychedelics on mental health symptoms in specific populations, including Black, Indigenous and people of color.

"This was really the first step in exploring whether people of color are experiencing benefits of psychedelics and, in particular, looking at a relevant feature of their mental health, which is their experience of racial trauma," Davis said. "This study helps to start that conversation with this emerging treatment paradigm."

###

This work was funded by the University of Ottawa, the Canada Research Chairs Program and the National Institutes of Health. Additional co-authors included Yitong Xin of Ohio State's College of Social Work; Nathan Sepeda of Johns Hopkins; Pamela Grigas and Sinead Sinnott of the University of Connecticut; and Angela Haeny of Yale School of Medicine.


Neurologists say there is no medical justification for police use of neck restraints

In a perspective piece, they note that some police departments justify these tactics with misleading language.

MASSACHUSETTS GENERAL HOSPITAL

Research News

BOSTON - Some police departments in the United States continue to teach officers that neck restraints are a safe method for controlling agitated or aggressive people, but that's a dangerous myth, according to a Viewpoint written by three neurologists at Massachusetts General Hospital (MGH) in JAMA Neurology.

The killing of George Floyd, a Black man who died while being arrested in May 2020 after a police officer pressed a knee to his neck for more than eight minutes, helped spark a national conversation about racial injustice in the United States. Floyd's death made headlines, as did that of Eric Garner in 2014 after police placed him in a chokehold. Yet a number of other Americans have died during confrontations with police officers who used neck restraints, says MGH neurologist Altaf Saadi, MD, senior author of the Viewpoint column.

Along with coauthors Jillian M. Berkman, MD, and Joseph A. Rosenthal, MD, PhD, Saadi was disturbed by the use of neck restraints by police departments in the United States. They found that some prohibit chokeholds and other neck restraints, but others teach the techniques for the purpose of subduing allegedly uncooperative people during encounters. Notably, some police agencies advise that carotid restraint--compressing the two large blood vessels on either side of the neck, which is known as a stranglehold--is a safe, nonlethal tactic that temporarily renders a person unconscious by reducing blood flow to the brain.

"As a neurologist, I know that there is never a scenario where stopping the flow of blood and oxygen to the brain is medically appropriate," says Saadi. "What shocked me most was that much of the literature supporting these techniques hides behind medical language, but lacks a real understanding of the pathophysiology of the significant harm they cause to an individual. As neurologists, we are taught that 'time is brain,' because there's such a rapid loss of human nervous tissue when the flow of blood and oxygen to the brain is reduced or stopped."

In their Viewpoint, Saadi and her colleagues describe how carotid compression--which can occur with as few as 6 kilograms (13 pounds) of force, or about the weight of a typical house cat--can result in stroke, seizure and death. They call for the creation of a system for reporting on law enforcement's use of neck restraints, including how often the technique is used and if it results in death or disability.

"It's in the public's best interest to have this data," says Saadi. She believes that increasing awareness about the impact of neck restraints could help curb their use. Ultimately, says Saadi, there is no medical justification for neck restraints in policing.

###

Altaf Saadi, MD, is also an instructor in neurology at Harvard Medical School. Jillian M. Berkman, MD, is a resident physician at Brigham and Women's Hospital. Joseph A. Rosenthal, MD, PhD, is a resident physician at MGH.

About the Massachusetts General Hospital

Massachusetts General Hospital, founded in 1811, is the original and largest teaching hospital of Harvard Medical School. The Mass General Research Institute conducts the largest hospital-based research program in the nation, with annual research operations of more than $1 billion and comprises more than 9,500 researchers working across more than 30 institutes, centers and departments. In August 2020, Mass General was named #6 in the U.S. News & World Report list of "America's Best Hospitals."
