Monday, April 10, 2023

Vaccine hesitancy has become a nationwide issue: What can science do about it?

Medical University of South Carolina researchers use a national survey to understand South Carolina’s COVID-19 vaccine hesitancy.


MEDICAL UNIVERSITY OF SOUTH CAROLINA

IMAGE: Vials of COVID-19 vaccines and syringes.

CREDIT: Medical University of South Carolina, Sarah Pack.

South Carolina residents were more hesitant than Americans as a whole to receive the COVID-19 vaccine during the fall of 2020, report researchers in the Department of Psychiatry and Behavioral Sciences at the Medical University of South Carolina (MUSC) in the Journal of Psychiatric Research. The MUSC study showed that the two most important factors contributing to COVID-19 vaccine hesitancy were low levels of confidence in public health scientists and low levels of collective responsibility to prevent the spread of COVID-19. Postdoctoral fellow Caitlin Rancher, Ph.D., and Dean G. Kilpatrick, Ph.D., director of the National Crime Victims Research and Treatment Center, led the study.

The first step in combating vaccine hesitancy is to figure out why people are unwilling to receive vaccines, said Rancher. Knowing that will help scientists to educate the public more effectively about why these vaccines are beneficial. MUSC researchers like Rancher and Kilpatrick are taking that challenge head-on.

“It's really understanding the factors that are contributing to hesitancy,” said Rancher. “It's critical because it can help us to inform public health with targeted and effective response campaigns.”

To understand these factors better, the researchers surveyed a national sample of 1,450 people, excluding South Carolinians, and a state sample of 784 South Carolinians. To select participants, they drew on the MFour Mobile Research Panel, which includes approximately 2 million people reflective of all 50 states and the District of Columbia. To participate in the survey, panel members had to have a smartphone and register for MFour’s Surveys On The Go app.

“The MFour Mobile Research Panel has several advantages for this type of research due to its large size and widespread coverage of South Carolina and the U.S. and the fact that the panel is constructed to closely match the demographic characteristics of the U.S. population,” said Kilpatrick.

The survey asked respondents to gauge their degree of trust in the COVID-19 vaccine and their intention to receive it. The researchers assessed these attitudes with validated multi-item measures known as the 5Cs: 

Confidence: Do I trust the science and my provider?

Complacency: Is this vaccine necessary to my health?

Constraints: Is the vaccine affordable and accessible?

Risk Calculation: What is the risk of me getting sick with this disease?

Collective Responsibility: Will getting this vaccine help others in the community?

The MUSC study found that two components of the 5C model were the driving factors for the increased hesitancy in South Carolina: confidence in science and shared collective responsibility. South Carolina residents reported lower confidence in their providers and the science backing the vaccine. Residents also felt less collective responsibility or an ethical obligation to get vaccinated to protect their community.
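
For readers curious about how such group comparisons work in practice, here is a minimal sketch of scoring multi-item measures and comparing group averages. The item lists, response scale and numbers below are invented for illustration only; the published study used its own validated 5C measures and formal statistical tests.

```python
# Illustrative sketch only: score two 5C subscales from hypothetical survey
# responses and compare group means for a state sample vs. a national sample.
from statistics import mean

# Each respondent answers several items per construct on a 1-5 agreement scale.
# These records are invented for illustration; they are not the study's data.
national = [
    {"confidence": [4, 5, 4], "collective_responsibility": [5, 4, 4]},
    {"confidence": [3, 4, 4], "collective_responsibility": [4, 4, 5]},
]
south_carolina = [
    {"confidence": [2, 3, 3], "collective_responsibility": [3, 2, 3]},
    {"confidence": [3, 3, 2], "collective_responsibility": [3, 3, 4]},
]

def subscale_means(sample, construct):
    """Average each respondent's item responses for one 5C construct."""
    return [mean(person[construct]) for person in sample]

for construct in ("confidence", "collective_responsibility"):
    nat = mean(subscale_means(national, construct))
    sc = mean(subscale_means(south_carolina, construct))
    print(f"{construct}: national={nat:.2f}, south_carolina={sc:.2f}, gap={nat - sc:.2f}")
```

Lower subscale averages in the state sample, as in this toy example, are the kind of gap the researchers reported for confidence and collective responsibility.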

Focusing on these components of vaccine hesitancy will allow researchers to do a better job of educating patients and communities about vaccine science and the benefits of specific vaccines, said Angela Moreland-Johnson, Ph.D., a co-author of the study and an assistant professor in the Department of Psychiatry and Behavioral Sciences at MUSC. She believes one way to overcome vaccine hesitancy is to train health professionals to help patients understand the science and the benefits of vaccination.

“The science and the medicine don't do a whole lot of good if people don't trust in them and trust what's being told and given to them,” she said.

Although vaccine hesitancy is not new, the pandemic exacerbated the situation. And as the survey brought to light, distrust in the scientific community has grown markedly, leading to reluctance by some to accept not only the COVID-19 vaccine but even standard childhood shots.

“Globally, we're really seeing that vaccine hesitancy is contributing to a rise in cases and mortality from diseases that we thought we had a handle on,” said Rancher. “For example, here, even in the U.S., in 2019, there were several measles outbreaks, where governors were declaring states of emergency.”

The key takeaway from this study is that, by drawing on data from the MFour Mobile Research Panel and applying the 5C model, the researchers were able to pinpoint the main drivers of COVID-19 vaccine hesitancy in South Carolina.

The MUSC researchers think that understanding which factors prevent people from receiving vaccines will allow scientists and health providers to tailor vaccine and public-education efforts to those target populations. They hope that better education will lead to less vaccine hesitancy and leave the state better positioned for future outbreaks or pandemics.

# # #

About MUSC

Founded in 1824 in Charleston, MUSC is the state’s only comprehensive academic health system, with a unique mission to preserve and optimize human life in South Carolina through education, research and patient care. Each year, MUSC educates more than 3,000 students in six colleges – Dental Medicine, Graduate Studies, Health Professions, Medicine, Nursing and Pharmacy – and trains more than 850 residents and fellows in its health system. MUSC brought in more than $297.8 million in research funds in fiscal year 2022, leading the state overall in research funding. For information on academic programs, visit musc.edu.

As the health care system of the Medical University of South Carolina, MUSC Health is dedicated to delivering the highest quality and safest patient care while educating and training generations of outstanding health care providers and leaders to serve the people of South Carolina and beyond. Patient care is provided at 14 hospitals with approximately 2,500 beds and five additional hospital locations in development, more than 350 telehealth sites and connectivity to patients’ homes, and nearly 750 care locations situated in all regions of South Carolina. In 2022, for the eighth consecutive year, U.S. News & World Report named MUSC Health the No. 1 hospital in South Carolina. To learn more about clinical patient services, visit muschealth.org.

MUSC and its affiliates have collective annual budgets of $5.1 billion. The nearly 25,000 MUSC team members include world-class faculty, physicians, specialty providers, scientists, students, affiliates and care team members who deliver groundbreaking education, research and patient care.

Endocrine Society alarmed by Texas court ruling banning mifepristone

Decision has significant impact on treatment of endocrine disease, drug approval process, and access to reproductive care


THE ENDOCRINE SOCIETY

WASHINGTON—The Endocrine Society is deeply concerned about a Texas ruling that reverses the U.S. Food and Drug Administration’s (FDA) approval of mifepristone, a drug used to treat Cushing’s syndrome and to end pregnancy safely and effectively.

Last week, two conflicting federal court decisions in Texas and Washington state left access to mifepristone unclear for both the treatment of Cushing’s syndrome and for medical abortions.

The Texas court ruled to ban mifepristone, taking away the FDA's authority to regulate the safety of medications. The judge put his ruling on hold for a week, and the government has already appealed the decision.

This decision to overrule the FDA’s approval of mifepristone could restrict access to treatment for Cushing’s syndrome and has broader implications on drug approvals as well as access to reproductive health care.

Mifepristone is used to treat people with Cushing’s syndrome and diabetes or high blood sugar who are not surgical candidates or have failed surgery. Cushing’s syndrome is a rare endocrine disease that causes elevated cortisol levels and can be life-threatening in certain cases. Mifepristone has also been safely and effectively used to end pregnancy since the FDA first approved the drug over 20 years ago.

“We believe the FDA, not the court, should have the authority to approve drugs and determine their efficacy and safety based on medical evidence and established standards and processes,” said Endocrine Society President Ursula B. Kaiser, M.D. The Society will continue to work with the medical community on access to care and to advocate for access to mifepristone for all who need it to treat endocrine-related disease.

Physicians and trained health professionals should be able to make medical decisions based on science and the circumstances of the individual patient rather than politics.

# # #

Endocrinologists are at the core of solving the most pressing health problems of our time, from diabetes and obesity to infertility, bone health, and hormone-related cancers. The Endocrine Society is the world’s oldest and largest organization of scientists devoted to hormone research and physicians who care for people with hormone-related conditions.

The Society has more than 18,000 members, including scientists, physicians, educators, nurses and students in 122 countries. To learn more about the Society and the field of endocrinology, visit our site at www.endocrine.org. Follow us on Twitter at @TheEndoSociety and @EndoMedia.

UK
NHS leader urges Acas intervention to avert ‘catastrophic’ junior doctor strikes

Laura Parnaby, PA
Mon, 10 April 2023 

An NHS leader has urged the Government and junior doctors’ unions to call in the Acas conciliation service to end their stalemate and avoid “catastrophic” strikes.

On the eve of what is set to be the most disruptive walkout in NHS history, chief executive of the NHS Confederation Matthew Taylor has said the likely impact is “heartbreaking” and called on both sides to end their “battle of rhetoric”.

Thousands of junior doctors led by the British Medical Association (BMA) will take industrial action in the four days between Easter Bank Holiday and the following weekend.

Mr Taylor said there is “no question” this strike will be more disruptive than the 72-hour walkouts by NHS staff last month, which led to 175,000 cancelled appointments.

Speaking about pay negotiations which would avoid the action, Mr Taylor told BBC Breakfast: “It’s depressing that there seems to be no movement at all from the two sides of this dispute over the last few days.

“We should consider asking the Government and the trade unions to call in Acas, the conciliation service, to provide some basis for negotiations, because if anything the positions seem to have hardened over the last couple of days.”



Junior doctor strikes are set to take place for 96 hours from 7am on Tuesday (Jacob King/PA)

Mr Taylor also told Sky News that “what we’re seeing is a battle of rhetoric rather than talks”, adding that he expected “up to 350,000” appointments to be cancelled.

He said: “These strikes are going to have a catastrophic impact on the capacity of the NHS to recover services.

“The health service has to meet high levels of demand at the same time as making inroads into that huge backlog that built up before Covid, but then built up much more during Covid.

“That’s a tough thing to do at the best of times, it’s impossible to do when strikes are continuing.”

NHS Confederation chief executive Matthew Taylor has said the BMA strikes will pose ‘risks to patients’ (Victoria Jones/PA)

Asked whether everyone who needs urgent care this week will get it, he said: “We hope so.

“There’s no point hiding the fact that there will be risks to patients, risks to patient safety, risks to patient dignity, as we’re not able to provide the kind of care that we want to.”

Mr Taylor also advised the public to use NHS services in “the most responsible way you can”, and to avoid “risky behaviour” which could result in a visit to A&E during the strikes.

The BMA has asked for a 35% pay rise to bring junior doctor pay back to 2008 levels, and help resolve the recruitment and retention crisis.

Health Secretary Steve Barclay has blasted this demand as “unrealistic” and said their strikes had been planned to “cause maximum disruption”.

Junior doctors will walk out from 6.59am on Tuesday until 6.59am on Saturday.

Climate emergency is the biggest health crisis of our time – bigger than Covid

Pascal Soriot
Mon, 10 April 2023 

Photograph: Borg Wong/AP

The latest Intergovernmental Panel on Climate Change report is a grim, yet unsurprising, reminder of the catastrophic effect global heating is having on our planet. The message from leading climate scientists is clear: action is needed now. Not tomorrow, not next year, not by the end of the decade.

Even drastic carbon reduction today that limits temperature increases to the 1.5C agreed in Paris will alter the world we live in for ever. The IPCC concludes that every fraction of a degree more will edge us towards tipping points that will leave deep scars on our planet.

While much has been said about the damage to weather patterns, crop yields and coral reefs, less well understood is the effect a hotter world has on our health. The reality is that the climate crisis is the biggest health crisis of our time, bigger even than Covid-19.

Heat-related deaths are expected to treble by 2050. Currently 7 million people die prematurely each year from air pollution, more than the death toll during the entire pandemic.

There has been a rise in chronic diseases linked to air pollution, and evidence suggests it is driving up cancer rates, too. Lung cancer and respiratory-related deaths linked to air pollution are up 160% over the past 30 years, and air pollution causes genetic mutations in some genes linked to lung cancer.

The toll on loved ones, public health and the economy is large and rising. It has been estimated that poor health costs the world economy 15% of global GDP. It is not only oil companies, carmakers and airlines that are at fault: healthcare alone contributes 5% of global greenhouse gas emissions, and up to 8% in advanced economies.

The case for investing in greenhouse gas reduction is now proven beyond doubt: the costs are equalled or outweighed by the economic benefits of a healthier population, the IPCC says.

Increasingly, investors are focused on companies that are taking bold, science-driven action to mitigate global heating. While this is the right thing to do, it also makes good business sense.

Decarbonising the economy is an opportunity and a driver of growth, which is why the backlash against environmental, social and governance (ESG) focused investing is so misguided. As we saw during the pandemic, science finds a way; ingenuity triumphs and is duly rewarded.

At my company, AstraZeneca, we are on track to meet our targets to: reduce scope 1 and 2 greenhouse gas emissions – those from burning fuels and energy use – by 98% by 2026; halve our absolute total emissions across our value chain by 2030; and be net zero by 2045. In addition, we have planted millions of trees as part of a large reforestation and biodiversity restoration programme.

Like it or not, regulatory requirements are moving towards stopping companies selling products that are not sustainable. In Japan we have converted paper leaflets to e-leaflets for all our products, reducing costs, simplifying operations and saving 180 tonnes of waste and 30m sheets of paper – the equivalent of 4,500 trees.

But companies cannot act alone. Together we must make public health more resilient, so that it can cope with change and become truly accessible. A forum such as the Sustainable Markets Initiative, established by King Charles when he was Prince of Wales to convene stakeholders from across industries, is a good example of collaboration to take shared action at scale.

The toll on human health is huge and likely to get worse. Healthcare companies will do all they can to treat patients suffering from climate-related disease. Prevention, however, will have the biggest impact on health. Only by taking radical action on greenhouse gas emissions will we be able to say that we are doing everything we can for the health of humanity.

• Pascal Soriot is the chief executive of the Anglo-Swedish pharmaceutical company AstraZeneca

UK
Mental health sick days taken by civil servants rise 38% in one year

The figures show that the amount of mental health-related sick leave across the heart of government has been rising continuously for the past decade.


Denis Campbell
 Health policy editor
THE GUARDIAN
Mon, 10 April 2023 


Whitehall civil servants took a record 771,433 days of sick leave last year because of stress and other mental health problems, figures show.

That number of mental health sick days taken by officials working for government departments was 38% higher than the 558,125 recorded the year before.


Labour said the “shocking” figures, which it obtained under freedom of information laws, revealed “a mental health crisis at the heart of Whitehall”. The Conservative government had not properly supported people with anxiety, depression and other mental health conditions, it claimed.

The figures show that the amount of mental health-related sick leave across the heart of government has been rising continuously for the past decade.

Civil servants at the Ministry of Justice (MoJ) took the most sick days for mental health in 2021-22 – a total of 280,597. Staff at the Department for Work and Pensions (DWP) took off almost as much time – 236,365 days – while the Ministry of Defence (MoD) had the third-highest total, at 88,723 days.


Unions representing Whitehall workers blamed the rising numbers on increased workloads, the impact of Covid-19, staff cuts, low pay, long hours and poor morale. The figures underlined the “scandalous” extent of mental health problems in the civil service, one said.

Angela Rayner, Labour’s deputy leader, linked MoJ staff having the largest amount of mental health sick leave to the fact that it was headed by Dominic Raab, the lord chancellor and secretary of state for justice, who is also Rishi Sunak’s deputy prime minister.

The Guardian disclosed in January that Raab was facing 24 separate complaints from civil servants of bullying. An inquiry into his behaviour is expected to report back in the near future.

Rayner, who is also the shadow secretary for the future of work, said: “Instead of setting an example to employers about how to treat workers respectfully, Rishi Sunak’s deputy faces 24 formal allegations of bullying, with his department experiencing the highest rate of mental health sickness across Whitehall.

“These shocking figures reveal a mental health crisis at the heart of Whitehall with a chronic failure by the Conservatives to support their workforce.”

Other cabinet ministers have also faced allegations of bullying civil servants in recent years, including the former home secretary Priti Patel, and the former education and defence secretary Gavin Williamson. Patel was found to have bullied officials but the then prime minister, Boris Johnson, took no action. Williamson resigned last November as a Cabinet Office minister after text messages he sent to a fellow minister emerged.

Labour’s research also found that of all government departments, officials at the MoJ took the highest number of sick days a year per head as a result of mental health issues.

Last year its staff took an average of 3.32 days off for mental health, up from 2.58 days the year before. Personnel at the DWP had the second-highest per capita rate of mental health absence – 2.88 days – while the Department for Transport had the next-highest rate, at 1.92 days.

Lucille Thirlby, the assistant general secretary of the FDA, which represents many civil servants, said: “It’s not surprising that civil servants’ mental health is suffering, as our members report increasing workload pressures and regularly working well beyond their contracted hours.

“Excessive workloads and working hours are long-running issues across the civil service and have a huge impact on people’s mental health. The government should take this issue seriously and take concrete steps to better support staff.”

In a member survey, published last November in an FDA report into hybrid working, the union found that 77% of respondents did some unpaid extra hours of work every week, while 74% said that working excessive hours had negatively affected their wellbeing, and 60% thought unduly long hours were a problem where they worked.

The FDA has previously aired its concern about the impact of alleged “bullying” behaviour by several ministers in Sunak’s administration on their civil servants.

Mark Serwotka, the general secretary of the PCS union, said: “These figures further demonstrate the devastating impact the government’s appalling treatment is having on its own workers. Staff cuts, increased pressure from unmanageable workloads and low pay have worsened the already rock-bottom morale of staff.

“PCS members went above and beyond during the pandemic, often at risk to their own physical and mental health. It’s no surprise that the effects of their sacrifice are on this scandalous scale.”

A government spokesperson said: “The civil service continues to improve its approach to health and wellbeing to help employees access the support they need to stay in or return to work, and fulfil their potential.”

'High time Britain supported peace talks'

Letters to the Editor
Bournemouth Echo UK
Sun, 9 April 2023 


A Ukrainian soldier looks at the heavily damaged building after Russian attacks
 (Image: Muhammed Enes Yildirim/Anadolu Agency via Getty Images)

MIKE Fry makes points of the highest importance in his Ukraine letter of April 5, citing the 1990 Paris Charter and the 1995 Budapest Memorandum.

The end of the Cold War brought Europe-wide agreements to lock in security for European nations east and west.

But what Mr Fry is not factoring in is all that has happened since the early 1990s that has led to this dreadful war.

Like it or not, the fact stands that post-Soviet Ukraine is a deeply divided country.

The west and central areas are ethnic Ukrainian.

The east and south, including Crimea, are home to seven million ethnic Russians.

That is the problem that came to the surface in the wide-scale protests in Kiev in 2014, followed by civil war in the Donbass region as pro-Russian areas refused to accept the new pro-western government in Kyiv.

If we do not look at this whole terrible war in the round, both sides, we will be locked into war for years if not far worse.

Russia will not accept US and NATO nuclear weapons on its borders in a clearly hostile country. It has made this clear, on the record, time and again. That is why, in the end, it invaded Ukraine.

It is no different in any way from the Cuban missile crisis of 1962, when the US would not have USSR nuclear missiles 90 miles from its own country. The principle at the heart of the UN Charter, “No country to advance its security at the expense of the security of another nation,” has been blatantly ignored by US-led NATO as it moves ever east.

As for Donald Trump, he said four weeks ago in the clearest terms: “The world has never been closer to World War III and nuclear Armageddon”. He is absolutely right. Those are the exact words of the United Nations.

As matters stand now, China, to its great credit, is negotiating peace talks between east and west.

It is absolutely high time Britain supported peace talks. Not more weapons, more war, more destruction.

JEFF WILLIAMS

Jubilee Road, Poole

GREENWASHING
Exxon’s new ‘advanced recycling’ plant raises environmental concerns

Maddie Stone
THE GUARDIAN
Mon, 10 April 2023 

Photograph: Jessica Rinaldi/Reuters

ExxonMobil just launched one of the largest chemical recycling plants in North America – but environmental advocates say the technology is a dangerous distraction from the need to reduce plastic production.

On the surface, the latest addition to ExxonMobil’s giant petrochemical refinery complex in Baytown, Texas, sounds like it could be a good thing: An “advanced recycling” facility capable of breaking down 36,000 metric tons of hard-to-recycle plastic each year. But plastic waste advocates warn that plants like it do little actual recycling, and instead generate hazardous pollutants while providing cover for oil giants to keep producing millions of tons of new plastic products each year.

The facility, which began large-scale operations in December of last year, is one of the largest chemical recycling plants in North America. Chemical recycling works by breaking down plastic polymers into small molecules in order to make new plastics, synthetic fuels and other products. Companies like ExxonMobil have rebranded the technology as “advanced recycling” and are now touting it as the latest hi-tech fix to address the plastic crisis, as traditional, mechanical recycling has failed to slow the tide of plastic piling up in landfills and the ocean.

ExxonMobil also says it’s planning to build chemical recycling plants at “many of its other manufacturing sites around the world”. Though it hasn’t committed specific dollar amounts to building new plants, the company is currently assessing locations in Louisiana, Illinois, Belgium, Singapore and elsewhere.

By the end of 2026, the oil giant hopes to have enough chemical recycling capacity to process roughly 450,000 metric tons of plastic each year.

But that’s a drop in the bucket compared with how much plastic ExxonMobil creates.

In 2021 alone, ExxonMobil churned out 6m tons of new single-use plastic, more than any other petrochemical company, according to a recent report by the philanthropic Minderoo Foundation. What’s more, recent research has shown that chemical recycling is worse for the environment than mechanical recycling in terms of greenhouse gas emissions and water use, and in some cases, worse than virgin plastic production. The process ExxonMobil’s Baytown plant uses, called pyrolysis, is often so inefficient that many environmental advocates say it should not be called recycling at all.


Smoke fills the air at the refinery in Baytown, Texas, in 2021. In that year alone, ExxonMobil churned out 6m tons of new single-use plastic. Photograph: Mark Mulligan/AP

Conventional mechanical recycling involves sorting different types of plastic into individual streams that are washed, shredded and melted down to make new products. During this process, the chemical makeup of the plastic remains unchanged, although contaminants can find their way in during the melting and cutting process and the end products have a weaker physical structure.

Chemical recycling relies on high heat, pressure or chemical catalysts like enzymes to break down plastic into its molecular building blocks. Those building blocks can then be used to make new products – including new plastics with the same physical structure as the original material.

The most commercially widespread chemical recycling technology today is pyrolysis, according to Taylor Uekert, a scientist at the National Renewable Energy Laboratory who studies plastic recycling.

Pyrolysis has significant environmental impacts. Plants that use it require large amounts of energy to operate: Uekert found that recycling a kilogram of high-density polyethylene plastic using pyrolysis requires nearly seven times the amount of energy needed to make a kilogram of virgin plastic. Typically, that energy comes from burning fossil fuels, which creates air pollution and planet-heating carbon emissions.

Pyrolysis operations can also consume large volumes of water, and they often generate hazardous waste. Overall, Uekert’s research found that the environmental impact of making recycled plastics with pyrolysis is 10 to 100 times greater than virgin plastic production.

In a pyrolysis plant, plastic is put in a reactor and subjected to high temperatures (ranging from 300 to 900C) and pressures in the absence of oxygen. This treatment transforms plastic into a synthetic form of crude oil which can be used as a replacement for fossil fuels or to create new plastics.

While pyrolysis is able to handle more types of plastic waste than some other chemical recycling technologies, Uekert said it is not typically considered “closed loop” recycling because the fuel it generates is often burned for energy – meaning it can’t be recycled again and again. Although pyrolysis is not the same as incineration, in which waste is burned in the presence of oxygen, environmental advocates often liken pyrolysis to incineration since the end products tend to go up in smoke one way or the other.

Chemical recycling “is a way for the industry to continue to expand its plastic production and assuage people’s concerns about plastic waste”, said Veena Singla, a senior scientist at the Natural Resources Defense Council who has analyzed chemical recycling facilities around the US. “They’re trying to put a pretty bow on it.”

ExxonMobil’s Baytown recycling plant uses the firm’s proprietary “Exxtend” technology, a pyrolysis-based approach, according to company statements. Reached for comment, an ExxonMobil spokesperson, Julie King, told the Guardian that this process “complements traditional mechanical recycling” by turning hard-to-recycle plastics into raw materials which can be used to make new plastics for food packaging, medical equipment and personal hygiene products.

King declined to respond to criticisms about the negative environmental impacts of pyrolysis or answer questions about how much pollution the Baytown recycling plant generates. She also did not confirm the exact name or location of the plant: when asked for any identifying information that could be used to look up its state and federal permits, King simply said that ExxonMobil reports emissions to the US Environmental Protection Agency (EPA) and the state of Texas in a “consistent and timely manner in accordance with all laws, regulations and permits”. King also offered that a third-party analysis by the environmental consulting firm Sphera found that every ton of plastic waste fed through ExxonMobil’s chemical recycling process generates 19 to 49% fewer greenhouse gas emissions than processing the same amount of crude-based feedstocks. (ExxonMobil did not share a copy of the report, and Sphera didn’t answer emails.)

She also declined to say how much of the plastic waste fed into the plant would be used to make recycled plastic versus synthetic fuel. An internal analysis shared with the Guardian by the Minderoo Foundation found that if ExxonMobil’s Baytown plant had yields typical of pyrolysis plants, only 23% of the fuel it generates would be used to produce new plastics. The rest would go to other non-plastic applications, like fuel for transportation.
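
To put those figures side by side, here is a rough back-of-the-envelope sketch using only the numbers quoted in this article. It simplifies by applying Minderoo's 23% estimate directly to the tonnage of waste processed, so the output is an illustration of scale, not a figure reported by ExxonMobil or Minderoo.

```python
# Rough scale comparison using figures quoted above. Applying the 23% share
# directly to input tonnage is a simplification of real pyrolysis yields.
baytown_capacity_t = 36_000            # tonnes of plastic waste Baytown can process per year
planned_capacity_2026_t = 450_000      # tonnes/year ExxonMobil hopes to process by end of 2026
plastic_share = 0.23                   # Minderoo's estimate of output going back into new plastics
single_use_output_2021_t = 6_000_000   # tonnes of new single-use plastic ExxonMobil made in 2021

for label, capacity in [("Baytown today", baytown_capacity_t),
                        ("Planned by end of 2026", planned_capacity_2026_t)]:
    back_to_plastics = capacity * plastic_share
    share_of_output = back_to_plastics / single_use_output_2021_t
    print(f"{label}: ~{back_to_plastics:,.0f} tonnes/year back into plastics "
          f"({share_of_output:.2%} of 2021 single-use production)")
```

On those assumptions, even the 2026 target would return on the order of 100,000 tonnes a year to new plastics, a small fraction of the company's 2021 single-use output.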

Chemical recycling is “deflecting attention away from what we need, which is reducing single-use plastics and a global treaty on plastic waste”, said Phaedra Pezzullo, a professor at the University of Colorado, Boulder who has a book forthcoming on plastics and environmental justice.

In an undercover investigation in 2021, Unearthed caught the ExxonMobil lobbyist Keith McCoy on video explaining how the firm uses recycling – including the Baytown plant – to shift the conversation around how to deal with plastic away from reducing consumption. (ExxonMobil’s CEO, Darren Woods, later said that McCoy’s comments “in no way represent the company’s position on a variety of issues”, and the oil company has since parted ways with McCoy.)

As the petrochemical industry forges ahead with chemical recycling, the same low-income communities and communities of color that bear the burden of plastic manufacturing are seeing these plants pop up in their backyards.

Of the eight chemical recycling facilities operating in the US in 2021, six are located in disproportionately Black and brown communities, according to a report by Singla. Five are in areas with a large number of households living on less than $25,000 a year.


The refinery in Baytown, Texas. Nearly 20% of the city’s predominantly white, working-class residents live in poverty, with a per-capita income of just $25,000. Photograph: Mark Mulligan/AP

The Baytown plant wasn’t included in Singla’s analysis, which only included facilities for which data had been reported to the EPA or state permits were available as of August 2021. But the city, already a hub of petrochemical production, fits the pattern she identified: nearly 20% of its predominantly white, working-class residents live in poverty, with a per-capita income of just $25,000.

ExxonMobil’s Baytown complex – which includes the third largest oil refinery in the US and a plant that manufactures 2.3m metric tons of plastic a year – is a major contributor to regional air and water pollution. It also has a long history of emitting chemicals above its permit limits, including the carcinogenic compound benzene. In recent years, ExxonMobil’s Baytown complex has been the site of fires and explosions that have injured workers and triggered shelter-in-place orders for nearby residents.

“Exxon has a terrible track record of polluting the Baytown community,” Luke Metzger, the executive director of Environment Texas, told the Guardian. “This false ‘chemical recycling’ will only produce more toxic misery for Baytown.”

Russian embassy shares AI-generated image of Julian Assange in prison

Aude Dejaifve
Mon, 10 April 2023 

© Observers

The Russian Embassy in Kenya shared an image on Twitter on April 6 showing an exhausted-looking Julian Assange, the founder of WikiLeaks, who has been incarcerated in the United Kingdom since 2019. However, it turns out that one of his supporters generated the image using artificial intelligence.

If you only have a minute:

The Russian Embassy in Kenya tweeted an image of a weak and ill-looking Julian Assange in a bid to complain about the embattled Wikileaks founder’s conditions of imprisonment in the UK.

However, we noticed some aspects of the image that made us suspect that it wasn’t an actual photo but rather a fake image.

It turns out that a Twitter account that supports Assange used artificial intelligence to generate the image in order to raise awareness about his imprisonment.

The fact-check, in detail

The same image was shared on March 31 in a tweet in French, which has since garnered more than 191,000 views.


Artist uses AI to reimagine world’s most famous billionaires if they were born poor

Namita Singh
Mon, 10 April 2023 

An Indian artist used artificial intelligence to create portraits of famous billionaires – from Elon Musk to Mukesh Ambani – imagining the world’s wealthiest people as poor.

Digital artist Gokul Pillai used artificial intelligence programme Midjourney to create portraits of Bill Gates, Mark Zuckerberg, Warren Buffett, Jeff Bezos, Elon Musk, and Mukesh Ambani as “slumdog billionaires”.

His portrayal of the Microsoft co-founder as a lanky old man standing bare-chested outside a shanty, wearing nothing but a grey loincloth, was liked more than 10,000 times.

Indian billionaire Ambani, in this alternative reality, is not dressed in his tweed suit but instead wears an old, seemingly unwashed and oversized T-shirt and blue trousers as he stands next to a garbage dump.

His post also featured the AI-generated image of Meta chief Zuckerberg in a dusty T-shirt and blue shorts standing in the middle of a slum, while American businessman Buffett is seen wearing a slightly dirty white T-shirt tucked into unzipped trousers.

“This is epic,” wrote a user.

“Warren Buffet is looking rich here as well,” said another.

Earlier this month, AI-generated images of Mr Zuckerberg walking the ramp in flashy Louis Vuitton clothes flooded social media.



In March, AI enthusiast Jyo John Mulloor posted a series of images on Instagram portraying Game of Thrones characters Daenerys Targaryen, Jon Snow, and Arya Stark in royal Indian attire.

"If George RR Martin has hired an Indian costume designer for Game of Thrones (sic)," he captioned.

An AI-generated image of Pope Francis looking stylish in a large white puffer coat also went viral on social media last month, leaving viewers shocked.

Many social media users, including model Chrissy Teigen, expressed confusion over whether or not the fashionable photograph was real.

“I thought the Pope’s puffer jacket was real and didn’t give it a second thought. No way am I surviving the future of technology,” wrote Teigen.

Another AI-generated image showing Mr Trump wearing an orange prison jumpsuit went viral amid the 34 felony charges brought against him for allegedly falsifying business records related to a hush money payment to adult film star Stormy Daniels in the run-up to the 2016 presidential election.

Fake images have sparked concern among lawmakers and experts who fear they could spread harmful disinformation.



Why a fake Pope picture could herald the end of humanity
Matthew Field
Mon, April 10, 2023 

AI-generated fake image of the Pope in white puffer jacket fooled the internet - Pablo Xavier

For a moment the internet was fooled. An image of Pope Francis in a gleaming white, papal puffer jacket spread like wildfire across the web.

Yet the likeness of the unusually dapper 86-year-old head of the Vatican was a fake. The phoney picture had been created as a joke using artificial intelligence technology, but was realistic enough to trick the untrained eye.

AI fakes are quickly spreading across social media as the popularity of machine-learning tools surges. As well as invented images, chatbot-based AI tools such as OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Bing have been accused of creating a new avenue for misinformation and fake news.

These bots, trained on billions of pages of articles and millions of books, can provide convincing-sounding, human-like responses, but often make up facts - a phenomenon known in the industry as hallucinating. Some AI models have even been taught to code, unleashing the possibility they could be used for cyber attacks.

On top of fears about false news and “deep fake” images, a growing number of future gazers are concerned that AI is turning into an existential threat to humanity.

Scientists at Microsoft last month went so far as to claim one algorithm, GPT-4, had “sparks of… human-level intelligence”. Sam Altman, the creator of OpenAI, the US start-up behind the ChatGPT technology, admitted in a recent interview: “We are a little bit scared of this”.

Now, a backlash against so-called “generative AI” is brewing as Silicon Valley heavyweights clash over the risks, and potentially infinite rewards, of this new technological wave.

A fortnight ago, more than 3,000 researchers, scientists and entrepreneurs including Elon Musk penned an open letter demanding a six-month “pause” on the training of AI systems more powerful than OpenAI’s most advanced chatbot technology, a so-called “large language model”, or LLM.

Elon Musk - Taylor Hill/Getty Images

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the scientists wrote. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Musk and others fear the destructive potential of AI: that an all-powerful “artificial general intelligence” could pose profound dangers to humanity.

But their demand for a six-month ban on developing more advanced AI models has been met with scepticism. Yann LeCun, the top AI expert at Facebook-parent company Meta, compared the attempt to the Catholic church trying to ban the printing press.

“Imagine what could happen if the commoners get access to books,” he said on Twitter.

"They could read the Bible for themselves and society would be destroyed.”

Others have pointed out that several of the letter's signatories have their own agenda. Musk, who has openly clashed with OpenAI, is looking to develop his own rival project. At least two signatories were researchers from DeepMind, which is owned by Google and working on its own AI bots. AI sceptics have warned the letter buys into the hype around the latest technology, with statements such as “should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

Many of the AI tools currently in development are effectively black boxes, with little public information on how they actually work. Despite this, they are already being incorporated into hundreds of businesses. OpenAI’s ChatGPT, for instance, is being harnessed by the payments company Stripe, Morgan Stanley bank and buy-now-pay-later company Klarna.

Richard Robinson, founder of legal start-up RobinAI, which is working with $4bn AI company Anthropic, another OpenAI rival, says even the builders of large language models don’t fully understand how they work. However, he adds: "I also think that there’s a real risk regulators will overreact to these developments.”

Government watchdogs are already limbering up for a fight with AI companies over privacy and data worries. In Europe, Italy has threatened to ban ChatGPT over claims it has scraped information from across the web with little regard for consumers' rights.

Italy’s privacy watchdog said the bot had “no age verification system” to stop access by under 18s. Under European data rules, the regulator can impose fines of up to €20m (£18m), or 4pc of OpenAI’s turnover, unless it changes its data practices.

In response, OpenAI said it had stopped offering ChatGPT in Italy. “We believe we offer ChatGPT in compliance with GDPR and other privacy laws,” the company said.

France, Ireland and Germany are all examining similar regulatory crackdowns. In Britain, the Information Commissioner’s Office said: “There really can be no excuse for getting the privacy implications of generative AI wrong.”

However, while the privacy watchdog has raised red flags, so far the UK has not gone as far as to threaten to ban ChatGPT. In an AI regulation white paper published earlier this month, the Government decided against a formal AI regulator. Edward Machin, a lawyer at the firm Ropes & Gray, says: “The UK is striking its own path, it is taking a much lighter approach.”

Several AI experts told The Telegraph that the real concerns about ChatGPT, Bard and others were less about the long-term consequences of some kind of killer, all-powerful AI than about the damage the technology could do in the here and now.

Juan José López Murphy, head of AI and data science at tech company Globant, says there are near-term issues with helping people spot deep fakes or false information generated by chatbots. “That technology is already here… it is about how we misuse it,” he says.

“Training ChatGPT on the whole internet is potentially dangerous due to the biases of the internet,” says computer expert Dame Wendy Hall. She suggests calls for a moratorium on development would likely be ineffective, since China is rapidly developing its own tools.

OpenAI appears alive to the possibility of a crackdown. On Friday, it posted a blog which said: “We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted.”

Marc Warner, of UK-based Faculty AI, which is working with OpenAI in Europe, says regulators will still need to plan for the possibility that a super-powerful AI may be on the horizon.

“It seems general artificial intelligence might be coming sooner than many expect,” he says, urging labs to stop the rat race and collaborate on safety.

“We have to be aware of what could happen in the future, we need to think about regulation now so we don’t develop a monster,” Dame Wendy says.

“It doesn’t need to be that scary at the moment… That future is still a long way away, I think.”

Fake images of the Pope might seem a long way from world domination. But, if you believe the experts, the gap is starting to shrink.

Fact check: How can I spot AI-generated images?


Joscha Weber | Kathrin Wesolowski | Thomas Sparrow
DW
April 9, 2023

Midjourney, DALL-E, DeepAI — images created with artificial intelligence tools are flooding social media. Some carry the risk of spreading false information. Which images are real and which are not? Here are a few tips.

https://p.dw.com/p/4PnBK

It has never been easier to create images that look shockingly realistic but are actually fake.

Anyone with an internet connection and access to a tool that uses artificial intelligence (AI) can create photorealistic images within seconds, and they can then spread them on social networks at breakneck speed.

In the last few days, many of these images have gone viral: Vladimir Putin apparently being arrested, or Elon Musk holding hands with General Motors CEO Mary Barra, to name just two examples.

The problem is that both AI images show events that never happened. Even photographers have published portraits that turn out to be images created with artificial intelligence.

And while some of these images may be funny, they can also pose real dangers in terms of disinformation and propaganda, according to experts consulted by DW.

This AI-generated viral photo purports to show Elon Musk with GM CEO Mary Barra. It is fake


An earthquake that never happened

Pictures showing the arrest of politicians like Russian President Vladimir Putin or former US President Donald Trump can be verified fairly quickly by users if they check reputable media sources.

Other images are more difficult to verify, such as those in which the people in the picture are not so well known, AI expert Henry Ajder told DW.

One example: a German member of Parliament for the far-right AfD party spread an AI-generated image of screaming men on his Instagram account in order to show he was against the arrival of refugees.

And it's not just AI-generated images of people that can spread disinformation, according to Ajder.

He says there have been examples of users creating events that never happened.

This was the case with a severe earthquake that is said to have shaken the Pacific Northwest of the United States and Canada in 2001.

But this earthquake never happened, and the images shared on Reddit were AI-generated.

And this can be a problem, according to Ajder. "If you're generating a landscape scene as opposed to a picture of a human being, it might be harder to spot," he explains.

However, AI tools do make mistakes, even if they are evolving rapidly. As of April 2023, programs like Midjourney, Dall-E and DeepAI have their glitches, especially with images that show people.

DW’s fact-checking team has compiled some suggestions that can help you gauge whether an image is fake. But one initial word of caution: AI tools are developing so rapidly that these tips only reflect the current state of affairs.

1. Zoom in and look carefully


Many images generated by AI look real at first glance.

That's why our first suggestion is to look closely at the picture. To do this, search for the image in the highest-possible resolution and then zoom in on the details.

Enlarging the picture will reveal inconsistencies and errors that may have gone undetected at first glance.

2. Find the image source


If you are unsure whether an image is real or generated by AI, try to find its source.

You may be able to see some information on where the image was first posted by reading comments published by other users below the picture.

Or you may carry out a reverse image search. To do this, upload the image to tools like Google Image Reverse Search, TinEye, or Yandex, and you may find the original source of the image.

The results of these searches may also show links to fact checks done by reputable media outlets which provide further context.
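
If you check suspect images regularly, a small helper script can save some clicks. The sketch below builds reverse-search links for the services mentioned above; the query-parameter formats are assumptions about how these sites currently accept a "search by image URL" request and may change at any time, so treat this as a convenience, not an official API.

```python
# Convenience sketch: build reverse-image-search links for a publicly hosted image.
# The URL patterns are assumptions about each service's "search by URL" page and
# may change; none of these are official, documented APIs.
import webbrowser
from urllib.parse import quote

def reverse_search_urls(image_url: str) -> dict[str, str]:
    encoded = quote(image_url, safe="")
    return {
        "Google Lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "TinEye": f"https://tineye.com/search?url={encoded}",
        "Yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
    }

if __name__ == "__main__":
    image_url = "https://example.com/suspect-image.jpg"  # replace with the image you want to check
    for service, url in reverse_search_urls(image_url).items():
        print(f"{service}: {url}")
        webbrowser.open(url)  # opens each search page in the default browser
```

For images that only exist on your own device, uploading them directly through each service's web page remains the simpler route.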

Putin is supposed to have knelt down in front of Xi Jinping, but a closer look shows that the picture is fake
Image: Twitter/DW

3. Pay attention to body proportions


Do the depicted people have correct body proportions?

It is not uncommon for AI-generated images to show discrepancies when it comes to proportions. Hands may be too small or fingers too long. Or the head and feet do not match the rest of the body.

This is the case with the picture above, in which Putin is supposed to have knelt down in front of Xi Jinping. The kneeling person’s shoe is disproportionately large and wide. The calf appears elongated. The half-covered head is also very large and does not match the rest of the body in proportion.

More about this fake in our dedicated fact check.

4. Watch out for typical AI errors

Hands are currently the main source of errors in AI image programs like Midjourney or DALL-E.

People frequently have a sixth finger, such as the policeman to Putin's left in our picture at the very top.

The same goes for these pictures of Pope Francis, which you’ve probably seen.



But did you realize that Pope Francis seems to only have four fingers in the right picture? And did you notice that his fingers on the left are unusually long? These photos are fake.

Other common errors in AI-generated images include people with far too many teeth, or glasses frames that are oddly deformed, or ears that have unrealistic shapes, such as in the aforementioned fake image of Xi and Putin.

Surfaces that reflect, such as helmet visors, also cause problems for AI programs, sometimes appearing to disintegrate, as in the alleged Putin arrest.

AI expert Henry Ajder warns, however, that newer versions of programs like Midjourney are becoming better at generating hands, which means that users won’t be able to rely much longer on spotting these kinds of mistakes.

5. Does the image look artificial and smoothed out?


The app Midjourney in particular creates many images that seem too good to be true.

Follow your gut feeling here: Can such a perfect image with flawless people really be real?

"The faces are too pure, the textiles that are shown are also too harmonious," Andreas Dengel of the German Research Center for AI told DW.

People’s skin in many AI images is often smooth and free of any irritation, and even their hair and teeth are flawless. This is usually not the case in real life.



Many images also have an artistic, shiny, glittery look that even professional photographers have difficulty achieving in studio photography.

AI tools often seem to design ideal images that are supposed to be perfect and please as many people as possible.


6. Examine the background

The background of an image can often reveal whether it was manipulated.



Here, too, objects can appear deformed; for example, street lamps.

In a few cases, AI programs clone people and objects and use them twice. And it is not uncommon for the background of AI images to be blurred.

But even this blurring can contain errors. Like the example above, which purports to show an angry Will Smith at the Oscars. The background is not merely out of focus but appears artificially blurred.

Conclusion

Many AI-generated images can currently still be debunked with a little research. But technology is getting better and mistakes are likely to become rarer in the future. Can AI detectors like Hugging Face help us detect manipulation?

Based on our findings, detectors provide clues, but nothing more.

The experts we interviewed tend to advise against their use, saying the tools are not developed enough. Even genuine photos are declared fake and vice versa.

Therefore, in case of doubt, the best thing users can do to distinguish real events from fakes is to use their common sense, rely on reputable media and avoid sharing the pictures.




Joscha Weber, editor and fact-checker focusing on separating facts from fiction and uncovering disinformation.