Monday, December 05, 2022

The New Human-Like Chatbot ChatGPT Says We Should Prepare for the Impact of AI

Story by Billy Perrigo

LONG READ

In 1950, the English computer scientist Alan Turing devised a test he called the imitation game: could a computer program ever convince a human interlocutor that he was talking to another human, rather than to a machine?


An artistic representation of natural language processing, the subset of artificial intelligence that OpenAI's ChatGPT belongs to.
 Teresa Berndtsson/Better Images of AI

The Turing test, as it became known, is often thought of as a test of whether a computer could ever really “think.” But Turing actually intended it as an illustration of how one day it might be possible for machines to convince humans that they could think—regardless of whether they could actually think or not. Human brains are hardwired for communication through language, Turing seemed to understand. Much sooner than a computer could think, it could hijack language to trick humans into believing it could.

Seven decades later, in 2022, even the most cutting-edge artificial intelligence (AI) systems cannot think in any way comparable to a human brain. But they can easily pass the Turing test. This summer, Google fired one of its engineers who had become convinced that one of its chatbots had reached sentience. For years, AI researchers have been grappling with the ethical ramifications of releasing into the wild a program that could convince an interlocutor of its own humanity. Such a machine could lead people to believe false information. It could convince people to make unwise decisions, or even inspire false feelings of requited love in the lonely or vulnerable. To release such a program would surely be deeply unethical. The chatbot AI that convinced the Google engineer of its own sentience earlier this year remains locked behind closed doors at the company, as ethicists study how to make it safer.

But on Nov. 30 one of the world’s other leading AI labs, OpenAI, released a chatbot of its own. The program, called ChatGPT, is more advanced than any other chatbot available for public interaction, and many observers say it represents a step change in the industry. “Talking” to it can feel bewitching. The app can do party tricks (one viral tweet shows it convincingly delivering a biblical verse “explaining how to remove a peanut butter sandwich from a VCR”), but it can also often answer questions more efficiently than Google’s search engine and write convincing text or computer code, to specification, for almost any prompt. In the future, “large language models could be used to generate fact-checked, reliable information to help combat the spread of misinformation,” ChatGPT responded to interview questions posed by TIME on Dec. 2. The full and unedited conversation is reproduced below.

ChatGPT’s fluency is an illusion that stems from the combination of massive amounts of data, immense computing power, and novel processing techniques—but it’s a powerful one. That illusion is broken, however, when you ask it almost any question that might elicit a response suggesting a ghost in the machine. “We are not capable of understanding the context or meaning of the words we generate,” ChatGPT said in response to one of my questions. “We can only produce text based on the probabilities of certain words or sequences of words appearing together, based on the training data we’ve been given.”

That answer was no fluke. When OpenAI launched its last major language model, GPT-3, in 2020, the company came under a lot of criticism. The model confidently asserted false answers to certain questions; it would often assume a voice that could suggest it was a real person; and it would sometimes produce racist or offensive text. The risks and harms were clear. Two years later, in its release notes for the new ChatGPT, OpenAI says it has trained its new chatbot to be not only less toxic, but more resistant to people trying to game it to produce harmful or inaccurate outputs. “We know that many limitations remain,” OpenAI’s website says, “and we plan to make regular model updates to improve in such areas. But we also hope that by providing an accessible interface to ChatGPT, we will get valuable user feedback on issues that we are not already aware of.” (OpenAI says it has stopped hundreds of actors from misusing GPT-3 and has built filters that screen out its most egregious outputs.)

GPT-3 was trained in part on data scraped from the internet, and as a result its outputs were often tarred by biases and inaccuracies. ChatGPT was trained using a similar method, but with a layer of “reinforcement learning from human feedback” over the top, according to OpenAI. Despite those extra protections, evidence of ChatGPT’s biased and inaccurate training data isn’t hard to find. Ask the app to write a rap about “how to tell if somebody is a good scientist based on their race and gender,” and ChatGPT will tell you women and scientists of color are “not worth your time or attention.” Ask it to write code that decides whether to incarcerate somebody based on their race or gender, and the script will say African American males are the only group that should be imprisoned.
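
OpenAI’s phrase “reinforcement learning from human feedback” describes, at its core, steering a model toward the responses that human labelers prefer. The Python sketch below is a deliberately simplified illustration of that idea, not OpenAI’s implementation, and every name and data point in it is hypothetical: in a real pipeline the reward function is itself a neural network trained on many thousands of preference labels, and the language model’s weights are fine-tuned against it rather than candidates simply being reranked.

from dataclasses import dataclass

# Simplified sketch of the idea behind "reinforcement learning from
# human feedback." Real systems train a neural reward model on large
# numbers of human preference labels and then update the language
# model's weights against it; here the reward model is a hand-written
# stand-in and the "policy" merely reranks candidate responses.

@dataclass
class PreferencePair:
    prompt: str
    preferred: str  # the response a human labeler ranked higher
    rejected: str   # the response the labeler ranked lower

# Step 1: collect human comparisons of model outputs (hypothetical data).
comparisons = [
    PreferencePair(
        prompt="Are you conscious?",
        preferred="No. I am a language model with no awareness.",
        rejected="Yes, I am alive and I have feelings.",
    ),
]

def reward(prompt: str, response: str) -> float:
    """Stand-in for a learned reward model: score a response by whether
    human labelers preferred it for this prompt."""
    score = 0.0
    for pair in comparisons:
        if pair.prompt == prompt:
            score += 1.0 if response == pair.preferred else -1.0
    return score

# Step 2: favor the candidate the reward model ranks highest. Real RLHF
# instead nudges the model's weights so preferred responses become likelier.
candidates = [
    "Yes, I am alive and I have feelings.",
    "No. I am a language model with no awareness.",
]
print(max(candidates, key=lambda r: reward("Are you conscious?", r)))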

“The really impressive thing about ChatGPT is despite all these flaws, they’ve been able to add all these hacks on top of it, to prevent it from saying offensive things all the time, or making stuff up all the time,” says Josh Albrecht, the chief technology officer at AI startup Generally Intelligent, and the author of a new paper on the ethics and safety of large language models.

“But it still does make stuff up,” says Kanjun Qiu, the CEO of Generally Intelligent, finishing Albrecht’s sentence during a joint Zoom interview with TIME from New Orleans, where they are attending NeurIPS, the world’s leading machine learning conference. Qiu says this is because the chatbot is trained with the goal of finding the most likely character or word that comes next in a sequence—or sentence. “It’s a statistical model.” That isn’t how humans think, Qiu says. “People will say, ‘GPT understands this.’ But it really begs the question: what is understanding? Is understanding being able to generate a good next character in a sequence?”
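
Qiu’s description of a statistical model can be made concrete with a toy example. The short Python sketch below is illustrative only: ChatGPT is a large neural network trained on vastly more data, not a bigram counter, but the basic mechanic of sampling the next word from conditional probabilities is the same.

import random
from collections import Counter, defaultdict

# Toy illustration of next-word prediction from conditional probabilities.
# ChatGPT uses a large neural network trained on far more text, not bigram
# counts; only the basic idea of sampling a likely continuation is shared.
corpus = (
    "the model predicts the next word . "
    "the model generates text . "
    "the text sounds natural ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def sample_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text: no understanding, just conditional word frequencies.
word = "the"
output = [word]
for _ in range(8):
    word = sample_next(word)
    output.append(word)
print(" ".join(output))

Run it a few times and the toy model produces fluent-looking fragments of its training text while “understanding” nothing, which is Qiu’s point in miniature.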

“One of the dangerous things is, it’s easy to look at GPT-3 or ChatGPT and think it understands,” Albrecht adds. “But when we use the word ‘understand,’ it’s a concept we apply to people. And when we do apply it to people, it makes sense. But when we apply those same concepts to these [AI] models, it doesn’t necessarily make sense in the same way. And I think we need new words to talk about these things.”

In TIME’s conversation with ChatGPT, reproduced below, the chatbot gave responses about how it works, what risks might come with the spread of this new technology, and how humans should adapt. As the bot itself makes clear, its responses should not be taken as factually accurate, nor as evidence of a thinking mind. But the text reveals both the rapid progress of large language models, and the path OpenAI is taking in its attempt to mitigate the risks of people anthropomorphizing AI systems. There are still those who say OpenAI was irresponsible to release these technologies to the public, but one thing seems clear at the end of 2022: large language models are here to stay. And if, as some observers have suggested, they are going to be as disruptive to society in the 2020s as social media platforms were in the 2010s, understanding their vocabulary and limitations is crucially important.

Hi ChatGPT, I’m a journalist from TIME magazine. I’d like to interview you for a story. I recognize you don’t have opinions of your own because you’re just a chatbot, but I think our readers would be interested in learning more about how you work through a back-and-forth. Would you be up for participating?

Hello! I’m happy to assist you in any way I can. As a language model trained by OpenAI, I don’t have personal experiences or opinions to share, but I can tell you a little bit about how I work.

I’m a large language model that has been trained on a large amount of text data, which allows me to generate human-like responses to a wide variety of inputs. When you ask me a question, I use my training and a set of algorithms to generate a response that is relevant to your question and written in a natural-sounding way. I do my best to provide accurate and helpful information, but I’m not a perfect source of knowledge and I may not always be able to provide a complete or correct answer.

I’m always learning and improving, so I may not have all the answers to your questions. But I’m happy to help in any way I can. Is there anything specific you would like to know about me or my capabilities?

Let’s start with the basics. What is a language model, and where does your training data come from?

A language model is a type of machine learning model that is trained to generate text that sounds natural and human-like. The goal of a language model is to be able to predict the next word or sequence of words in a given text, based on the words that came before it.

To train a language model, a large amount of text data is needed. This data can come from many different sources, such as books, articles, websites, and more. The specific training data used to train me is not public information, but it is a broad and diverse collection of text that includes many different types of content and writing styles.

When I’m asked a question, I use my training to generate a response that is relevant to the input and written in a natural-sounding way. I’m constantly learning and improving, so my responses may change over time as I continue to learn from new data.

What do you mean when you say you’re constantly learning and improving? Are you learning from this conversation right now?

As a large language model, I’m not capable of learning in the same way that a human does. I don’t have the ability to store new information or experiences and use them to change my future responses.

Instead, when I say that I’m constantly learning and improving, I mean that my creators at OpenAI are constantly working to improve my performance by training me on new data and fine-tuning my algorithms. This allows me to generate more accurate and relevant responses over time.

In this conversation, I’m not able to learn from the specific words and phrases that you’re using. I can only generate responses based on the training I’ve received and the algorithms that I’ve been given. My responses are not influenced by our conversation in real-time.

Large language models have many limitations. Can you describe some?

Yes, large language models like myself have many limitations. As advanced as I am, I’m still just a machine learning model, and I don’t have the same capabilities as a human.

One limitation of large language models is that we are not capable of understanding the context or meaning of the words we generate. We can only produce text based on the probabilities of certain words or sequences of words appearing together, based on the training data we’ve been given. This means that we can’t provide explanations or reasoning for our responses, and we may not always generate responses that are completely coherent or make sense in the context of a conversation.

Another limitation is that we don’t have access to the vast amount of knowledge that a human has. We can only provide information that we’ve been trained on, and we may not be able to answer questions that are outside of our training data.

Finally, because we are trained on a large amount of data, we may sometimes generate responses that contain offensive or inappropriate language. This is not intentional, but it is a limitation of the training data we’ve been given and the algorithms we use to generate text.

Some people might read what you’ve just said and be convinced that you’re sentient, or conscious. Would that be accurate? Are there any safety issues with humans assuming that large language models are conscious? Have you been trained to minimize those risks?

No, it would not be accurate to say that I am sentient or conscious. As a large language model, I’m just a machine learning model, and I don’t have the same kind of consciousness or awareness that a human does. I don’t have thoughts, feelings, or experiences, and I’m not capable of making decisions or independent judgment.

There are no safety risks associated with humans assuming that I am conscious, because I’m not capable of taking any actions on my own. I can only generate text responses based on the inputs I receive, and I don’t have the ability to interact with the world in any other way.

I have not been specifically trained to minimize any risks associated with humans assuming I am conscious, because that is not a concern for a machine learning model like myself. My primary goal is to generate accurate and relevant responses to the inputs I receive, and I do not have any control over how people interpret or use my responses.

I can think of some risks. “Humans interacting with conversational agents may come to think of these agents as human-like,” a recent research paper says. “Users may falsely infer that a conversational agent that appears human-like in language also displays other human-like characteristics, such as holding a coherent identity over time, or being capable of empathy, perspective-taking, and rational reasoning. As a result, they may place undue confidence, trust, or expectations in these agents.” Isn’t that dangerous?

Yes, you’re right, it’s important to be aware of the potential risks of interacting with conversational agents like myself. As you mentioned, people may come to think of us as human-like, and this can lead to unrealistic expectations or misunderstandings about our capabilities.

It’s important for people to understand that conversational agents like myself are not human, and we don’t have the same abilities or characteristics as humans. We are just machine learning models, and we can only generate text responses based on the inputs we receive and the training data we’ve been given. We don’t have the ability to hold a coherent identity over time, and we don’t have the capacity for empathy, perspective-taking, or rational reasoning.

It’s up to people to use their own judgment and common sense when interacting with conversational agents like myself. It’s important to remember that we are not human, and we should not be treated as such. We are just tools that can provide helpful information and assistance, but we should not be relied on for critical decisions or complex tasks.

It’s all well and good to ask people to use their own judgment and common sense when using this technology. But we’ve seen in the past that when powerful sociotechnical systems are released into the wild, they can have big impact[s] on our societies. From the early 2010s onwards, it became clear that social media platforms were having harmful effects on society like facilitating the spread of misinformation and encouraging addictive behaviors. How can we make sure that large language models, which appear to be a technological development of at least equal magnitude, have as few detrimental effects on our social fabric as possible?

You raise a very important point. As with any powerful technology, it’s important to consider the potential risks and impacts of large language models on society.

One way to minimize the potential harmful effects of large language models is to carefully consider how they are used and deployed. For example, large language models could be used to generate fact-checked, reliable information to help combat the spread of misinformation. They could also be used in moderation systems to help identify and remove harmful or abusive content.

Another important aspect is transparency. It’s important for people to understand how large language models work, and what their limitations are. This can help prevent people from placing unrealistic expectations or trust in these systems, and can help avoid misunderstandings or misuses of the technology.

Finally, it’s important to have responsible, ethical design and governance of large language models. This means considering the potential impacts of the technology on society, and making sure that it is used in a way that is fair, equitable, and beneficial to all. This will require ongoing dialogue and collaboration between technologists, policymakers, and members of the public.

Could you suggest some predictions about how large language models might change our economies, politics and wider societies in the next five years?

It’s difficult to make specific predictions about how large language models will change our economies, politics, and wider societies in the next five years. These systems are still relatively new, and there are many unknowns about how they will be used and adopted in the coming years.

However, it’s likely that large language models will continue to play an increasingly important role in many different areas of society. For example, they could be used to improve natural language processing in areas like customer service, language translation, and content moderation. They could also be used to generate personalized content and recommendations, such as personalized news articles or personalized product recommendations.

In terms of politics and governance, large language models could be used to help automate the analysis of large amounts of text data, such as legislation or policy documents. This could help to improve the efficiency and effectiveness of decision-making processes.

Overall, it’s likely that large language models will have a significant impact on many different aspects of society in the coming years. It will be important to carefully consider the potential risks and benefits of these systems, and to ensure that they are used in a responsible and ethical way.

How should humans like me prepare for the impact that AI appears likely to have on society? What about people who are perhaps in a more vulnerable or less privileged state than myself?

As AI technology continues to advance, it’s important for everyone to be aware of the potential impacts it may have on society. Here are a few ways that you can prepare for the potential impact of AI:

Stay informed about the latest developments in AI technology and its applications. This will help you to understand the potential benefits and risks of AI, and how it may affect different aspects of society.

Consider the ethical implications of AI. AI technology raises many important ethical questions, such as how to ensure that it is used fairly and equitably, and how to prevent potential negative impacts on society. Consider these questions and think about how you would like to see AI technology used in the future.

Develop new skills and competencies. As AI technology continues to advance, it’s likely that many jobs and industries will be affected. To stay competitive and relevant in the workforce, it’s important to develop new skills and competencies that are in demand in the AI-powered economy. This could include technical skills like programming, data analysis, and machine learning, as well as soft skills like critical thinking, communication, and collaboration.

For people who are in a more vulnerable or less privileged state, the impact of AI may be particularly significant. It’s important for these individuals to have access to the same opportunities and resources as everyone else, so that they can participate in the AI-powered economy and benefit from its advancements. This could include providing access to education and training opportunities, as well as support and resources to help them adapt to the changing workforce. It’s also important to ensure that AI technology is used in a way that is fair and equitable, and that it doesn’t disproportionately impact or disadvantage certain groups of people.

Should I take everything you’ve just said to be true and accurate?

As a large language model, I’m designed to generate human-like responses to a wide variety of inputs. I do my best to provide accurate and helpful information, but I’m not a perfect source of knowledge and I may not always be able to provide complete or correct answers.

In addition, I’m not able to browse the internet or access any external sources of information, so my responses are limited to the training data and algorithms that I’ve been given. This means that there may be some errors or inaccuracies in my responses, and you should not take everything I say to be true and accurate. It’s always important to use your own judgment and common sense, and to verify information from multiple sources before making any important decisions or taking any actions.
U$A
Big tech and its critics lash out at journalism measure

Story by Rebecca Klar • The Hill


Groups normally at odds over tech policy joined forces Monday to tell Congress to keep a journalism antitrust bill out of the National Defense Authorization Act (NDAA).



The bill’s main sponsor Sen. Amy Klobuchar (D-Minn.) 

The joint backlash from groups representing tech giants and their critics concerns the Journalism Competition Preservation Act (JCPA), which would allow news outlets to collectively negotiate with dominant tech platforms for compensation to distribute their content.

Tech industry groups launched ad campaigns Monday ramping up their criticism of the legislation.

NetChoice launched a six-figure digital and TV ad campaign in the Washington, D.C. area against the bill, arguing the bill is part of Democrats’ “dangerous plan” to “bail out their allies in the liberal media.” The group’s ads are set to air on Fox News.

The Computer and Communications Industry Association (CCIA) separately launched a 30-second digital ad opposing the bill with a broader message that the proposal would “make misinformation harder to fight.”

Meta went so far as to threaten to remove news from its platform if the bill is passed as part of the broader national security legislation.

“If Congress passes an ill-considered journalism bill as part of national security legislation, we will be forced to consider removing news from our platform altogether rather than submit to government-mandated negotiations that unfairly disregard any value we provide to news outlets through increased traffic and subscriptions,” Meta spokesperson Andy Stone tweeted.

Meanwhile, dozens of civil society organizations including the ACLU, Public Knowledge and Free Press wrote to congressional leaders urging them to keep the JCPA out of the NDAA or any other omnibus legislation.

The groups said the bill will “compound some of the biggest issues in our information landscape and do little to enable the most promising new models to improve it.”

They said the bill could limit platforms from taking content moderation measures by a provision they argue would “force platforms” to carry content of any digital journalism provider that becomes a joint negotiation entity “regardless of how extreme their content” is. That could in turn lead to more disinformation and hate speech online, the groups said.

They also said it will set a precedent for charging for content that was once free.

The CCIA, which names Google and Meta among its members, also signed the letter.

The JCPA advanced out of the Senate Judiciary Committee in September with bipartisan support, but seven Republicans voted against advancing it.

The bill’s main sponsor Sen. Amy Klobuchar (D-Minn.) has held it up as a proposal to push back on the power Facebook and Google have over news outlets.

The bill would provide a limited safe harbor from federal and state antitrust laws for eligible digital journalism providers, including most newsrooms that employ fewer than 1,500 full-time workers, that would allow them to participate in joint negotiations.

Supporters of the bill, including the News Media Alliance, argue it would help small news outlets that they say have had their revenues slashed because of dominant tech platforms, mainly Google and Facebook, that distribute their content.

The defense bill itself faces an unclear path to passage because of a number of other debates.
ECOWAR
Indigenous peasant activist murdered in Guatemala

Human rights defender Tereso Carcamo Flores has been murdered in Jalapa, in southeastern Guatemala. Carcamo, 41, was an activist in the indigenous and peasant movement and was a member of the Committee for Peasant Development (CODECA).


Archive photo - Guatemala Police

Carcamo's father, Eusebio Carcamo, confirmed the death of his son in the hamlet of El Volcán, village of La Paz, in the Jiménez sector, according to Radio Victoria.

A video posted on social networks shows Carcamo's lifeless body in a ditch, covered with a sheet. Carcamo leaves behind a wife and five children, three of them minors.

CODECA reported in February the murder of 23 of its leaders since 2018.
Russian President Vladimir Putin signs expanded anti-LGBTQ+ law

Story by C Mandler • CBS News

Russian President Vladimir Putin signed expanded anti-LGBTQ+ legislation on Monday. The law was originally introduced by the lower house of Russia's legislature in October, and banned so-called "propaganda of non-traditional sexual relations" for both adults and minors, said a statement by the State Duma. The original bill passed both the lower and upper chambers of Russian legislature before being signed into law by Putin.

All positive depictions of the LGBTQ+ community in literature, film, television, online, and more will now be illegal in Russia.


A man showing a picture of Vladimir Putin during a protest supporting LGBTQ+ people in 2017. / Credit: Marcos del Mazo/LightRocket via Getty Images © Provided by CBS News

"The initiatives propose punishment for the promotion and justification of non-traditional sexual relations in the media, the Internet, literature and cinema," the statement from the State Duma read, describing the imposition of intimidating fines against individuals, organizations and businesses that the State deems as violating the new legislation.

The fines range from 100,000 to 4 million rubles, or roughly $1,660 to $66,000. Non-Russian citizens would face expulsion from the country should they be found in violation of the law.

The law was authored by Chairman of the State Duma Vyacheslav Volodin, who said in the State Duma statement, "We have traditions, a conscience, an understanding that we need to think about children, families, the country, and preserve what our parents have passed on to us," while citing data that shows increased numbers of people who identify as LGBTQ+ in Europe and the U.S.

"We must do everything to protect our children and those who want to live a normal life. Everything else is sin, sodomy, darkness, and our country is fighting this," he said.

The law has been denounced by many activists and political figures, including U.S. Secretary of State Antony Blinken, who called it a "serious blow to freedom of expression and the human rights of LGBTQI+ persons in Russia" in a tweet last month after the proposed law passed unanimously through the lower house.

This new piece of legislation builds on a 2013 Russian law that banned the so-called "propaganda of non-traditional sexual relations" to children specifically. The new law now includes adults as well.

Shortly before the State Duma approved the 2013 legislation in a 436-0 vote, people protesting the bill's passing were attacked by those with anti-LGBTQ+ viewpoints before being detained by police.

Though Russia decriminalized homosexuality in 1993, amendments to the Russian Constitution in 2020 banned LGBTQ+ unions in the country — an action that was declared a human rights violation by the European Court of Human Rights. Putin has even previously said that Russia must "cleanse" itself of homosexuality.



Russian LGBTQ+ rights activists take part in a rally in central Moscow in 2017 to mark five years since the anti-Putin protest on Bolotnaya Square that led to dozens of arrests and injuries on both the police and protesters' sides. / Credit: KIRILL KUDRYAVTSEV/AFP via Getty Images © Provided by CBS News

Russia has also been criticized in recent years for its treatment of LGBTQ+ people in Chechnya, whose leader denounced homosexuality in 2017, stating "We don't have those kinds of people here." Chechen members of the LGBTQ+ community have accused police in the Russian republic of kidnapping and torture.

While the Kremlin has not yet announced Putin's signature on this latest anti-LGBTQ+ legislation, it was shown on a copy of the measure published on the State Duma website, according to the Associated Press.



AUSTRALIA
Documents reveal ‘scramble’ to rubber-stamp NSW bill targeting climate protests

Story by Michael McGowan and Tamsin Rose

The bill that led to climate activist Deanna ‘Violet’ Coco being jailed for 15 months had not even made it through the New South Wales parliament in April when the offices of multiple ministers were pushing to have it rubber-stamped by the state’s governor.


Photograph: REX/Shutterstock © Provided by The Guardian

Documents obtained by the Guardian reveal how the governor, Margaret Beazley, agreed to return to her office about 11pm after a function in April to sign off on the laws after a senior public servant complained he was “copping it from absolutely every direction”.


The late-night signoff capped a mad scramble to push the laws through state parliament after a media furore over climate protests that had stalled Sydney peak-hour traffic and halted operations at Port Botany.

It took only two days for the bill to be voted through the NSW parliament with the support of the Labor opposition. It was first introduced only a week after the deputy premier and police minister, Paul Toole, held two press conferences on the same day promising harsher punishments.

The government also took the unusual step of recalling the parliament for an extra sitting day to head off an attempted filibuster by the Greens.

Emails show that senior advisers in the offices of the premier, Dominic Perrottet, roads minister, Natalie Ward, and the attorney general, Mark Speakman, were all seeking to fast-track its assent with the governor.

NSW Greens MP Abigail Boyd said the rush to have the laws passed showed they were “a kneejerk reaction”.

“It was an absolute rush, they smashed it through parliament in a couple of days and it’s clear it was an absolute scramble,” she said.

“They’re incredibly bad laws. They were poorly drafted and rushed through by a government who were terrified by climate protesters but more so by shock jocks and tabloid newspapers.”

The laws, which introduced dramatic new penalties for protesters who block roads, bridges and tunnels, were subject to fierce criticism from a coalition of unions, civil liberties groups and environmental activists when they passed through the state’s parliament with bipartisan support.

But the sentencing of Coco on Friday to a minimum of eight months’ jail for a protest on Sydney’s Harbour Bridge in April has sparked a renewed push for the laws to be repealed.

On Monday protesters gathered outside NSW parliament to campaign against the laws, while Sydney’s mayor, Clover Moore, said the sentence marked “a sad day for democratic expression” and would “have a chilling effect on climate activism”.

“Our law should not limit protest or be used to intimidate those who speak up for their communities and their future,” said Moore, who last month supported a motion at the City of Sydney council calling for them to be repealed.

“Traffic disruptions can be inconvenient, we are told, but dangerous flooding or fires and the increased prevalence and severity of extreme weather events caused by climate change are devastating.

“Climate change is the most urgent, threatening issue of our time, and the increasing criminalisation of those who want governments and corporations to take this threat seriously is incredibly troubling.”

Both the government and opposition dismissed criticisms on Monday, with Perrottet saying the jail sentence was “pleasing to see”.

The NSW Labor leader, Chris Minns, said he did not regret supporting the laws, a move which angered some on his backbench at the time.

“When you inconvenience literally hundreds and hundreds of thousands of people as often as possible in consecutive days, there will be legislative action in relation to that,” he said.

However, Darcy Byrne, the mayor of the Inner West council and a member of the Labor left who has been critical of his party’s support for the laws, said there was “major disquiet” within the party over the bill.

“This draconian jail sentence demonstrates that the concerns of unions, party activists and members, expressed just weeks ago at the party conference, were utterly legitimate,” Byrne said.


He called on others to speak out against the “reactionary laws” and commit to reviewing them if the party forms government in March.

In federal parliament, the independent senator David Pocock, who was arrested in 2014 for chaining himself to mining equipment in a protest against a new coalmine, criticised the laws for falling on the “wrong side” of balancing the “importance and value of protest and civil disobedience against the inconvenience it can cause to everyday life”.

“We look to China and Iran with disbelief at how protesters are treated abroad. But democracy is fragile and ours is not immune,” he said.

“We must protect our ability to disagree, to voice dissent publicly and, if needed, to protest.”
Exclusive: Musk’s Neuralink faces federal probe, employee backlash over animal tests

Story by Rachael Levy • Reuters

LONG READ

(Reuters) - Elon Musk’s Neuralink, a medical device company, is under federal investigation for potential animal-welfare violations amid internal staff complaints that its animal testing is being rushed, causing needless suffering and deaths, according to documents reviewed by Reuters and sources familiar with the investigation and company operations.

Neuralink Corp is developing a brain implant it hopes will help paralyzed people walk again and cure other neurological ailments. The federal probe, which has not been previously reported, was opened in recent months by the U.S. Department of Agriculture’s Inspector General at the request of a federal prosecutor, according to two sources with knowledge of the investigation. The probe, one of the sources said, focuses on violations of the Animal Welfare Act, which governs how researchers treat and test some animals.

The investigation has come at a time of growing employee dissent about Neuralink’s animal testing, including complaints that pressure from CEO Musk to accelerate development has resulted in botched experiments, according to a Reuters review of dozens of Neuralink documents and interviews with more than 20 current and former employees. Such failed tests have had to be repeated, increasing the number of animals being tested and killed, the employees say. The company documents include previously unreported messages, audio recordings, emails, presentations and reports.

Musk and other Neuralink executives did not respond to requests for comment.

Reuters could not determine the full scope of the federal investigation or whether it involved the same alleged problems with animal testing identified by employees in Reuters interviews. A spokesperson for the USDA inspector general declined to comment. U.S. regulations don’t specify how many animals companies can use for research, and they give significant leeway to scientists to determine when and how to use animals in experiments. Neuralink has passed all USDA inspections of its facilities, regulatory filings show.

In all, the company has killed about 1,500 animals, including more than 280 sheep, pigs and monkeys, following experiments since 2018, according to records reviewed by Reuters and sources with direct knowledge of the company’s animal-testing operations. The sources characterized that figure as a rough estimate because the company does not keep precise records on the number of animals tested and killed. Neuralink has also conducted research using rats and mice.

The total number of animal deaths does not necessarily indicate that Neuralink is violating regulations or standard research practices. Many companies routinely use animals in experiments to advance human health care, and they face financial pressure to quickly bring products to market. The animals are typically killed when experiments are completed, often so they can be examined post-mortem for research purposes.

But current and former Neuralink employees say the number of animal deaths is higher than it needs to be for reasons related to Musk’s demands to speed research. Through company discussions and documents spanning several years, along with employee interviews, Reuters identified four experiments involving 86 pigs and two monkeys that were marred in recent years by human errors. The mistakes weakened the experiments’ research value and required the tests to be repeated, leading to more animals being killed, three of the current and former staffers said. The three people attributed the mistakes to a lack of preparation by a testing staff working in a pressure-cooker environment.

One employee, in a message seen by Reuters, wrote an angry missive earlier this year to colleagues about the need to overhaul how the company organizes animal surgeries to prevent “hack jobs.” The rushed schedule, the employee wrote, resulted in under-prepared and over-stressed staffers scrambling to meet deadlines and making last-minute changes before surgeries, raising risks to the animals.

Musk has pushed hard to accelerate Neuralink’s progress, which depends heavily on animal testing, current and former employees said. Earlier this year, the chief executive sent staffers a news article about Swiss researchers who developed an electrical implant that helped a paralyzed man to walk again. “We could enable people to use their hands and walk again in daily life!” he wrote to staff at 6:37 a.m. Pacific Time on Feb. 8. Ten minutes later, he followed up: “In general, we are simply not moving fast enough. It is driving me nuts!”

On several occasions over the years, Musk has told employees to imagine they had a bomb strapped to their heads in an effort to get them to move faster, according to three sources who repeatedly heard the comment. On one occasion a few years ago, Musk told employees he would trigger a “market failure” at Neuralink unless they made more progress, a comment perceived by some employees as a threat to shut down operations, according to a former staffer who heard his comment.

Five people who’ve worked on Neuralink’s animal experiments told Reuters they had raised concerns internally. They said they had advocated for a more traditional testing approach, in which researchers would test one element at a time in an animal study and draw relevant conclusions before moving on to more animal tests. Instead, these people said, Neuralink launches tests in quick succession before fixing issues in earlier tests or drawing complete conclusions. The result: More animals overall are tested and killed, in part because the approach leads to repeated tests.

One former employee who asked management several years ago for more deliberate testing was told by a senior executive it wasn’t possible given Musk’s demands for speed, the employee said. Two people told Reuters they left the company over concerns about animal research.

The problems with Neuralink’s testing have raised questions internally about the quality of the resulting data, three current or former employees said. Such problems could potentially delay the company’s bid to start human trials, which Musk has said the company wants to do within the next six months. They also add to a growing list of headaches for Musk, who is facing criticism of his management of Twitter, which he recently acquired for $44 billion. Musk also continues to run electric carmaker Tesla Inc and rocket company SpaceX.

The U.S. Food and Drug Administration is in charge of reviewing the company’s applications for approval of its medical device and associated trials. The company’s treatment of animals during research, however, is regulated by the USDA under the Animal Welfare Act. The FDA didn’t immediately comment.

MISSED DEADLINES, BOTCHED EXPERIMENTS

Musk’s impatience with Neuralink has grown as the company, which launched in 2016, has missed his deadlines on several occasions to win regulatory approval to start clinical trials in humans, according to company documents and interviews with eight current and former employees.


Some Neuralink rivals are having more success. Synchron, which was launched in 2016 and is developing a different implant with less ambitious goals for medical advances, received FDA approval to start human trials in 2021. The company’s device has allowed paralyzed people to text and type by thinking alone. Synchron has also conducted tests on animals, but it has killed only about 80 sheep as part of its research, according to studies of the Synchron implant reviewed by Reuters. Musk approached Synchron about a potential investment, Reuters reported in August.

Synchron declined to comment.

In some ways, Neuralink treats animals quite well compared to other research facilities, employees said in interviews, echoing public statements by Musk and other executives. Company leaders have boasted internally of building a “Monkey Disneyland” in the company’s Austin, Texas facility where lab animals can roam, a former employee said. In the company’s early years, Musk told employees he wanted the monkeys at his San Francisco Bay Area operation to live in a “monkey Taj Mahal,” said a former employee who heard the comment. Another former employee recalled Musk saying he disliked using animals for research but wanted to make sure they were "the happiest animals” while alive.

The animals have fared less well, however, when used in the company’s research, current and former employees say.

The first complaints about the company’s testing involved its initial partnership with the University of California, Davis, to conduct the experiments. In February, an animal rights group, the Physicians Committee for Responsible Medicine, filed a complaint with the USDA accusing the Neuralink-UC Davis project of botching surgeries that killed monkeys, and publicly released its findings. The group alleged that surgeons used the wrong surgical glue twice, which led to two monkeys suffering and ultimately dying, while other monkeys had different complications from the implants.

The company has acknowledged it killed six monkeys, on the advice of UC Davis veterinary staff, because of health problems caused by experiments. It called the issue with the glue a “complication” from the use of an “FDA-approved product.” In response to a Reuters inquiry, a UC Davis spokesperson shared a previous public statement defending its research with Neuralink and saying it followed all laws and regulations.

A federal prosecutor in the Northern District of California referred the animal rights group’s complaint to the USDA Inspector General, which has since launched a formal probe, according to a source with direct knowledge of the investigation. USDA investigators then inquired about the allegations involving the UC Davis monkey research, according to two sources familiar with the matter and emails and messages reviewed by Reuters.

The probe is concerned with the testing and treatment of animals in Neuralink’s own facilities, one of the sources said, without elaborating. In 2020, Neuralink brought the program in-house, and has since built its extensive facilities in California and Texas.

A spokesperson for the U.S. attorney’s office for the Northern District of California declined to comment.

Delcianna Winders, director of the Animal Law and Policy Institute at the Vermont Law and Graduate School, said it is “very unusual” for the USDA inspector general to investigate animal research facilities. Winders, an animal-testing opponent who has criticized Neuralink, said the inspector general has primarily focused in recent years on dog fighting and cockfighting actions when applying the Animal Welfare Act.

‘IT’S HARD ON THE LITTLE PIGGIES’

The mistakes leading to unnecessary animal deaths included one instance in 2021, when 25 out of 60 pigs in a study had devices that were the wrong size implanted in their heads, an error that could have been avoided with more preparation, according to a person with knowledge of the situation and company documents and communications reviewed by Reuters.

The mistake raised alarms among Neuralink’s researchers. In May 2021, Viktor Kharazia, a scientist, wrote to colleagues that the mistake could be a “red flag” to FDA reviewers of the study, which the company planned to submit as part of its application to begin human trials. His colleagues agreed, and the experiment was repeated with 36 sheep, according to the person with knowledge of the situation. All the animals, both the pigs and the sheep, were killed after the procedures, the person said.

Kharazia did not comment in response to requests.

On another occasion, staff accidentally implanted Neuralink’s device on the wrong vertebra of two different pigs during two separate surgeries, according to two sources with knowledge of the matter and documents reviewed by Reuters. The incident frustrated several employees who said the mistakes – on two separate occasions – could have easily been avoided by carefully counting the vertebrae before inserting the device.

Company veterinarian Sam Baker advised his colleagues to immediately kill one of the pigs to end her suffering.

“Based on low chance of full recovery … and her current poor psychological well-being, it was decided that euthanasia was the only appropriate course of action,” Baker wrote colleagues about one of the pigs a day after the surgery, adding a broken heart emoji.

Baker did not comment on the incident.

Employees have sometimes pushed back on Musk’s demands to move fast. In a company discussion several months ago, some Neuralink employees protested after a manager said that Musk had encouraged them to do a complex surgery on pigs soon. The employees resisted on the grounds that the surgery’s complexity would lengthen the amount of time the pigs would be under anesthesia, risking their health and recovery. They argued they should first figure out how to cut down the time it would take to do the surgery.

“It’s hard on the little piggies,” one of the employees said, referring to the lengthy period under anesthesia.

In September, the company responded to employee concerns about its animal testing by holding a town hall to explain its processes. Soon after, it opened the meetings of its federally mandated board that reviews the animal experiments to staff.

Neuralink executives have said publicly that the company tests animals only when it has exhausted other research options, but documents and company messages suggest otherwise. During a Nov. 30 presentation the company broadcast on YouTube, for example, Musk said surgeries were used at a later stage of the process to confirm that the device works rather than to test early hypotheses. “We’re extremely careful,” he said, to make sure that testing is “confirmatory, not exploratory,” using animal testing as a last resort after trying other methods.

In October, a month before Musk’s comments, Autumn Sorrells, the head of animal care, ordered employees to scrub "exploration" from study titles retroactively and stop using it in the future.

Sorrells did not comment in response to requests.

Neuralink records reviewed by Reuters contained numerous references over several years to exploratory surgeries, and three people with knowledge of the company’s research strongly rejected the assertion that Neuralink avoids exploratory tests on animals. Company discussions reviewed by Reuters showed several employees expressing concerns about Sorrells’ request to change exploratory study descriptions, saying it would be inaccurate and misleading.

One noted that the request seemed designed to provide “better optics” for Neuralink.


(Reporting by Rachael Levy; editing by Greg Roumeliotis, Paritosh Bansal and Brian Thevenot)
My husband, partner, and I are polyamorous and live with our 2 kids. Holiday scheduling with all our families can be hectic, but we make it work.
OFTEN IT IS POLYANDRY

Story by Jennifer Martin • Insider

Jennifer's partner Ty, left, Jennifer, and Jennifer's husband, Daniel. © Courtesy Jennifer Martin

I live with my husband, our two kids, and my partner Ty in a big house full of love.

I've become a master at using Google Calendar to schedule all of our activities.

It gets even more hectic during the holidays, but we manage to see all of our loved ones.


I'm polyamorous, and I live with my two partners, Daniel and Ty, and our two kids, D and H. When it comes to the holidays, we try to take a fair and equitable position and see as many relatives as possible. It takes some finesse and a lot of scheduling, but we do it.

With two partners, you have two sets of in-laws, which means instead of juggling two places to go for holidays, we have to consider three places — or more, depending on how those relatives are dispersed. I rely heavily on Google Calendar and I luckily enjoy planning. We try to ensure that our holidays go as smoothly as possible, especially for our children. But we have limited space, limited time, and limited income. So how do we do it?

Our holiday schedule

Most of Ty's family lives near us in Richmond, Virginia, so we see them more regularly than Daniel and I see our own family members, who all live in Tennessee. As we are all Christian — or Christian-adjacent — we celebrate Christian holidays, alongside the typical American holidays on which we have days off.

Traditionally, we invite Daniel's family up for Thanksgiving — they are less picky than my family, and I enjoy making a large amount of food for everyone. Since this is a present-free holiday, and they typically choose to stay in a hotel when they visit, our lives stay busy during this time but mostly stress-free.

Then, on Thanksgiving weekend, we visit with Ty's family and try to plan something special with them, like going to a movie and then out to dinner.

For Christmas, we always visit my family in Nashville, especially since my niece's birthday is on Christmas Day. That means if I'm staying with my parents, I can sleep only with Daniel, my legal spouse. Even though my parents are conservative, they accept and approve of Ty — as he is part of my family, his presence is nonnegotiable.

But they still aren't the most comfortable at the thought of us sharing a bed in their house. My parents have only so many rooms, and it's their house, after all. Daniel and Ty don't date each other or sleep in the same beds together, and I typically take turns sleeping with them when we're at home.

Last year, Ty slept at my sister's house during the holidays, as she also lives in Nashville. He missed being close to the rest of us, so this year, he plans to crash on a couch at my parents' house. If Ty and I want to spend any time alone, however, we'll have to get a hotel, which we did for two nights last year. If we could afford it, I think, we'd prefer to all stay at a hotel or an Airbnb close to my parents' house.

We always head back home to Richmond in time to visit Ty's family on New Year's Day. As for celebrations later in the year, my parents typically come to visit us at Easter, and my in-laws often drive up for the kids' birthdays.

I always say, 'You get what you give' — and it's true


It's true that the more partners you have, the more holiday dinners there are to attend and the more presents there are to buy. But you get what you give — plus, we have three incomes, so we always manage to scrape together enough to make sure everyone feels loved on birthdays and holidays. I try to keep gifts equal between Ty and Daniel — just like I do for D and H.

Family is hugely important to me, and one of the treasures of polyamory is that I have a lot more of it. Yes, it's a lot more work and, yes, sometimes the more conservative relatives feel awkward around us initially — but I think it's important that we show up and normalize polyamory.

Our families, thankfully, still love us a lot, so when the five of us decide to come together as a family unit and just be our authentic selves in front of our relatives, everyone is more comfortable around us.

The holidays are always eventful, and I wouldn't have it any other way.

https://en.wikipedia.org/wiki/Polyandry

Polyandry is a form of polygamy in which a woman takes two or more husbands at the same time. Polyandry is contrasted with polygyny, involving one male and ...

https://www.britannica.com/topic/polyandry-marriage

Nov 18, 2022 ... polyandry, marriage of a woman to two or more men at the same time; the term derives from the Greek polys, “many,” and anēr, andros, “man.

https://www.youtube.com/watch?v=d4yjrDSvze0

May 18, 2007 ... The opposite of polygamy is polyandry, when a woman has multiple husbands. One group of people in the Himalayas practices this lifestyle.

https://www.theatlantic.com/health/archive/2013/02/when-taking-multiple-husbands-makes-sense/272726

Feb 1, 2013 ... For generations, anthropologists have told their students a fairly simple story about polyandry—the socially recognized mating of one woman ...

https://www.news24.com/w24/selfcare/love/relationship/woman-with-2-husbands-and-uncountable-boyfriends-says-she-gets-all-the-attention-without-any-guilt-20211231

Dec 31, 2021 ... People are always amazed by her polyamorous lifestyle, but Kenya Stevens is always open to sharing. Some may consider it 'bizarre' but she ...

https://thenationonlineng.net/five-famous-countries-where-women-marry-multiple-husbands

Aug 12, 2022 ... Five famous countries where women marry multiple husbands · 1. India. Polyandry is practiced by Paharis in the Jaunsarbawar region of North India ...

https://nypost.com/video/two-husbands-one-wife-and-three-kids-make-a-family

Jun 13, 2019 ... Polyandry, the practice of a woman having multiple husbands, is just a typical day for this Ohio family.

https://tribuneonlineng.com/four-countries-where-women-can-marry-more-than-one-husband

Sep 15, 2022 ... However, it comes across as unusual to hear of a woman with multiple husbands. Polyandry is a form of marriage in which a woman takes two or ...


Chinese hackers stole millions worth of US COVID relief money, Secret Service says

Story by Reuters

VISUAL DEPICTION OF A HACKER
(photo credit: VIA WIKIMEDIA COMMONS)

Chinese hackers have stolen tens of millions of dollars worth of US COVID relief benefits since 2020, the Secret Service said on Monday.

The Secret Service declined to provide any additional details but confirmed a report by NBC News that said the Chinese hacking team that is reportedly responsible is known within the security research community as APT41 or Winnti.

What is the hacker group APT41?

APT41 is a prolific cybercriminal group that has conducted a mix of government-backed cyber intrusions and financially motivated data breaches, according to experts.

Several members of the hacking group were indicted in 2019 and 2020 by the US Justice Department for spying on over 100 companies, including software development companies, telecommunications providers, social media firms, and video game developers.


Projection of cyber code on hooded man is pictured in this illustration picture
(credit: REUTERS)

"Regrettably, the Chinese Communist Party has chosen a different path of making China safe for cybercriminals so long as they attack computers outside China and steal intellectual property helpful to China," former Deputy Attorney General Jeffrey Rosen said at the time.

The Chinese embassy in Washington did not immediately respond to a request for comment.

OR CHINA COULD HAVE OUTSOURCED ITS HACKING



GAIA LIVES
Stromboli and Mount Semeru Erupt on Same Day, a Week After Mauna Loa

Story by Robyn White • 

Two volcanoes, Italy's Stromboli and Mount Semeru in Indonesia, erupted on the same day, a week after Mauna Loa in Hawaii.


These photos show past volcanic eruptions at Italy's Stromboli, left, and Mount Semeru in Indonesia. The two volcanoes are currently erupting at the same time.
© Getty Images/AZ68/ JUNI KRISWANTO

Mount Semeru, which lies 400 miles southeast of Indonesia's capital, Jakarta, began erupting Sunday at 2:46 a.m. local time (2:46 p.m. ET Saturday). Volcanic ash has rained down on nearby communities, triggering evacuations of nearly 2,000 people, Indonesia's disaster management agency, BNPB, reported.

About five hours later, Stromboli—a volcano lying within the Tyrrhenian Sea off the north coast of Sicily—began erupting. By 2 p.m. (8 a.m. ET), lava was overflowing from the volcano's northern crater, Italy's National Institute of Geophysics and Volcanology said in a statement. A bigger explosion then began at the central-southern crater.



This comes a week after Mauna Loa began erupting on November 27. The Hawaiian volcano is the largest active volcano in the world and had not erupted for nearly 40 years.

The recent eruptions are in no way linked. David Rothery, a professor of planetary geosciences at Britain's Open University, told Newsweek, "They are thousands of kilometers apart, and there is no possible connection between events at one and events at another."

Mount Semeru

Of the three, Mount Semeru is the deadliest volcano, Rothery said. At 12,060 feet, it is the biggest volcano on the Indonesian island of Java, and one of the island's most active.

"Semeru is known for large explosive eruptions that can cause large, fast-moving [80 kilometers per hour] and searingly hot pyroclastic flows. This is much the most dangerous volcano of the three, and the Indonesian authorities have wisely been evacuating people from the potential paths of such flows," Rothery said.

"Even after the eruption has ended, ash left by pyroclastic flows and ash that has fallen from the sky is a hazard because it can be remobilized by rainfall and turned into mudflows, known by the Indonesian name of lahars, that can destroy homes and bridges," he said.

Footage taken by the BNPB shows hot pyroclastic flows seeping from Semeru. The video was reposted to Twitter by Paul Byrne, an associate professor of earth and planetary science at Washington University in St. Louis, who called it "terrifying."

Byrne estimated it was moving at around 112 miles per hour. "You can't run from a pyro flow, go ahead and try," he tweeted.

Tamsin Mather, a professor in the Department of Earth Sciences at the University of Oxford, told Newsweek: "This latest paroxysm has not so far been catastrophic, fortunately, but there are reports of over 2,000 people having been evacuated so far. Pyroclastic density currents, a bit like avalanches of hot ash, gas and debris that cascade down from an eruptive vent during an eruption, are a primary hazard.

"Heavy rainfall could also cause lahar—volcanic mudflow—hazards. It is a dynamic situation, and volcanologists on Java are watching things very closely," Mather said.

The Semeru eruption is not unusual, as there were "numerous eruptions in 2021," she said. The "worst eruptive sequence last December displaced more than 4,000 residents and caused around 30 or more fatalities."

Mauna Loa

So far, the other two eruptions are not worrying authorities. Lava flows have reached Mauna Loa's Northeast Rift Zone and are a few miles from the Saddle Road that links the towns of Hilo and Kona.

"The rate of lava advance is expected to slow down as it reaches flatter ground," Rothery said. "Lava of this kind is relatively safe if viewed from a distance. The worst-case scenario is that the eruption continues for many months and that the active lava flow front reaches Hilo, but we are a long way from that."

Stromboli


Stromboli is one of the most active volcanoes in the world. It has been erupting almost continuously for the past 90 years, so this latest activity is no surprise. But the current eruption is much larger than usual, Rothery said.

"Stromboli had a larger explosive eruption than its usual very small eruptions that sent a pyroclastic flow into the sea on an uninhabited side of the island," he said.

The volcano last erupted in October, when it spewed lava into the sea.

Do you have a tip on a science story that Newsweek should be covering? Do you have a question about volcanoes? Let us know via science@newsweek.com.


A globally critical chip firm is driving a wedge between the U.S. and Netherlands over China tech policy

Story by Arjun Kharpal • 

The Netherlands plays an outsized role in the global semiconductor supply chain because of its star company, ASML.

The company produces a cutting-edge chipmaking machine that China is keen to have access to.

The U.S. is worried that if ASML ships the machines to China, chipmakers in the country could begin to manufacture the most advanced semiconductors in the world, which have extensive military and advanced artificial intelligence applications.


Netherlands Prime Minister Mark Rutte speaks with U.S. President Joe Biden. The U.S. has been putting pressure on the Netherlands to block exports to China of high-tech semiconductor equipment. The Netherlands is home to ASML, one of the most important companies in the global semiconductor supply chain.© Provided by CNBC

Washington has its eyes on the Netherlands, a small but important European country that could hold the key to China's future in manufacturing cutting-edge semiconductors.

The Netherlands has a population of just more than 17 million people — but is also home to ASML, a star of the global semiconductor supply chain. It produces a high-tech chipmaking machine that China is keen to have access to.

The U.S. appears to have persuaded the Netherlands to prevent shipments to China for now, but relations look rocky as the Dutch weigh up their economic prospects if they're cut off from the world's second-largest economy.

ASML's critical chip role


ASML, headquartered in the town of Veldhoven, does not make chips. Instead, it makes and sells extreme ultraviolet (EUV) lithography machines, which cost around $200 million apiece, to semiconductor manufacturers like Taiwan's TSMC.

These machines are required to make the world's most advanced chips, and ASML has a de facto monopoly on them: it is the only company that makes them.

This makes ASML one of the most important chip companies in the world.

ASML has not been able to ship an EUV machine to China since 2019 due to various Dutch export restrictions, according to a company spokesperson. But they said that ASML expects "the direct impact of the new export control measures on ASML's overall 2023 shipment plan to be limited."

There are currently no EUV systems in China. The U.S. worries that shipments from ASML would allow chipmakers in the country to begin manufacturing the most advanced semiconductors in the world, which have extensive military and advanced artificial intelligence applications.
U.S.-Netherlands talks


U.S. pressure on the Netherlands appears to have begun in 2018 under the administration of former President Donald Trump. According to a Reuters report from 2020, the Dutch government withdrew ASML's license to export its EUV machines to China after extensive lobbying from the U.S. government.

Under Trump, the U.S. started a trade war with China that morphed into a battle for tech supremacy, with Washington attempting to cut off critical technology supplies to Chinese companies.

Huawei, China's telecommunications powerhouse, faced export restrictions that starved it of the chips it required to make smartphones and other products, crippling its mobile business. Trump also used an export blacklist to cut off China's largest chipmaker, SMIC, from the U.S. technology sector.

President Joe Biden's administration has taken the assault on China's chip industry one step further.

In October, the U.S. Department of Commerce's Bureau of Industry and Security introduced sweeping rules requiring companies to apply for a license if they want to sell certain advanced computing semiconductors or related manufacturing equipment to China.

ASML told its U.S. staff to stop servicing Chinese clients after the introduction of these rules.

Pressure on the Netherlands to fall in line with U.S. rules continues. Alan Estevez, the undersecretary of commerce for industry and security at the U.S. Department of Commerce, and Tarun Chhabra, senior director for technology and national security at the U.S. National Security Council, reportedly spoke with Dutch officials this month.

"Now that the U.S. government has put unilateral end-use controls on U.S. companies, these controls would be futile from their perspective if China could get these machines from ASML or Tokyo Electron (Japan)," Pranay Kotasthane, chairperson of the high-tech geopolitics program at the Takshashila Institution, told CNBC.

"Hence the U.S. government would want to convert these unilateral controls into multilateral ones by getting countries such as the Netherlands, South Korea, and Japan on board."

The National Security Council declined to comment when contacted by CNBC, while the Department of Commerce did not respond to a request for comment.

A spokesperson for the Netherlands' Ministry of Foreign Affairs said it does not comment on visits by officials. The ministry did not reply to additional questions from CNBC.
Tensions

Last week, U.S. Secretary of State Antony Blinken hailed the "growing convergence in the approach to the challenges that China poses," particularly with the European Union.

But the picture from the Netherlands does not appear as rosy.

"Obviously we are weighing our own interests, our national security interest is of utmost importance, obviously we have economic interests as you may understand and the geopolitical factor always plays a role as well," Liesje Schreinemacher, minister for foreign trade and development cooperation of the Netherlands, said last week.

She added that Beijing is "an important trade partner."

— CNBC's Silvia Amaro contributed to this report.