Saturday, April 25, 2026



AI-powered robots offer new hope to German factories


By AFP
April 22, 2026


Industrial AI is seen as an area where Europe could compete against the United States and China - Copyright AFP RONNY HARTMANN


Clement Kasser

A blue-eyed humanoid robot carefully opens a box and places a tool inside as a crowd of visitors watch the demonstration of “physical AI” skills at a major industrial trade fair in Germany.

Made by German startup Agile Robots, it was among a host of robots showing off their moves at the event, underlining hopes of a coming AI-powered boost for Germany’s long-struggling factories.

Embedding the technology into industrial processes, where Europe already has deep expertise, is seen as a key route for the continent to catch up in the artificial intelligence race against the United States and China.

Such AI-boosted robots make it possible to “actually solve industrial problems,” Rory Sexton, chief executive of Agile Robots, told AFP in an interview.

From next year, he added, the company plans to begin fitting out German factories, particularly those in the automotive industry, a crucial sector for Europe’s biggest economy.

Artificial intelligence used for real-world, hands-on tasks — so-called physical AI — was in focus this year in Hanover at the world’s biggest industrial technology fair, which brings together more than 3,000 exhibitors.

Chancellor Friedrich Merz visited the Agile Robots stand, where he talked to Zhaopeng Chen, the Chinese founder of the Munich-based startup.

In a speech at the fair, Merz threw his support behind the drive to encourage German manufacturers, many of whom still rely on traditional techniques, to step up their use of AI.

AI should be “embedded in the key sectors of our industry and especially” in small- and medium-sized firms, the backbone of the German economy, to create “industrial added value and high-quality jobs”, he said.

– ‘Dark side’ of AI –

But, like in many other industries, German manufacturers are playing catch-up against China when it comes to making humanoid robots.

Merz witnessed China’s progress in the field first-hand during a visit to the country in February, when he saw displays of Chinese-made robots performing kung fu and boxing.

The maker of those robots, Unitree, and other Chinese manufacturers were also out in force at the Hanover fair, as they have been in previous years.

Still, Sexton of Agile Robots insisted that “we’ll soon be able to do what (Unitree) are doing”, and shrugged off such impressive public displays.

Rather than dancing or martial arts, Agile Robots is focused on “value-added tasks for industry”, such as electronic wiring in cars or phone assembly, he said.

He emphasised that Germany offers an “ecosystem of suppliers” and “very strong expertise in mechanical engineering and automation”, both crucial in the race for AI.

Companies are also hopeful about the technological developments — 58 percent of industrial firms surveyed by German digital business association Bitkom believe humanoid robots could help plug skilled labour shortages.

The country also has deep pools of industrial data to draw on from its factories, according to Antonio Krueger, head of the German Research Centre for Artificial Intelligence (DFKI).

“This is something we have at a level of quality far superior to the United States or China,” he told AFP.

But, critics say, the use of this data is still often too piecemeal and isolated, with no overarching strategy to bring it together cohesively.

Not everyone in Hanover was convinced that AI was the solution to the woes of German manufacturers, who have long been struggling with issues from high energy costs to weak demand.

Jochen Heinz, an executive from German factory machinery maker SW Machines, cautioned that AI can sometimes make mistakes by, for instance, giving misleading instructions for repairs or incorrectly claiming to have detected problems.

“With AI, I also see the dark side of the force,” he said.


Did ChatGPT Aid And Abet A School Shooter? – OpEd





By Elizabeth Lawrence

Florida Attorney General James Uthmeier announced on Tuesday, April 21, that the Office of Statewide Prosecution has launched a criminal investigation into the role OpenAI and the company’s artificial intelligence tool, ChatGPT, played in last year’s deadly Florida State University shooting.

“Florida is leading the way in cracking down on AI’s use in criminal behavior, and if ChatGPT were a person, it would be facing charges for murder,” said Uthmeier. “This criminal investigation will determine whether OpenAI bears criminal responsibility for ChatGPT’s actions in the shooting at Florida State University last year.”

ChatGPT: Aiding and Abetting?

According to Uthmeier, ChatGPT “offered significant advice” to gunman Phoenix Ikner before he opened fire on FSU’s campus in April 2025, killing two and injuring six. Authorities reviewed several exchanges between Ikner and the chatbot, including one in which the suspect requested information on ammunition and how a firearm would perform at short range.

Florida law states that anyone who aids, abets, or counsels any criminal offense against the state “is a principal in the first degree and may be charged, convicted, and punished as such, whether he or she is or is not actually or constructively present at the commission of such offense.”

“My prosecutors have looked at this, and they’ve told me if it was a person on the other end of the screen, we would be charging them with murder,” Uthmeier told reporters. His office issued subpoenas to OpenAI to obtain information on the company’s internal policies and training materials, specifically relating to user threats of harm and cooperation with law enforcement. Authorities are also seeking a list of executives, directors, department heads, and senior managers of OpenAI, and all employees of ChatGPT.

The announcement comes just weeks after Uthmeier revealed a civil investigation into OpenAI over the FSU shooting, teen suicide, and child exploitative material. The civil probe will continue, he noted.

“We are going to look at who knew what, designed what, or should have done what,” Uthmeier said. “And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable.”

OpenAI Responds

OpenAI said in a statement that neither the company nor its AI chatbot is responsible for the deadly shooting:

“Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime. In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity.”

Earlier this year, in a letter to Canadian officials, OpenAI said it has “taken steps to strengthen our safeguards and made changes to our law enforcement referral protocol for cases involving violent activities” after an alleged shooter asked ChatGPT about a variety of gun violence scenarios before killing eight and injuring dozens in British Columbia. OpenAI also vowed to train its AI models to “respond appropriately when users are in distress or pursuing prohibited behavior, with an emphasis on de-escalation and user safety.”

Speaking with reporters, Uthmeier acknowledged that his decision to pursue a criminal investigation of a company is atypical, but he maintained that it’s justified: “We recognize here with OpenAI we’re venturing into uncharted territory, but we need to know whether or not OpenAI has criminal liability.”

  • About the author: Elizabeth “Liz” Lawrence is an author and Assistant Editor at Liberty Nation. Liz has over a decade of experience in media and journalism, including work with American Military News, PBS, and NBC Montana. Liz also wrote “Homesteading Kids,” a children’s book inspiring self-sufficient living and practical skills. Connect with Liz on X @lizlawrence2111


OpenAI says new model adept at making AI better


By AFP
April 23, 2026


OpenAI president Greg Brockman says the new GPT-5.5 model can tend to more computer work without human supervision - Copyright AFP Caroline Brehman

OpenAI released a new model it touts as its best yet for handling research work like making improved versions of itself, as rapid-fire releases by AI rivals pick up pace.

GPT-5.5 was billed as a “new class of intelligence” and comes just months after the launch of its predecessor.

“What is really special about this model is how much more it can do with less guidance,” OpenAI co-founder and president Greg Brockman said at a briefing with journalists.

“It can look at an unclear problem and figure out just what needs to happen next.”

The model is particularly adept at “agentic” coding and computer use in which digital assistants independently tend to tasks as directed, according to the San Francisco-based startup behind ChatGPT.

“It feels like it’s setting the foundation for how we’re going to do computer work going forward,” Brockman said.

In the short term, OpenAI is focused on letting humans act as “orchestrators” while AI models do the “heavy lifting,” chief research officer Mark Chen said at the briefing.

OpenAI was adamant that it built its strongest safeguards to date into GPT-5.5 “to reduce misuse, especially for bio and cyber capabilities.”

That means a ramped-up tendency for the latest model to refuse requests to attempt “cyber-related activities,” OpenAI executives said.

Rival company Anthropic has held back a new Claude Mythos AI model deemed so adept at finding vulnerabilities in software it could be a boon for hackers.

Anthropic restricted the release of Mythos to select major tech firms to give them a head start in fixing cybersecurity vulnerabilities and is looking into reports of unauthorized use of the model.

“There are enough model releases that it’s probably going to be hard to distinguish one from another,” Brockman mused during the briefing.

“This model is a real step forward towards the kind of computing that we expect in the future, but it is one step, and we expect to see many.”

According to OpenAI, artificial general intelligence, in which computers think as well as or better than people, is no longer theoretical, and AI models that research how to essentially improve themselves take the world further in that direction.

The executives described GPT-5.5 as “one of the clearest steps yet toward models that can accelerate AI research itself.”


Anthropic probes unauthorized access to Mythos AI model


By AFP
April 22, 2026


Anthropic has delayed a general release of its latest model Mythos, which it says can spot undiscovered security holes that have existed for decades - Copyright AFP SEBASTIEN BOZON

American AI developer Anthropic said Tuesday it was investigating unauthorized access to Mythos, its powerful model which the company itself worries could be a boon for hackers.

Anthropic said earlier this month it restricted the release of Mythos to 40 major tech firms to give them a head start in fixing cybersecurity vulnerabilities before they could be exploited by attackers.

According to Bloomberg, which first reported the probe, a small group of users in a private, online forum gained access to the model via the computer system reserved for Anthropic’s external vendors.

“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson told AFP.

The users got hold of Mythos by various means, including through access one of them had as an employee of an Anthropic contractor, Bloomberg reported.

Anthropic works with a small number of third-party vendors who help with model development.

The firm has delayed a general release of Mythos, which it says can spot undiscovered security holes that have existed for decades, in systems tested by both human experts and automated tools.

It shared Mythos first with a few dozen key US tech and financial services players — such as Nvidia, Amazon and JP Morgan Chase — to allow them to improve their security infrastructure.

But the company has also been accused of overhyping the powers of a technology which is its stock in trade, and the subject of fierce competition with rival OpenAI.


EssilorLuxottica sales slide as investors turn wary of AI glasses


By AFP
April 23, 2026


EssilorLuxottica has pushed hard into wearable tech and has a tie-up with Meta for its Ray-Ban AI glasses - Copyright AFP/File Julie JAMMOT

Shares in EssilorLuxottica, the world’s top maker of eyeglasses, slid on Thursday as analysts said investors have turned wary of AI glasses.

The French-Italian company has pushed hard into wearable tech and has a tie-up with Meta for its Ray-Ban AI glasses.

The nearly five percent drop in morning trading in Paris, where the overall market was up 0.1 percent, came despite the company posting Wednesday evening a 4.1 percent increase in first-quarter sales to 7.1 billion euros ($8.3 billion).

EssilorLuxottica did not provide a breakdown of AI glasses sales, but said the category supported growth and that the launch of Ray-Ban Meta’s new optical-first styles had been a success.

EssilorLuxottica’s foray into “AI glasses is now seen as a source of risk, after initially being viewed as a major opportunity,” analysts at Oddo BHF said in a note.

“Following very satisfactory performance in previous quarters, the stock is going through a tougher patch in 2026,” they added.

First-quarter sales met analyst expectations and represented a gain of 10.8 percent on an organic basis — that is, stripping out the effects of changes in exchange rates and in business operations.

But “given the macroeconomic uncertainties, we are cautiously lowering our expectations for organic growth in 2026 from 10 to nine percent,” said analysts at Jefferies.

EssilorLuxottica does not provide detailed sales and earnings guidance.

