OpenAI's offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse
Tom Carter
Thu, November 23, 2023
Microsoft's much-maligned Clippy was one of the first "intelligent office assistants" – but never tried to wipe out humanity. SOPA Images/Getty Images
An employee at rival Anthropic sent OpenAI thousands of paper clips in the shape of OpenAI's logo.
The prank was a subtle jibe suggesting OpenAI's approach to AI could lead to humanity's extinction.
Anthropic was formed by ex-OpenAI employees who split from the company over AI safety concerns.
One of OpenAI's biggest rivals played an elaborate prank on the AI startup by sending thousands of paper clips to its offices.
The paper clips, shaped like OpenAI's distinctive spiral logo, were sent to the AI startup's San Francisco offices last year by an employee at rival Anthropic, in a subtle jibe suggesting that the company's approach to AI safety could lead to the extinction of humanity, according to a report from The Wall Street Journal.
They were a reference to the famous "paper clip maximizer" scenario, a thought experiment from philosopher Nick Bostrom, which hypothesized that an AI given the sole task of making as many paper clips as possible might unintentionally wipe out the human race in order to achieve its goal.
"We need to be careful about what we wish for from a superintelligence, because we might get it," Bostrom wrote.
Anthropic was founded by former OpenAI employees who left the company in 2021 over disagreements on developing AI safely.
Since then, OpenAI has rapidly accelerated its commercial offerings, launching ChatGPT last year to record-breaking success and striking a multibillion-dollar investment deal with Microsoft in January.
AI safety concerns have come back to haunt the company in recent weeks, however, with the chaotic firing and subsequent reinstatement of CEO Sam Altman.
Reports have suggested that concerns over the speed of AI development within the company, and fears that this could hasten the arrival of superintelligent AI capable of threatening humanity, were among the reasons OpenAI's nonprofit board chose to fire Altman in the first place.
OpenAI's chief scientist Ilya Sutskever, who took part in the board coup against Altman before dramatically joining calls for him to be reinstated, has been outspoken about the existential risks artificial general intelligence could pose to humanity, and reportedly clashed with Altman on the issue.
According to The Atlantic, Sutskever commissioned and set fire to a wooden effigy representing "unaligned" AI at a recent company retreat, and he reportedly also led OpenAI's employees in a chant of "feel the AGI" at the company's holiday party, after saying: "Our goal is to make a mankind-loving AGI."
OpenAI and Anthropic did not immediately respond to a request for comment from Business Insider, made outside normal working hours.