OpenAI insiders blast lack of AI transparency


By AFP
June 4, 2024


The open letter criticizing AI transparency comes amid questions about OpenAI CEO Sam Altman's corporate leadership - Copyright AFP/File Jason Redmond

A group of current and former employees from OpenAI on Tuesday issued an open letter warning that the world’s leading artificial intelligence companies were falling short of the transparency and accountability needed to address the potential risks posed by the technology.

The letter raised serious concerns about AI safety risks “ranging from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

The 16 signatories, which also included a staff member from Google DeepMind, warned that AI companies “have strong financial incentives to avoid effective oversight” and that self-regulation by the companies would not effectively change this.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter said.

“However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

That reality, the letter added, meant that employees inside the companies were the only ones who could notify the public, and the signatories called for broader whistleblower laws to protect them.

“Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the letter said.

The four current employees of OpenAI signed the letter anonymously because they feared retaliation from the company, The New York Times reported.

It was also signed by Yoshua Bengio, Geoffrey Hinton and Stuart Russell, who are often described as AI “godfathers” and have criticized the lack of preparation for AI’s dangers.

OpenAI in a statement pushed back at the criticism.

“We’re proud of our track record of providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” a statement said.

“We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”

OpenAI also said it had “avenues for employees to express their concerns including an anonymous integrity hotline” and a newly formed Safety and Security Committee led by members of the board and executives, including CEO Sam Altman.

The criticism of OpenAI, which was first released to the Times, comes as questions are growing around Altman’s leadership of the company.

OpenAI has unveiled a wave of new products, though the company insists they will only get released to the public after thorough testing.

The unveiling of a human-like chatbot caused controversy when Hollywood star Scarlett Johansson complained that its voice closely resembled hers.

She had previously turned down an offer from Altman to work with the company.

Op-Ed: Deepfakes, dumb fakes, and just plain wrong fakes — AI screws up big time

By Paul Wallis
DIGITAL JOURNAL
June 3, 2024


Elon Musk tweeted "Yikes. Def not me" about a deepfake video supposedly of him.
— © AFP

This is very much a visual culture. You’re hit with tens of thousands of images per minute, real and fake. AI is at the forefront of this bombardment. The plague of fake images, brought to you courtesy of the misinformation industry, is all AI.

This is also nothing like a smart culture. You’re allowed to be an idiot because it’s expected of you. As usual, the warnings arrived well before the fact, and were ignored. The legal situation is as blurry and unfocused as ever.

All of a sudden and as usual, everything everyone was warned about years ago is now a problem. Monotonous, isn’t it?

There’s a major quality issue with the deepfakes. Australian Associated Press has a very good, clear article about how wrong these AI fakes can be. The problem affects everything about AI fake images, including the much-vaunted training methods behind them.

These AI pictures are truly absurd. The Wright brothers are replaced with what looks like Tweedledum and Tweedledee. There’s no similarity at all with the real Wright brothers.

The point about AI training is simple. You’d think these lazy image-makers would have trained the AI to at least compare its output with the real images. Apparently, that’s too much trouble.

The market reach of fake AI images is pretty much universal. That’s not good news for anyone trying to promote anything, including themselves.

By a strange coincidence, this brings us back to the question of who owns images of people. People own their own images. Those images are very much part of top-tier proof of identity, and that shouldn’t even be questioned as a legal ownership right.

…But who owns fake images if they’re given different names? As long as you’re not infringing on someone’s identity, it should be OK, right?

Not necessarily. The famous Taylor Swift deepfakes are a case in point. In this case, the images are close enough, and they do actual damage to the person.

Facial recognition is well known as a core, must-have human social skill. If it looks enough like Taylor Swift, you’re likely to think that it is Taylor Swift. Damage is automatically done by the publication of the images. Even if the sole instruction is “make an image of an attractive brunette” and the image is generated innocently, it’s still a potential problem.

To explain – AI is trained on large numbers of images. People who generate a lot of imagery are unavoidably included in the training materials. Something that looks like Taylor Swift is inevitable.

Add a bit of lowbrow nastiness and the desire to get money out of porn-obsessed morons, and you get porn attached to anyone’s face. Hard to understand? No, it isn’t.

There are forensic ways of managing this sort of thing. Think of it as a “forensic blockchain for images”. You can establish whether an image is too much like a person fairly easily. You could even cross-check a too-similar image before publication using AI. It really is an “image by numbers” thing, quite simple.
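To make the “image by numbers” point concrete, here is a minimal sketch of one way such a pre-publication check could work, using perceptual hashing rather than a full face-recognition model. The Pillow and ImageHash libraries, the file paths, and the threshold value are all assumptions for illustration, not anything described in the article or used by any publisher.

# A rough pre-publication similarity check: compare a generated image against
# reference photos of a real person using perceptual hashes.
# Assumes the Pillow and ImageHash libraries (pip install Pillow ImageHash).
# File paths and the threshold are illustrative only.

from pathlib import Path

from PIL import Image
import imagehash

# Hamming-distance threshold below which two hashes count as "suspiciously similar".
# 10 is an arbitrary illustrative value; a real system would tune this.
SIMILARITY_THRESHOLD = 10

def too_similar(candidate_path: str, reference_dir: str) -> bool:
    """Return True if the candidate image looks too much like any reference image."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    for ref in Path(reference_dir).glob("*.jpg"):
        ref_hash = imagehash.phash(Image.open(ref))
        # Subtracting two ImageHash objects gives the Hamming distance between them.
        if candidate_hash - ref_hash < SIMILARITY_THRESHOLD:
            return True
    return False

if __name__ == "__main__":
    # Hypothetical paths, for illustration only.
    if too_similar("generated.png", "reference_photos/"):
        print("Flag for review before publication.")
    else:
        print("No close match found.")

A production system would more likely compare face embeddings than simple hashes, but the principle is the same: reduce each image to numbers and flag anything that lands too close to a real person before it is published.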

Meanwhile, back in Dumb Fakes Land, nobody’s thinking about things like this. Fake images are replacing influencers, photographers, and reality.

There are serious problems with deepfaking anything, including privacy violations, breach of commercial image copyright, and way too many et ceteras. If you deepfake a trademark or use it without accreditation, or go beyond “fair use”, you may have just published a million-dollar lawsuit or several.

Remember, these are unquestionably bona fide legitimate privacy and property issues. The publishers and the AI don’t have a toenail clipping to stand on, even in theory. All they can hope for is that the images don’t match too closely under scrutiny.

Dumb, it is. If anyone thinks people will miss an opportunity to make money out of a deepfake, they’re out of their minds. …Which sorta raises the question of why do deepfakes at all?

There is a market for this garbage. It’s new. It’s cute. It’s stunningly predictable. It’s quick. It’s cheap. It’s godawful, therefore it’s mainstream media. It’s lowest common denominator, therefore it’s good.

This is as dumb as getting AI to do your accounts. You are literally assuming that an automated system can tell the difference between fraud and real numbers. In this case, you’re assuming that people whose lives are based on their images won’t fight tooth and nail to protect those images.

AI deepfakes are also very much a major high-toxicity thing on social media. Nobody seems to be too fussed that hate campaigns are based on a lot of fake imagery and spin. The endlessly remarked-on issue that X is now probably inhabited by as many bots as people doesn’t seem to matter much.

It’s commercial suicide, but what’s new? Bots don’t buy sponsors’ products, but that’s obviously OK with someone. Bots don’t get threatened round the clock, either. The odd but real picture is that non-existent people are now generating fake images at the expense of publishers.

We’re now at the black hole formation stage of fakes. These things can destroy their reason for existence already. We now have an artificially stupid technology which can put itself and its publishers out of business and create liabilities every second. Happy?
