
Op-Ed: ‘Loab’ – AI-scripted ugliness and threats of ‘reality collapse’


By Paul Wallis
Published November 26, 2022

Attendees take pictures and interact with the Engineered Arts Ameca humanoid robot with artificial intelligence as it is demonstrated during the Consumer Electronics Show (CES) on January 5, 2022 in Las Vegas, Nevada.
— © AFP

This is getting way too familiar. Loab is an AI “entity” with ugly biometric images and a dystopian side built in. So is the spiel that goes with Loab. Artificial intelligence could steal humanity’s mediocrity from it. All that banality wasted in a sea of self-generated realities.

I’m not going to regurgitate this tiresome scenario. Loab is another AI bogeyman thing dressed up as though it’s something new. Just be aware the bombardment of hideous imagery might interfere with your usual daily diet of hideous imagery.

Seems nothing’s too revolting to be posted online somewhere. Humanity doesn’t have enough disgusting things to look at, obviously. So artificially generated garbage is required.

As pseudo-psychology goes, this is infantile. If you look at the biometric areas of the generated images and compare them to ancient face masks, you’ll see a lot of similarities with things thousands of years old.

This AI-generated horror was persistent. The face of Loab kept coming back, and it took a while to “dilute” the images of Loab. The name Loab was created by garbled text in an image.

Facial recognition is of course an auto-reflex for humans, so it’s no-brainer psych at best. The color backgrounds are also standard urban drab, so the scenes would look semi-familiar to anyone who’s ever been in a car park. Overall the look is quite similar to Heavy Metal Magazine art in the 1980s. It was brilliant then; now it looks like a yard sale of old comics.

The voice generator is supposedly advanced. It’s not. I heard a lot of similar stuff 20 years ago, and if the mix is anything to go by, Loab’s “voice” is inferior in quality.

“Loab can speak!”

“Oh, praise the press release!”

Loab is heavily scripted. I’m strongly reminded of the “sentient” AI issue Google had recently, another dribbling exercise in getting selective answers to prove your own point.

“Reality collapse”

This is an interesting idea, or would be if it weren’t qualified so much. The basic idea is that people will avoid a shared reality for a single, selfish reality. Oh, really?

Humans are not good at sharing realities with other people. They’re spectacularly bad at it. The more common result is conflict. In practice, you manipulate reality anyway, from your choice of society to your choice of décor. You create your own space and you are your own space, in fact.

Reality collapse in relation to fake images and environments, etc., is long since a thing of the past. The Metaverse is one of those subsequent evolutions. Nice to know someone’s paying attention, or in this case, not paying attention.
The software draws on an artificial intelligence dialogue system dubbed ‘Buddhabot’ – Copyright AFP Behrouz MEHRI

It’s a matter of opinion whether human beings are on speaking terms with reality. I don’t see why reality would bother.

Setting the bar for AI way too low

At about the point where the AI is asked whether humans shouldn’t be worried that “AI tools exceed our understanding”, all bets are off. Even the pronouns are in the wrong places. The AI refers to humans in context as “we” multiple times, for example. A super-intellect with syntax problems? Can’t tell “I” from “you”? Some threat.

Sure it exceeds our understanding, like toast, power bills and hamburgers. For example, a common factor in the imagery has to be generated by common code and common parameters. Similarly, if you turn on a light switch, the light might go on. It’s almost that incomprehensible.

For another example, asking an AI so many loaded questions almost exceeds our understanding, too.

Let’s be a little brutal:
This entire exercise drags AI down to human experience level.
AI is unqualified to identify with human experience on any level.
Therefore AI is a threat to humanity.

Now – Where were you, damn spectators, for the last decade or so? The world and the tech have long since gone past this prehistoric stuff. What is the point of this exercise? Why are we wasting the time of useful tech on useless innuendo?

Liquid non-imagination


The expression “liquid imagination” is rather pointlessly grafted onto the Loab story. Somehow, the “low levels of public trust in information” (generated deliberately by the sources of information) may sink even lower, as awareness of AI tech leads people to reject all information as “unverifiable”.

A bit late there, mate. The public, quite rightly, doesn’t trust information even if it is verifiable, because the information sources are so sleazy. A lot of people also know how to verify information. It’s not that hard.

It’s a very inelegant argument, if you can call it that. After citing a lot of high-stress imagery that is deliberately piled onto human consciousness every day, it’s AI that’s the future problem? Seems superfluous, to put it mildly.

The sheer amount of unnecessary stress inflicted by global media is barely describable. These disgusting images are everywhere. So is the disgusting news, and the not-very-coincidental news that nobody does a damn thing about anything.

…And AI is the issue? A word of advice to these useless purveyors of truly ancient science fiction ideas and pseudo-psychologists:

AI can replace you guys, too. All it needs is a script, you know.
