
Clickbait, AI and Global Boiling

By Michael Albert
October 18, 2024
Source: Originally published by Z. Feel free to share widely.



This may seem like a big step away from the day's main topics, the U.S.-Israeli genocidal warism, the rising tide of demented fascism, and the election to end all elections, but it isn't. It may even be a step into these grotesque phenomena.

To start, I am confused about clickbait. And you might wonder, how could I not understand clickbait? Well, of course I understand why clickbaiters just want us to see their ads and don't give a damn about anything else. I also know it isn't just corporate advertisers who offer clickbait. Trump fans do it. Harris fans do it. And the left too spends considerable effort on clickbait, especially when fundraising. But lying to attract eyeballs, however odious, is not what I don't understand.

Clickbait has become so ubiquitous and undisguised that it’s a bit like Trump. No pretense. No shame. Right out front. Lie baby lie. If you go on YouTube, you will initially see an array of video offerings. The titles will typically say something will be in the video. You look. Whoa, it isn’t there. Or perhaps it’s there for one minute out of ten. No denial. No one is surprised. Everybody knows. Clickbait lies. So what don’t I get?

I don’t get why it works. I don’t get why we click clickbait. And I admit that I know it works not least because I know it works on me. When I first started watching YouTube videos, I was interested in various science topics and watched good science-related videos. Then I added some sports videos to see event highlights. Back then there were no ads or at most maybe one or two, and no clickbait.

But over the years clickbait has spread, so now even science stories and sports highlights have adopted an element of misdirection to gain attention, not to mention that once they hook an audience the hosts add more and still more ads.

A few months back, I started to watch videos about the election, war, and climate, and for all three the clickbait was blatantly immense and it worked. I would click a link to see what the title promised. I would angrily note that what was promised wasn't there. I would click another link and be annoyed again. I would click another one, and another. Why the hell did I click on this misdirection? Why did I keep doing it? Was it because I just wanted it to be honest? Was it because they had a hook in my head? Was it because—well, I don't know what it was because of—but I do know it is hard to not click. Egad. I just clicked again. I have never been sucked into Facebook or Twitter or the rest of that, so my clicking on clickbait confuses me.

And now, before returning to ubiquitous lying, we come to the topic of artificial intelligence. I have written considerably about AI before, but I bring it up again because an experience I had this past week confuses me. Steve Shalom and I both sent a Q&A article on the war that we recently published to various people and one of those people, an old friend of Steve’s, happened to have an AI product he could set loose on it. This product, Notebook-LM, calls itself a helpmate for people, particularly writers. You feed what you write into it and then prompt Notebook-LM to change the tone of your writing, fix its style, or summarize it. You can ask it to do research for you. You can have it rewrite parts or even write the whole piece in the first place. You can ask it for criticism. You can ask it to create a podcast episode. Yes, you could feed it this essay, or anything, and ask for a podcast version. Down the road a way, no doubt we will be able to verbally converse with it. We will ask it not only for all the above, but to submit our work to publishers. Then the publishers will have their own AI receive and decide on it. So what? Where’s my confusion?
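To make the feed-then-prompt workflow concrete, here is a minimal hypothetical sketch in Python. The client class, its methods, and its canned output are illustrative inventions of mine, not Notebook-LM's actual interface.

```python
# Hypothetical sketch of the feed-then-prompt workflow described above.
# "AssistantClient" and its methods are invented for illustration; they
# are not Notebook-LM's real API.
from dataclasses import dataclass, field

@dataclass
class AssistantClient:
    """Toy stand-in for a document-grounded AI helpmate."""
    documents: list[str] = field(default_factory=list)

    def add_document(self, text: str) -> None:
        # Step 1: feed in what you wrote.
        self.documents.append(text)

    def prompt(self, instruction: str) -> str:
        # Step 2: ask for a summary, a tone change, criticism, a
        # podcast script, etc. A real service would call a model here.
        return f"[model output for {instruction!r} over {len(self.documents)} doc(s)]"

client = AssistantClient()
client.add_document("Q&A on the war ... (imagine ~6,000 words here)")
print(client.prompt("Summarize the main arguments in plain language"))
print(client.prompt("Rewrite the opening in a lighter tone"))
print(client.prompt("Draft a two-host podcast script covering these points"))
```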

A step back before two steps forward may help. AI has provoked two kinds of consternation from a great many people. The first fear is that AI would attain a capacity for very high-level independent activity called artificial general intelligence, where AGI would match or outshine humans in all manner of activities. The fear is that AGI would then escape human control and hurt, enslave, or annihilate us.

The second fear is that bad actors would create videos of people saying and doing things that they’d never said or done. Scammers and grifters would cheat people. Evil empires, like the U.S., would additionally spy or create biological weapons or guide tactical weapons right up our nostrils with it.

And the truth is that barring a technical or social impediment, these two types of fears are each valid. Running wild against humans may occur. Nefarious use already occurs and steadily escalates. The social possibility to abort these dangers is that societies get unbreachable control of AI and use that control to wisely and effectively limit AI uses. It's a long shot, but it's conceivable. The technical possibility to abort these dangers is that while AI's growth has been startling for a considerable period of time, maybe there's a lid on AI's capacity. Maybe there is no AGI.

You can think of AI as a ton of connected nodes where each node has attached numbers. Programmers increase the number of nodes by packing more in. They increase the effectiveness of the attached numbers by improving the quality and increasing the quantity of the information they train the AI on. So the lucky technical hope is that maybe the computing power needed to add more nodes will become too costly, or AI firms will just run out of new information for further training. If not that, then maybe AI's capabilities will soon stop increasing as its handlers add nodes or improve the training materials.
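To make that picture concrete, here is a minimal toy sketch in Python (my own illustration, not any actual AI system's code) of a tiny network whose "nodes" are neurons and whose "attached numbers" are weights nudged during training. Scaling up means many more nodes and vastly more training data.

```python
# A toy "network": each node's attached numbers are weights that
# training nudges until the outputs match the examples (here, OR).
import numpy as np

rng = np.random.default_rng(0)

# Four training examples of y = x1 OR x2.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

W1 = rng.normal(size=(2, 4))  # numbers attached to 4 hidden nodes
W2 = rng.normal(size=(4, 1))  # numbers attached to the output node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1)            # hidden nodes combine the inputs
    out = sigmoid(h @ W2).ravel()  # output node combines hidden nodes
    err = out - y                  # how wrong are we on the examples?

    # Nudge every attached number a little to shrink the error.
    grad_out = (err * out * (1 - out)).reshape(-1, 1)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))  # ~ [0, 1, 1, 1]
```

More nodes means bigger weight matrices; better training means more and cleaner rows of examples. The hoped-for lid is that one or the other stops paying off.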

An eye-blink ago, concerned computer scientists saw all that and proposed social guidelines naming AI innovations that should not be permitted without first developing a tremendous amount of preventive clarity and effective control. One such step was that AIs should not roam free on the Internet. AIs should not cross borders, should not link up one to the next, to the next, etc. But despite the warning, AIs now roam and link.

A second warning from founders in the field was that AIs shouldn’t prompt other AIs or access other programs. But that is now a built-in capability, not blocked but utilized. It seems the threat of AI going rogue still exists, but it doesn’t constrain the people working on AI. The nefarious bad actor worry about AI misuse also still exists, but there’s not much being done to curtail that either.

My own worries about AI are, nonetheless, mostly about its impacts when used by so-called good actors trying to do so-called good things. One form of my concern, which was voiced widely for a while but is now fading even though the danger is exactly what it was earlier, or worse, is the impact AGI would have on jobs and, as a result, on society's distribution of income and circumstances. The concern still exists, but the people who fund AI seek profits and thus work to silence whatever might interfere with profits. Their unimpeded effectiveness is pretty remarkable, given that one of the jobs to be decimated is programming. And yet I have another concern that I fear almost nobody voices.

As AI becomes more and more capable, I fear it will do more and more human-like things and will very significantly reduce what humans do that is human-like. And this brings me to the AI experience that Steve Shalom and I recently had.

Steve sent the Q&A, which is about 6,000 words, to a friend who had on his computer the capacity to work with Notebook-LM, an AI offering from Google at the cutting edge of certain capabilities. So this friend loaded our Q&A into his Notebook-LM, which is exactly what people using that AI are supposed to do. Writers, for example, are supposed to put in everything they have ever written—books, articles, letters, musings, notes, whatever it might be—to then be able to not only instantly access forgotten views, but also cull from them, ask questions of them, request new versions of them, and much more.

So, Steve’s friend prompted his Notebook-LM to create a podcast making the Q&A’s points. His AI helpmate received the Q&A and in about five minutes developed a lengthy podcast in which two people, one man and one woman, discussed the issues of the election in accord with the content of the Q&A.

And here's the thing: the voices were indistinguishable from human voices. They were AI voices, but you could not listen, or at least I could not listen, and say to myself, oh, that's an AI that's talking. The presentation was human-like. It had pauses. It had interruptions. It had humor. It had intonation. It had more of all that than I have. It was like when I used to do RevolutionZ episodes with a friend, Alexandria Shaner, and we would banter a bit even as we tried to address the issues we wanted to convey. So it was that Notebook-LM hatched two AI alter egos, two AI manifestations, and they bantered while they presented the Q&A content. Soon it will be human-looking robots doing not only audio but video too.

The thing that was mind-boggling wasn't only that it sounded so human but also that it repackaged the Q&A's every argument more accessibly than the original. It wasn't just the style and words and the bantering that were startling; it got the content right too, and it took five minutes to get the script and all else ready. If Steve and I, who actually wrote the Q&A, tried to turn it into a podcast in which the two of us bantered back and forth to amuse and entertain as well as to get across the information, it would not take us five minutes to get a script. More like five days or five weeks. And I'm not sure we could do it at all. Now imagine Notebook-LM Version 2, or, say, Version 10….

So what’s the problem? What has confused me? Many people would say that’s terrific. That’s fantastic. It’s a super helpmate. It’s a buddy that gets something done which you want to have done and it does it really effectively, really well, and incredibly fast. Great! More time to be me! Really?

What confuses me is that response. What confuses me is that the AI did things that make humans human, and it did them quickly and effectively, and it may soon be doing them better than humans. Think about writing songs or stories, or about painting pictures, or even about producing films. Think about AGI, much more powerful than Notebook-LM, creating daily, weekly, or monthly plans for you to enact, or keeping your or the country's finances, or going shopping for you. Think about it being your doctor, at your bedside with better bedside manners than your last doctor. AI can't yet do all that—I don't think—but it can do a lot, with more and indeed, barring a roadblock, all of it coming. And for me that raises a question.

If AGI arrives and does more and more, what do humans still do? How many humans still multiply numbers and add big piles of them by hand? Almost none. Calculators do it so well and so quickly that we use the calculator. Not to worry. The calculator has been what people say AGI will be—a welcome helpmate able to free us from some tedium (supposing the calculators haven't also unintentionally weakened our minds like social media has weakened our attention spans).

Go back in time. Consider the invention of cameras. Most artists, before cameras, tried to exactly embody appearances in paintings. After cameras, direct representation declined and more abstract, impressionist agendas emerged. Thus, in that case, as with calculators, the innovation was what people now say about AI. Alarming at first, it became a useful helpmate that even spurred new human innovation. But what if AGI does replication and also does abstraction and impressionism better than people do?

Or consider when computers became by far the best chess players in the world. And then the best chess teachers. And then the source of most chess originality and innovation. Will the only problem be an increased ease of humans cheating with AIs while playing? Or will the trajectory be more concerning when there is literally a human-like robot sitting there, moving the pieces vastly more effectively than any human can? Is this like the fact that tractors can lift vastly more weight than people, but so what, weightlifters still compete? So, machines calculate, remember, replicate, and refashion better than people. Big deal. Tractors dig, pull, and lift better than people. So?

Another example? Consider all the paraphernalia that is now used to create music, both vocal and instrumental, so that humans don't have to bother getting pitch right, or capturing the lyrics' emotions, or even writing lyrics or tunes in the first place, much less practicing for years on guitar, violin, or flute. Same for acting and movies. Will creators get steadily lazier about developing skills that they can request from AGIs? Will the next Leonard Bernstein stand in front of a symphony of AGIs? Will the next Leonard Bernstein be an AGI? Will the human side of these crafts, indeed of all crafts, evaporate?

Too dramatic? Okay, what about when AGI reads in place of us, expresses content for us, and then also writes letters, manuals, essays, reviews, and books in place of us—in time without our even prompting it to do so? That time is effectively already ten seconds away, even without AGI. And what about when our own private AGI plans our days and weeks for us, shops for us, calls and talks to friends for us, babysits our kids for us, and teaches them what little they still need or want to know?

What about when the AGI instantly answers all our questions and then starts providing us with answers without us even asking for them, so we then ask steadily less often because we get used to the AGI knowing everything and acting on “our knowledge,” so why do we need to even know—or act? Emotionally troubled about that? No problem. Consult your AI helpmate wearing “her” therapist hat.

If all that unfolds, will what's happening be that the AGI is a helpful aid, and with more free time we're becoming more human? Or will what's happening be that the AGI is, at our request and to our applause, taking over ever more activities that make us human, while we become more machine-like?

So what's my confusion? It is that I wonder why other people don't have my concern. Not the concern that AGI goes rogue. Not the concern that AGI is intentionally misused. Not even the concern that AGI has unintended consequences, like generating unemployment—all of which concerns are warranted and voiced—but my concern that AGI has attributes which humans will seek out and celebrate even as using those attributes infantilizes us.

To get still more graphic about it, will AGI be a bit like clickbait or heroin or any other drug that's addicting? Is human infantilization an inexorable process? Will we use our “freed” time to flourish or veg out like in an opium den? Will AGI capabilities keep expanding, or will there be a point of diminishing returns for adding nodes and providing better training? Imagine when AGIs build new AGIs. So have I got this all wrong?

If people want to explain to me what I’m missing about clickbait and AI that will curtail my confusions, please do. I would like to be edified. For one thing, I would love it if your corrections would shine a brighter, happier light on immediate circumstances than the darkness that currently inhabits my own view of these trends.

And here's how weird the AI thing already is. I'm working on turning my recent RevolutionZ oral history podcast episodes into a book. I'm doing all sorts of editing, adding, and deleting to make it a novel of a strange sort, an oral history of a future revolution. When I listened to the AI-created podcast based upon the Q&A, I thought to myself, yikes, self, should I feed the 180,000 words of the oral history into this thing? Should I ask it to do various work using its “talents” that I don't have, but that maybe it now has—or if not now, may have soon? For example, I could prompt it to give the many interviewees each their own unique voice. Or maybe to make a movie from it, with AI actors, sets, and all. And some will think, great, do it. Just do it. We want to read the result. We want to see the result. In fact, you boob, why didn't you just give it a prompt to write the thing in the first place? But others, including myself, may think, oh hell, is that really where we're headed?

So to ease a confusion you may now have, why did I write this somewhat vague, quite disturbing, somewhat confused essay? I partly hoped to mirror what I think is probably going on in many people's minds. Do you feel similar confusions? If so, you are not alone. Many people are upset at the clickbait dynamic, which is getting to the point where lying is not only normal, it's so expected and so out front that if people don't do it, they're deemed naive, foolish wimps. Lying is becoming the way to deal with daily life, not just to hide the fact that you've been nasty, or you've stolen stuff, or you've, I don't know, done something untoward. Lie, or ghost, I guess. How long until parents aren't good parents unless they teach their kids to lie about, or avoid, almost everything? But wait, there's rogue, nefarious, and human-infantilizing AI—to teach them for us.

I am sorry for all the downers, but of course bad outcomes are not inevitable. We can win better. Will we?

Finally, the planet warns us of impending gargantuan calamity by showing us immediate large calamities. Do we put our eyes in our pockets? Do we soundproof our ears? Do we embrace a superficially calm and quiet but actually blood-pressure-exploding desperation? Or perhaps instead do we lash out at whoever is handy, especially if they won't or even can't hit back? Do we studiously avoid taking real action? Are we frogs boiling in a big pot even though we can hop out and turn off the stove?

Clickbait, AI and Global Boiling. They have one source. The institutions around us. There is no alternative to conceiving, sharing, seeking, and attaining an alternative. What we do about the institutions that pervert us, infantilize us, and cook us is our choice. We better choose wisely.




Michael Albert

Michael Albert's radicalization occurred during the 1960s. His political involvements, starting then and continuing to the present, have ranged from local, regional, and national organizing projects and campaigns to co-founding South End Press, Z Magazine, the Z Media Institute, and ZNet, and to working on all these projects, writing for various publications and publishers, giving public talks, etc. His personal interests, outside the political realm, focus on general science reading (with an emphasis on physics, math, and matters of evolution and cognitive science), computers, mystery and thriller/adventure novels, sea kayaking, and the more sedentary but no less challenging game of Go. Albert is the author of 21 books which include: No Bosses: A New Economy for a Better World; Fanfare for the Future; Remembering Tomorrow; Realizing Hope; and Parecon: Life After Capitalism. Michael is currently host of the podcast Revolution Z and is a Friend of ZNetwork.
