Thursday, December 08, 2022

ChatGPT Is Astonishing, But Human Jobs Are Safe (For Now)

Story by Jackson Ryan


If you've spent any time browsing social media feeds over the last week (who hasn't), you've probably heard about ChatGPT. The mesmerizing and mindblowing chatbot, developed by OpenAI and released last week, is a nifty little AI that can spit out highly convincing, human-sounding text in response to user-generated prompts.


Should you worry about ChatGPT coming for your job? (Getty Images / CNET)

You might, for example, ask it to write a plot summary for Knives Out, except Benoit Blanc is actually Foghorn Leghorn (just me?) and it will spit out something relatively coherent. It can also help fix broken code and write essays so convincing some academics claim they'd score an A on college exams.

Its responses have astounded to such a degree that some have even proclaimed "Google is dead." Then there are those who think it goes beyond Google: Human jobs are in trouble, too.

The Guardian, for instance, proclaimed "professors, programmers and journalists could all be out of a job in just a few years." Another take, from the Australian Computer Society's flagship publication Information Age, suggested the same. The Telegraph announced the bot could "do your job better than you."


I'd say hold your digital horses. ChatGPT is not going to put you out of a job.

The story published in Information Age is a great example of why. The publication used ChatGPT to write an entire story about ChatGPT and posted the finished product with a short introduction. The piece is about as simple as you can ask for — ChatGPT provides a basic recounting of the facts of its existence — but in "writing" the piece, ChatGPT also generated fake quotes and attributed them to an OpenAI researcher, John Smith (who is real, apparently).

This underscores the key failing of a large language model like ChatGPT: It does not know how to separate fact from fiction. It cannot be trained to do so. It is a word organizer, an AI programmed in such a way that it can write coherent sentences.

That's an important distinction. It essentially prevents ChatGPT (or the underlying large language model it's built on, OpenAI's GPT-3.5) from writing news or speaking on current affairs. (It also isn't trained on up-to-the-minute data, but that's another matter.) It definitely can't do the job of a journalist. To say so diminishes the act of journalism itself.


ChatGPT will not be heading out into the world to talk to Ukrainians about the Russian invasion. It will not be able to read the emotion on Kylian Mbappe's face when he wins the World Cup. It certainly isn't jumping on a ship to Antarctica to write about its experiences. It can't be surprised by a quote, completely out of character, that unwittingly reveals a secret about a CEO's business. Hell, it would have no hope of covering Musk's takeover of Twitter — it is no arbiter of truth and it just can't read the room.

It's interesting to see how positive the response to ChatGPT has been. It's absolutely worthy of praise, and the documented improvements OpenAI has made over its previous model, GPT-3, are interesting in their own right. But the major reason it's really captured attention is because it's so readily accessible.

GPT-3 didn't have a slick and easy-to-use online framework and, while publications like the Guardian used it to generate articles, it only made a brief splash online. Developing a chatbot you can interact with, and share screenshots from, completely changes the way the product is used and talked about. That's also contributed to the bot being a little overhyped.

Strangely enough, this is the second AI to cause a stir in recent weeks.

On Nov. 15, Meta AI released its own artificial intelligence, dubbed "Galactica." Like ChatGPT, it's a large language model and was hyped as a way to "organize science." Essentially, it could generate answers to questions like "what is quantum gravity?" or explain math equations. Much like ChatGPT, you drop in a question and it provides an answer.

Galactica was trained on over 48 million scientific papers and abstracts and provided convincing-sounding answers. The development team hyped the bot as a way to organize knowledge, noting it could generate Wikipedia articles and scientific papers.

Problem was, it was mostly pumping out garbage — nonsensical text that sounded official and even included references to scientific literature, though those were made up. The sheer volume of misinformation it was producing in response to simple prompts, and how insidious that misinformation was, bugged academics and AI researchers, who let their thoughts fly on Twitter. The backlash saw the project shut down by the Meta AI team after two days.

ChatGPT doesn't seem like it's headed in the same direction. It feels like a "smarter" version of Galactica with a much stronger filter. Where Galactica was offering up ways to build a bomb, for instance, ChatGPT weeds out requests that are discriminatory, offensive or inappropriate. ChatGPT has also been trained to be conversational and admit to its mistakes.

And yet, ChatGPT is still limited the same way all large language models are. Its purpose is to construct sentences or songs or paragraphs or essays by studying billions (trillions?) of words that exist across the web. It then puts those words together, predicting the best way to configure them.
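The "predicting the best way to configure them" idea can be illustrated with a deliberately tiny sketch. This is not how GPT-3.5 actually works — real models use neural networks trained on billions of words — but a toy bigram counter over a made-up corpus shows the core move: pick the word most often seen following the current one, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy corpus (illustrative only; real models train on billions of words).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" — it follows "the" more often than "mat" or "fish"
```

The model happily emits fluent-looking sequences, but nothing in it checks facts — which is exactly the limitation the paragraph above describes.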

In doing so, it writes some pretty convincing essay answers, sure. It also writes garbage, just like Galactica. How can you learn from an AI that might not be providing a truthful answer? And how can you know the AI is not truthful, especially if it sounds convincing? The OpenAI team acknowledges the bot's shortcomings but these are outstanding questions that limit the capabilities of an AI like this.

So, even though the tiny chatbot is entertaining, as evidenced by this wonderful exchange about a guy who brags about pumpkins, it's hard to see how this AI would put professors, programmers or journalists out of a job. Instead, in the short term, ChatGPT and its underlying model will likely complement what journalists, professors and programmers do. It's a tool, not a replacement. Just like journalists use AI to transcribe long interviews, they might use a ChatGPT-style AI to, say, generate headline ideas.

Because that's exactly what we did with this piece. The headline you see on this article was, in part, suggested by ChatGPT. But its suggestions weren't perfect. It suggested using terms like "Human Employment" and "Human Workers." Those felt too official, too… robotic. Emotionless. So, we tweaked its suggestions until we got what you see above.

Does that mean a future iteration of ChatGPT or its underlying AI model (which may release as early as next year) won't come along and make us irrelevant?

Maybe! For now, I'm feeling like my job as a journalist is pretty secure.


