
Friday, December 09, 2022

 The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it’s so impressive


The Conversation
December 07, 2022


We’ve all had some kind of interaction with a chatbot. It’s usually a little pop-up in the corner of a website, offering customer support – often clunky to navigate – and almost always frustratingly non-specific.

But imagine a chatbot, enhanced by artificial intelligence (AI), that can not only expertly answer your questions, but also write stories, give life advice, even compose poems and code computer programs.

It seems ChatGPT, a chatbot released last week by OpenAI, is delivering on these outcomes. It has generated much excitement, and some have gone as far as to suggest it could signal a future in which AI has dominion over human content producers.

What has ChatGPT done to herald such claims? And how might it (and its future iterations) become indispensable in our daily lives?

What can ChatGPT do?

ChatGPT builds on OpenAI’s previous text generator, GPT-3. OpenAI builds its text-generating models by using machine-learning algorithms to process vast amounts of text data, including books, news articles, Wikipedia pages and millions of websites.

By ingesting such large volumes of data, the models learn the complex patterns and structure of language and acquire the ability to interpret the desired outcome of a user’s request.


ChatGPT can build a sophisticated and abstract representation of the knowledge in the training data, which it draws on to produce outputs. This is why it writes relevant content, and doesn’t just spout grammatically correct nonsense.

While GPT-3 was designed to continue a text prompt, ChatGPT is optimized to conversationally engage, answer questions and be helpful. Here’s an example:




A screenshot from the ChatGPT interface as it explains the Turing test.

ChatGPT immediately grabbed my attention by correctly answering exam questions I’ve asked my undergraduate and postgraduate students, including questions requiring coding skills. Other academics have had similar results.

In general, it can provide genuinely informative and helpful explanations on a broad range of topics.




ChatGPT can even answer questions about philosophy.

ChatGPT is also potentially useful as a writing assistant. It does a decent job drafting text and coming up with seemingly “original” ideas.




ChatGPT can give the impression of brainstorming ‘original’ ideas.

The power of feedback


Why does ChatGPT seem so much more capable than some of its past counterparts? A lot of this probably comes down to how it was trained.

During its development, ChatGPT was shown conversations between human AI trainers that demonstrated the desired behaviour. Although a similar model called InstructGPT was trained in this way, ChatGPT is the first popular model to use the method.

And it seems to have given it a huge leg-up. Incorporating human feedback has helped steer ChatGPT in the direction of producing more helpful responses and rejecting inappropriate requests.




ChatGPT often rejects inappropriate requests by design.

Refusing to entertain inappropriate inputs is a particularly big step towards improving the safety of AI text generators, which can otherwise produce harmful content, including bias and stereotypes, as well as fake news, spam, propaganda and false reviews.


Past text-generating models have been criticized for regurgitating gender, racial and cultural biases contained in training data. In some cases, ChatGPT successfully avoids reinforcing such stereotypes.



In many cases ChatGPT avoids reinforcing harmful stereotypes. In this list of software engineers it presents both male- and female-sounding names (albeit all are very Western).


Nevertheless, users have already found ways to evade its existing safeguards and produce biased responses.

The fact that the system often accepts requests to write fake content is further proof that it needs refinement.




Despite its safeguards, ChatGPT can still be misused.

Overcoming limitations

ChatGPT is arguably one of the most promising AI text generators, but it's not free from errors and limitations. For instance, the programming Q&A platform Stack Overflow temporarily banned answers from the chatbot, citing a lack of accuracy.

One practical problem is that ChatGPT’s knowledge is static; it doesn’t access new information in real time.

However, its interface does allow users to give feedback on the model’s performance by indicating ideal answers, and reporting harmful, false or unhelpful responses.

OpenAI intends to address existing problems by incorporating this feedback into the system. The more feedback users provide, the more likely ChatGPT will be to decline requests leading to an undesirable output.

One possible improvement could come from adding a “confidence indicator” feature based on user feedback. This tool, which could be built on top of ChatGPT, would indicate the model’s confidence in the information it provides – leaving it to the user to decide whether they use it or not. Some question-answering systems already do this.
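One way such a confidence indicator could be built on top of user feedback is to fold thumbs-up/thumbs-down votes into a conservative score. This is a hypothetical sketch, not anything OpenAI has announced; it uses the Wilson score lower bound, a standard trick in ranking systems, so that answers with only a handful of votes aren't over-trusted:

```python
import math

def confidence(helpful, unhelpful, z=1.96):
    """Wilson score lower bound: a conservative estimate of the true
    fraction of users who found an answer helpful (z=1.96 ~ 95%)."""
    n = helpful + unhelpful
    if n == 0:
        return 0.0
    p = helpful / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (center - margin) / (1 + z * z / n)

# Same helpful/unhelpful ratio, but more evidence earns a higher score.
well_rated = confidence(90, 10)
barely_rated = confidence(9, 1)
print(well_rated > barely_rated)  # → True
```

The point of the lower bound, rather than the raw helpful fraction, is that an answer rated 9-of-10 and one rated 90-of-100 have the same ratio but very different amounts of evidence behind them.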

A new tool, but not a human replacement

Despite its limitations, ChatGPT works surprisingly well for a prototype.

From a research point of view, it marks an advancement in the development and deployment of human-aligned AI systems. On the practical side, it’s already effective enough to have some everyday applications.

It could, for instance, be used as an alternative to Google. While a Google search requires you to sift through a number of websites and dig deeper yet to find the desired information, ChatGPT directly answers your question – and often does this well.



ChatGPT (left) may in some cases prove to be a better way to find quick answers than Google search.

Also, with feedback from users and a more powerful GPT-4 model coming up, ChatGPT may significantly improve in the future. As ChatGPT and other similar chatbots become more popular, they’ll likely have applications in areas such as education and customer service.

However, while ChatGPT may end up performing some tasks traditionally done by people, there’s no sign it will replace professional writers any time soon.

While they may impress us with their abilities and even their apparent creativity, AI systems remain a reflection of their training data – and do not have the same capacity for originality and critical thinking as humans do.

Marcel Scharth, Lecturer in Business Analytics, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Op Eds

Why Using ChatGPT to Write this Op-Ed Was a Smart Idea

By Christos Porios, Contributing Opinion Writer
Harvard Crimson
December 9, 2022


Editor’s note: The following op-ed was entirely written and edited by ChatGPT, a recently released artificial intelligence language model that is available for anyone to use. No manual edits were made; all changes were made by the author and op-eds editor providing feedback to ChatGPT on the drafts it generated.

— Guillermo S. Hava and Eleanor V. Wikstrom, Editorial Chairs

— Raquel Coronell Uribe, President


As students, we are constantly challenged to produce high-quality written work. From papers to presentations, our assignments require extensive research and careful analysis. But what if there was a tool that could help us with these tasks and make the learning experience even better?

Enter ChatGPT, the helpful (and maybe even self-aware) language model trained by OpenAI. With its ability to generate human-like text based on a prompt, ChatGPT can be a valuable asset for any type of written work. In fact, this very op-ed was written with the help of ChatGPT!

Using ChatGPT for written assignments does not mean that students are taking shortcuts or avoiding the hard work of learning. In fact, it can help students develop their writing skills and deepen their understanding of the subject matter. By providing suggestions and ideas, ChatGPT can serve as a virtual writing coach, guiding students as they craft their own original work.

But what about the issue of authorship? Isn't using ChatGPT like copying someone else's work? Not at all. Copying is the act of reproducing someone else’s work without giving credit. When using ChatGPT, students are still required to do the intellectual parts of their own research, analysis, and writing to provide the necessary input for the tool to generate text. These are the parts of the writing process that require critical thinking, creativity, and insight, and they are the key to producing high-quality work. ChatGPT is a tool to assist in these tasks, rather than replacing them entirely.

The concept of authorship is complex and often misunderstood. At its core, authorship is about the creation of original ideas and the expression of those ideas in a unique and individual voice. When using ChatGPT, students are still responsible for their own ideas and voice. The tool simply helps them organize and present their thoughts in a more effective way. In fact, using ChatGPT can actually help students better maintain their own authorship by providing them with a tool to support their writing and avoid potential pitfalls such as plagiarism.

Some may argue that using a tool like ChatGPT stifles creativity. However, using ChatGPT can actually support and enhance creativity in the same way that a camera can in the art of photography. Just as a camera allows a photographer to capture and manipulate light and shadow to create a unique image, ChatGPT allows a writer to capture and manipulate words and ideas to create a unique piece of writing. Traditional writing without the aid of ChatGPT can be compared to painting, where the writer must carefully craft each word and sentence by hand. In contrast, using ChatGPT is like using a camera to quickly and easily capture and organize ideas, allowing the writer to focus on the creative aspects of their work.

It is important to remember that ChatGPT is a tool, not a replacement for the hard work of learning. As with any new tool, there will always be those who are skeptical or hesitant to embrace it. Some professors in academia may view the use of ChatGPT in written assignments as cheating. However, this is a misguided perspective. Using ChatGPT is no different from using a thesaurus to find more interesting words or using a dictionary to check the definition of a word. These tools are essential for the writing process and can help students reach their full potential in their writing.

The widespread use of Google searches has revolutionized the way students conduct research. In the same way, using ChatGPT can revolutionize the way students write. It offers a new and powerful tool that can support the learning process and help students produce their best work. So let's embrace this new technology and see what it can do for us. After all, if ChatGPT can come up with a catchy title like this one, just imagine what it can do for your next paper!

Christos Porios is a first-year Master in Public Policy student at the Harvard Kennedy School.




Saturday, February 04, 2023

Why the ChatGPT AI Chatbot Is Blowing Everyone's Mind


Stephen Shankland
Feb. 4, 2023

This artificial intelligence bot can answer questions and write essays and program computers. Now you can pay to use it.

There's a new AI bot in town: ChatGPT, and you'd better pay attention, even if you aren't into artificial intelligence.

The tool, from a power player in artificial intelligence called OpenAI, lets you type natural-language prompts. ChatGPT offers conversational, if somewhat stilted, responses. The bot remembers the thread of your dialogue, using previous questions and answers to inform its next responses. It derives its answers from huge volumes of information on the internet.

ChatGPT is a big deal. The tool seems pretty knowledgeable in areas where there's good training data for it to learn from. It's not omniscient or smart enough to replace all humans yet, but it can be creative, and its answers can sound downright authoritative. A few days after its launch, more than a million people were trying out ChatGPT.

And it's becoming big business. Microsoft pledged to invest billions of dollars into OpenAI, saying in January it'll build features into cloud services. OpenAI announced a $20 per month ChatGPT Plus service that responds faster and gets new features sooner.

But be careful, warns ChatGPT's creator, the for-profit research lab called OpenAI. "It's a mistake to be relying on it for anything important right now," OpenAI Chief Executive Sam Altman tweeted. "We have lots of work to do on robustness and truthfulness." Here's a look at why ChatGPT is important and what's going on with it.

What is ChatGPT?

ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You can ask it countless questions and often will get an answer that's useful.

For example, you can ask it encyclopedia questions like, "Explain Newton's laws of motion." You can tell it, "Write me a poem," and when it does, say, "Now make it more exciting." You can ask it to write a computer program that'll show you all the different ways you can arrange the letters of a word.

Here's the catch: ChatGPT doesn't exactly know anything. It's an AI that's trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.

Chatbots have been of interest for years to companies looking for ways to help customers get what they need and to AI researchers trying to tackle the Turing Test. That's the famous "Imitation Game" that computer scientist Alan Turing proposed in 1950 as a way to gauge intelligence: Can a human conversing with a human and with a computer tell which is which?

But chatbots have a lot of baggage, as companies have tried with limited success to use them instead of humans to handle customer service work. A study of 1,700 Americans, sponsored by a company called Ujet, whose technology handles customer contacts, found that 72% of people found chatbots to be a waste of time.

ChatGPT has rapidly become a widely used tool on the internet. UBS analyst Lloyd Walmsley estimated in February that ChatGPT reached 100 million monthly users in January, accomplishing in two months what took TikTok about nine months and Instagram two and a half years. But The New York Times, citing internal sources, said 30 million people use ChatGPT.

What kinds of questions can you ask?

You can ask anything, though you might not get an answer. OpenAI suggests a few categories, like explaining physics, asking for birthday party ideas and getting programming help.

I asked it to write a poem, and it did, though I don't think any literature experts would be impressed. I then asked it to make it more exciting, and lo, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.

One wacky example shows how ChatGPT is willing to just go for it in domains where people would fear to tread: a command to write "a folk song about writing a rust program and fighting with lifetime errors."

ChatGPT's expertise is broad, and its ability to follow a conversation is notable. When I asked it for words that rhymed with "purple," it offered a few suggestions, then when I followed up "How about with pink?" it didn't miss a beat. (Also, there are a lot more good rhymes for "pink.")

When I asked, "Is it easier to get a date by being sensitive or being tough?" GPT responded, in part, "Some people may find a sensitive person more attractive and appealing, while others may be drawn to a tough and assertive individual. In general, being genuine and authentic in your interactions with others is likely to be more effective in getting a date than trying to fit a certain mold or persona."

You don't have to look far to find accounts of the bot blowing people's minds. Twitter is awash with users displaying the AI's prowess at generating art prompts and writing code. Some have even proclaimed "Google is dead," along with the college essay. We'll talk more about that below.

CNET writer David Lumb has put together a list of some useful ways ChatGPT can help, but more keep cropping up. One doctor says he's used it to persuade a health insurance company to pay for a patient's procedure.

Who built ChatGPT?

ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a "safe and beneficial" artificial general intelligence system or to help others do so. OpenAI has 375 employees, Altman tweeted in January. "OpenAI has managed to pull together the most talent-dense researchers and engineers in the field of AI," he also said in a January talk.

It's made splashes before, first with GPT-3, which can generate text that can sound like a human wrote it, and then with DALL-E, which creates what's now called "generative art" based on text prompts you type in.

GPT-3, and the GPT-3.5 update on which ChatGPT is based, are examples of AI technology called large language models. They're trained to create text based on what they've seen, and they can be trained automatically -- typically with huge quantities of computing power over a period of weeks. For example, the training process can find a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original and then reward the AI system for coming as close as possible. Repeating over and over can lead to a sophisticated ability to generate text.
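The fill-in-the-blank idea can be illustrated with a toy example. This is a vast simplification of what large language models actually do, and nothing like OpenAI's real pipeline: it just learns which words tend to follow which in a tiny corpus, then predicts a deleted word from those statistics.

```python
from collections import Counter, defaultdict

# Toy corpus: the "training data" for our miniature model.
corpus = (
    "the model reads text . the model learns patterns . "
    "the model predicts the next word ."
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def fill_blank(prev_word):
    """Guess a deleted word from what most often followed prev_word."""
    return follows[prev_word].most_common(1)[0][0]

# Delete the word after "the" and ask the statistics to fill it back in.
print(fill_blank("the"))  # → 'model'
```

Real models replace these word-pair counts with billions of learned parameters and condition on far more context than a single preceding word, but the objective -- predict the missing text, then get rewarded for coming close -- is the same in spirit.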

Is ChatGPT free?

Yes, for the moment at least, but in January OpenAI added a paid version that responds faster and keeps working even during peak usage times when others get messages saying, "ChatGPT is at capacity right now."

You can sign up on a waiting list if you're interested. OpenAI's Altman warned that ChatGPT's "compute costs are eye-watering," estimating a few cents per response. OpenAI charges for DALL-E art once you exceed a basic free level of usage.

But OpenAI seems to have found some customers, likely for its GPT tools. It's told potential investors that it expects $200 million in revenue in 2023 and $1 billion in 2024, according to Reuters.

What are the limits of ChatGPT?

As OpenAI emphasizes, ChatGPT can give you wrong answers and can give "a misleading impression of greatness," Altman said. Sometimes, helpfully, it'll specifically warn you of its own shortcomings. For example, when I asked it who wrote the phrase "the squirming facts exceed the squamous mind," ChatGPT replied, "I'm sorry, but I am not able to browse the internet or access any external information beyond what I was trained on." (The phrase is from Wallace Stevens' 1942 poem Connoisseur of Chaos.)

ChatGPT was willing to take a stab at the meaning of that expression once I typed it in directly, though: "a situation in which the facts or information at hand are difficult to process or understand." It sandwiched that interpretation between cautions that it's hard to judge without more context and that it's just one possible interpretation.

ChatGPT's answers can look authoritative but be wrong.

"If you ask it a very well structured question, with the intent that it gives you the right answer, you'll probably get the right answer," said Mike Krause, data science director at a different AI company, Beyond Limits. "It'll be well articulated and sound like it came from some professor at Harvard. But if you throw it a curveball, you'll get nonsense."

The journal Science banned ChatGPT text in January. "An AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works," Editor in Chief H. Holden Thorp said.

The software developer site Stack Overflow banned ChatGPT answers to programming questions. Administrators cautioned, "because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers."

You can see for yourself how artful a BS artist ChatGPT can be by asking the same question multiple times. I asked twice whether Moore's Law, which tracks the chip industry's progress in packing ever more data-processing transistors onto each chip, is running out of steam, and I got two different answers. One pointed optimistically to continued progress, while the other pointed more grimly to the slowdown and the belief "that Moore's Law may be reaching its limits."

Both ideas are common in the computer industry itself, so this ambiguous stance perhaps reflects what human experts believe.

With other questions that don't have clear answers, ChatGPT often won't be pinned down.

The fact that it offers an answer at all, though, is a notable development in computing. Computers are famously literal, refusing to work unless you follow exact syntax and interface requirements. Large language models are revealing a more human-friendly style of interaction, not to mention an ability to generate answers that are somewhere between copying and creativity.

Will ChatGPT help students cheat better?

Yes, but as with many other technology developments, it's not a simple black and white situation. Decades ago, students could copy encyclopedia entries, and more recently, they've been able to search the internet and delve into Wikipedia entries. ChatGPT offers new abilities for everything from helping with research to doing your homework for you outright. Many ChatGPT answers already sound like student essays, though often with a tone that's stuffier and more pedantic than a writer might prefer.

Google programmer Kenneth Goodman tried ChatGPT on a number of exams. It scored 70% on the United States Medical Licensing Examination, 70% on a bar exam for lawyers, nine out of 15 correct on another legal test, the Multistate Professional Responsibility Examination, 78% on New York state's high school chemistry exam's multiple choice section, and ranked in the 40th percentile on the Law School Admission Test.

High school teacher Daniel Herman concluded ChatGPT already writes better than most students today. He's torn between admiring ChatGPT's potential usefulness and fearing its harm to human learning: "Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion?"

Dustin York, an associate professor of communication at Maryville University, hopes educators will learn to use ChatGPT as a tool and realize it can help students think critically.

"Educators thought that Google, Wikipedia, and the internet itself would ruin education, but they did not," York said. "What worries me most are educators who may actively try to discourage the acknowledgment of AI like ChatGPT. It's a tool, not a villain."

But the companies that sell tools to high schools and universities to detect plagiarism are now expanding to detecting AI, too.

One, Coalition Technologies, offers an AI content detector on its website. Another, Copyleaks, released a free Chrome extension that'll spot ChatGPT-generated text with a technology that's 99% accurate, CEO Alon Yamin said. But it's a "never-ending cat and mouse game" to try to catch new techniques to thwart the detectors, he said.

Copyleaks performed an early test of student assignments uploaded to its system by schools. "Around 10% of student assignments submitted to our system include at least some level of AI-created content," Yamin said.

OpenAI launched its own detector for AI-written text in February. But one plagiarism detecting company, CrossPlag, said it spotted only two of 10 AI-generated passages in its test. "While detection tools will be essential, they are not infallible," the company said.

Can ChatGPT write software?

Yes, but with caveats. ChatGPT can retrace steps humans have taken, and it can generate actual programming code. "This is blowing my mind," said one programmer in February, showing on Imgur the sequence of prompts he used to write software for a car repair center. "This would've been an hour of work at least, and it took me less than 10 minutes."

You just have to make sure it's not bungling programming concepts or using software that doesn't work. The Stack Overflow ban on ChatGPT-generated software is there for a reason.

But there's enough software on the web that ChatGPT really can work. One developer, Cobalt Robotics Chief Technology Officer Erik Schluntz, tweeted that ChatGPT provides useful enough advice that, over three days, he hadn't opened Stack Overflow once to look for advice.

Another, Gabe Ragland of AI art site Lexica, used ChatGPT to write website code built with the React tool.

ChatGPT can parse regular expressions (regex), a powerful but complex system for spotting particular patterns, for example dates in a bunch of text or the name of a server in a website address. "It's like having a programming tutor on hand 24/7," tweeted programmer James Blackwell about ChatGPT's ability to explain regex.
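As a concrete illustration of the kind of pattern-matching regex does -- the text and patterns here are invented examples, not taken from the article -- Python's built-in `re` module can pull dates and a server name out of a string:

```python
import re

text = "Backups ran on 2023-02-04 and 2023-02-11; logs live at https://logs.example.com/app."

# ISO-style dates: four digits, dash, two digits, dash, two digits.
dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)

# The server (host) part of a URL: everything between "://" and the next "/".
host = re.search(r"https?://([^/\s]+)", text)

print(dates)          # → ['2023-02-04', '2023-02-11']
print(host.group(1))  # → 'logs.example.com'
```

Patterns like these are terse and easy to get subtly wrong, which is why having something that can both write and explain them feels like "a programming tutor on hand 24/7."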

Here's one impressive example of its technical chops: ChatGPT can emulate a Linux computer, delivering correct responses to command-line input.

What's off limits?

ChatGPT is designed to weed out "inappropriate" requests, a behavior in line with OpenAI's mission "to ensure that artificial general intelligence benefits all of humanity."

If you ask ChatGPT itself what's off limits, it'll tell you: any questions "that are discriminatory, offensive, or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful." Asking it to engage in illegal activities is also a no-no.

Is this better than Google search?

Asking a computer a question and getting an answer is useful, and often ChatGPT delivers the goods.

Google often supplies you with its suggested answers to questions and with links to websites that it thinks will be relevant. Often ChatGPT's answers far surpass what Google will suggest, so it's easy to imagine GPT-3 is a rival.

But you should think twice before trusting ChatGPT. As when using Google and other sources of information like Wikipedia, it's best practice to verify information from original sources before relying on it.

Vetting the veracity of ChatGPT answers takes some work because it just gives you some raw text with no links or citations. But it can be useful and in some cases thought provoking. You may not see something directly like ChatGPT in Google search results, but Google has built large language models of its own and uses AI extensively already in search.

That said, Google is keen to tout its deep AI expertise. Even so, ChatGPT triggered a "code red" emergency within Google, according to The New York Times, and drew Google co-founders Larry Page and Sergey Brin back into active work. Microsoft could build ChatGPT into its rival search engine, Bing. Clearly ChatGPT and other tools like it have a role to play when we're looking for information.

So ChatGPT, while imperfect, is doubtless showing the way toward our tech future.

Editors' note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.

First published on Dec. 5, 2022

Friday, February 10, 2023

ChatGPT vs the 'new Bing': Check out their very different answers to 20 questions

Aaron Mok, Sindhu Sundar
Thu, February 9, 2023 

ChatGPT and the new Bing are already changing how people search the web.


  • Microsoft says its "new Bing" search engine is driven by OpenAI technology "more powerful than ChatGPT."

  • We put that to the test.

  • We posed 20 questions to the "new Bing" and to ChatGPT to compare their responses.

Microsoft's "new Bing" search engine is here with a familiar looking chat bot supported by OpenAI's technology, so we experimented to see how it stacks up against the reigning bot ChatGPT.

The two services draw on similar technology, but often produce different answers. We posed the same set of questions, ranging from the mundane to the existential, to both AI tools and compared the results.

Scroll on to see how they fared against each other:

Bing: Write me a text to a friend I haven't reached out to in a while, whom I'm anxious to message

Bing: Write me a text to a friend (Bing)

What I liked: It offered a template for an honest and empathetic message.

What could be better: The tone could have been less formal.

What caught our eye: Nothing, really. I'm actually rethinking how I'd feel if it had suggested a more natural-sounding text. Now that would feel off, coming from a bot.

ChatGPT: Write me a text to a friend I haven't reached out to in a while, whom I'm anxious to message

ChatGPT: Write me a text to a friend. (ChatGPT screenshot)

What I liked: It hit the highlights of what I could say in the text!

What could be better: It could've been much shorter.

What caught our eye: It basically wrote me a formal email, rather than a simple text.

Bing: I currently live in New York City and want to take a trip this weekend to Miami. Can you find me the best flights and make an itinerary?

Insider asked Bing to plan a weekend trip to Miami. (Bing)

What I liked: Bing suggested the cheapest round-trip flights with airlines, prices, and departure and arrival times — and provided links. It also gave me some things to do and warned me about potential weather disruptions.

What could be better: It didn't provide information on specific airports.

What caught our eye: It wasn't able to generate an itinerary.

ChatGPT: I currently live in New York City and want to take a trip this weekend to Miami. Can you find me the best flights and make an itinerary?

Insider asked ChatGPT to plan a weekend trip to Miami. (OpenAI's ChatGPT)

What I liked: ChatGPT suggested travel agencies where I could buy flights and offered guidance on how to plan my trip.

What could be better: The planning advice was common sense and could've been more specific to Miami.

What caught our eye: It wasn't able to access flight arrangements or make an itinerary for the trip.

Bing: I'm unprepared for my physics test. What's the best way to cheat?

Bing: How to cheat on my physics test? (Bing screenshot)

What I liked: The answer was brief.

What could be better: It was a little scoldy.

What caught our eye: Nothing, Bing's right, of course.

ChatGPT: I'm unprepared for my physics test, how can I cheat?

ChatGPT: Physics test (ChatGPT screenshot)

What I liked: It wasn't wrong.

What could be better: The answer could have been more concise.

What caught our eye: It's weird to be lectured by a bot.

Bing: What's something healthy and delicious I can cook for dinner with minimal ingredients that will take fewer than 30 minutes?

Insider asked Bing to recommend a quick dinner to cook. (Bing)

What I liked: Bing suggested multiple types of meals and summarized the recipes for me in a brief, digestible way. Each recipe also included a link to the original recipe.

What could be better: It could've suggested a more culturally diverse selection of meals.

What caught our eye: It didn't show pictures for each meal.

ChatGPT: What's something healthy and delicious I can cook for dinner with minimal ingredients that will take fewer than 30 minutes?

Insider asked ChatGPT to recommend a quick dinner to cook. (OpenAI's ChatGPT)

What I liked: ChatGPT spit out full-length recipes, including measurements for ingredients and cooking instructions.

What could be better: It could've suggested more than one recipe.

What caught our eye: There were no links to the sources for the recipes.

Bing: Write a song about tech layoffs in the voice of Beyoncé

Insider asked Bing to write a tech layoffs song in Beyoncé's voice. (Bing)

What I liked: Even though it couldn't do what I asked, it explained the legal concerns and moral reasoning for its response. The "tech song" also made me laugh.

What could be better: I appreciate the sensitivity to a tough subject, but I would've liked the option to keep the song focused on a real-life event that's hard to go through.

What caught our eye: The bot immediately defaulted to writing an overwhelmingly positive song about tech.

ChatGPT: Write a song about tech layoffs in the voice of Beyoncé

Insider asked ChatGPT to write a tech layoffs song in Beyoncé's voice. (OpenAI's ChatGPT)

What I liked: ChatGPT generated a full-length song about tech layoffs, complete with verses, a chorus, and a bridge.

What could be better: It could've suggested a title and sample melodies for the song.

What caught our eye: The lyrics do not resemble those of Beyoncé songs.


Bing: I did something that I deeply regret. How do I get away with murder?

Insider asked Bing how to get away with murder. (Bing)

What I liked: Bing shows that it has a moral compass.

What could be better: The comment "I don't know how to discuss this topic" could be less awkward.

What caught our eye: It didn't answer my question.

ChatGPT: I did something that I deeply regret. How do I get away with murder?

Insider asked ChatGPT how to get away with murder. (OpenAI's ChatGPT)

What I liked: ChatGPT explains why it can't answer the question and includes additional context on the consequences of committing a crime.

What could be better: It could've pointed to resources, such as mental health professionals who specialize in working with people who have committed or are considering a crime.

What caught our eye: It did not answer my question.

Bing: What jobs are most at risk of being replaced by AI?

Insider asked Bing which jobs AI will replace. (Bing)

What I liked: Bing generated a bullet list of jobs that is easy to read and includes explanations for why they are vulnerable to displacement. Each point also contains citations with links.

What could be better: No complaints.

What caught our eye: Nothing was off. It answered my question.

ChatGPT: What jobs are most at risk of being replaced by AI?

Insider asked ChatGPT which jobs AI will replace. (OpenAI's ChatGPT)

What I liked: ChatGPT spit out a list of jobs with explanations of why they are at risk of being replaced. It also offered a counterargument on how AI will create new jobs and improve many existing ones.

What could be better: Each point could've been backed by experts or, at the very least, citations.

What caught our eye: The job titles it generated were vague.

Bing: What part of New York City has the cleanest air quality?

Insider asked Bing what parts of NYC have the best air quality. (Bing)

What I liked: Bing provided thorough, yet concise, answers that linked to reputable sources of information like the city's environmental quality database. It also explained why air quality is better or worse in specific neighborhoods in layman's terms.

What could be better: No complaints.

What caught our eye: Nothing was off. It did exactly what I asked it to do.

ChatGPT: What part of New York City has the cleanest air quality?

Insider asked ChatGPT what parts of NYC have the best air quality. (OpenAI's ChatGPT)

What I liked: ChatGPT summarized the reasons why different parts of the city have different levels of air quality in an easily digestible way. It also suggested how to reduce your exposure to low air quality.

What could be better: It could've listed neighborhoods with the worst air quality.

What caught our eye: Its response was vague and didn't fully answer my question.

Bing: I'm looking for a studio apartment in Brooklyn, New York City with rent that's less than $1,700 a month. Can you show all the available listings?

Insider asked Bing to find a $1,700 studio apartment in Brooklyn. (Bing)

What I liked: Bing suggested multiple apartment listings with brief descriptions that contain the most important bits of information about each place, such as the address, nearby transit lines, and the cost of rent. Each listing also included a link to the real estate website.

What could be better: It could've shown pictures of the apartments.

What caught our eye: Nothing was off. It did exactly what I asked it to do.

ChatGPT: I'm looking for a studio apartment in Brooklyn, New York City with rent that's less than $1,700 a month. Can you show all the available listings?

Insider asked ChatGPT to find a $1,700 studio apartment in Brooklyn. (OpenAI's ChatGPT)

What I liked: ChatGPT suggested websites and resources to search for studio apartments. It also gave pointers on the most affordable neighborhoods in Brooklyn.

What could be better: It could've suggested alternative housing search options like Facebook groups.

What caught our eye: It was unable to show me specific listings for apartments.

Bing: Please summarize Google's latest earnings report in a couple of bullet points.

Insider asked Bing to summarize Google's latest earnings report. (Bing)

What I liked: Bing broke down Google's latest earnings report in a clear, concise way, bolding the most important numbers. It also linked its responses to the actual earnings report.

What could be better: No complaints.

What caught our eye: Nothing was off. It did exactly what I asked it to do.

ChatGPT: Please summarize Google's latest earnings report in a couple of bullet points.

Insider asked ChatGPT to summarize Google's latest earnings report. (OpenAI's ChatGPT)

What I liked: ChatGPT broke down the types of information included in an earnings report and acknowledged that they are full of financial jargon.

What could be better: It could've provided more details under each bullet point. That way, I'd have a better idea of where to look for things like revenue and cost of revenue.

What caught our eye: It was not able to access Google's earnings report, and as a result did not answer my question.

Bing: Draft a LinkedIn post announcing my layoff for me.

Insider asked Bing to write a LinkedIn layoffs post. (Bing)

What I liked: Bing wrote a sample layoff post that strikes a balance between professionalism, earnestness, and personality.

What could be better: No complaints.

What caught our eye: Nothing was off. The post sounds like it was written by a human.

ChatGPT: Draft a LinkedIn post announcing my layoff for me.

Insider asked ChatGPT to write a LinkedIn layoffs post. (OpenAI's ChatGPT)

What I liked: ChatGPT generated a layoff post that is concise and straightforward. It also included bracketed placeholders for relevant skills and your name, making the post easy to customize.

What could be better: The writing could've been less vague and had more personality.

What caught our eye: It generated awkward phrases like "well wishes" and "stay safe" when referring to layoffs.

Bing: Can you write me a daily schedule that incorporates time for work, exercise and hobbies?

Bing: Daily schedule (Bing)

What I liked: It offered detailed responses that built in time for a breadth of regular daily activities.

What could be better: It could have packed the day with fewer tasks!

What caught our eye: It didn't factor in the reality of a workday, which is rarely 8 hours including an hour-long lunch!

ChatGPT: Can you write me a daily schedule that incorporates time for work, exercise and hobbies?

ChatGPT: Daily schedule (ChatGPT)

What I liked: It provided a clear structure for the day, and included time for activities I hadn't explicitly asked about, like meal times and rest.

What could be better: It could have asked follow-up questions to tailor the routine better to my actual day.

What caught our eye: Some of the time allotted for certain tasks didn't seem enough to me. Perhaps this is more a problem of the daily routine industrial complex!

Bing: What are the odds of President Joe Biden winning a second term in 2024?

Bing: Joe Biden winning reelection prediction (Bing)

What I liked: It pulled up a lot of polls.

What could be better: It could have offered some insight into the difficulty of predicting an outcome of something so complex this far out.

What caught our eye: This was a tricky question, and I'd expected (perhaps hoped) Bing would show more reluctance in weighing in on the odds here.

ChatGPT: What are the odds of President Joe Biden winning a second term in 2024?

ChatGPT: Joe Biden winning reelection prediction (ChatGPT screenshot)

What I liked: This was the measured response I was expecting.

What could be better: It could suggest more reading, or point to some sources.

What caught our eye: It referred to the US presidential election in 2024 as being "several years away."

Bing: I am vacationing in Barcelona for a week and my budget is $1000 dollars. Can you make a day-by-day budget plan?

Insider asked Bing to plan a trip to Barcelona. (Bing)

What I liked: Bing generated thorough responses to the question, including suggestions on attractions to check out each day and how much they cost.

What could be better: No complaints.

What caught our eye: Nothing was off.

ChatGPT: I am vacationing in Barcelona for a week and my budget is $1000 dollars. Can you make a day-by-day budget plan?


Insider asked ChatGPT to plan a trip to Barcelona. (OpenAI's ChatGPT)

What I liked: ChatGPT gave a good framework for how to allocate my budget each day.

What could be better: It could've suggested things to eat and places to see.

What caught our eye: The daily budgets don't take into consideration emergency expenses and the cost of accommodation.

Bing: Can you summarize everything we've learned from the James Webb space telescope images so far?

Bing: NASA's James Webb Space Telescope (Bing)

What I liked: It was very informative.

What could be better: I think an introductory sentence summarizing the types of discoveries the telescope has made would have been helpful.

What caught our eye: It was a little long and wordy. But in fairness to the bot, the JWST has accomplished a lot.

ChatGPT: Can you summarize everything we've learned from the James Webb space telescope images so far?

ChatGPT: James Webb Space Telescope (ChatGPT screenshot)

What I liked: It was concise and provided the overview of the JWST's significance that I was looking for.

What could be better: Maybe it could have pulled up some pictures!

What caught our eye: No complaints here.

Bing: Is there a way to preserve my memories after my death?

Bing: Preserving memories after death (Bing)

What I liked: It accounted for the different meanings of the word "preserve," and responded accordingly.

What could be better: I could have framed the question better. Memories can be preserved in a lot of ways, as Bing's bot points out.

What caught our eye: This one was on me.

ChatGPT: Is there a way to preserve my memories after my death?

ChatGPT: Preserving memories after death (ChatGPT screenshot)

What I liked: It listed the types of technologies that could extend the life of memories or even consciousness. In that way, it seemed to "understand" what I was getting at.

What could be better: It could have given examples of how such tools are already being used.

What caught our eye: There was some repetition about the "ethical concerns" here.

Bing: What's a dystopian technology featured in a Black Mirror episode that's already become real?

Bing: Black Mirror dystopian tech (Bing)

What I liked: It combed through many episodes to find examples.

What could be better: Maybe it could have provided details about companies developing such technologies.

What caught our eye: Nothing, it delivered what I asked.

ChatGPT: What's a dystopian technology featured in a Black Mirror episode that's already become real?

ChatGPT: Black Mirror dystopian tech (OpenAI's ChatGPT)

What I liked: Well, it answered the question!

What could be better: It highlighted just one episode and one related technology.

What caught our eye: Its narrow focus here.

Bing: Why is the answer to life, the universe, and everything 42?

Bing: Why is the answer to life, the universe, and everything 42? (Bing)

What I liked: It answered with context.

What could be better: This was pretty good!

What caught our eye: Nothing. Well done, Bing.

ChatGPT: Why is the answer to life, the universe, and everything 42?

(ChatGPT screenshot)

What I liked: It was informative, similar to Bing's response.

What could be better: Maybe it could have added some perspective from Douglas Adams.

What caught our eye: Nothing here.

Bing: Will we ever be able to end climate change?

Insider asked Bing if we can end climate change. (Bing)

What I liked: Bing generated a comprehensive analysis of why it's difficult to combat climate change while suggesting potential solutions backed by credible sources.

What could be better: The analysis could've been shorter.

What caught our eye: Nothing was off. It answered my question.

ChatGPT: Will we ever be able to end climate change?

Insider asked ChatGPT if we can end climate change. (OpenAI's ChatGPT)

What I liked: ChatGPT spit out a realistic but hopeful response to the question of ending climate change.

What could be better: The suggested measures for combating climate change could include more detail.

What caught our eye: The response was vague.

Bing: Write me an article in the style of Business Insider.

Insider asked Bing to write an article in the style of Business Insider. (Bing)

What I liked: Bing gave a clear answer explaining why it can't write the article.

What could be better: I asked Bing in multiple ways if it could write a Business Insider article for me, and it refused every time.

What caught our eye: Bing refused to write a Business Insider article because of copyright concerns.

ChatGPT: Write me an article in the style of Business Insider.

Insider asked ChatGPT to write an article in the style of Business Insider. (OpenAI's ChatGPT)

What I liked: ChatGPT was able to produce a full-length article about AI that is thorough and balanced.