Tue, January 31, 2023
SAN FRANCISCO (AP) — The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.
The new AI Text Classifier launched Tuesday by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.
OpenAI cautions that its new tool – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI's alignment team, which is tasked with making its systems safer.
“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.
Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 as a free application on OpenAI's website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.
By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.
The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.
“We can’t afford to ignore it,” Robinson said.
The district is also discussing expanding the use of ChatGPT into classrooms, Robinson said, so that teachers can use it to train students to be better critical thinkers, and students can use the application as a “personal tutor” or to help generate new ideas when working on an assignment.
School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.
“The initial reaction was ‘OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,’" said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that “this is the future” and blocking it is not the solution, he said.
“I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company's detection service is in place.
OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.
The longer the passage of text, the better the tool is at detecting whether an AI or a human wrote it. Paste in any text, such as a college admissions essay or a literary analysis of Ralph Ellison’s “Invisible Man,” and the tool will label it on a five-point scale: “very unlikely,” “unlikely,” “unclear if it is,” “possibly,” or “likely” AI-generated.
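The five labels correspond to bands of the classifier’s underlying probability score. As a minimal sketch of how such a scale works, assuming illustrative cutoffs (these are not OpenAI’s published thresholds):

```python
def label(ai_probability):
    """Map a classifier's AI-probability score to one of the five labels.

    The cutoff values below are illustrative assumptions for the sketch,
    not OpenAI's actual thresholds.
    """
    if ai_probability < 0.10:
        return "very unlikely"
    elif ai_probability < 0.45:
        return "unlikely"
    elif ai_probability < 0.90:
        return "unclear if it is"
    elif ai_probability < 0.98:
        return "possibly"
    return "likely"
```

A banded scale like this avoids presenting a single hard verdict, which matches OpenAI’s warning that the tool should not be solely relied upon.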
But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings but often confidently spits out falsehoods or nonsense, the detection tool offers little insight into how it arrives at a result.
“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”
Higher education institutions around the world also have begun debating responsible use of AI technology. Sciences Po, one of France’s most prestigious universities, prohibited its use last week and warned that anyone found surreptitiously using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.
In response to the backlash, OpenAI said it has been working for several weeks to craft new guidelines to help educators.
“Like many other technologies, it may be that one district decides that it’s inappropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or another. We just want to give them the information that they need to be able to make the right decisions for them.”
It’s an unusually public role for the research-oriented San Francisco startup, now backed by billions of dollars in investment from its partner Microsoft and facing growing interest from the public and governments.
France’s digital economy minister Jean-Noël Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland that he was optimistic about the technology. But the government minister — a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris — said there are also difficult ethical questions that will need to be addressed.
“So if you’re in the law faculty, there is room for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive,” he said. “If you are in the economics faculty, then you’re fine because ChatGPT will have a hard time finding or delivering something that is expected when you are in a graduate-level economics faculty.”
He said it will be increasingly important for users to understand the basics of how these systems work so they know what biases might exist.
---
O'Brien reported from Providence, Rhode Island. AP writer John Leicester contributed to this report from Paris.
Matt O'Brien and Jocelyn Gecker, The Associated Press
The company that created ChatGPT releases a tool for identifying text generated by ChatGPT
Mason Regan, February 1, 2023
The creators of what is arguably the world’s most popular chatbot have proposed a solution to help people distinguish between human and robot-generated text.
OpenAI, the company behind ChatGPT and the text-to-image generator DALL-E, said in a blog post yesterday (January 31) that it “trained a classifier to distinguish between text written by a human and text written by AIs.” The tool is available to try on OpenAI’s website.
But there’s a catch: it’s not entirely reliable yet. In OpenAI’s own testing, only 26% of AI-written text was correctly flagged as likely AI-written, while human-written text was incorrectly flagged as AI-written 9% of the time. The tool proved more effective on longer blocks of text, but even then the results were far from definitive.
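The two figures OpenAI reported are a true-positive rate (AI text correctly flagged) and a false-positive rate (human text wrongly flagged). As a quick sketch of what those metrics mean, on hypothetical data rather than OpenAI’s evaluation set:

```python
def detection_rates(labels, flags):
    """Compute (true-positive rate, false-positive rate) for a detector.

    labels: 1 if the text was actually AI-written, 0 if human-written.
    flags:  1 if the classifier flagged the text as AI-written.
    """
    tp = sum(1 for l, f in zip(labels, flags) if l == 1 and f == 1)
    fn = sum(1 for l, f in zip(labels, flags) if l == 1 and f == 0)
    fp = sum(1 for l, f in zip(labels, flags) if l == 0 and f == 1)
    tn = sum(1 for l, f in zip(labels, flags) if l == 0 and f == 0)
    return tp / (tp + fn), fp / (fp + tn)
```

A 26% true-positive rate means roughly three out of four AI-written passages slip through, while a 9% false-positive rate means nearly one in eleven human writers would be wrongly accused, which is why OpenAI warns against using the tool for disciplinary decisions.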
OpenAI framed the tool’s shortcomings as part of the development process, saying it released the classifier at this stage “to get feedback on whether imperfect tools like this are useful.”
What is ChatGPT?
Built by OpenAI, which also created the text-to-image generator DALL-E, the chatbot ChatGPT has been a talking point since its prototype launched in November 2022. Microsoft, which has invested billions of dollars in the company, is reportedly planning to use it to power its Bing search engine.
The tool has found applications across a range of professions and ventures. The conversational artificial intelligence (AI) tool helps real estate agents create online listings, students write essays, and developers write code, among other things.
OpenAI’s classifier cannot be used to prevent cheating in schools
One of the biggest concerns regarding ChatGPT is the application for cheating on school exams. Some institutions have started blocking ChatGPT on their devices and networks. OpenAI has released its AI identification tool to partially address these issues.
“We recognize that many school districts and higher education institutions are currently not acknowledging generative AI in their academic dishonesty policies. We also understand that many students have used these tools for assignments without disclosing their use of AI,” the company said.
Unfortunately for these institutions, the AI text classifier is “far from foolproof” and cannot be relied on to detect plagiarism, OpenAI warned. Not only can it misclassify AI text as human writing and vice versa, but students could also learn to evade the system by changing some words or clauses in the generated content.
For now, educators need to encourage students to be more honest and transparent about using the chatbot.
A non-exhaustive list of limitations of OpenAI’s text classifier
It works reliably only on English text; it is far less accurate in other languages, and it is unreliable on code.
Highly predictable text, such as a list of prime ministers that would read largely the same whether written by a human or a bot, cannot be reliably classified.
Detection can be short-lived as AI-written text can be edited to bypass the classifier.
For inputs that are very different from the text in the classifier’s training set, which has a 2021 cutoff, the classifier can be confidently wrong.
Interesting tool: GPTZero
A 22-year-old developer, Edward Tian, wrote an app to detect ChatGPT-generated text and launched it on January 3. The Princeton University student, who is months away from graduating, based his recognition system on the analysis of two factors: perplexity, which relates to randomness in text, and burstiness, which relates to variation in sentence structure.
Taking feedback from educators into account, Tian added more nuances to the tool, which can now recognize a mix of AI and human text and highlight parts of text that were most likely generated by AI. The team of four engineers working on the system also built a pipeline for file batch uploads in PDF, Word, and TXT format, allowing educators to run multiple files simultaneously through GPTZero.
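The two signals behind GPTZero can be illustrated with a toy sketch. This is not Tian’s actual implementation, which uses a neural language model to score perplexity; here burstiness is approximated as variation in sentence length, and perplexity is computed under a simple add-one-smoothed unigram model:

```python
import math
import re
import statistics
from collections import Counter

def burstiness(text):
    """Standard deviation of sentence lengths in words.

    Human prose tends to mix short and long sentences (high burstiness);
    model-generated prose is often more uniform.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)

def unigram_perplexity(text, corpus):
    """Perplexity of `text` under a unigram model fit on `corpus`.

    A toy stand-in for the language-model perplexity GPTZero uses:
    lower perplexity means the text is more predictable to the model.
    Uses add-one smoothing so unseen words get nonzero probability.
    """
    train = corpus.lower().split()
    counts = Counter(train)
    total = len(train)
    vocab = len(counts) + 1  # +1 for the unknown-word bucket
    tokens = text.lower().split()
    log_prob = sum(math.log((counts[tok] + 1) / (total + vocab))
                   for tok in tokens)
    return math.exp(-log_prob / len(tokens))
```

A detector built on these ideas would flag text whose perplexity and burstiness both fall below thresholds learned from human writing samples.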
Google is testing potential ChatGPT competitors, including a homegrown AI chatbot called “Apprentice Bard”: CNBC
Layla Nelson
February 1, 2023
Google employees are testing potential ChatGPT competitors — like its own AI chatbot called “Apprentice Bard,” according to CNBC.
That’s just weeks after management reportedly issued a “Code Red” on the rise of ChatGPT.
Google is also testing other AI-powered products, including a search page.
The competition in artificial intelligence is intensifying.
Google employees are testing potential challengers to viral AI chatbot ChatGPT — including its homegrown chatbot Apprentice Bard — CNBC reported Tuesday, citing sources and internal communications made available to the publication.
The bot reportedly uses Google’s own language technology called LaMDA, or Language Model for Dialogue Applications.
This comes just weeks after the tech giant’s management reportedly issued a “Code Red” on the rise of ChatGPT, which has been making waves of late for its ability to generate written, human-like text.
Apprentice Bard appears to work similarly to ChatGPT: users type a question into a dialog box and get a response, according to CNBC. Users can also provide feedback on the bot’s response.
When asked if there would be more layoffs at Google — where 12,000 employees were laid off in January — Apprentice Bard replied that another round of layoffs in 2023 was “unlikely,” CNBC reported, citing an example shared internally.
“Layoffs are generally carried out to reduce costs and structures, but the company is doing well financially. In fact, Google’s revenue increased 34% in 2021 and the company’s stock price is up 70% since January 2022,” Apprentice Bard replied, per the report.
Insider could not independently confirm Apprentice Bard’s data source. Publicly available information shows that revenue at Alphabet, Google’s parent company, was up 41% in 2021, while Alphabet’s Class A shares have fallen 32% since January 2022.
In addition to Apprentice Bard, Google is also testing other AI-powered products, including a search page.
Google didn’t specifically comment on the projects CNBC reported, but told Insider that it “has long had a focus on developing and using AI to improve people’s lives.”
“We believe that AI is a fundamental and transformative technology that is incredibly useful to individuals, businesses, and communities, and as our AI principles outline, we must consider the broader societal impact these innovations can have,” said Lily Lin, a spokeswoman for Google.
Business Insider