Will ChatGPT replace computational materials scientists?
“ChatGPT is a very impressive tool,” said paper author Zijian Hong, a professor at the School of Materials Science and Engineering, Zhejiang University, China. “As a computational materials scientist, I’m always eager to embrace new tools, in particular new tools in computer science and AI. Since the birth of the new ChatGPT, I’ve been wondering whether such a tool can assist us in computational materials science.”
Hong explained that a computational materials task involves three main steps: building a model or structure, writing code for specific scientific software, and preparing data visualization scripts. To test ChatGPT’s capability, he examined it from each of these aspects.
“ChatGPT can help us prepare scripts to build an atomic structure, i.e., the CIF file, scripts for running a DFT calculation, and scripts for data visualization,” Hong said. “At least it is trying to help us through chat, although the scripts were not working at all when I accessed it on Feb. 20, 2023.”
“But what surprised me is its ability to evolve and learn from communication,” Hong added. “When I accessed it 20 days later, it gave me different answers, closer to the correct answer. And if I gave more hints, such as the correct lattice structure, it could correct itself, just like a human being.”
“It is still not perfect, for sure. For example, it still makes simple mistakes, the consistency of the output is not guaranteed, and the ethical concerns are still there,” Hong said. “But change is really around the corner for computational materials science. We should embrace it rather than avoid it.”
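For readers unfamiliar with the first step Hong describes, a structure-building script of the kind he asked ChatGPT to produce might look something like the following minimal sketch, which builds a silicon crystal and writes it to a CIF file. The use of the ASE library here is an assumption made purely for illustration; the article does not name the software Hong used.

# Minimal sketch: build a bulk silicon cell and write it to a CIF file.
# The choice of the ASE library is an assumption for illustration only.
from ase.build import bulk

# Conventional diamond-cubic silicon cell with the experimental
# lattice constant (in angstroms).
atoms = bulk("Si", "diamond", a=5.43, cubic=True)

# Write the structure to a CIF file, the format mentioned in the article.
atoms.write("Si.cif")

A script like this would then feed into the DFT-calculation and data-visualization steps Hong mentions.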
This work is supported by the Fundamental Research Funds for the Central Universities and a startup fund from Zhejiang University.
JOURNAL
Energy Material Advances
METHOD OF RESEARCH
Computational simulation/modeling
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
ChatGPT for Computational Materials Science: A Perspective
ARTICLE PUBLICATION DATE
12-Apr-2023
Japanese universities become latest to restrict use of ChatGPT
Vishwam Sankaran
Mon, 10 April 2023
Japanese universities are curtailing students’ use of the OpenAI chatbot ChatGPT amid concerns about information leaks from the use of the artificial intelligence tool.
Tokyo’s Sophia University banned the use of the chatbot by students to write assignments such as essays, reports, and theses.
“The use of text, programme source code, calculation results generated by ChatGPT and other AI chatbots is not permitted on any assignments such as reaction papers, reports, essays and thesis, as they are not created by the student themselves,” the university noted in new guidelines published recently on its official website.
“If the use is confirmed by detection tools, etc, strict measures will be taken in accordance with the University’s Disciplinary Rules on Misconduct,” it added.
OpenAI released ChatGPT in November last year, and the chatbot immediately gained prominence online, with experts praising the AI tool’s ability to respond to user queries with human-like output.
The chatbot has demonstrated the ability to summarise research studies and answer logical questions, and has also passed business school and medical exams that are crucial for students.
Citing some of these advances, several AI experts have warned that the breakthrough technology could cause significant disruption, especially in academia.
Others have warned of the AI’s tendency to provide plausible-sounding but incorrect responses, with glaring mistakes in answers to some queries.
Experts have also flagged that in areas where the AI does not have sufficient knowledge, it may confidently provide incorrect answers that mislead people, and that its use may result in copyright infringement.
Holden Thorp, editor-in-chief of the Science journals, warned earlier this year that text generated by “ChatGPT (or any other AI tools)” cannot be used in work submitted to the outlets, adding that violating the new policies may constitute “scientific misconduct” in the same league as plagiarism.
The University of Tokyo also published a new document on its internal website with updated guidelines on the use of AI chatbots which noted that “reports must be created by students themselves and cannot be created solely with the help of AI.”
In a set of instructions for teachers, Tohoku University noted that while it is “not realistic” to completely eliminate the use of tools like ChatGPT, “especially after class hours,” using generative AI to compile reports can pose major problems for students’ own learning and may lead to “strict grade evaluation.”
“Assuming that many students will use it, we will take measures as necessary,” the university noted.
The university also warned its teachers that when generative AI tools are used to assess or translate unpublished research results, the data can be unintentionally leaked to the service provider, “partially or completely”.
“Input to generative AI and translation sites may be used as learning data and presented as an answer to other users, so there is also a concern that information will be leaked. Please be very careful,” Tohoku University said, adding that a similar scenario may play out for other confidential information.
“There is a risk that information that should not be leaked outside, such as entrance examination details and the personal information of students and faculty members, will be transmitted to service providers through generative AI and presented as an answer to other users,” it said.
The university also advised faculty to check “how AI will respond before assigning exercises and reports” to students.
Italy banned the use of ChatGPT last month, with data protection authorities saying the AI service would be investigated over privacy concerns.
Authorities said the AI system does not have a proper legal basis to collect personal information about the people using it.
Last week, Germany also said it was considering a ban on ChatGPT due to privacy concerns.
Other data regulators across Europe, including watchdogs in France and Ireland, are also reportedly in conversation with Italian authorities to understand the basis for their ban.