By Dr. Tim Sandle
June 20, 2025
DIGITAL JOURNAL

The EU this year approved the world's first comprehensive rules to govern AI systems like ChatGPT - Copyright AFP/File SAUL LOEB
The firm Indusface has issued warnings about data sharing with AI, finding that ChatGPT fails 35% of finance questions and raising concerns over the tool's use as a financial advisor.
This coincides with related findings that over a third of U.S. adults who use the tool find themselves “dependent” on it for answers, revealing an over-reliance on AI for work-related matters.
Indusface has sought to investigate what personal and professional data people might be oversharing with LLMs, and where the boundaries should be drawn.
ChatGPT No-Gos: Never Share This Data
Work files, such as reports and presentations
One of the most common categories of information shared with AI is work-related files and documents. Over 80% of professionals at Fortune 500 enterprises use AI tools such as ChatGPT to help refine emails, reports, and presentations.
However, 11% of the data that employees paste into ChatGPT is strictly confidential, such as internal business strategies. It is therefore recommended to remove sensitive data from business reports and presentations before inputting them into ChatGPT, as LLM providers may retain this information indefinitely and it could surface in responses to other users.
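As a practical illustration of that advice, here is a minimal Python sketch of a pre-paste scrubbing step. The regex patterns and placeholder labels are illustrative assumptions rather than an exhaustive filter; a real deployment would use a dedicated data-loss-prevention tool.

```python
import re

# Minimal sketch (illustrative patterns only): scrub a few common kinds of
# sensitive tokens from text before it is pasted into an LLM.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "LONG_NUMBER": re.compile(r"\b\d{12,19}\b"),  # card- or account-like digit runs
}

def redact(text: str) -> str:
    """Replace each match with a bracketed placeholder so context survives."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

report = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(redact(report))  # Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```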
Passwords and access credentials
From a young age, we are taught not to share our passwords with others, which is why we rely on notepads, phones, or simply memory to keep track of them: 24% of Americans store their passwords in a note on their device, while 18% save them in an internet browser. A chatbot can feel like another convenient place to keep or troubleshoot credentials, but LLMs are not designed with confidentiality in mind; their purpose is to learn from what users input, the questions they ask, and the information they provide.
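One way to act on that warning is to screen prompts for credential-like strings before they are sent. Below is a minimal Python sketch under assumed patterns; the keyword list and key format are illustrative, and real secret scanners ship far broader rule sets.

```python
import re

# Minimal sketch (illustrative rules only): flag prompts that appear to
# contain credentials before they are submitted to a chatbot.
CREDENTIAL_HINTS = [
    re.compile(r"(?i)\b(password|passwd|pwd|secret|api[_ ]?key|token)\s*[:=]"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # commonly cited AWS access-key-ID shape
]

def looks_like_a_leak(prompt: str) -> bool:
    """Return True if any credential-like pattern appears in the prompt."""
    return any(rule.search(prompt) for rule in CREDENTIAL_HINTS)

prompt = "Why does my login script fail? password: hunter2"
if looks_like_a_leak(prompt):
    print("Blocked: remove credentials before sending this prompt.")
```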
Personal details, such as your name and address
Although these details might seem innocuous day-to-day, sharing personal information such as your name, address, and recognizable photos makes you vulnerable to fraud. It is critical to avoid feeding LLMs information that might allow fraudsters to either
1) impersonate you, or
2) create deepfakes, which depict people saying or doing something they never said or did.
If either situation were to happen, it could damage both personal and professional reputations. And if such information were shared about a colleague without their knowledge and fraud or a deepfake followed, it could create severe distrust and expose the company to legal action.
This is why AI literacy and education are critical for business operations in the age of technology.
Financial information
LLMs like ChatGPT can be useful for explaining financial topics or even conducting some level of financial analysis, but they should never drive a business's financial decisions. LLMs lack numerical literacy because they are primarily language tools, so feeding financial figures into ChatGPT is likely to produce mistakes and potentially harmful business strategies.
It is best practice to use LLMs as an aid in your understanding of finance, rather than a tool to calculate solutions or make important financial decisions.
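In practice, that means letting the model explain concepts while the arithmetic is done deterministically in ordinary code. The sketch below, using made-up figures, computes compound interest directly, so any numbers a chatbot quotes can be checked against it.

```python
# Minimal sketch with made-up figures: do the arithmetic deterministically
# instead of trusting numbers quoted by a chatbot.
principal = 10_000.00    # initial deposit
annual_rate = 0.05       # 5% nominal annual rate
periods_per_year = 12    # monthly compounding
years = 3

# Standard compound-interest formula: A = P * (1 + r/n) ** (n * t)
amount = principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
print(f"Balance after {years} years: {amount:,.2f}")
# Balance after 3 years: 11,614.72
```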
Company codebases and intellectual property (IP)
Developers and employees increasingly turn to AI for coding assistance; however, sharing company codebases poses a major security risk, since a codebase is often a business's core intellectual property. If proprietary source code is pasted into AI platforms, it may be stored, processed, or even used to train future AI models, potentially exposing trade secrets to external entities.
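A lightweight mitigation, sketched below in Python, is to gate snippets before they leave the company. The marker strings here are hypothetical placeholders; in practice the denylist would come from internal policy, such as proprietary package prefixes.

```python
# Minimal sketch: gate code snippets before they are shared with external
# AI tools. The marker strings are hypothetical placeholders; a real
# denylist would come from internal policy (e.g., proprietary package names).
PROPRIETARY_MARKERS = ("acme_internal", "CONFIDENTIAL", "trade_secret")

def safe_to_share(snippet: str) -> bool:
    """Return False if the snippet references anything on the denylist."""
    return not any(marker in snippet for marker in PROPRIETARY_MARKERS)

snippet = "from acme_internal.pricing import margin_model"
print(safe_to_share(snippet))  # False: keep this out of external AI tools
```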

Justice at stake as generative AI enters the courtroom
By AFP
June 19, 2025

Generative artificial intelligence has been used in the US legal system by judges performing research, lawyers filing appeals and parties involved in cases who wanted help expressing themselves in court - Copyright POOL/AFP Jefferson Siegel
Thomas URBAIN
Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself.
Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court.
“It’s probably used more than people expect,” said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the US legal system.
“Judges don’t necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it.”
In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom — in the form of a video avatar — at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists.
“I believe in forgiveness,” said a digital proxy of Pelkey created by his sister, Stacey Wales.
The judge voiced appreciation for the avatar, saying it seemed authentic.
“I knew it would be powerful,” Wales told AFP, “that it would humanize Chris in the eyes of the judge.”
The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man’s family spoke about the impact of the loss.
Since the hearing, examples of GenAI being used in US legal cases have multiplied.
“It is a helpful tool and it is time-saving, as long as the accuracy is confirmed,” said attorney Stephen Schwartz, who practices in the northeastern state of Maine.
“Overall, it’s a positive development in jurisprudence.”
Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks.
“You can’t completely rely on it,” Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy.
“We are all aware of a horror story where AI comes up with mixed-up case things.”
The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications.
In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a “collective debacle.”
Some who skip lawyers and represent themselves in court also rely on the technology, often introducing legal errors.

And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts.
“Courts need to be prepared to handle that,” Cleary said.
– Transformation –
Law professor Linna, though, sees the potential for GenAI to be part of the solution, giving more people the ability to seek justice in courts made more efficient.
“We have a huge number of people who don’t have access to legal services,” Linna said.
“These tools can be transformative; of course we need to be thoughtful about how we integrate them.”
Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions.
“Judges need to be technologically up-to-date and trained in AI,” Linna said.
GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor.
Facts or case law surfaced by GenAI might sway a judge’s decision and could differ from what a human clerk would have found.
But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.