Monday, May 01, 2023

Workers are secretly using ChatGPT and AI, and it will pose big risks for tech leaders

Story by Mikaela Cohen • CNBC

Chief information security officers need to approach generative AI with caution and prepare with necessary cyber defense measures.

Not every company has its own GPT, or generative pretrained transformer, so companies need to monitor how workers use this technology.

Even when it's not sanctioned by the IT department, employees are finding ways to use chatbots to make their jobs easier.

Soaring investment from big tech companies in artificial intelligence and chatbots — amid massive layoffs and a growth decline — has left many chief information security officers in a whirlwind.

With OpenAI's ChatGPT, Microsoft's Bing AI, Google's Bard and Elon Musk's plan for his own chatbot making headlines, generative AI is seeping into the workplace, and chief information security officers need to approach this technology with caution and prepare with necessary security measures.

The tech behind GPT, or the generative pretrained transformer, is powered by large language models (LLMs), the algorithms that produce a chatbot's human-like conversations. But not every company has its own GPT, so companies need to monitor how workers use this technology.

People are going to use generative AI if they find it useful to do their work, says Michael Chui, a partner at the McKinsey Global Institute, comparing it to the way workers use personal computers or phones.

"Even when it's not sanctioned or blessed by IT, people are finding [chatbots] useful," Chui said.

"Throughout history, we've found technologies which are so compelling that individuals are willing to pay for it," he said. "People were buying mobile phones long before businesses said, 'I will supply this to you.' PCs were similar, so we're seeing the equivalent now with generative AI."

As a result, there's "catch up" for companies in terms of how they are going to approach security measures, Chui added.

Whether it's standard business practice like monitoring what information is shared on an AI platform or integrating a company-sanctioned GPT in the workplace, experts think there are certain areas where CISOs and companies should start.

Start with the basics of information security

CISOs — already combating burnout and stress — deal with enough problems, like potential cybersecurity attacks and increasing automation needs. As AI and GPT move into the workplace, CISOs can start with the security basics.

Chui said companies can license use of an existing AI platform, so they can monitor what employees say to a chatbot and make sure that the information shared is protected.

"If you're a corporation, you don't want your employees prompting a publicly available chatbot with confidential information," Chui said. "So, you could put technical means in place, where you can license the software and have an enforceable legal agreement about where your data goes or doesn't go."

Licensing use of software comes with additional checks and balances, Chui said. Protection of confidential information, regulation of where the information gets stored, and guidelines for how employees can use the software — all are standard procedure when companies license software, AI or not.

"If you have an agreement, you can audit the software, so you can see if they're protecting the data in the ways that you want it to be protected," Chui said.

Most companies that store information with cloud-based software already do this, Chui said, so getting ahead and offering employees a company-sanctioned AI platform means a business is already in line with existing industry practices.

How to create or integrate a customized GPT

One security option for companies is to develop their own GPT, or hire companies that create this technology to make a custom version, says Sameer Penakalapati, chief executive officer at Ceipal, an AI-driven talent acquisition platform.

In specific functions like HR, there are multiple platforms, from Ceipal to Beamery's TalentGPT, and companies may consider Microsoft's plan to offer customizable GPT technology. And despite the increasingly high costs, companies may also want to create their own technology.

If a company creates its own GPT, the software will contain exactly the information it wants employees to have access to, and the company can safeguard the information employees feed into it, Penakalapati said. Even hiring an AI company to build the platform will let a business feed and store information safely, he added.
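As a minimal sketch of that idea (hypothetical names and data, not Ceipal's or any vendor's actual product), a company-sanctioned assistant could be limited to answering only from an approved internal document store, with the model call itself stubbed out:

# Hypothetical sketch: a company-sanctioned assistant that only draws on
# approved internal documents. call_internal_model stands in for whatever
# licensed or in-house LLM the company actually runs.
APPROVED_DOCS = {
    "expenses": "Employees may claim travel expenses within 30 days of the trip.",
    "security": "All company laptops must use full-disk encryption.",
}

def retrieve(question: str):
    """Naive keyword lookup over the approved documents only."""
    words = set(question.lower().split())
    return [text for name, text in APPROVED_DOCS.items() if name in words]

def call_internal_model(question: str, context):
    return f"(model answer grounded in {len(context)} approved document(s))"

def answer(question: str) -> str:
    context = retrieve(question)
    if not context:
        return "No approved document covers this question."
    return call_internal_model(question, context)

print(answer("What is the security policy for laptops?"))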

Whatever path a company chooses, Penakalapati said that CISOs should remember that these machines perform based on how they have been taught. It's important to be intentional about the data you're giving the technology.

"I always tell people to make sure you have technology that provides information based on unbiased and accurate data," Penakalapati said. "Because this technology is not created by accident."


Uh Oh, Chatbots Are Getting a Teeny Bit Sentient

Story by Tim Newcomb • POPMECH

Oxford philosopher Nick Bostrom believes AI has already started to show small amounts of sentience.
He cautions that the next step in AI could see programs beginning to understand their place in society.
A growing body of experts is debating the ramifications of sentience in AI.

The great artificial-intelligence-sentience debate ramps up with leading AI philosopher Nick Bostrom—director of Oxford's Future of Humanity Institute—weighing in via a New York Times interview. He claims that AI chatbots have already started the process toward sentience, the capability to experience feelings and sensations.

He's not alone in this line of thinking. Bostrom's voice is loud in the AI-consciousness debate, but it's not the only one, with a host of philosophers and tech experts already saying that AI's qualities associated with sentience are growing.

And if the path has already started, Bostrom claims, it will only continue.

Now, it's important to keep in mind that almost all AI experts say that AI chatbots are not sentient. They're not about to spontaneously develop consciousness in the way that we understand it in humans.

Bostrom's claims do not amount to an opinion that AI is further along than we think it is. Rather, they suggest that we should be thinking about sentience in a different way—more like a spectrum and less like a switch.

"If you admit that it's not an all-or-nothing thing," the philosopher tells the New York Times, "then it's not so dramatic to say that some of these [AI] assistants might plausibly be candidates for having some degrees of sentience."

That "some degrees" comment is the one worth focusing on. If true that AI chatbots have developed even the slightest degree of sentience, then it stands to reason there's room for continued growth, with Bostrom adding that the large language models (LLMs) "may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans."

He further claims via the article that the LLMs aren't merely grabbing and showing blocks of text, instead saying, "they exhibit glimpses of creativity, insight, and understanding that are quite impressive and may show the rudiments of reasoning."

Bostrom has been raising awareness of what a sentient AI could mean for society for roughly a decade, including his 2014 example of an advanced AI whose lone goal of making paperclips could turn it into a human-erasing machine.

"The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off," he said in 2014. "Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear toward would be one in which there were a lot of paper clips but no humans."

Dealing with a thought-creating AI requires a level of oversight different than dealing with basic technology. "If an AI showed signs of sentience, it plausibly would have some degree of moral status," Bostrom tells the New York Times. "This means there would be certain ways of treating it that would be wrong, just as it would be wrong to kick a dog or for medical researchers to perform surgery on a mouse without anesthetizing it."

This echoes earlier thoughts Bostrom has shared about issues of governance and moral status of AI, if cognitively capable systems do become a reality.

If we're measuring AI sentience in degrees, hopefully someone out there is watching to see how fast these degrees grow.


A doctor’s new sidekick? How ChatGPT may change the role of physicians

Story by Katie Dangerfield • Global News

The emergence of artificial intelligence (AI) chatbots has opened up new possibilities for doctors and patients — but the technology also comes with the risk of misdiagnosis, data privacy issues and biases in decision-making.

One of the most popular examples is ChatGPT, which can mimic human conversations and create personalized medical advice. In fact, it recently passed the U.S. Medical Licensing Exam.

And because of its ability to generate human-like responses, some experts believe ChatGPT could help doctors with paperwork, examine X-rays (the platform is capable of reading photos) and weigh in on a patient's surgery.

The software could potentially become as crucial for doctors as the stethoscope was in the last century for the medical field, said Dr. Robert Pearl, a professor at the Stanford University School of Medicine.

"It just won't be possible to provide the best cutting-edge medicine in the future (without it)," he said, adding the platform is still years away from reaching its full potential.

"The current version of ChatGPT needs to be understood as a toy," he said. "It's probably two per cent of what's going to happen in the future."


This is because generative AI can increase in power and effectiveness, doubling every six to 10 months, according to researchers.
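Taken at face value, that reported rate compounds quickly; a back-of-the-envelope calculation (using only the article's six-to-ten-month range, not any measured benchmark) illustrates the scale:

# Rough arithmetic on the reported claim, not a measurement of any real system.
for months_per_doubling in (6, 10):
    years = 5
    doublings = years * 12 / months_per_doubling
    print(f"doubling every {months_per_doubling} months -> about {2 ** doublings:,.0f}x in {years} years")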

Developed by OpenAI, and released for testing to the general public in November 2022, ChatGPT had explosive uptake. After its release, over a million people signed up to use it in just five days, according to OpenAI CEO Sam Altman.

The software is currently free as it sits in its research phase, though there are plans to eventually charge.

“We will have to monetize it somehow at some point; the compute costs are eye-watering,” Altman said online on Dec. 5, 2022.

Although ChatGPT is a relatively new platform, the idea of AI and health care has been around for years.

In 2007, IBM began building Watson, an open-domain question-answering system that went on to win first place on the television game show Jeopardy! in 2011.

Ten years later, a team of scientists used Watson to successfully identify new RNA-binding proteins that were altered in the disease amyotrophic lateral sclerosis (ALS), highlighting the use of AI tools to accelerate scientific discovery in neurological disorders.

During the COVID-19 pandemic, researchers from the University of Waterloo developed AI models that predicted which COVID-19 patients were most likely to have severe kidney injury outcomes while they were in hospital.

What sets ChatGPT apart from the other AI platforms is its ability to communicate, said Huda Idrees, founder and CEO of Dot Health, a health data tracker.

"Within a health-care context, communicating with clients — for example, if someone needs to write a longish letter describing their care plan — it makes sense to use ChatGPT. It would save doctors a lot of time," she said. "So from an efficiency perspective, I see it as a very strong communication tool."

Its communication is so effective that a JAMA study published April 28 found ChatGPT may have a better bedside manner than some doctors.

The study used 195 randomly drawn patient questions and compared physicians' and the chatbot's answers. The chatbot's responses were preferred over the physicians' and rated significantly higher for both quality and empathy.

On average, ChatGPT scored 21 per cent higher than physicians for the quality of responses and 41 per cent more empathetic, according to the study.

In terms of the software taking over a doctor's job, Pearl said he does not see that happening, but rather he believes it will act like a digital assistant.

"It becomes a partner for the doctor to use," he said. "Medical knowledge doubles every 73 days. It's just not possible for a human being to stay up at that pace. There's also more and more information about unusual conditions that ChatGPT can find in the literature and provide to the physician."

Using ChatGPT to sift through the vast amount of medical knowledge can save a physician time and even help lead to a diagnosis, Pearl explained.

It's still early days, but people are looking at using the platform as a tool to help monitor patients from home, explained Carrie Jenkins, a professor of philosophy at the University of British Columbia.

"We're already seeing that there is work in monitoring patient's sugars and automatically filing out the right insulin they should have if they need it for their diabetes," he told Global News in February.

"Maybe one day it will help with our diagnostic process, but we are not there yet," he added.

Previous studies have shown that physicians vastly outperform computer algorithms in diagnostic accuracy.

For example, a 2016 research letter published in JAMA Internal Medicine showed that physicians were correct more than 84 per cent of the time when diagnosing a patient, compared to a computer algorithm, which was correct 51 per cent of the time.

More recently, an emergency room doctor in the United States put ChatGPT to work in a real-world medical situation.

In an article published on Medium, Dr. Josh Tamayo-Sarver said he fed the AI platform the anonymized medical histories of past patients and the symptoms that brought them to the emergency department.

"The results were fascinating, but also fairly disturbing," he wrote.

If he entered precise, detailed information, the chatbot did a "decent job" of bringing up common diagnoses he wouldn't want to miss, he said.

But the platform only had about a 50 per cent success rate in correctly diagnosing his patients, he added.

"ChatGPT also misdiagnosed several other patients who had life-threatening conditions. It correctly suggested one of them had a brain tumor — but missed two others who also had tumors. It diagnosed another patient with torso pain as having a kidney stone — but missed that the patient actually had an aortic rupture," he wrote.

Its developers have acknowledged this pitfall.

"ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers," OpenAI stated on its website.

The potential for misdiagnosis is just one of the drawbacks of using ChatGPT in the health-care setting.

ChatGPT is trained on vast amounts of data made by humans, which means there can be inherent biases.

"There's a lot of times where it's factually incorrect, and that's what gives me pause when it comes to specific health queries," Idrees said, adding that not only does the software get facts wrong, but it can also pull biased information.

"It could be that there is a lot of anti-vax information available on the internet, so maybe it actually will reference more anti-vax links more than it needs to," she explained.

Idrees pointed out that another limitation of the software is the difficulty of accessing private health information.

From lab results and screening tests to surgical notes, there is a "whole wealth" of information that is not easily accessible, even when it's digitally captured.

"In order for ChatGPT to do anything ... really impactful in health care, it would need to be able to consume and have a whole other set of language in order to communicate that health-care data," she said.

"I don't see how it's going to magically access these treasure troves of health data unless the industry moves first."

— with files from the Associated Press and Global News' Kathryn Mannie
