By Dr. Tim Sandle
DIGITAL JOURNAL
December 2, 2024
Online shopping begins. Image by Tim Sandle.
As AI technology advances, scams become more realistic and harder to detect. Recently the firm Psono.com has highlighted modern scams like AI-powered phishing, clone emails, and gift card fraud that use personal data to create highly convincing attacks. Digital Journal has drawn out the key points from the report.
Understanding how these scams work can aid those seeking to protect personal information and money.
AI-Powered Scams
Scammers now use AI to impersonate family or friends, creating realistic voice recordings or videos from social media content. These deepfakes are used to ask for money or personal information, making the scams feel alarmingly real.
What to Do: If you receive an unexpected request, ask for details only the real person would know. A wrong or vague answer is a strong sign of a scam.
Gift Card Scams
Scammers analyse online shopping habits to target victims with gift card requests from stores they frequently use, especially during busy shopping seasons. The cards are quickly redeemed once the codes are shared, leaving the victim with financial loss.
What to Do: If someone asks for gift card codes, especially for payment or problem resolution, it’s likely a scam. Always verify requests directly with the person or organization before taking action.
Vishing
Vishing involves phone scams where attackers impersonate trusted organizations, like banks or government agencies, creating urgency—such as reporting “suspicious activity”—to pressure victims into sharing sensitive details.
What to Do: No legitimate organization will ever ask over the phone for sensitive information, like PINs or card details. If unsure, hang up and contact the institution directly using a verified number. Always take a moment to verify before acting on any request.
Smishing
Smishing scams use fake text messages that mimic delivery updates or account alerts, often targeting online shoppers, to steal credentials or spread malware.
What to Do: Always check the sender’s number. If it doesn’t match the official organization, it’s likely a scam. Verify messages directly with the company before taking action.
Clone Phishing
Clone phishing replicates real emails, like receipts or notifications, but replaces links or attachments with malicious ones. The familiarity makes them easy to fall for.
What to Do: Check the sender’s email address and double-check any links by hovering over them. If the email feels off, contact the sender directly using their official contact details.
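The "hover over links" check can also be done mechanically: compare the domain a link displays with the domain its href actually points to. The sketch below is illustrative only (the function names and sample markup are my own, not from the Psono report), using Python's standard-library HTML parser:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collect (visible text, actual href) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None


def _domain(s):
    """Extract a hostname from text, or None if it doesn't look like one."""
    s = s.strip()
    if "//" not in s:
        s = "//" + s
    host = urlparse(s).hostname
    if host and "." in host and " " not in host:
        return host
    return None


def suspicious_links(html):
    """Return links whose visible text names one domain but whose
    href actually points at a different one (the clone-phishing tell)."""
    auditor = LinkAuditor()
    auditor.feed(html)
    return [
        (text, href)
        for text, href in auditor.links
        if _domain(text) and _domain(href) and _domain(text) != _domain(href)
    ]
```

A link displayed as `https://bank.example` that actually leads to `http://evil.example/login` would be flagged, while links whose visible text is just "Click here" are skipped, since no domain mismatch can be shown.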
Social Media Phishing
Social media phishing uses fake or hacked profiles to send messages that mimic giveaways or urgent requests. These scams aim to steal login credentials or personal information.
What to Do: Avoid clicking links in unsolicited messages. Verify requests directly with the sender and double-check login pages for authenticity.
Man-in-the-Middle Attacks
Man-in-the-middle attacks happen when hackers intercept what you send or receive on public Wi-Fi, such as passwords or banking details. Using Wi-Fi at places like cafés or airports can make your data a target.
What to Do: Avoid logging into important accounts on public Wi-Fi. Use a VPN for extra security and look for “https://” on websites to ensure they are encrypted.
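The "look for https://" advice amounts to checking the URL scheme before typing anything sensitive. A minimal sketch (the function name is my own, not from the report):

```python
from urllib.parse import urlparse


def safe_to_submit(url):
    """Return True only for URLs that use encrypted HTTPS and name a host.

    On public Wi-Fi, plain http:// traffic can be read or altered by
    anyone positioned between you and the access point.
    """
    parts = urlparse(url)
    return parts.scheme == "https" and bool(parts.hostname)
```

Note that HTTPS only guarantees the connection is encrypted to whoever holds that address; it does not prove the site itself is legitimate, which is why the address still needs to be read carefully.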
Ransomware
Ransomware blocks access to files or devices by encrypting them and then demands payment to unlock them. These attacks often start with phishing emails or fake downloads and target personal data like photos or documents.
What to Do: Back up important files offline and avoid clicking on suspicious links or attachments. If attacked, report the incident to relevant authorities and seek professional advice on the next steps.
DNS Spoofing
DNS spoofing redirects users to fake websites that look like real ones. These sites are designed to steal sensitive information like passwords or credit card details.
What to Do: Always check the website address carefully before entering any information. Use secure websites with “https://” and consider tools that protect against DNS attacks.
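One concrete way to "check the website address carefully" is to flag lookalike domains, where a scammer swaps in visually identical non-ASCII characters. This heuristic is my own illustration, not something prescribed by the report:

```python
def looks_spoofed(hostname):
    """Flag hostnames containing non-ASCII characters, a common trick for
    building lookalike domains (e.g. a Cyrillic 'а' in place of Latin 'a')."""
    return any(ord(ch) > 127 for ch in hostname)
```

Modern browsers apply similar (far more sophisticated) checks and display suspicious international domains in their raw `xn--` punycode form, which is another cue worth watching for in the address bar.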
Fake Job Offers
Scammers post fake job offers, often promising high pay or remote work, to steal personal details or money. They may ask for fees or sensitive information, pretending to be real companies.
What to Do: Before paying or sharing personal information, ensure the request comes from the right source. Research the company and confirm details through official channels.
AI is changing how scammers operate, making their attacks more personal and harder to spot. They use tools to mimic voices, create fake videos, or send messages that seem to come from trusted contacts.
Misinformation expert cites bogus studies — likely due to AI — in court case: court docs
Judge with Gavel (Shutterstock)
An expert in misinformation has been accused of using artificial intelligence to craft an expert declaration in a court case — and cited a study that doesn't exist.
Communication professor Jeff Hancock, the founding director of Stanford’s Social Media Lab, stands accused of using AI to craft an expert declaration in a Minnesota court case.
The effort was about a 2023 state law that criminalizes using deepfakes to influence an election, The Stanford Daily noted Monday. Hancock handed in a 12-page declaration defending the law with 15 legal citations. Two of those couldn't be found, however.
The reason: ChatGPT appeared to make them up.
"No article by the title exists," court documents allege. "The publication exists, but the cited pages belong to unrelated articles. Likely, the study was a 'hallucination' generated by an AI large language model like ChatGPT. A part-fabricated declaration is unreliable."
The lawsuit was brought against the state by Republican Minnesota state Rep. Mary Franson and a conservative social media satirist named Christopher Kohls. Kohls claimed that the state law limits free speech, since AI media could be used to expose false information.
ALSO READ: FBI uncovers deceptive AI deepfakes in 2024 election's final hours
Hancock was paid $600 an hour for his written testimony and was required to swear under penalty of perjury that what he said in the document was "true and correct."
A Nov. 16 filing cited the errors Hancock made and asked the judge to exclude his declaration from the state's case.
“The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT,” lawyer Frank Bednarz wrote. “The existence of a fictional citation Hancock (or his assistants) didn’t even bother to click calls into question the quality and veracity of the entire declaration.”
Read the full report here.
Sarah K. Burris
December 2, 2024
RAW STORY