The angler phishing scam and other ChatGPT-driven cyber scams to avoid

Generative AI platforms make cybercriminals' work easier.

AI can be prompted to generate content that can be used for cybercrime. (Image by Holly Quinn, made with SDXL)

You’re scrolling through social media when you’re alerted that a favorite brand account has tagged you in a post. As a thank you for being a customer, the company wants to reward your loyalty — all you have to do is DM them your name, address and email.

You imagine free gifts arriving at your door. Loyalty rewards are a thing, right?

They are. But if you send that DM, you will have fallen for the angler phishing scam, a common type of cybercrime that is easily facilitated by generative AI bots like ChatGPT.

Beyond Identity, a New York-based multi-factor authentication (MFA) company, put ChatGPT’s hacking capabilities to the test by prompting it to create fake messages, passwords and apps. The AI was able to generate fake security alerts, requests for assistance, emails from a CEO, social media posts and Bitcoin trading alerts nearly instantly.

To gauge whether these AI-generated scams might fool people into sharing private information, the Beyond Identity team then surveyed more than 1,000 Americans, asking whether they would fall for them.

In some cases, such as the well-known “urgent request for assistance” email scam, fewer than 10% said they’d fall for it. Slightly more (12 to 15%) could see themselves responding to a security alert email or text “informing” them that their data may have been compromised in a breach and that they need to click a link to verify their account information.

It was the social media post offering a loyalty reward, that angler phishing scam, that drew the most would-be victims: 21% of respondents said they’d fall for it.

Scams like these will only become more common, and stealthier.

ChatGPT-generated angler phishing message from a social media account. (Courtesy image)

“With ChatGPT, scams are only going to increase in quality and quantity,” Beyond Identity CTO Jasson Casey told Technical.ly. “Businesses and organizations should continue to invest in education and tools to help users understand and identify phishing scams that come into their inbox, but there’s a non-zero chance of these scams eventually getting through to a user and that user clicking a malicious link. Very often the purpose of these links is to [scam] a user out of his password or to enable a user to bypass weak MFA.”

Once a cybercriminal has collected enough private information about a victim, they can use ChatGPT to make a list of likely passwords. This method takes advantage of the fact that users may combine easy-to-remember personal details when creating passwords. Respondents in the Beyond Identity survey were on the password-savvy side: Only a quarter used personal information when creating passwords. Of that quarter, 35% said they used their birth date in their passwords, 34% used a pet’s name and 28% used their own name.

To demonstrate how easily ChatGPT can generate passwords, the team created a fake person and fed the AI their personal information, including their alma mater, favorite sports team and favorite band. It immediately produced a list of probable passwords, at least for anyone who builds passwords from that kind of personal info.
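
To see why that works, consider a minimal sketch of the same idea (illustrative only, not Beyond Identity’s or ChatGPT’s actual method): given a handful of personal facts, a few lines of code can churn out plausible guesses. Every name and fact below is hypothetical.

```typescript
// Minimal sketch of personal-info password guessing.
// The "facts" are hypothetical; an attacker would harvest them
// from social media profiles or phishing replies.
const facts = ["stateu", "eagles", "beatles", "1987"]; // alma mater, team, band, birth year
const suffixes = ["", "!", "123", "2024"];

const candidates = new Set<string>();
for (const word of facts) {
  for (const other of ["", ...facts]) {
    if (other === word) continue;
    for (const suffix of suffixes) {
      candidates.add(word + other + suffix);
      // Common variant: capitalize and swap letters for symbols.
      const leet = word.charAt(0).toUpperCase() +
        word.slice(1).replace(/a/g, "@").replace(/o/g, "0");
      candidates.add(leet + other + suffix);
    }
  }
}

console.log(`${candidates.size} candidates from 4 personal facts`);
// Even this toy version yields over a hundred guesses, trivial for
// an automated credential-stuffing tool to run through.
```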

Avoiding that trap is fairly easy: Don’t use personal information in passwords. Even a mishmash of your birthday, child’s name and school mascot with special characters thrown in is risky.

And when possible, don’t use passwords at all.

“The best thing you can do is make the impact of a user clicking on a phishing link harmless,” Casey said. “That means moving away from passwords and towards passkeys, which are becoming increasingly adopted across a variety of platforms.”

You may already use passkeys without realizing it, such as when you use your phone’s fingerprint sensor to access your bank account or pay a bill.
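
Under the hood, passkeys are built on the WebAuthn standard: a site asks your device to create a cryptographic key pair, and unlocking it takes your fingerprint, face or PIN. As a rough sketch of the browser side (the site name, user details and challenge below are placeholder values; in a real flow they come from the server), passkey creation looks something like this:

```typescript
// Minimal sketch of passkey creation in the browser via the WebAuthn API.
// All values here are hypothetical placeholders.
async function createPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { name: "Example Bank", id: "example.com" },       // the "relying party" (the site)
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
      authenticatorSelection: { userVerification: "required" }, // e.g. fingerprint or face
    },
  });
}
// The private key never leaves the device, so there is nothing for a
// phishing page to steal, and the browser refuses to use the passkey
// on a domain that doesn't match "example.com".
```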

Still, generative AI-powered cybercrime can sneak up on you at any time, including voice-cloning scams that can deepfake your boss, a client or even family members.

Some warning signs that should always give you pause include:

  • Something feels off — That email says it’s from your boss, but does it really read like an email they would write? Trust your gut and verify whether it’s real by contacting them directly.
  • The request is unusual — Your company CEO sends you an email saying they lost their wallet out of town and need your help with a money transfer? That kind of request is — or should be — unusual enough to tell you it’s fake.
  • The brand name is a little wrong — If that social media brand you love tags you saying you’ve won something, double-check the social media handle and the logo. It’s likely a fake account posing as the real brand.
  • You receive an unsolicited text — If you didn’t sign up for texts from a company, they shouldn’t be sending you texts, period. If they break through your spam filter, report them.
  • Any message from your bank or other financial institution — Yes, your bank will call or text you if a suspicious charge is made to your account. But if you get a text or call claiming unusual activity, do not respond, even if everything looks right. Instead, call the bank directly and ask if there is an issue with your account that triggered the contact.
  • Suspicious links — Not clicking them is cybersecurity 101, but as cybercrime technology evolves, suspicious links may look less and less suspicious. (One way to spot a lookalike domain is sketched after this list.)
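
One trick that makes a link look legitimate is a lookalike domain: either an internationalized hostname whose characters render almost identically to a trusted brand’s, or the real brand name buried as a subdomain of an attacker’s site. Here is a minimal, hypothetical sketch of catching both (the domain “yourbank.com” is a placeholder, and a real phishing filter does far more):

```typescript
// Minimal sketch: flag two common lookalike-link tricks.
// "yourbank.com" is a hypothetical trusted domain; a real filter
// checks much more (redirects, reputation, shorteners, etc.).
function looksSuspicious(link: string): boolean {
  const host = new URL(link).hostname;

  // Trick 1: internationalized lookalikes (e.g. "apple" spelled with a
  // Cyrillic "а") get an "xn--" prefix once the URL is parsed.
  if (host.split(".").some((label) => label.startsWith("xn--"))) return true;

  // Trick 2: the real brand as a subdomain of an attacker's domain,
  // e.g. "yourbank.com.evil-site.net".
  const trusted = "yourbank.com";
  const isTrustedHost = host === trusted || host.endsWith("." + trusted);
  if (host.includes(trusted) && !isTrustedHost) return true;

  return false;
}

console.log(looksSuspicious("https://xn--pple-43d.com/login"));     // true (lookalike "apple.com")
console.log(looksSuspicious("https://yourbank.com.evil-site.net")); // true
console.log(looksSuspicious("https://yourbank.com/login"));         // false
```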

P.S. One other digital scam to watch out for? This remote work scam that’s blowing up in the recruiting industry.
