Scamming and Artificial Intelligence: Hand in Hand

The use of artificial intelligence has grown rapidly over the past few years, and it is slowly working its way into our day-to-day lives. Most of the time, people notice only AI's positive effects: its ability to help you study for an exam, for example, or to write an original, catchy song in seconds. But as scams multiply around the world, people have realized that not everything AI provides is positive, and that it brings real risks to the online world. Because AI tools are now applied constantly in modern life, it has become nearly impossible to tell whether something is original or machine-made. One of the most damaging ways this new technology has affected us is by making scamming easier across the globe.

Recently, the UK’s cybersecurity agency issued an alert that artificial intelligence has made it much harder to determine whether the emails you receive are authentic (from reliable companies) or from a scammer. Although it may not seem like a big deal at first glance, as the Guardian reports, “people struggle to identify phishing messages – where users are tricked into handing over passwords or personal details – due to the sophistication of AI tools.” Before AI became ubiquitous, scam emails were easy to spot, since most of them contained unprofessional vocabulary, poor grammar, or misspellings. Now scammers can use AI to polish their emails and replicate the messages of legitimate sites and companies in a matter of seconds. Unfortunately, many people end up handing over private information, which exposes them to further problems. The main consequence is that phishing emails are reaching and deceiving far more victims than ever before.

To determine how effective these fake emails have become, SoSafe, Europe’s leading provider of security awareness training, based in Cologne, Germany, recently conducted a study. The research concluded that AI-written phishing emails are opened by approximately 78% of the people they are sent to; in other words, more recipients open these dangerous emails than ignore them. SoSafe also found that, within that group, 28% went on to click on content such as links and attachments, and many of them ended up revealing personal information. This study, along with many others, demonstrates that AI can trick the human mind into believing something that isn’t true. What’s more, AI has become adept at imitating human-written text, which lets scammers produce spam emails that blend in with the rest of the messages in one’s inbox. In just one year, the number of spam messages sent weekly rose from 1.2 billion to 1.5 billion, an increase of 25%. These changes have done significant damage to the safety of many people’s private information, but they have also pushed many people to learn what information they should and should not share online. Because AI-written emails have become such a serious issue and have led so many victims to give their personal information away, there is now frequent discussion and instruction about how to use AI as a beneficial tool rather than a harmful one.

AI is a remarkable tool that will most likely continue to evolve and grow in the years ahead. As it keeps presenting humans with new opportunities, we simply have to make sure it is used for the incredible things it was made to do.

Works Cited

“A.I. is helping hackers make better phishing emails.” CNBC, 8 June 2023, https://www.cnbc.com/2023/06/08/ai-is-helping-hackers-make-better-phishing-emails.html. Accessed 12 March 2024.

Allisat, Arne. “AI Arms Race: the evolving battle between email spam and spam filters.” TechRadar, 7 March 2024, https://www.techradar.com/pro/ai-arms-race-the-evolving-battle-between-email-spam-and-spam-filters. Accessed 12 March 2024.

“Generative AI financial scammers are getting very good at duping work email.” CNBC, 14 February 2024, https://www.cnbc.com/2024/02/14/gen-ai-financial-scams-are-getting-very-good-at-duping-work-email.html. Accessed 12 March 2024.

Milmo, Dan, and Alex Hern. “AI will make scam emails look genuine, UK cybersecurity agency warns.” The Guardian, 23 January 2024, https://www.theguardian.com/technology/2024/jan/24/ai-scam-emails-uk-cybersecurity-agency-phishing. Accessed 12 March 2024.

“One in five people click on AI-generated phishing emails, SoSafe data reveals.” SoSafe, https://sosafe-awareness.com/company/press/one-in-five-people-click-on-ai-generated-phishing-emails-sosafe-data-reveals/. Accessed 12 March 2024.
