Five AI-Enabled Cyberattacks to Watch
By Chaz Vossburg

What a year 2020 has been so far. In a year when a global pandemic rages, economies seesaw daily, Tiger King is one of the most-streamed documentary series, and the Dodgers won a World Series, do we also need to deal with machines turning on us? Are we doomed to servitude in a world run by out-of-control AI? Was Stanley Kubrick right, and is HAL 9000 going to dominate humanity?


No, and though my ridiculous ideas above (the Dodgers winning the World Series notwithstanding) are a giant exaggeration, we need to begin shifting our security mindsets to the new threats on the horizon. Most people are more or less familiar with traditional cyberattacks involving system hacking, viruses, and ransomware. However, as AI and machine learning technology grows more complex, the risks posed to firms and individuals grow increasingly potent. The growing sophistication of the latest software and algorithms lets malicious hackers, scammers, and cybercriminals working tirelessly behind the scenes stay one step ahead of the authorities, making their attacks increasingly difficult both to prepare for and to defend against.

With that in mind, Wellforce presents five new AI-based cybersecurity threats for 2020 that have IT departments' eyes open and security administrators' heads spinning. And remember: 2001: A Space Odyssey is only a movie, right?

Deepfakes: Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else's likeness. In other words, a fake video made to look real. According to CSO, there have so far been relatively few verified instances of deepfakes used for nefarious or malicious purposes, but experts do not believe that trend will continue. Creating a deepfake emergency alert warning that an attack is imminent, destroying someone's personal life with fake videos, or disrupting a close election by releasing fake video or audio of a candidate days before voting starts are just a few examples of how deepfakes can be weaponized.

Deepfake voice technology: This technology allows people to spoof the voices of others by taking legitimate audio samples and applying machine learning to clone them into incredibly accurate but fraudulent recordings. Cybercrime took a vast leap forward in 2019 when the CEO of a UK-based energy firm fell victim to a scam built on a phone call using deepfake voice technology. Believing he was speaking to his boss, the CEO wired almost $250k on the instructions of an AI-generated deepfake voice.

Hackers attacking AI while it is still learning: Artificial intelligence evolves and learns as more data is collected. The success of AI and machine learning depends directly on the integrity of the data and algorithms they use, and a system is most vulnerable to cyberattack while it is still learning a new model. In this type of attack, also called a poisoning attack or training-time attack, cybercriminals inject bad data into an AI algorithm, causing it to learn differently than intended. In fact, "there is a whole spectrum depending on which phase of the machine-learning model generation pipeline the attacker is sitting at," said Earlence Fernandes, a computer security researcher at the University of Washington. Wherever the bad data enters, the result is confusion and time lost trying to track it down.
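To make the idea concrete, here is a minimal sketch of one common poisoning technique, label flipping, built on scikit-learn and a synthetic dataset. The dataset, the model, and the 20% poisoning rate are illustrative assumptions, not details from any real attack:

```python
# A minimal sketch of a label-flipping poisoning attack. The dataset,
# model, and 20% poisoning rate are illustrative choices only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on clean data
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of 20% of the training data: the bad
# data injected while the model is still learning
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"clean model accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"poisoned model accuracy: {poisoned.score(X_test, y_test):.2f}")
```

Even this crude attack measurably degrades the model; subtler, targeted poisoning can be far harder to detect.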

Synthetic identities: While these are not necessarily a new type of fraud, the technology and sophistication now available allow a scammer to combine real and fabricated credentials to create the illusion of a real person, and to a much deeper level than ever before. Pairing a legitimate physical address and birthdate with a Social Security number and credit history that can stand up to heavier scrutiny is just one of many possibilities.
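To illustrate how low the bar has become, here is a minimal sketch using the open-source Faker library to fabricate a plausible-looking identity record. Every field below is invented; real synthetic-identity fraud blends fabricated details like these with stolen, genuine ones:

```python
# A minimal sketch of fabricating a plausible identity record with the
# open-source Faker library (en_US locale). All fields are invented.
from faker import Faker

fake = Faker("en_US")

record = {
    "name": fake.name(),
    "address": fake.address(),
    "birthdate": fake.date_of_birth(minimum_age=21, maximum_age=65).isoformat(),
    "ssn": fake.ssn(),  # format-valid but entirely fabricated
}
print(record)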

Machine Learning Enabled Attacks: Machine learning refers to the ability of computers to learn, adapt, and respond without being specifically programmed to execute certain tasks. Machine learning-enabled attacks happen when cybercriminals use this artificial intelligence technology to carry out a cyberattack.

Using machine learning, hackers can automate some or all steps of a data breach process (a toy sketch of the first step follows the list), including:

  1. Vulnerability discovery – finding a weakness in the targeted network.

  2. Initial exploitation – exploiting the weakness to gain access to the network.

  3. Targeted exploitation – finding and exploiting vulnerabilities within the network.

  4. Data theft – removing sensitive or valuable data from the network.
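As a toy illustration of step 1, the sketch below uses a scikit-learn classifier to rank hosts by how likely they are to be vulnerable, so probing can be automated. The features, training data, and IP addresses are all invented for illustration:

```python
# A toy sketch of automated vulnerability discovery: a classifier trained
# on past scan results ranks which hosts are most likely vulnerable.
# Features, training data, and addresses are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative features per host: [open_ports, days_since_patch, weak_tls]
X_train = np.array([[3, 10, 0], [12, 400, 1], [5, 30, 0], [20, 700, 1]])
y_train = np.array([0, 1, 0, 1])  # 1 = a vulnerability was found previously

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

hosts = {"10.0.0.5": [4, 15, 0], "10.0.0.9": [18, 650, 1]}
scores = model.predict_proba(np.array(list(hosts.values())))[:, 1]
for host, score in sorted(zip(hosts, scores), key=lambda pair: -pair[1]):
    print(f"{host}: probe priority {score:.2f}")
```

The same ranking logic, of course, works just as well for a defender deciding which hosts to patch first.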

So, what can you do to protect against machine learning and AI-enabled attacks?

This may sound bleak, but there are steps you can take to help protect yourself against these attacks.

  1. Keep software and systems up-to-date

  2. Strengthen credentials and revisit permissions

  3. Implement multifactor authentication (see the TOTP sketch below)
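On that last point, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator-app MFA, using the open-source pyotp library. This is illustrative only; a real deployment stores the secret server-side and provisions it to the user's phone via a QR code at enrollment:

```python
# A minimal sketch of TOTP-based multifactor authentication using the
# open-source pyotp library. Illustrative only.
import pyotp

secret = pyotp.random_base32()  # shared secret generated at enrollment
totp = pyotp.TOTP(secret)

code = totp.now()               # the 6-digit code the user's app displays
print("valid login" if totp.verify(code) else "rejected")
```

Because the code changes every 30 seconds, a stolen password alone is no longer enough to log in.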

Perhaps the most important thing that you can do is to enlist a trusted Managed Services Provider to help you keep your employees safe and protect your data.  For more information, contact Wellforce today!
