To effectively manage the risk that cybercriminals pose to your business, it is important to anticipate the attacks your business may soon have to deal with. Given how accessible artificial intelligence and related technologies have become, we predict that cybercriminals will likely turn AI to their advantage in the very near future.
We aren’t alone in believing so, either. A recent study examined twenty such AI-enabled cybercrimes to determine where the biggest threats lie.
Here, we’re looking at the results of this study to see what predictions can be made about the next 15 years where AI-enhanced crime is concerned. Here’s a sneak preview: deepfakes (fabricated videos of real people, often celebrities and political figures) will be very believable, which is very bad news.
To compile their study, researchers identified 20 threat categories from academic papers, current events, pop culture, and other media to establish how AI could be harnessed for crime. These categories were then reviewed and ranked during a conference attended by subject matter experts from academia, law enforcement, government and defense, and the private sector. These deliberations resulted in a catalogue of potential AI-based threats, evaluated based on four considerations:

- The harm the crime could cause
- The criminal profit to be made
- How achievable the crime would be
- How difficult the crime would be to defeat
Splitting into smaller groups, the participants ranked the collection of threats through q-sorting, arranging them into a bell-curve distribution: less-severe threats and attacks fell to the left, while the biggest dangers were placed to the right.
When the group came back together, their distributions were compiled to create their conclusive diagram.
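For readers curious what q-sorting actually involves: it forces raters to place items into a fixed, bell-shaped set of slots, so only a few items can land at either extreme. As a rough illustration only (the threat names, scores, and slot counts below are hypothetical, not taken from the study), the idea can be sketched in Python:

```python
# Illustrative sketch of forced-distribution (q-sort) ranking.
# All threat names and scores here are made up for demonstration.

threat_scores = {
    "Deepfakes": 9, "Tailored phishing": 8, "Burglar bots": 3,
    "Driverless-car attacks": 7, "Fake reviews": 2,
    "Data poisoning": 6, "Forgery": 4,
}

# Forced bell-shaped slots: few items at the extremes, most in the middle.
slots = {"low": 2, "medium": 3, "high": 2}

ranked = sorted(threat_scores, key=threat_scores.get)  # least to most severe
buckets, start = {}, 0
for label, count in slots.items():
    buckets[label] = ranked[start:start + count]
    start += count

print(buckets)
# The "high" bucket holds the two most severe threats, the "low"
# bucket the two least severe, with everything else in the middle.
```

In a real q-sort, each participant performs this placement by judgment rather than by numeric score, and the individual sorts are then aggregated, which is what the conference did when the groups came back together.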
Crime is, in and of itself, a very diverse concept. A crime could be committed against assorted targets, for several different motivating reasons, and its impact upon its victims can be just as varied. Bringing AI into the mix, whether in practice or even just as an idea, only introduces an additional variable.
Having said that, some crimes are much better suited to AI than others. Sure, we have pretty advanced robotics at this point, but that doesn’t mean that building assault-and-battery bots is a better option for a cybercriminal than a simple phishing attack would be. Not only is phishing considerably simpler to carry out, it also offers far more opportunities to profit. Unless a crime has a very specific purpose, AI seems most effective in the criminal sense when used repeatedly and at a wide scale.
This scalability has also helped make cybercrime an all-but-legitimate industry. When data is just as valuable as any physical good, AI becomes a powerful tool for criminals, and a significant threat to the rest of us.
One of the authors of the study we are discussing, Professor Lewis Griffin of UCL Computer Science, put the importance of such endeavors as follows: “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”
When the conference concluded, the assembled experts had generated a bell curve ranking the 20 threats, rating each against the four considerations listed above, specifically in terms of how much each factor worked to a criminal’s benefit. Threats of similar severity were grouped together in the bell curve, so the results neatly split into three categories:
As you might imagine, the crimes ranked as low threats offered little value to the cybercriminal, causing little harm and bringing little profit while being difficult to pull off and easy to overcome. In ascending order, the conference ranked the low threats as follows:
(In case you were wondering, “burglar bots” referred to the practice of using small remote drones to assist with a physical break-in by stealing keys and the like.)
Overall, these mid-level threats balanced out: for most of them, the four considerations canceled each other out, providing no particular advantage or disadvantage to the cybercriminal. The threats included here were as follows:
Finally, we come to the AI-based attacks that the experts were most concerned about as sources of real damage. These highest-ranked threats broke down as follows:
A deepfake is a digital recreation of someone’s appearance, used to make it seem as though they said or did something they never did, or were present somewhere they never were. You can find plenty of examples of deepfakes of varying quality on YouTube. Viewing them, it is easy to see how inflammatory and damaging to someone’s reputation a well-made deepfake could prove to be.
Of course, now that we’ve gone over these threats and described how practical each really is, it is important to remind ourselves that any of them could damage a business in some way, shape, or form. We also can’t fool ourselves into thinking these attacks require AI to be staged; human beings could carry out most of them on their own, which makes them no less of a threat to businesses.
It is crucial that we keep this in mind as we work to secure our businesses.
As more and more business opportunities move online, more and more threats have followed. Keeping your business protected from them, whether AI is involved or not, is crucial to its success.
TaylorWorks can help you keep your business safe from all manner of threats. To find out more about the solutions we can offer to benefit your operations and their security, give us a call at 407-478-6600.