A recent cyber analysis report has warned that artificial intelligence (AI) enabled cyberattacks, which have been quite limited until now, may become more aggressive in the coming years.
The Helsinki-based cybersecurity and privacy firm WithSecure, the Finnish Transport and Communications Agency, and the Finnish National Emergency Supply Agency collaborated on the report, according to an article by Cybernews on Thursday.
“Although AI-generated content has been used for social engineering purposes, AI techniques designed to direct campaigns, perform attack steps, or control malware logic have still not been observed in the wild,” said Andy Patel, WithSecure intelligence researcher.
Such “techniques will be first developed by well-resourced, highly-skilled adversaries, such as nation-state groups.”
The paper examined current trends and advancements in AI, cyberattacks, and areas where the two intersect, suggesting early adoption and evolution of preventative measures were key to overcoming the threats.
“After new AI techniques are developed by sophisticated adversaries, some will likely trickle down to less-skilled adversaries and become more prevalent in the threat landscape,” stated Patel.
The threat in the next five years
The authors say it is safe to assume that AI-based attacks are currently very rare and limited mostly to social engineering, though they may also be employed in ways that analysts and researchers cannot directly observe.
Most current AI disciplines do not come close to human intelligence and cannot autonomously plan or carry out cyberattacks.
However, attackers will likely create AI in the next five years that can autonomously identify vulnerabilities, plan and carry out attack campaigns, use stealth to avoid defenses, and gather or mine data from infected systems or open-source intelligence.
“AI-enabled attacks can be run faster, target more victims and find more attack vectors than conventional attacks because of the nature of intelligent automation and the fact that they replace typically manual tasks,” said the report.
New methods will be required to counter AI-based attacks that make use of synthetic content, spoof biometric authentication systems, and exploit other upcoming capabilities, according to the paper.
AI-powered attacks are expected to excel at impersonation, a tactic frequently used in phishing and vishing (voice phishing) attacks, the report noted.
“Deepfake-based impersonation is an example of new capability brought by AI for social engineering attacks,” claimed the report’s authors, who forecast that impersonations made possible by AI will advance further.
“No prior technology enabled [attackers] to convincingly mimic the voice, gestures, and image of a target human in a manner that would deceive victims.”
Many tech experts believe that deepfakes are the biggest cybersecurity concern.
The concern is well founded: from phone locks to bank accounts and passports, recent technical developments have migrated toward biometric technologies.
Given how quickly deepfakes are developing, security systems that primarily rely on such technology appear to be at higher risk.
According to the Identity Theft Resource Center’s (ITRC) study of data breaches, there were 1,291 data breaches through September 2021, a 17 percent increase over the 1,108 breaches recorded in all of 2020.
ITRC research also found that data compromises affected 281 million victims during the first nine months of 2021, a sharp increase.