As the potential of artificial intelligence rapidly unfolds through continuous research, experts are growing alarmed by the possibility of AI being exploited as a future cyber weapon.
Just last month, Tesla CEO and tech billionaire Elon Musk aired his sentiments about the threats that AI poses to humans. It’s pretty alarming, but some experts assured the public that an ‘AI Doomsday’ is still far from happening. However, that doesn’t mean we are safe from online hackers who might use AI as their future cyber weapon.
Still, just like the other technological advancements we have today, artificial intelligence can be exploited by criminals to carry out their schemes. Almost every week, news of hacking incidents comes from all corners of the world.
Last week, the U.S. and major parts of Europe suffered a large-scale hacking attack that sought to compromise the operations of the energy sector. Aside from that, news of the Equifax hack also made headlines: an unknown group reportedly stole the sensitive information of around 143 million Americans.
The question now is, how feasible is the possibility of criminals using AI as a future cyber weapon?
Hackers’ Future Cyber Weapon: Artificial Intelligence
In 2016, John Seymour and Philip Tully, both data scientists at ZeroFOX, a cyber security firm based in Baltimore, Maryland, conducted an experiment to test who could get more Twitter users to click malicious links: humans or artificial intelligence.
Surprisingly, the AI, named SNAP_R (Social Network Automated Phishing with Reconnaissance), outperformed its human competitors.
During the process, the researchers taught SNAP_R to study the behavior of social network users, according to an article from Gizmodo. The duo reportedly chose Twitter because of its bot-friendly and trusting culture.
The machine learning system studied how Twitter users behave, then designed and implemented its own phishing bait. The results of the experiment showed that the artificial hacker was able to compose and distribute more phishing tweets, and with a more substantial conversion rate.
During the experiment, SNAP_R sent ‘simulated spear-phishing tweets’ to over 800 users, averaging 6.75 tweets per minute and luring around 275 victims. On the other hand, one of the human competitors, Thomas Fox-Brewster of Forbes, recorded an average of 1.075 tweets a minute, making 129 attempts and luring in just 49 Twitter users.
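The scale advantage behind those figures is easy to work out. As a back-of-the-envelope sketch (the victims-per-minute rates below are derived from the reported numbers, not figures stated in the study):

```python
def throughput(tweets_sent, tweets_per_min, victims):
    """Victims lured per minute, given total tweets sent and the tweet rate."""
    minutes_active = tweets_sent / tweets_per_min   # implied session length
    return victims / minutes_active

# Figures reported for the head-to-head described above
snap_r_rate = throughput(800, 6.75, 275)   # roughly 2.3 victims per minute
human_rate  = throughput(129, 1.075, 49)   # roughly 0.4 victims per minute
print(f"SNAP_R: {snap_r_rate:.2f}/min vs human: {human_rate:.2f}/min")
```

Both sessions work out to about two hours, so the machine’s edge came almost entirely from volume: it simply attacked far more people in the same window.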
Fortunately, this was just an experiment!
However, the study strongly suggests that hackers capable of developing and manipulating artificial intelligence could use it as a future cyber weapon to carry out large-scale attacks. Some experts even believe that online criminal groups are already using it, despite the lack of hard evidence.
In July, hundreds of cyber security experts gathered at the Black Hat USA 2017 in Las Vegas, Nevada to discuss the looming threats posed by emerging technologies.
In a poll conducted by Cylance, an American software firm based in California that develops software to counter computer viruses, 62 percent of the Black Hat infosec attendees believed artificial intelligence would be used for cyber attacks in the coming year. The Cylance team wrote:
“One thing that was readily apparent at Black Hat this year was that artificial intelligence (AI) has officially arrived. Between the countless booths plastered with the promises of AI, machine learning, and automation (including our own), and various sessions focused on the use of these technologies for active defense, it was clear that the industry has high expectations for intelligent solutions. However, the rise of AI comes with its own drawbacks.”
However, if the data produced by Cylance is to be believed, it’s alarming that a sizable share of security experts still refuses to acknowledge that AI as a future cyber weapon is almost upon us.
In an interview with Gizmodo, Cylance lead security data scientist Brian Wallace said:
“Hackers have been using artificial intelligence as a weapon for quite some time. It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning, in particular, are perfect tools to be using on their end.”
The Scale of Artificial Intelligence
Artificial intelligence is a broad term, and as current technologies evolve, so does people’s perception of AI. By definition, intelligence is the capability of an agent, whether biological or mechanical, to solve complex problems.
If you think about it closely, our world is currently filled with different tools that have that capability. In fact, it’s been around for quite some time now. In essence, machine learning and neural networks are considered forms of artificial intelligence.
Machine learning, which enables systems to learn and decide on their own using algorithms and data gathered from their environment, is artificial intelligence at work. The same goes for neural networks, systems modeled after the human brain. Wallace further stated:
“The term AI is often misconstrued, with many people thinking of Terminator robots trying to hunt down John Connor—but that’s not what AI is. Rather, it’s a broad topic of study around the creation of various forms of intelligence that happen to be artificial.”
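To make that point concrete, here is a minimal sketch of machine learning at work: a single perceptron, the simplest artificial neuron and the building block of the neural networks mentioned above, learning the logical-OR rule purely from labeled examples rather than hand-written logic. The training data and loop are illustrative only:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Learn weights for a single artificial neuron from labeled examples."""
    w = [0.0, 0.0]   # weights, adjusted from experience
    b = 0.0          # bias term
    for _ in range(epochs):               # repeated passes over the data
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # how wrong was the guess?
            w[0] += lr * err * x1         # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    """Apply the learned weights to a new input."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach the neuron logical OR purely from examples; no rule is hand-coded
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in examples])  # → [0, 1, 1, 1]
```

Nothing in the code spells out what OR means; the behavior emerges from the data. That is the property that makes such systems both powerful and, in the wrong hands, scalable.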
Continuing to dismiss these two as forms of artificial intelligence leaves us vulnerable to hackers who might exploit their potential and use them as a future cyber weapon against us. Deepak Dutt, the founder and CEO of the Ontario-based mobile security company Zighra, said:
“Artificial intelligence can be used to mine large amounts of public domain and social network data to extract personally identifiable information like date of birth, gender, location, telephone numbers, e-mail addresses, and so on, which can be used for hacking [a person’s] accounts.
It can also be used to automatically monitor e-mails and text messages, and to create personalized phishing mails for social engineering attacks [phishing scams are an illicit attempt to obtain sensitive information from an unsuspecting user]. AI can be used for mutating malware and ransomware more easily, and to search more intelligently and dig out and exploit vulnerabilities in a system.”
In a separate interview with Gizmodo, Dutt revealed his belief that AI is already being used in numerous cyber attacks today.
“Also the availability of large amounts of social network and public data sets (Big Data) helps. Advanced machine learning and Deep Learning techniques and tools are easily available now on open source platforms—this combined with the relatively cheap computational infrastructure effectively enables cyber attacks with higher sophistication.”
Taking into account what the experts say regarding AI as a future cyber weapon, the possibilities seem endless. Cyber criminals could use it to target vulnerable populations, create sophisticated malware, and much more.
We are indeed entering a new era of cyber warfare, one that might be beyond our comprehension as we become more involved with artificial intelligence. Right now, all we can do is wait and hope for a better future.