Malicious use of AI by hackers could pose security threats

With advances in artificial intelligence, the risks of hackers using such technologies to launch malicious attacks are increasing, top researchers warned in a report released on Wednesday.

Hackers could use AI to turn consumer drones and autonomous vehicles into potential weapons, for instance, said researchers from universities such as Oxford, Cambridge and Yale, as well as organizations like the Elon Musk-backed OpenAI.

The report, titled “The Malicious Use of Artificial Intelligence,” cautioned against various security threats posed by the misuse of AI.

Self-driving cars, for example, could be tricked into misinterpreting a stop sign, which might cause road accidents, while a swarm of drones controlled by an AI system could be used for surveillance or for launching quick, coordinated attacks, the report said.

Intelligent machines, according to the report, could lower the cost of carrying out cyberattacks by automating certain labor-intensive tasks and more effectively scoping out potential targets.

One example the report pointed to was “spear phishing,” in which attackers use messages personalized for each potential target to steal sensitive information or money.

“If some of the relevant research and synthesis tasks can be automated, then more actors may be able to engage in spear phishing,” the researchers said.

On the political front, AI could be used for surveillance, creating more targeted propaganda and spreading misinformation.

For example, “highly realistic videos” of state leaders appearing to make inflammatory comments they never actually made could be created using advances in image and audio processing, according to the report.

“We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data,” the report said.

AI can already be used to superimpose fake images of one person onto another in videos. For example, videos known as “deepfakes” superimpose a person’s face onto actors in adult films to create fake pornographic videos, a practice major websites have recently moved to clamp down on.

To be sure, the researchers said that the scenarios highlighted in the report were not definitive predictions of how AI could be maliciously used; some might not be technically possible in the next five years, while others are already occurring in limited form.

“Other malicious uses will undoubtedly be invented that we do not currently foresee,” they added.

Wednesday’s report did not offer specific ways to stop the malicious use of AI, but it made recommendations, including closer collaboration between policymakers and researchers, and called for the involvement of more stakeholders to tackle the misuse of AI.

Though the technology is still nascent, billions of dollars have been spent on developing artificially intelligent systems. Last year, International Data Corporation predicted that global spending on cognitive and artificial intelligence systems could reach $57.6 billion by 2021.

AI’s impact is predicted to be so massive that Google CEO Sundar Pichai recently said it could be more profound than electricity or fire, two of the most ubiquitous innovations in history.

At the same time, there are plenty of skeptics of AI. High-profile physicist Stephen Hawking said last year that AI could be the “worst event in the history of our civilization” unless society finds a way to control its development.
