- This topic has 0 replies and 1 voice, and was last updated 5 years, 1 month ago by Bjarne. This post has been viewed 619 times
- 12 August 2018 at 16:36 #318897
- Super Nova
How AI Can Power a Stealthy New Breed of Malware
Cybersecurity is an arms race in which attackers and defenders play a constantly evolving game of cat and mouse. Every new era of computing has presented attackers with new capabilities and new vulnerabilities to exploit.
In the PC era, we witnessed malware threats emerging from viruses and worms, and the security industry responded with antivirus software. In the web era, attacks such as cross-site request forgery (CSRF) and cross-site scripting (XSS) challenged web applications. Now we are in the cloud, analytics, mobile and social (CAMS) era, and advanced persistent threats (APTs) have been top of mind for CIOs and CSOs.
But we are on the cusp of a new era: the artificial intelligence (AI) era. The shift to machine learning and AI is the next major progression in IT. However, cybercriminals are also studying AI to use it to their advantage — and weaponize it. How will the use of AI change cyberattacks? What are the characteristics of AI-powered attacks? And how can we defend against them?
At IBM Research, we are constantly studying the evolution of technologies, capabilities and techniques in order to identify and predict new threats and stay ahead of cybercriminals. One of the outcomes, which we will present at the Black Hat USA 2018 conference, is DeepLocker, a new breed of highly targeted and evasive attack tools powered by AI.
IBM Research developed DeepLocker to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware. This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition.
You can think of this capability as similar to a sniper attack, in contrast to the “spray and pray” approach of traditional malware. DeepLocker is designed to be stealthy. It flies under the radar, avoiding detection until the precise moment it recognizes a specific target. This AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected. But, unlike nation-state malware, it is feasible in the civilian and commercial realms.
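To make the concealment idea above concrete, here is a minimal sketch in Python of how a payload can be locked to one target's attributes. All names here (`derive_key`, `try_unlock`, the embedding strings) are hypothetical illustrations, not DeepLocker's actual code; a toy XOR cipher stands in for real encryption. The point is that the key never ships with the binary, so analysts who never see the intended target cannot recover the payload.

```python
import hashlib

def derive_key(target_attributes: bytes) -> bytes:
    # The key exists only as a function of the recognized target's
    # attributes (e.g. a face embedding produced by an AI model), so it
    # cannot be reconstructed from the binary alone.
    return hashlib.sha256(target_attributes).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only; a real tool would use AES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: lock the payload to one victim's attributes.
victim_attributes = b"face-embedding-of-intended-target"  # hypothetical stand-in
payload = b"malicious-payload"
locked_payload = xor_cipher(payload, derive_key(victim_attributes))
payload_hash = hashlib.sha256(payload).digest()  # only a hash ships alongside

# Victim side: each observation is tried as a key; only the true target's
# attributes decrypt to something matching the shipped hash.
def try_unlock(observed_attributes: bytes):
    candidate = xor_cipher(locked_payload, derive_key(observed_attributes))
    if hashlib.sha256(candidate).digest() == payload_hash:
        return candidate
    return None
```

Under this (assumed) design, scanning the binary reveals only ciphertext and a hash; the trigger condition is as opaque as the AI model that produces the matching attributes.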
I have explained elsewhere that using deep learning to identify persons across thousands of street surveillance cameras is an impossibility; but the situation is entirely different when it concerns only one camera and one person's face. That is comparable to using facial recognition to unlock a smartphone. There is reason to warn against apps having access to cameras.
So AI can be used for more than blocking logins on astro-forum.