
The Rise of Machine Learning and Social Engineering Attacks

Artificial intelligence (AI) has been in the news a lot lately. Some say that AI will take human jobs in the future; others tout its ability to simplify everyday tasks; some are embracing it for a quicker defense against cyber attacks; and Gartner predicts that by 2020 we will interact more with chatbots than with our own spouses.

Billions of interactions with chatbots are already happening each year, and many people do not even realize they are not speaking with a human. In fact, a bot called Thezboy is in use on Facebook, allowing businesses to communicate easily with their customers via posts. Amazon’s Alexa is now available to help with travel plans, entertainment, ordering products, and a variety of other tasks.

If AI can be taught to interact with humans in a friendly, helpful way, it can also be used maliciously. It would be easy for a malicious chatbot to seek out customer complaints online and then pose as customer service offering to remedy the situation. Thinking they are about to get a resolution, consumers may hand over sensitive data such as security-question answers, passwords, or personally identifiable information (PII).

This also has the potential to evolve into sophisticated phishing and spear-phishing emails that target victims by mimicking corporate lingo or an individual’s personal writing style. As Danny Palmer wrote, “After a period of monitoring, the AI could tailor phishing messages to mimic the message style of the victim to particular contacts in their address book, in order to convince them to click on a malicious link.” Couple malicious AI with Adobe’s VoCo (Photoshop for voice), and we may soon see sophisticated vishing attacks that emulate a trusted voice.

For now, be extra vigilant about which links you click, and make sure you are engaging in legitimate chat support by going directly to the company’s support site. Using digital signatures and enabling encryption on email communications is another step you can take to verify that the person you’re communicating with is authentic. Be safe out there!
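One concrete way to apply the "go directly to the company's support site" advice is to check whether a link's hostname actually belongs to the official domain before trusting it. The sketch below (the domain names are purely illustrative, not from the article) shows how look-alike hosts such as `example.com.evil.net` fail a proper suffix check even though they contain the real domain name:

```python
from urllib.parse import urlparse

def is_official_support_link(url: str, official_domain: str) -> bool:
    """Return True only when the URL's host is the official domain
    or a true subdomain of it (hostnames here are illustrative)."""
    host = (urlparse(url).hostname or "").lower()
    domain = official_domain.lower()
    # Exact match, or a subdomain ending in ".<domain>" --
    # a plain substring test would wrongly accept look-alikes.
    return host == domain or host.endswith("." + domain)

# A legitimate subdomain passes:
print(is_official_support_link("https://support.example.com/chat", "example.com"))   # True
# A look-alike host that merely *starts* with the real domain fails:
print(is_official_support_link("https://example.com.evil.net/chat", "example.com"))  # False
```

The key design point is matching on the parsed hostname with a `.`-anchored suffix test rather than searching the raw URL string, since phishing links routinely embed the real brand name in a subdomain or path.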

Sources:

http://www.gartner.com/smarterwithgartner/gartner-predicts-a-virtual-world-of-exponential-change/

https://www.forbes.com/sites/steveolenski/2017/03/02/what-cmos-need-to-know-about-chatbots/#7a312aaa34ba

http://www.zdnet.com/article/how-ai-powered-cyberattacks-will-make-fighting-hackers-even-harder/


Anith Gopal