
About Us

The CISO Forum® is a community-fueled organization designed to foster discussion and facilitate knowledge exchange between enterprise cybersecurity leaders. Since 2015, the CISO Forum has been an exclusive executive forum focused on cybersecurity leadership and strategy. The CISO Forum engages cybersecurity leaders through multiple channels, including exclusive invite-only, in-person executive summits, digital events, and online collaboration platforms.


The robots are coming. It has become conventional wisdom that artificial intelligence (AI) and machine learning (ML) will increasingly shape our lives. By 2020, according to an estimate from Capterra, about 85% of customer-business interactions will take place with AI, without a human involved. And 47% of organizations with advanced digital practices have a defined AI strategy, according to data from Adobe. But for the IT and security executives and professionals who must protect against cybercrime, AI poses both a promise and a threat.

The industry is looking to the promise of AI tools to stay a step ahead of cybercriminals. Experian's 2018 annual Data Breach Preparedness Study found that just 31% of respondents were confident in their organization's ability to recognize and minimize spear phishing incidents, and just 21% were confident in its ability to deal with ransomware. Malware and cyberattacks evolve over time, and ML uses data from previous cyberattacks, leveraging what it has learned about past attacks and menaces to identify and respond to newer, similar risks. The thinking also goes that AI and ML will save time for overburdened IT departments.

The threat comes from the bad guys also employing AI to create more sophisticated attacks, enhancing traditional hacking techniques such as phishing scams and malware attacks. For example, cybercriminals could use AI and ML to make fake e-mails look authentic and deploy them faster than ever before. Criminals could apply AI to develop mutating malware that changes its structure to avoid detection. AI could scrub social media for personal data to use in phishing cons. Data poisoning is another danger: attackers find out how an algorithm is set up, then introduce false data that misleads the system about which content or traffic is legitimate and which is not.
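To make the data-poisoning idea concrete, here is a minimal, hypothetical sketch in Python. A naive detector learns a "normal" threshold from observed traffic, and an attacker who has figured out that setup injects false data points that stretch the threshold until a genuinely malicious sample no longer stands out. The detector, feature (request size), and all numbers are illustrative, not any real product's logic; the sketch also assumes, as discussed below, that the training data is ingested without being scrubbed.

```python
# Hypothetical sketch of data poisoning against a naive anomaly detector.
# The detector flags anything much larger than the "normal" traffic it was
# trained on; poisoned training data shifts what counts as normal.

def learn_threshold(samples, factor=1.5):
    """Flag anything larger than `factor` times the observed mean as suspicious."""
    return factor * (sum(samples) / len(samples))

def is_suspicious(value, threshold):
    return value > threshold

clean_traffic = [100, 110, 95, 105, 90]   # typical request sizes (illustrative)
attack_size = 200                          # an exfiltration-sized request

threshold = learn_threshold(clean_traffic)
print(is_suspicious(attack_size, threshold))   # True: the attack stands out

# The attacker slips oversized but non-malicious samples into the training data.
poisoned = clean_traffic + [240, 250, 245]
threshold = learn_threshold(poisoned)
print(is_suspicious(attack_size, threshold))   # False: the attack now blends in
```

The same principle applies to far more sophisticated models: whoever controls the training data influences what the model treats as legitimate.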

A lesser threat comes from within the industry, as companies rush to market with so-called AI cybersecurity tools. There is a difference between AI and machine learning. ML algorithms train on large data sets to “learn” what to look for on networks and how to respond to various scenarios. Generally, ML needs new training data to reach new conclusions, while a true AI system does not.
Some products are based on “supervised learning,” which requires the data sets that algorithms train on to be chosen and labeled, for example by tagging malware code and clean code. Some vendors use training data that hasn’t been thoroughly scrubbed of erroneous data points, which means the algorithm won’t catch all attacks. Hackers could switch tags so that some malware is designated as clean code, or simply figure out which code features the ML uses to flag malware and remove them from their own code, so the algorithm doesn’t detect it.
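The tag-switching attack described above can be sketched with a toy supervised model, here a nearest-centroid classifier chosen purely for illustration (no vendor's actual pipeline): the algorithm learns from labeled samples, and flipping the “malware” tags to “clean” in the training set changes what it concludes about a real threat. The feature vectors are invented for the example.

```python
# Toy supervised learning: train per-label centroids from tagged samples,
# then classify by nearest centroid. Flipped training labels mislead it.

def centroid(vectors):
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def train(samples):
    """Compute one centroid per label from (features, label) pairs."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Illustrative features, e.g. (code entropy, count of suspicious API calls).
training = [
    ((0.2, 1.0), "clean"), ((0.3, 0.0), "clean"), ((0.1, 1.0), "clean"),
    ((0.9, 8.0), "malware"), ((0.8, 9.0), "malware"), ((0.95, 7.0), "malware"),
]
model = train(training)
print(classify(model, (0.85, 8.0)))   # "malware": the threat is flagged

# Attacker switches the malware tags to "clean" in the training set.
flipped = [(features, "clean") for features, _ in training]
model = train(flipped)
print(classify(model, (0.85, 8.0)))   # "clean": the threat slips through
```

This is why scrubbing and protecting the labeled training data matters as much as the algorithm itself.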

Given the fast-changing landscape, here are some tips to realize the enormous potential of AI and ML while still protecting your organization.

Resist the hype. AI and ML are the hot buzzwords and technologies of the moment, but there’s also a great deal of confusion. According to ESG Research, just 30% of cybersecurity professionals feel they are very knowledgeable about AI and ML and their application to cybersecurity analytics. Before purchasing an AI or ML tool, do your research and understand what you’re buying so that it’s an effective and appropriate solution for your organization.

Keep a human involved in the process. An old IT truism holds: garbage in, garbage out. The “intelligence” in AI is based on data inferences and correlations, which need to be checked and monitored so the model addresses risks appropriately and evolves as you need. ML systems shouldn’t be totally autonomous. They should be set up with a human in the loop, and the ML should know to ask for help when presented with an unfamiliar situation.
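A human-in-the-loop arrangement like the one suggested above can be as simple as a confidence gate: the model acts automatically only when it is sure, and defers unfamiliar cases to an analyst rather than guessing. This is a hedged sketch; the threshold and scores are illustrative, and real deployments would route escalations into a ticketing or SOC workflow.

```python
# Minimal human-in-the-loop triage: act on the model's verdict only when its
# confidence clears a threshold; otherwise escalate to a human analyst.

ESCALATE = "escalate-to-analyst"

def triage(confidence, verdict, threshold=0.9):
    """Return the model's verdict above `threshold`; otherwise ask a human."""
    if confidence >= threshold:
        return verdict
    return ESCALATE

print(triage(0.97, "block"))   # "block": familiar pattern, act automatically
print(triage(0.55, "block"))   # "escalate-to-analyst": unfamiliar, defer
```

The design choice here is deliberate: a low-confidence guess that is silently wrong is costlier than the analyst time spent reviewing an escalation.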

Have a strong data breach plan. According to Experian’s Data Breach Preparedness Study, 88% of organizations have a data breach response plan in place, but less than half (49%) think it is effective or highly effective. If you have a plan, it shouldn’t just sit on a shelf. Make sure that it is robust, with buy-in from all the key departments of your company, and drill on it early and often. If you need to get started on a plan or refine it, Experian’s updated Data Breach Response Guide can serve as a resource.

AI and ML are the wave of the future. But the cyber threats are real now, and so are the limitations of the technology as a foolproof protection tool. Be aware, both of what’s ahead from the cybercriminals and how you’re applying AI solutions, so you’re not lulled into a false sense of security.


About the Author: Michael Bruemmer is Vice President of Experian Data Breach Resolution, which helps businesses mitigate consumer risk following data breach incidents.


