As healthcare data grows in volume and variety, as IT networks expand in scope and complexity – and as threat vectors gain in sinister ingenuity – it's natural for security professionals to crave a simple solution to keep it all safe. Increasingly, many are turning to artificial intelligence in hopes that it's just that.
But Reg Harnish, founder of GreyCastle Security and a fellow at the National Cybersecurity Institute, says AI-powered security analytics tools often have significant vulnerabilities of their own, and that CISOs and other IT executives should take great care when choosing technologies in which to invest.
When he confers with clients, Harnish tells them not to be lured into a false sense of security by the big promises AI vendors are making.
"The fact that we've deployed AI does not mean we can shut the lights off and walk out of the room," he said. "It still requires care and feeding. In some ways, in addition to the end goal we still have to monitor – 'Is there an intrusion on my network?' – we also have to monitor the quality of the decision-making around that question."
For healthcare organizations that are rolling out AI security tools, especially those that have done so haphazardly or without careful strategy, "we've complicated the quality assurance process around security technology and created more work for ourselves – and probably more points of failure," said Harnish.
Health systems and other potential AI users should also understand that today it's human beings writing the code.
"It's not robots building robots, it's humans building robots. We're a generation away from code that we can actually trust," Harnish added. "That said, we've got to start somewhere, and the faster we can get to robots building robots, the more predictable those technologies will be."
But the security tools we have right now can be very useful when deployed wisely. Given where we are at this point in time, Harnish offered four pieces of advice for IT buyers and other decision-makers looking to invest in AI-powered threat detection and response.
First, it's important to recognize that "everyone wants to slap AI on their website right now," he said. "Just because a marketing slick suggests that there's artificial intelligence baked into a product doesn't necessarily mean it meets your definition of AI, nor does it mean the product will be any more accurate, precise or comprehensive."
There are powerful and protective technologies on the market, but "buyers have to continue to do the work they've been doing for years, which is first understanding the risk you're trying to manage," said Harnish. "If your job is cutting the board in half, no amount of hammers is going to help you do that effectively. You have to understand what you're trying to accomplish. Risk management must be the foundation of this."
Second, healthcare organizations should understand the additional risks that AI technology can introduce, even when deployed to combat them, he said. That may be the fact that it engenders a false sense of security, or it may be that "the lines of code went from 100,000 to 100 million, making this technology a lot more vulnerable than the last piece of technology you had."
A third thing to keep in mind is that AI "doesn't release us from accountability as human beings, or as CISOs. It doesn't really reduce the amount of monitoring that would have to be in these environments," said Harnish.
And fourth, it's worth remembering that, "as long as there are humans in the formula we need to be sure we're able to understand the behavioral biases and the behavioral analytics that go into a good cybersecurity program as well," he said. "You don't build houses with just hammers. There's still a lot of work to do."
People and process, as has been said time and again, are just as important as technology.
"AI for AI's sake is no smarter than security for security's sake; it just doesn't make a lot of sense to spend a lot of money and time and energy on something that has little return," he said. "One thing not to do is to assume that because you've got AI, you're protected."
Just as HIPAA compliance does not guarantee security, next-generation firewalls and intrusion detection systems do not make for a comprehensive security program.
"Unfairly, technology is placed on a pedestal: Too many people think a security program is firewalls and antivirus," said Harnish. "But as buyers continue their pursuit of checkbox security, the real issue is not the technology itself. Who cares about AI? In five years it's going to be something else that supplants it."
Rather than artificial intelligence, what exists today is organic gray matter, he said, and that human judgment continues to fail in cybersecurity.
"Understanding the risks. Looking past headlines. Measuring your performance. Continuous improvement. These are basic things that somehow have not made their way into many good cybersecurity programs. There's just a lot of chaos and a lot of confusion going on right now. I see a lot of it."
On the other hand, he also sees more organizations doing the right thing – doing their homework, thinking critically about the big promises made by some security vendors.
The challenge, however, is "that may not be keeping pace with the changes that are happening in our environment," said Harnish. "The volume of data is growing faster than our ability to protect it. The complexity of our networks is growing faster than our ability to defend it. So at least for right now, the gap between where we are and where we should be continues to widen."
In terms of workflow, it comes down to one question: What are you trying to accomplish with AI? If your acute issue as a CISO is that you have many locations, lots of data and lots of centers out on your network, and your goal is to make your intrusion detection capabilities more precise and comprehensive, then look for AI that solves that specific problem.