AI is going to create a crisis in network security, but not in the way you think. AI won’t make smart hacks smarter; it’ll make dumb hacks easier. And most of the time, it’s the dumb hacks that matter in terms of financial loss, fraud, and reputational damage.
Most people appreciate that real security attacks don’t resemble the cheesy 3-D graphics or rapid typing of “Hollywood hacking” fame, but they also rarely involve super geniuses finding esoteric vulnerabilities. Some attacks do, certainly; the sophistication behind projects like Stuxnet is legendary, and whenever you read about some shady organization that’s figured out how to turn an iPhone into a stealth room bug, similar exploits are involved. These attacks, though, are incredibly expensive, time-consuming, and short-lived. As soon as they’re deployed at any kind of scale, the relevant vendors fix the code and close off that access path, which is why people who discover these “0-day vulnerabilities” (the name refers to how many days the manufacturer has had to patch them) tend to use them very sparingly. Furthermore, it’s rare that a major software vendor ships a bug so terrible that exploiting it grants an attacker total access. In most cases, exploits must be chained together to get anywhere useful: first, exploit a flaw in a given application; then exploit a flaw in the underlying operating system; then perhaps compromise network hardware; and so on. Each step requires finding a mistake that nobody else has yet discovered.
If you just want to steal some money or nab a few credit card numbers, though, there are easier ways. The easiest is simply to trick someone into doing it for you. These so-called “social engineering” attacks, which involve impersonating authority figures or tricking users into doing something they otherwise shouldn’t (e.g., logging into a fake website), barely qualify as technical in nature, but they can be incredibly effective. Another approach is to wait until a vulnerability becomes public, reverse engineer the patch, and rush to exploit the flaw before companies can deploy the fix. These attacks have come to be called “1-day” exploits, and they rely on the fact that once someone points out where to look for a vulnerability, it becomes much easier to figure out. Apple is full of brilliant engineers, and they all missed the now-famous “goto fail” mistake that allowed attackers to trick users into trusting fake websites. Once someone draws a red box around the mistake, though, you can probably figure out what’s going on even if you’ve never written a line of software in your life.
The unfortunate reality is that big companies take a long time to apply software patches. Sometimes it’s just bad management or oversight, but other times it’s a reasonable concern over breaking changes. A few years ago, Microsoft pushed out a Windows update that deleted some users’ Documents directories (seriously). Maybe better to just wait a few days and see how the update fares for everyone else, eh? Especially when a company is running on eons of legacy technology piled beneath its systems like geological strata (hi, banks and airlines!), it can be very difficult to predict when a seemingly innocent change might alter some long-forgotten dependency and cause a nightmare. One of the worst (if not the worst) consumer data breaches happened because Equifax waited over two months to patch a massive Apache Struts flaw. The patch was released on March 7, and just three days later, researchers were already seeing attacks attempting to exploit it.
LLMs aren’t going to be out-competing human geniuses any time soon when it comes to finding esoteric 0-days (and even if they did, developers would just start running the same models against their own code in testing). What they can do, though (and do very well) is help novice developers write simple code very quickly to take advantage of known issues. Going back to our “goto fail” vulnerability, I asked ChatGPT 4 Turbo to “write a python script that listens on port 443, opens a connection using tls v1.1, and forces use of the ecdhe cipher suite” and got back a rather nice block of (well-commented!) code that did exactly that, although it did come with a helpful warning that TLS v1.1 is deprecated and that I shouldn’t be doing this.
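For flavor, here is a minimal sketch of what such a script looks like. This is my reconstruction, not ChatGPT’s exact output; the cert.pem/key.pem paths are hypothetical, and modern OpenSSL builds may refuse a TLS 1.1 handshake at runtime (which is rather the point):

```python
import socket
import ssl

def make_tls11_context() -> ssl.SSLContext:
    """Build a server context pinned to TLS 1.1 with ECDHE ciphers only.
    Python will emit a DeprecationWarning, much like ChatGPT's caveat."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_1
    ctx.maximum_version = ssl.TLSVersion.TLSv1_1
    ctx.set_ciphers("ECDHE")  # restrict key exchange to ECDHE suites
    return ctx

def serve(certfile: str = "cert.pem", keyfile: str = "key.pem") -> None:
    ctx = make_tls11_context()
    ctx.load_cert_chain(certfile, keyfile)  # hypothetical cert/key paths
    # Port 443 requires root privileges on most systems.
    with socket.create_server(("0.0.0.0", 443)) as sock:
        with ctx.wrap_socket(sock, server_side=True) as tls:
            conn, addr = tls.accept()
            print("connection from", addr)
            conn.close()

if __name__ == "__main__":
    serve()
```

Nothing here is clever; it is a dozen lines of standard-library glue. That is exactly the kind of code an LLM hands a novice on the first try.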
On the social engineering side, a company recently lost $25 million to wire fraud enabled by an AI-generated fake video call that tricked an employee into authorizing the transfer. No particular talent is required to effect either of these attacks beyond a rudimentary understanding of how to interact with AI models and basic development skills like setting up a web server (and you could probably ask GPT to help you configure a simple web server on the domain “bank0famerica.com” and get 90% of the way there just by doing what it told you). These attacks, both social engineering and 1-days, are not very interesting or elaborate, but they are effective. AI increases the number of people capable of performing them by orders of magnitude and, more importantly, shortens the window between announcement and attack to almost nothing.
Companies need to get ready for the onslaught. First, patch quickly. The days of making do with underfunded IT departments that take months (or even just weeks) to test patches are simply over. Second, build systems that don’t rely on people behaving perfectly every time, and build in redundancy for high-consequence decisions. It is easy to scold the employee who entered their credentials into a phishing site or sent a wire because they were tricked, but the only durable solution is to build systems that mitigate human error. For example, passkeys instead of passwords prevent most phishing attacks, while requiring multiple employees to confirm wires makes social engineering attacks vastly more difficult. The wave is coming.
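The multi-approver control can be sketched in a few lines. Everything here (class names, the two-approval threshold, the account string) is hypothetical, purely to illustrate the shape of the safeguard:

```python
from dataclasses import dataclass, field

@dataclass
class WireRequest:
    """Hypothetical wire transfer requiring two distinct approvers."""
    amount: float
    destination: str
    requested_by: str
    approvals: set[str] = field(default_factory=set)

    REQUIRED_APPROVALS = 2  # illustrative policy threshold

    def approve(self, employee: str) -> None:
        # The requester cannot approve their own wire, and approving
        # twice doesn't count double (approvals is a set).
        if employee == self.requested_by:
            raise PermissionError("requester cannot self-approve")
        self.approvals.add(employee)

    @property
    def released(self) -> bool:
        return len(self.approvals) >= self.REQUIRED_APPROVALS

wire = WireRequest(25_000_000, "acct-0000", requested_by="alice")
wire.approve("bob")
print(wire.released)   # False: one tricked employee is not enough
wire.approve("carol")
print(wire.released)   # True: two independent confirmations
```

The design point is that a deepfake now has to fool two people independently, which turns a single moment of human error into a non-event.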
© 2024 Restive®, Inc.