
The human side of cybersecurity

From autonomous cars to cybersecurity, technology is only as perfect as the human interacting with it.
In some cases, driverless cars may be following the rules too perfectly. (Steve Jurvetson/Flickr)

Driverless vehicles, we’re told, soon will be able to drive us anywhere we want to go, near or far, with little effort on our part. In theory, we need only enter our destination, sit back and enjoy the ride.

It’s an exciting vision, promising not only a more pleasant and easier way to travel, but also, potentially, a safer one. Ninety percent of motorized vehicle accidents today happen because of human error, one study shows — but machines can be programmed to avoid crashes.

Equipped with sensors, radar, GPS, input from other vehicles and more, autonomous cars won’t drive drunk, or, theoretically, get distracted, cut off other vehicles during lane changes, or fail to yield the right of way — the most common driver infractions, according to the study.

With more than 32,000 traffic-related deaths in 2014 alone, the potential of self-driving cars to save lives and property could be significant. Recently, though, a crack has emerged — literally — in this utopian vision. Driverless cars on the roads today are getting into accidents at twice the rate of cars with drivers, according to a Bloomberg News report. Human-driven cars are hitting them because, it seems, driverless vehicles follow the rules too perfectly.


JR Reagan writes regularly for FedScoop on technology, innovation and cybersecurity issues.

Or, to put it another way, developers of driverless cars may not have considered the human element in their design equation. Now, it seems, they are second-guessing their algorithms, wondering if they should program the cars to exceed the speed limit when conditions warrant, for instance, or to cross a double yellow line to drive around a bicyclist.

We see a similar situation in cybersecurity today. In spite of best efforts to perfect encryption, authentication, firewalls and the like, data breaches continue to happen on massive scales, and many experts now say it’s a matter not of whether organizations will be hacked, but when.

With more than 95 percent of breaches blamed on user error — clicking on phony links, opening fake websites and using insecure passwords, for instance — we’re seeing that, as with those driverless cars, the technology is only as perfect as the human interacting with it.

For remedies, many look to worker cybersecurity training. But what to make of a recent survey showing that IT workers are more likely than the average employee to engage in risky online behaviors? Although security awareness training is important for reducing risk, clearly it is not enough on its own.


Some financial institutions are trying the “stick” approach, as a recent Wall Street Journal article noted, sending out fake “phishing” emails to workers and then penalizing those who open them, monitoring employees’ social media accounts for sensitive information, and prohibiting “out of office” email replies and phone messages. The result may be enhanced security, but at what cost? Do we really need to create a workplace culture in which our people distrust every email and phone call they receive? Do we risk making them afraid to participate in social media at all?

There must be a better way.

Given the increasing sophistication of cybercriminals’ schemes, expecting even the most knowledgeable among us to spot every phishing email, and to carefully check every Web address before clicking, may be not only unreasonable but also hazardous to our organizational health. To err is, indeed, human, and it often takes only one mistake to let intruders into company databases.

Rather than place the security onus on employees or executives, perhaps we ought to work around the “weak link” of human error. Instead of instilling fear into our workers, maybe we ought to give them infrastructure they can trust.

There’s a lot of talk about “baking in” security during the design process. How about “mixing in” human foibles, as well, such as forgetfulness, inattention, denial and even rebelliousness? Why not design cybersecurity that protects people from themselves — that recognizes and flags “phishing” emails before passing them along, for instance, or stops potentially unsafe links from opening — rather than expecting perfection from imperfect users?
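To make that idea a little more concrete, consider what "protecting people from themselves" might look like at the most basic level: a mail gateway that screens every link before a message ever reaches the inbox. The Python sketch below is purely illustrative — the heuristics (a raw IP address as the host, an often-abused top-level domain) and the function names are hypothetical examples of the approach, not a description of any particular product or of Deloitte's tooling.

```python
# Illustrative sketch only: simple heuristics for flagging suspicious links
# before an email reaches the user. The rules below are hypothetical examples,
# not a production phishing filter.
import re
from urllib.parse import urlparse

SUSPICIOUS_TLDS = (".zip", ".top", ".xyz")          # assumed example list
IP_HOST = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")    # bare IP instead of a domain name

def link_looks_risky(url: str) -> bool:
    """Return True if a URL trips any of the toy heuristics."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme not in ("http", "https"):
        return True                                  # unexpected scheme
    if IP_HOST.match(host):
        return True                                  # link points at a raw IP address
    if host.endswith(SUSPICIOUS_TLDS):
        return True                                  # domain uses an often-abused TLD
    return False

def flag_links(urls: list[str]) -> list[str]:
    """Collect the URLs in a message that should be flagged for the user."""
    return [u for u in urls if link_looks_risky(u)]

if __name__ == "__main__":
    sample = ["https://example.com/report",
              "http://192.0.2.10/login",
              "https://promo.example.xyz/win"]
    print(flag_links(sample))  # the two risky-looking links are flagged
```

The point is not the specific rules, which real attackers would quickly adapt to, but where the check lives: in the infrastructure, before the user ever has to make a judgment call.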


People aren’t told to distrust the water they drink, or to check the safety of the highways they drive before embarking on a journey. Why burden workers, who are busy doing their jobs, with the task of keeping our organizations safe from criminals whose only job is to infiltrate them? Our role, as cybersecurity professionals, is to provide a security infrastructure users can trust.

New research into “self-healing” networks shows promise for a future in which computers do the vigilance work being asked of people. In the meantime, perhaps we should consider adding more than a touch of human unpredictability to our cybersecurity recipes. Instead of a “digital strategy” focused on technology, how can we create a cybersecurity strategy for humans living, and working, in a digital world?

JR Reagan is the global chief information security officer of Deloitte. He also serves as professional faculty at Johns Hopkins, Cornell and Columbia universities. Follow him @IdeaXplorer. Read more from JR Reagan.
