How Understanding Risk Perception Can Enhance IT Security

 

Target, Michaels, eBay, P.F. Chang’s… whoever got hacked this week—in this age of continual breaches, the risks of doing business online are mounting daily. And yet, most people don’t take IT security very seriously.

Consider the Heartbleed bug. It was heralded in the media as a security flaw of apocalyptic dimensions, and countless articles were published in its aftermath advising people on basic security steps. However, a survey we conducted at Software Advice revealed that 67 percent of Internet users had not changed a single password, and 75 percent of employees had received no guidance on the bug at work.

Clearly, the millionth article on passwords isn’t going to make security advice stick. Instead, what if we took a step back and tried to understand: What is it about how humans perceive risk that leads so many of us to keep taking chances online? Could businesses factor this knowledge into more effective security practices? We went to the experts to find out.

Poor Information Leads to Inaction

Dramatic headlines about catastrophic hacks and bugs come to us bereft of context—and there have been so many breaches now, it’s difficult for laymen to decide which ones are worth considering. Meanwhile, since most of us see little or no impact from the breaches on our personal lives, it’s even more difficult to gauge the degree of risk we face online, says Baruch Fischhoff, professor of cognitive psychology at Carnegie Mellon University and a world-recognized authority on risk perception.

“The best view of how people perceive risk is a common-sense one,” Fischhoff says. “People will do the right things when the risks are clear to them, and the right thing to do is clear to them—and if they understand the rationale, and feel confident that what they do will make a difference.” The problem, Fischhoff notes, is that when it comes to security and the Internet, “typically, most of those conditions aren’t met.”

That’s not all. Fischhoff points out that businesses like Target and eBay have access to high-paid tech and security experts—yet they still got hacked. As a result, it’s natural for people to conclude: If they got caught with their pants down, why should I think that changing my password is going to make a difference?

Blaming consumers or employees for lax security habits is thus a futile activity, says Fischhoff. “Operator error” becomes a meaningless concept if operators keep making the same mistakes over and over again: In that situation, it’s likely the system, not the operator, is the problem.

The Internet Rewards More Than It Punishes

But faulty information isn’t the only factor leading users to be so lax about online security, says Paul Slovic, a professor of psychology at the University of Oregon and the president of Decision Research, a non-profit research organization investigating human judgment, decision-making and risk.

“Most of the time, when assessing risk, we deal not in calculations of probabilities, but [in] gut feelings,” says Slovic.

These “gut feelings” can be slippery. A statistically unlikely threat such as nuclear war triggers more dread in most of us than a risky activity that cuts close to home, such as driving a car. Familiarity makes us relaxed about risk, says Slovic: Since we spend so many hours of our lives behind the wheel, we underestimate the seriousness of motor vehicle hazards and believe we can control the risk.

Slovic suggests a similar effect is taking place when it comes to the Internet. He argues that not only have most of us not experienced the harms caused by a security breach—we are constantly experiencing rewards, instead. The Internet makes our lives and our work easier; it helps us buy things; it keeps us entertained. The result, says Slovic, is that there is “a sense of unreality” to online risk. It’s a new world, and we haven’t yet developed the “gut feelings” humans rely on to make the right decisions.

And since the threats seem unreal, the apocalyptic warnings and advice we receive wind up sounding hollow, Slovic adds.

“How many passwords do we have? They keep proliferating, and are seen less as a protective measure and more as a nuisance or an obstacle,” he says. “It seems like a lot of work to have to change your password, especially for some unknown benefit.”

Can You Reduce the ‘Human Factor’?

But the threats are real, and companies have to deal with them every day. So how do you get your employees to take security seriously? Kee Nethery, CEO of ecommerce solutions company Kagi, argues that the best thing you can do is use technology to put controls in place limiting what employees can and cannot do online.

“We can arrange our systems [so] that people can do whatever they want to do, and we either allow or prevent [it],” Nethery says. Nethery argues that via a complex deployment of firewalls, configuration settings and other tools, he has largely fireproofed Kagi’s systems against human error. That being the case, he no longer needs to rely on employees to learn complicated sets of security rules. Instead, employees are required only to understand the policies for handling confidential data.

According to Nethery, it’s very simple: If a client is the source of the data, or if the data has been shared as part of the normal business process, then that data can be shared again. Otherwise, requests have to pass through an approval process—and the identity of the party making the request must be authenticated.
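To make that rule concrete, here is a minimal sketch of the decision logic in Python. It is an illustration only: the DataRequest fields, the manager-approval flag and the example values are hypothetical stand-ins, not a description of Kagi’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class DataRequest:
    requester: str           # who is asking for the data
    data_owner: str          # the client the data belongs to
    already_shared: bool     # shared previously as part of the normal business process?
    identity_verified: bool  # e.g., confirmed via an independently looked-up phone number

def may_release(req: DataRequest, approved_by_manager: bool = False) -> bool:
    """Encode the rule: the data's source, or data already shared in the normal
    course of business, can be shared again; anything else needs a verified
    identity plus explicit approval."""
    if req.requester == req.data_owner or req.already_shared:
        return True
    return req.identity_verified and approved_by_manager

# An unknown caller asking for client records is refused until their identity
# is verified and a manager signs off.
print(may_release(DataRequest("unknown caller", "Acme Corp", False, False)))  # False
```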

Nethery shares real case studies, drawn from company experience, as examples for employees to follow. This is much more effective than a list of policies or compliance points, he argues.

“For instance, ‘Jesse’ got a phone call from a police officer from Kentucky who wanted information from us. She took down his name and number… where he was stationed and what he wanted, and told him she’d call him right back after getting permission.”

Nethery continues: Taking nothing on trust, Jesse called directory assistance, got the main number for that police station, called that number and verified that the officer was legitimate. When she spoke to the officer again, she asked for a subpoena, so that the firm would be covered when releasing private information.

Nethery calls this a job well done. He also notes that if there’s ever any doubt about a request, employees should ask a manager.

Simple? Perhaps, but reducing “the human factor” of IT security to a minimum requires a high degree of technical competency that not all companies have—and some experts doubt that this approach is truly effective.

For instance, John Pironti—a risk advisor with the Information Systems Audit and Control Association (ISACA) and president of management and technical consulting firm IP Architects—says that if employees feel too constrained by a company’s systems, they will find ways to work around them, perhaps using their smartphones or tablets to do their work. As soon as that happens, Pironti argues, the company loses visibility into what its employees are really doing—and that is a truly unacceptable level of risk.

Best Practices for Enhancing Risk Perception and IT Security

Given all this, how can we get employees to develop “an articulated sense of what could really happen,” which Slovic argues is the fundamental step toward making informed decisions about risk?

We took some of Slovic’s and Fischhoff’s suggestions and asked Pironti, a security consultant who travels the world advising firms on how to train their employees, how they matched his practical experience of what works. Here are his best-practice tips:

Bosses must set an example
One way to quickly tank your company’s security is to push the responsibility onto your employees without signaling that you take the issue seriously yourself, says Fischhoff.

Randomly issued tests or a perception that security is being done on the cheap can lead employees to feel that security is “all on them,” while the higher-ups don’t really care—and this is demoralizing. Security must be a company-wide effort, informed by a sense that “we’re all in this together.”

The problem is that most lower-level staff are not in a position to make executives take security seriously—and given the 75 percent of respondents to our survey who said they had received no advice about Heartbleed at work, it is safe to assume many companies are underestimating the risks they face.

Slovic suggests that IT security staff who do have the ear of management should use stories and analogies tied to real-life scenarios and consequences—preferably with large dollar figures attached—to reinforce the idea that executives must lead by example.

Pironti agrees. In fact, when he addresses executives about security, he articulates the risk at a deeply personal level: He shows them a children’s book titled “Dad’s in Prison.”

“I tell them that when it comes to security, there are certain things we do that are not [only] good practice, but which we are legally required to do. And that people who don’t do these things have gone to jail. So I say that if you don’t want to do them, that’s fine—but please give this book to your kids when you get home tonight,” Pironti says.

Teach, don’t preach
Illustrations and risk analogies tied to actual experiences are as important to educating employees as they are to conversations with executives, says Slovic.

Pironti agrees, adding that companies should “speak to the heart”—in other words, stop telling employees what to do, and start asking them what they want to know. Why? Now that most people no longer expect to have a job for life, employees have to be motivated by a direct personal benefit.

When security practices are taught in a way that empowers employees in their personal lives, best practices can become instinctive, Pironti says. For instance, parental concern about children and computer safety can open the door to actionable conversations about risk and security best practices that are applicable in all situations.

“[Employees] want to know, [for example,] ‘Can you tell me how to make sure my teenager is protected when he’s on social media?’” he says. When risk is personalized this way, says Pironti, it’s not necessary to explain how encryption works: you can keep it very simple.

“For instance, you can say, ‘Look for that little lock on the browser if you’re going to be transmitting something you don’t want anyone else to see,’” he says. This advice about secure Internet use at home is directly transferable to the workplace, where employees also have to be aware of confidentiality and online threats.
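For readers who do work with code, the “little lock” is the browser’s indicator that a connection uses HTTPS. The tiny Python sketch below applies the same habit programmatically; the function name and example URLs are illustrative assumptions, not advice from Pironti.

```python
from urllib.parse import urlparse

def safe_to_transmit(url: str) -> bool:
    """A programmatic version of 'look for the lock': treat only HTTPS
    endpoints as acceptable for anything you don't want others to see."""
    return urlparse(url).scheme == "https"

print(safe_to_transmit("https://example.com/checkout"))  # True
print(safe_to_transmit("http://example.com/checkout"))   # False
```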

Pironti adds that the competitive side of human psychology works wonders when it comes to the essential step of reinforcing training. Competitions with a $100 gift card as the prize can be highly effective in persuading employees to stop taping passwords to monitors and to keep their desks clean.

Make rules easy to implement
Long, detailed rule lists are ineffective, says Fischhoff, as “humans are not good at perpetual vigilance.”

Worse, massive checklists can actually be demoralizing, as people inevitably fail to remember everything they are supposed to do and are constantly getting things wrong.

Pironti agrees. Authoritarian models work best in environments like the military, where there are clear and immediate consequences for infractions of duty, he says. Very few people in today’s workplace have ever been fired for not following password regulations, so there is little incentive to take such rules seriously.

Businesses need a softer approach, he says, and should seek to reduce the burden of compliance placed upon employees.

“For instance, telling employees they have got to change their passwords every few days and that they can’t write them down is unrealistic,” says Pironti.

Instead of trying to lock down every aspect of a system, companies should determine what data is very important—such as proprietary data and intellectual property—then highlight this when talking to employees about security, comparing it to the crucial health and financial information employees might take extra steps to protect at home.

Access to confidential data can then be restricted to a handful of employees who do use advanced password and access procedures. But other, less-important data should be subject to lighter controls—thus minimizing the nuisance to employees, and enhancing the likelihood that they will take steps where it is most necessary.
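As a rough illustration of that tiering, the Python sketch below maps data classifications to the controls applied to them. The classification names, roles and rules are assumptions invented for the example, not a prescription from Pironti; the point is simply that the heaviest controls sit on the most valuable data.

```python
# Hypothetical tiers: strong controls only where the data warrants them.
CONTROLS_BY_CLASSIFICATION = {
    "intellectual_property": {"mfa_required": True, "allowed_roles": {"engineering_lead", "counsel"}},
    "customer_records": {"mfa_required": True, "allowed_roles": {"support", "billing"}},
    "internal_general": {"mfa_required": False, "allowed_roles": {"all_employees"}},
}

def access_allowed(classification: str, role: str, used_mfa: bool) -> bool:
    """Apply heavier controls to the crown jewels and lighter ones elsewhere,
    so the compliance burden falls only where it matters most."""
    policy = CONTROLS_BY_CLASSIFICATION[classification]
    role_ok = "all_employees" in policy["allowed_roles"] or role in policy["allowed_roles"]
    mfa_ok = used_mfa or not policy["mfa_required"]
    return role_ok and mfa_ok

print(access_allowed("intellectual_property", "support", used_mfa=True))  # False
print(access_allowed("internal_general", "support", used_mfa=False))      # True
```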

Let people know where to go for help
Lastly, employees should not fear retribution if they think they have committed a security error, and there should be a clear process for reporting, which Pironti stresses must be “comfortable and anonymous.”

People feel very uncomfortable admitting to mistakes, or alerting superiors if they think a colleague has compromised security. Companies should assure employees that “there will be no detrimental effect—even if the employee did click on a bad link,” Pironti says.

“How you have that first contact with the employee after they come to you is the most important thing,” he adds. “It sets the tone for future conversations with that employee and their peer group. So the first thing is to tell them that they did nothing wrong—unless it was malicious, of course.”

These conversations should be viewed as an opportunity to correct the problem together. And follow-up is crucial.

“People want to have feedback,” says Pironti. “They want to know if it was a big deal or a small deal, and to know the progress of the investigation.”

Conclusion

There can be no doubt that companies face serious obstacles when it comes to articulating risk in a way that employees can take seriously. But they can hardly afford to avoid the issue either. As Fischhoff, Slovic and Pironti argue, there are steps they can take to make the murky realm of online risk seem much more real.

What do you think? Are there any other tactics that can be used to enhance risk perception and make security advice stick? Let us know in the comments below.


About the Author

Daniel Humphries is the Managing Editor of IT Security at Software Advice. He interviews experts, writes articles and conducts behind-the-scenes research into the rapidly changing cyber security landscape, all with the goal of bringing clarity to the bewildering assortment of IT security buzzwords and technologies.

