RSA When it comes to getting your users up to speed with cyber-security, the best approach is to give it to them straight: practicalities over jargon. Specific examples of threats are far more persuasive than simply insisting people enable a firewall and malware scanner, check regularly for updates, and avoid clicking on suspicious attachments and links.
And rather than baffling folks with dire warnings of remote-code execution bugs, privilege escalations, and authentication bypasses, tell them clearly and calmly what's at stake, who's going to attack them, how it could happen, and what would happen next. Fraudsters emptying online bank accounts or stealing personal information for identity theft. Criminals copying company secrets via booby-trapped email attachments. That sort of thing. It's much more likely to motivate people into taking computer security seriously.
So argues Dr Emilee Rader, an associate professor in the department of media and information at Michigan State University in the US, who has extensively studied the myths about online security and privacy prevalent among ordinary folks. It sounds like obvious advice, but consider whether the last piece of security advice you gave out, or overheard, at home or at work, was useful, understandable plain language, or talk of generic threats laced with industry jargon.
On Tuesday, Dr Rader told this year’s RSA Conference in San Francisco there is a disconnect between the advice experts give to users, information given by the media, and the things people themselves look for when they research online security.
“Regular users are more concerned about who may be trying to hurt them,” Dr Rader said. “Meanwhile, experts are describing mechanisms for protection, but they are missing an opportunity to connect with the end users,” presumably by skipping over specific examples.
Dr Rader pointed to antivirus packages as one example. She and her team found that users who felt they personally could be targeted by an attacker were more likely to use anti-malware tools than those who thought infections were random chance. Folks who believed malware had immediate and visible effects were more likely to use antivirus than those who didn't understand or know how an infection might play out.
“People whose folk theories [of the internet] involve risk or visible harm were more likely to protect themselves,” Dr Rader said. “But people who thought they could get a virus from browsing the web, and there was nothing they could do about it, were less likely to say they protect themselves.”
Finally, said Dr Rader, developers and administrators should be more transparent with their users when it comes to data security and privacy. Letting netizens know how their data can be collected, and by whom, will make them more engaged and more likely to take the appropriate steps to secure themselves.
“Hiding security and privacy from users is not the best choice if you want them to learn,” Dr Rader concluded. “Design feedback systems that allow users to learn from experiences.”