Bringing lessons from cybersecurity to the fight against disinformation


Mary Ellen Zurko remembers the feeling of disappointment. Not long after earning her bachelor’s degree from MIT, she was working her first job, evaluating secure computer systems for the U.S. government. The goal was to determine whether systems were compliant with the “Orange Book,” the government’s authoritative manual on cybersecurity at the time. Were the systems technically secure? Yes. In practice? Not so much.

“There was no concern whatsoever for whether the security demands on end users were at all realistic,” says Zurko. “The notion of a secure system was about the technology, and it assumed perfect, obedient humans.”

That disappointment set Zurko on a track that would define her career. In 1996, after returning to MIT for a master’s degree in computer science, she published an influential paper introducing the term “user-centered security.” It grew into a field of its own, concerned with making sure that cybersecurity is balanced with usability; otherwise, humans may circumvent security protocols and give attackers a foot in the door. Lessons from usable security now surround us, from the phishing warnings that appear when we visit an insecure site to the “strength” bar that fills in as we type a desired password.
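To make the "strength" bar example concrete, here is a minimal sketch of how such a meter might score a password, assuming a naive length-and-character-variety heuristic. It is purely illustrative and does not reflect the algorithm of any particular site, library, or product.

```python
# Illustrative password "strength" meter (hypothetical heuristic, not any
# specific product's algorithm): score by length and character variety.
import re

def strength_score(password: str) -> int:
    """Return a rough 0-4 score based on length and character variety."""
    score = 0
    if len(password) >= 8:
        score += 1
    if len(password) >= 12:
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1
    if re.search(r"\d", password) and re.search(r"[^A-Za-z0-9]", password):
        score += 1
    return score

def strength_bar(password: str) -> str:
    """Render the score as the kind of bar a signup form might display."""
    labels = ["very weak", "weak", "fair", "good", "strong"]
    score = strength_score(password)
    return f"[{'#' * score}{'-' * (4 - score)}] {labels[score]}"

if __name__ == "__main__":
    for pw in ["password", "Password1", "correct horse battery staple!"]:
        print(f"{pw!r:35} -> {strength_bar(pw)}")
```

The point of such a meter, in usable-security terms, is to give users immediate, understandable feedback rather than rejecting their choice after the fact.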

Now a cybersecurity researcher at MIT Lincoln Laboratory, Zurko is still enmeshed in humans’ relationship with computers. Her focus has shifted toward technology to counter influence operations, or attempts by foreign adversaries to deliberately spread false information (disinformation) on social media, with the intent of disrupting U.S. ideals.

In a recent editorial published in IEEE Security &amp; Privacy, Zurko argues that many of the “human problems” within the usable security field have similarities to the problems of tackling disinformation. To some extent, she faces an undertaking similar to the one from early in her career: convincing peers that such human issues are cybersecurity issues, too.

“In cybersecurity, attackers use humans as one means to subvert a technical system. Disinformation campaigns are meant to impact human decision-making; they’re sort of the ultimate use of cyber…
