Security Isn’t Enough. Silicon Valley Needs ‘Abusability’ Testing

Technology has never limited its effects to those its creators intended: It disrupts, reshapes, and backfires. And even as innovation’s unintended consequences have accelerated in the 21st century, tech firms have often relegated the thinking about its second-order effects to science fiction and the occasional embarrassing congressional hearing, scrambling to prevent unexpected abuses only after the harm is done. One Silicon Valley watchdog and former federal regulator argues that’s officially no longer good enough.

At the USENIX Enigma security conference in Burlingame, California on Monday, former Federal Trade Commission chief technologist Ashkan Soltani plans to give a talk centered on an overdue reckoning for move-fast-and-break-things tech firms: It’s time for Silicon Valley companies to take the potential for unintended, malicious use of a product as seriously as they take its security. From Russian disinformation on Facebook, Twitter, and Instagram to YouTube extremism to drones grounding air traffic, Soltani argues that tech companies need to think not just in terms of protecting their own users, but what Soltani calls abusability: the possibility that users could exploit their tech to harm others, or the world.

“There are hundreds of examples of people finding ways to use technology to harm themselves or other people, and the response from so many tech CEOs has been, ‘we didn’t expect our technology to be used this way,'” Soltani said in an interview ahead of his Enigma talk. “We need to try to think about the ways things can go wrong. Not just in ways that harm us as a company, but in ways that harm those using our platforms, and other groups, and society.”

There’s precedent for changing the paradigm around abusability testing. Many software firms didn’t invest heavily in security until the 2000s, when—led, Soltani notes, by Microsoft—they began taking the threat of hackers seriously. They started hiring security engineers and hackers of their own, and elevated audits for hackable vulnerabilities in code to a core part of the software development process. Today, most serious tech firms not only try their best to break their code’s security internally, but also bring in external red teams to attempt to hack it, and even offer “bug bounty” rewards to anyone who can warn them of a previously unknown security flaw.

“Security guys were once considered a cost center that got in the way of innovation,” Soltani says, remembering his own pre-FTC experience as a security administrator working for Fortune 500 companies. “Fast forward 15 or 20 years, and we’re in the C-suite now.”

But when it comes to abusability, tech firms are only starting to make that shift. Yes, big tech companies like Facebook, Twitter, and Google have large counter-abuse teams. But those teams are often reactive, relying largely on users to report bad behavior. Most firms still don’t put serious resources toward the problem, Soltani says, and even fewer bring in external consultants to assess their abusability. An outside perspective, Soltani argues, is critical to thinking through the possibilities for unintended uses and consequences that new technologies create.

Facebook’s role as a disinformation megaphone in the 2016 election, he notes, demonstrates how it’s possible to have a large team dedicated to stopping abuses and still remain blind to devastating ones. “Historically, abuse teams were focused on abuse on the platform itself,” Soltani says. “Now we’re talking about abuse to society and the culture at large, abuse to democracy. I would argue that Facebook and Google didn’t start out with their abuse teams thinking about how their platforms can abuse democracy, and that’s a new thing in the last two years. I want to formalize that.”

Soltani points to examples of tech companies beginning to take steps to confront the issue—albeit often belatedly. Facebook and Twitter scrubbed thousands of disinformation accounts after 2016. WhatsApp, which has been used to spread calls for violence and false news from India to Brazil, finally put limits on mass message forwarding earlier this month. Dronemaker DJI has put geofencing limits on its drones to keep them out of sensitive airspaces, in an attempt to avoid fiascos like the paralysis of Heathrow and Newark airports due to nearby drones. Soltani argues those are all cases where companies managed to limit abuse without curtailing the freedoms of their users. Twitter didn’t need to ban anonymous accounts, for instance, nor did WhatsApp need to weaken its end-to-end encryption.

Those sorts of lessons now need to be applied at every tech firm, Soltani says, just as security flaws are formally classified, checked for, and scrubbed out of code before it’s released or exploited. “You need to define the problem space, the history, to build a compendium of different types of attack and classify them,” Soltani says. And even more importantly, tech companies need to work to predict the next form of sociological harm their products might inflict before it happens, not after the fact.

That sort of prediction can be immensely complex, and Soltani suggests tech firms consult those who make a career out of foreseeing the next unintended consequence of technology: academics, futurists, and even science fiction authors. “We can use art to think about the potential dystopias we want to avoid,” Soltani says. “I think Black Mirror has done more to inform people on potential pitfalls than any White House policy paper on AI.”

In his time at the FTC—as a staff technologist in 2010 and then later as its chief technologist in 2014—Soltani was involved in the commission’s investigations of privacy and security problems at Twitter, Google, Facebook, and MySpace, the sort of cases that have demonstrated the FTC’s growing role as a Silicon Valley watchdog. In several of those cases, the FTC put those companies “under order” for deceptive claims or unfair trade practices, a kind of probation that’s since led to tens of millions of dollars in fines for Google and will likely lead to far more for Facebook, as punishment for the company’s latest privacy scandals.

But that same kind of FTC regulatory enforcement can’t solve the abusability problem, Soltani says. The victims of the indirect abuse he’s warning about often have no relationship with the company, so accusations of “deception” can’t be leveraged against them. But even without that immediate regulatory threat, Soltani argues, companies should still fear reputational damage, or knee-jerk government overreactions after the next scandal—he points as an example to the controversial FOSTA anti-sex trafficking law passed in early 2018.

All of that means Silicon Valley needs to put the kind of thinking and resources into abusability that security—not to mention growth and revenue—has received for years. “There are opportunities in academia, in research, in science fiction, to at least inform some of the known knowns,” Soltani says. “And potentially some of the unknown unknowns, too.”
