Passwords you can forget: why the Facebook of the future will have fingerprint login
Alex Stamos is the security guardian for 1.8bn Facebook users around the world. He talks to our Technology Editor about the importance of its Dublin office, biometrics and threats from virtual reality and artificial intelligence
Alex Stamos is reclining in a sun-drenched Lisbon conference room. Prior to giving a big speech at the Web Summit, he has some pretty big issues to ponder.
His Facebook colleague (and chief technology officer) Mike Schroepfer has just outlined some major plans for the world's biggest online social service. Virtual reality. Drones. Artificial intelligence.
But it will fall to Stamos to make sure the whole thing stays secure. How will future virtual Facebook avatars not fall into the hands of a criminal gang? How will the company protect against intelligent algorithms turning against you?
This is largely down to Stamos.
But while the chief security officer fizzes with chat about what might come to pass in a couple of years, it's the traditional bugbears that still occupy most of his time.
"The absolute number one reason people are harmed online [through Facebook] is because of the re-use of passwords," he says. "Nothing else is even close. There's a big focus in the security industry on incredibly sophisticated attacks and on very sophisticated threat actors. But the truth is, when you look at the statistics, most of the harm is being caused by people reusing passwords in multiple places."
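Password reuse is why many services now check new passwords against known breach corpora. A minimal sketch of the k-anonymity scheme popularised by the Have I Been Pwned "range" API (this assumes its SHA-1 prefix/suffix response format; no network call is made here, and the parsing helpers are illustrative, not any particular site's code):

```python
import hashlib

def split_for_range_query(password):
    """SHA-1 the password and split the hex digest into the 5-character
    prefix sent to the breach API and the 35-character suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix, range_response):
    """Parse the API response for one prefix (lines of 'SUFFIX:COUNT') and
    return how many breaches contained this password (0 means not found)."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

Because only the five-character hash prefix ever leaves the client, the breach-checking service learns nothing that identifies the password itself.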
Stamos's previous job was as Yahoo's chief security officer, where he assembled an internal security team nicknamed "the Paranoids". Things didn't work out when he reportedly clashed with chief executive Marissa Mayer over a lack of resolve (and funding) to improve security measures at the company, including a failure to sanction a necessary password-resetting exercise. A year after Stamos left for Facebook, Yahoo revealed that it had fallen victim to a huge data breach affecting at least 500m of its email users.
Stamos has declined to comment on this, but speaks in a general way about the dangers of deprioritising security compared to other company needs, such as user growth.
"The nice thing about my job being CSO at Facebook is that it is well understood here that there is not a trade-off between the trust people have in us and our growth," he says. "We cannot operate as a business if people do not trust us. We cannot grow into new areas unless people understand that we are protecting their security and privacy and that we've communicated that to them."
He is also sceptical on how big breaches or hacks are characterised by companies and the press.
For example, in Yahoo's enormous data breach, the company immediately blamed it on a "state actor". This is an explanation often used by companies in a difficult position because of a major security failing. One advantage to blaming it on a state actor (which is often code for Russia, China or, in the case of Sony's data breach, North Korea) is that it mitigates potential liability, both financially and morally. The theory goes that if a sovereign state attacks, there is only so much the firm can reasonably be expected to do to repel it.
Is the 'state actor' excuse overdone?
"Yes," he says. "One of the problems with the press around this is that there isn't an understanding that the environment of actors is actually much more complicated than they think. A lot of the people who are hacking on behalf of governments are doing so on a contract basis. And they also do other things. They will hack on behalf of spammers, and will just be hired for a specific job. So there are levels of state responsibility. And only the top level is uniformed people sitting in a government office hacking. There are a bunch of different levels, all the way down to a country that just lets it happen but does not especially encourage it. So people do jump too quickly to the state actor excuse when they don't have any evidence."
But while Facebook may be a happy place to grow and maintain his own new crew of "paranoids", some of the threats the company might face may not have been imagined yet. With virtual reality such a big focus for Facebook, how can he know what to defend against when it is gradually rolled out?
"We're trying to anticipate the issues," he says. "A big part of what we're talking about is the difference between security and safety. I think that when the interesting stuff comes out with VR, the technical security problems are going to be pretty much the same as we have already seen. You'll have attacks against infrastructure, attacks against endpoints, attacks against production servers. Stuff that's not that different from the attacks you see against browsers today. What will be different are the safety issues. You can build perfectly technically correct software that works perfectly from a security perspective. But when your job is to connect people up, you have to pay attention to the emergent properties of those relationships and the kinds of things that can happen that cause harm. These issues do not come up when you're playing a standalone game, they come up when you have social interactions."
He also says that the issue will become more complicated with hardware. Facebook owns Oculus, which is selling physical virtual reality headsets. It's not a question of patching an online bug and rolling it out within an hour.
"It's the same issue that phone manufacturers face and which Microsoft has had for years," he says. "You have a software life cycle which you have less control over. One of the things we're working on is thinking through how you build security into shipped software and shipped hardware."
What about artificial intelligence, Facebook's other focus? For the chief security officer of a giant pioneering company, do long-term considerations of whether a machine might outsmart security precautions ever enter the planning process?
"I'm not a futurist so I don't spend a lot of time thinking about 20 years from now," says Stamos. "But there's a lot of immediate jumping to the 'hard AI' idea when we talk about this. I mean hard AI in the sense of actual consciousness where you can't tell the difference from a human being. But when we talk about AI as technologists, we're generally talking about soft AI. In other words, systems that can understand at scale and at speed certain specific things but which are highly constrained and programmed to do what they do. For example, we are already deploying machine learning systems to keep people safe. When you log into Facebook, we look at a bunch of aspects of your login and a machine learning algorithm makes a determination as to whether you're likely to be the real you and not a guy who bought your password off the black market. There is no long-term safety issue with that system getting better and better at making that distinction."
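The login-risk system Stamos describes can be pictured as a classifier over signals from the login attempt. A toy sketch follows, with invented signal names and hand-set weights standing in for a trained model (Facebook's actual signals and thresholds are not public):

```python
import math

# Invented signals and hand-set weights; a real system would learn these
# from labelled login data. Positive weight = more suspicious.
WEIGHTS = {
    "new_device": 2.0,         # browser/device never seen on this account
    "new_country": 1.5,        # login from an unusual country
    "impossible_travel": 3.0,  # too far from the last login, too soon
    "known_proxy": 1.0,        # traffic from an anonymising proxy
}
BIAS = -4.0  # most logins are legitimate, so the baseline risk is low

def risk_score(signals):
    """Logistic score in (0, 1): a rough probability that the person
    logging in is not the real account owner."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) for name, fired in signals.items() if fired)
    return 1.0 / (1.0 + math.exp(-z))

def decide(signals, threshold=0.5):
    """Let low-risk logins through; otherwise demand extra proof
    (a code sent to the phone, say) before granting access."""
    return "step-up challenge" if risk_score(signals) >= threshold else "allow"
```

A clean login from a known device scores near zero and sails through; a never-seen device combined with impossible travel pushes the score past the threshold and triggers a challenge.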
Stamos's more immediate issues are with passwords. Because Facebook is the biggest social network, password breaches have potentially the most widespread impact. Is it possible to foresee a time soon when we can move beyond the current username-and-password requirements for securing accounts?
"Absolutely," he says. "I think what's going to happen is that you're going to have a half dozen ways you can get in and our challenge is going to be making sure that there's enough coverage so the majority of people can get rid of their password."
Does that mean biometric means of entry, such as eye scans, voice patterns or fingerprints? "I think so, yes," he says. "If you have an Android phone with a fingerprint reader it might actually support Fido [an alliance of companies working on biometric standards which doesn't yet include Apple] so your fingerprint never leaves the phone. Facebook wouldn't get your fingerprint but your phone tells us in a secure manner that this is you. That's an area we're exploring. We need to do this in a standards-compliant way. One of the reasons usernames and passwords continue to exist is that there's no interoperability necessary with that system. We just have to wait for more and more manufacturers to be part of that alliance and for the proportion of phones that have secure biometrics with Fido to go up."
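The Fido flow Stamos outlines is a challenge-response protocol: the fingerprint unlocks a key held on the phone, and only a signature over a fresh server challenge crosses the network. A simplified sketch of that shape follows; real Fido uses an asymmetric key pair sealed in secure hardware, whereas this stand-in uses a shared HMAC key purely so the example stays self-contained:

```python
import hashlib
import hmac
import secrets

class Phone:
    """Holds a device-bound key that never leaves the handset; the
    fingerprint only unlocks it locally."""
    def __init__(self):
        self._device_key = secrets.token_bytes(32)

    def register_with(self, server):
        # At enrolment a verification key is shared once. (Real Fido shares
        # only a *public* key; the symmetric HMAC key here is a stand-in.)
        server.enroll(self._device_key)

    def sign_challenge(self, challenge, fingerprint_ok):
        if not fingerprint_ok:
            return None  # no biometric match on the device, no signature
        return hmac.new(self._device_key, challenge, hashlib.sha256).digest()

class Server:
    def enroll(self, key):
        self._key = key

    def login(self, phone):
        challenge = secrets.token_bytes(16)  # fresh nonce per attempt
        response = phone.sign_challenge(challenge, fingerprint_ok=True)
        if response is None:
            return False
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)
```

Note what the server never sees: the fingerprint. It only ever learns that the enrolled device vouched for a one-time challenge, which is exactly the property Stamos describes.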
Mr Stamos says that Facebook's Irish base is a critical one for security.
"The Dublin office is actually one of our biggest offices from a security perspective," he says. "We have a big security and safety team there. We have a lot of people who are experts in countries, language and culture. It's impossible for the security team to hire people to speak all those languages so the people on our side who do safety investigations go sit with them."