My first job in security – and in fact my first job out of school – was for a biometrics company. There were a lot of upsides to that job: the work was fun, the engineers talented (most of us fresh from school), and we had a cool project to work on. There were some downsides, too, though. For example, it left me with a skepticism of practical biometric applications – at least when it came to actually using them myself.
Don’t get me wrong, I remained an avid follower and fan of biometrics technology for years; I piloted it, deployed it, advocated for it, and so on. But for years – even decades – after that first job, I absolutely refused to use it myself. That may sound surprising coming from someone directly responsible for building and deploying the technology, but when you hear the reasons, I think you’ll understand why.
Specifically, the company I worked for was a startup. As anybody who’s worked for a startup can attest, budgets can be thin – and, as a result, when it came time to create marketing materials, we used an unmodified image capture of my right index finger as part of the marketing push. You know how biometrics companies sometimes use a fingerprint as part of their logo or on marketing glossies? Well, the image our company used just happened to be my fingerprint. To this day, if you know where to look, you can still find it; I won’t tell you how, but trust me when I tell you it’s still out there. My fingerprint was on the website, on marketing glossies, on business cards, and was even shown on live TV.
One thing that publicly advertising a high-res image of your fingerprint will do to you is make you nervous about how it might be misused. For example, I knew exactly how someone could inject that image into our system (or systems like it) and trick the system into logging them in as me. Having done exactly that routinely myself (for testing and QA purposes), I knew it was possible – even likely.
Adding to the skepticism was the fact that the engineering team I worked with came up with a few additional techniques to spoof the system. For example, the readers we used employed a smooth glass platen (almost never done nowadays for authentication systems). About a quarter of the time, the platen would retain a film of oil exactly conforming to the fingerprint ridges of the last scan. With the right shading and some dust or ground pencil lead, that residual oil could trick the camera into treating it as a legitimate capture. “Liveness detection” was an option, of course, but frankly it was so “persnickety” – it drove up the false reject rate so much – that nobody used it in practice.
The changing threat model
The reason I’m telling you all this is that what happened next illustrates an important point: a change in the threat model can make all the difference in whether it is safe to use a given technology for a particular purpose.
I say this because it happened to me with biometrics; I’ve gone from “avid skeptic” to “avid user.” I use biometrics to log in to my laptop, my phone, and various apps on my phone (password managers and the like), and sometimes even for physical entry to secure facilities. In short, the barrier went away.
What changed? That fingerprint image is, after all, still out there. Sure, the technology has changed a bit – most readers are capacitive now rather than optical, and extraction and matching methods (how the fingerprint is processed and compared) are better and faster. But the essence of the process is still very much the same: a fingerprint is rendered down to minutiae and stored, subsequent minutiae extractions are compared against that template, and a decision (same fingerprint or not?) is made. What’s different now is the threat model.
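To make that pipeline a little more concrete, here is a minimal sketch in Python of the extract/compare/decide flow described above. Everything in it – the flat minutia representation, the naive overlap score, the tolerance and threshold values – is a simplifying assumption for illustration, not any particular vendor’s algorithm (real matchers also align the prints for rotation and translation first).

```python
from dataclasses import dataclass
from math import hypot

# A single minutia point (ridge ending or bifurcation): position plus ridge angle.
# This flat representation is a simplification for illustration only.
@dataclass(frozen=True)
class Minutia:
    x: float
    y: float
    angle: float  # ridge direction in degrees

def compare(enrolled: list[Minutia], candidate: list[Minutia],
            dist_tol: float = 10.0, angle_tol: float = 15.0) -> float:
    """Naive similarity score in [0, 1]: the fraction of enrolled minutiae that
    have a close match (in position and angle) somewhere in the candidate."""
    if not enrolled:
        return 0.0
    matched = 0
    for m in enrolled:
        for c in candidate:
            if hypot(m.x - c.x, m.y - c.y) <= dist_tol and abs(m.angle - c.angle) <= angle_tol:
                matched += 1
                break
    return matched / len(enrolled)

def decide(enrolled: list[Minutia], candidate: list[Minutia], threshold: float = 0.75) -> bool:
    """Same finger or not? The threshold trades false accepts against false rejects."""
    return compare(enrolled, candidate) >= threshold

# Toy usage: a stored template versus a fresh capture of (roughly) the same print.
template = [Minutia(10, 12, 30), Minutia(40, 55, 90), Minutia(70, 20, 145)]
fresh    = [Minutia(11, 13, 28), Minutia(41, 54, 92), Minutia(69, 22, 150)]
print(decide(template, fresh))  # True for this toy example
```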
The threat model has shifted for a few reasons. In the context of a mobile phone, the fingerprint is taking the place of a PIN or password to gain access to the device itself – the same is true of my laptop. Meaning, someone would need the physical device – in addition to the fingerprint – to misuse or try to spoof the biometric. It’s not a remote login scenario like replacing my network/domain password or using the fingerprint to log in to a website or remote resource. Am I nervous about someone downloading and using my fingerprint to log in to my phone? Not so long as they need to steal my phone or laptop first to do it. It seems to me that anybody going to the trouble of stealing my equipment could just as easily get in other ways and save themselves the hassle.
For actual physical entry to a secure facility, the threat model doesn’t concern me, either. There are a number of supporting controls beyond the single biometric (hand geometry or fingerprint, say). Someone is probably also checking my ID, there’s a PIN or password I need to know, a badge I need to wear, and people who will throw me out if I look suspicious. The biometric is one link in a chain; several elements would all have to fail for something bad to happen.
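A back-of-the-envelope way to see why that chain matters: if the controls fail roughly independently, the chance that every layer fails at once is the product of the individual failure rates. The sketch below uses entirely invented probabilities (and an independence assumption that real controls only approximate) purely to illustrate the arithmetic.

```python
# Hypothetical, invented failure probabilities for each layered control.
controls = {
    "fingerprint spoofed":   0.05,
    "badge stolen or forged": 0.02,
    "PIN guessed":           0.01,
    "guard misses intruder": 0.10,
}

# Assuming rough independence, all layers must fail together for unauthorized entry.
combined = 1.0
for name, p_fail in controls.items():
    combined *= p_fail

print(f"Chance every layer fails at once: {combined:.6%}")  # 0.000100%
```

Even with generous failure rates for each individual control, the combined chance of the whole chain failing is orders of magnitude smaller than any single link.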
The point I’m trying to make is that the threat model determines the suitability of the control; it can mean the difference between a technology being safe or not, a control being sufficient or not, and an application deployment being viable or not. In other words, it is critical to base what we deploy – and how we mitigate risks – on the specific threat scenarios that may reasonably be encountered in the field. This is why systematic, workmanlike threat modeling (using whatever flavor of model you prefer) is so important and, in my opinion, why more people should do it. In fact, if I had taken the time to threat model the whole “fingerprint image as marketing” proposition, I probably would have (wisely) pushed back. Threat models can change (becoming either riskier or safer) depending on how and where a given technology will be used or how and where a given control will operate. Understanding what those factors are – and when they change – will absolutely provide value.
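By way of illustration, here is a toy sketch of what even a lightweight threat-modeling pass over the “fingerprint image as marketing” decision might have looked like. The scenarios, scales, and scores are all made up for the example; the value is in enumerating scenarios for the specific context, not in the particular numbers.

```python
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    description: str
    likelihood: int  # 1 (rare) .. 5 (expected) -- invented scale for illustration
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

# Entirely hypothetical scenarios for "publish an engineer's real fingerprint
# in marketing materials" -- the values are illustrative, not measured.
scenarios = [
    ThreatScenario("Image replayed against our own optical readers", 4, 4),
    ThreatScenario("Image used against third-party systems that accept remote captures", 2, 4),
    ThreatScenario("Image reused years later against the engineer's personal devices", 2, 3),
]

for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
    print(f"risk={s.risk:2d}  {s.description}")
```

Even a simple table like this, filled in before the marketing materials shipped, would have made the trade-off explicit and given someone a concrete reason to push back.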