Planning defeat

Today’s Crypto-Gram newsletter by Bruce Schneier contains a very interesting piece on digital eavesdropping capabilities, and on how governments use new laws to pressure manufacturers into integrating surveillance systems into end-user products.

Planning a back door in your product is kind of like putting a self-destruct mechanism on a plane. Now, not only do you have to make sure normal operations are secure, but you also have to be damn sure that this new system is too, and that the interaction between the two is as well. In short, three times the work for (ideally) the same result.

Recently, a friend of mine locked her door nice and tight and forgot to close the window. Everybody laughs at that. So why don’t they laugh about putting a back door in a piece of software? Common sense says it’s exactly the same problem: it gives two ways to access the contents, each with a different security system. Granted, both systems should be good, but they are by nature different. It means the weak link in the security (the human) is burdened twice, and the potential bad guy has an extra means of access. It does seem like a lose-lose situation, doesn’t it?

So why do people keep doing it?

Well, there are two main reasons, and a few corollary ones:

• It’s reassuring.

In case the main door is unavailable, for instance when you have a castle and the attackers got inside and control the main access points, you still have a way in. People have been doing that for millennia. All the legends, and all the stories, talk about a “secret entrance” the bad guys don’t know about. Sometimes it’s a secret exit, but the principle is the same. Its security, however, depends completely on secrecy. You can’t tell anyone about it. The very same guards who swore to protect your castle can’t know about it, because even if they are loyal, their watching this secret entrance will draw attention to it. But it also means you won’t know if anyone else gets privy to the information. And if anyone else does learn about your secret entrance/exit, you are defenseless against them if they decide to use it for “bad” goals.

On the one hand, a backup is always good. You have something to fall back on in case everything goes pear-shaped. On the other hand, a backup is very often incompatible with security in the sense of privacy/secrecy of the contents.

But I can hear you say: “I could put a very good security system on my backup plan, so that no one other than me can access it!” Yes, you could. Again, remember Ali Baba. The cave is protected by a password, in addition to being secret. But you would have to make the password/key/etc. completely secure. Which means you wouldn’t be able to open it easily. Which means, given human nature, that after using it three or four times, you would revert to a somewhat simpler security scheme, and rely almost exclusively on the secrecy to protect it. That’s how the forty thieves got robbed.

• It’s a power trip.

You and you alone know a secret way into a place that might or might not be yours. Like the old myth of invisibility, it gives you a way to control and/or check what others think of your whereabouts.
Everyone might assume you are safely locked in the dungeon while you are having fun at the nearest pub. Or the people inside the dungeon may think they can keep you out, when in reality you could surprise them inside any time you like.

This is the main argument behind the laws mentioned in Cryptogram: the bad guys don’t know that you are listening in on everything they say. Therefore they will chat quite freely about their mischievous plans to take over the world, and the long arm of the law will catch them with their pants down.

That’s all good in theory, but don’t they think the bad guys might know this already? After all, when there is a war going on, everyone, even the “good guys” (i.e. always “us” as opposed to “them”), talks in code, so that even if a conversation is intercepted, the “bad guys” (same thing, but reversed) won’t understand it. So, if the bad guys are aware that their calls might be monitored (and I don’t see how they would not be), they will take steps to ensure security. Which means that only people who are not aware they could be monitored (i.e. people who don’t think they are doing anything wrong enough to warrant the police’s attention) will be caught. Are these really the kind of bad guys we (as a society) want to catch at all costs? Even if it means giving the real bad guys a potential way into a very sensitive place?

[Fast forward]

In terms of software, the back door is even more elusive, because very little leaves a trace that can’t be erased. So if a bad guy has the secret key, there’s a good chance they can perpetrate a lot of bad deeds without anyone ever knowing. Can anyone risk that just for an ego trip?


Useless security

Security this, security that… Lately, most new rules, laws, and software features include the word “security” somewhere in their description.

Trouble is, most of the time, it’s not about security, but rather stability and/or bullying. I have two examples that make me wonder if anyone other than me reads Bruce Schneier’s blog.

A few weeks ago, I was in the United States, and a friend of mine said she wanted a hotdog from Yankee Stadium. You have to know that I must look like a terrorist or something, because I get searched in airports a lot. Besides, with the TSA rules getting more and more cryptic and obtuse (again, read Cryptogram), it was a no-brainer that I would get busted or something.

What do you know? The guys saw the hotdog all wrapped up in my bag and asked me about it; I explained it was for a Yankees fan, etc., etc. They made me unwrap it, and I had to beg a little, but they let me get on the plane with it (hey, a hotdog could be a very dangerous weapon). What they didn’t make a fuss about, though, was that I had two boxes of matches (from some NYC club), a Zippo, and a very nice electronics screwdriver about 15 cm long, sharp and slim. When you think about it, even though there’s absolutely no way to stop a determined terrorist, some dangerous items are surely more obvious than a hot dog. And yes, I know, it was a stupid bet.

The second thing that bugs me about security is the way people sometimes claim to be handling it for my own sake while actually taking the easy way out. I have some sensitive data on my computer. I access some sensitive data on my (secure) servers at home through a (secure) VPN. Yet I have never found an easy way to enforce the VPN at all times.

On my Mac, whenever I change networks, the VPN disconnects, and I have to reconnect manually. Except that all the services (mail, remote storage, etc.) automagically try to reconnect (sending all my passwords, sometimes in the clear) before I can bring the VPN back up. It drives me nuts to see my mail downloading on my iPod Touch without the VPN, even though the VPN was active just five minutes before and dropped in the meantime. Agreed, I should choose a provider that uses SSL connections for mail, but still.

Unless I get my hands dirty and hack the system into believing there is no route other than the VPN’s (which is hard to do and tedious to implement), no one seems to think it’s a real issue. Something like the watchdog sketched below would at least make the leak visible.
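For what it’s worth, the idea fits in a few lines. This is a minimal sketch, assuming the tunnel shows up as a macOS utun* interface (typical, but not guaranteed for every VPN client); a real fix would install a firewall rule instead of merely complaining:

    # Minimal sketch of a VPN watchdog for macOS: poll the default route
    # and complain whenever traffic would flow outside the tunnel.
    # The utun* prefix and the polling interval are assumptions, not a
    # description of any particular VPN client.
    import subprocess
    import time

    VPN_INTERFACE_PREFIX = "utun"   # typical macOS tunnel interface (assumption)
    POLL_SECONDS = 5

    def default_route_interface() -> str:
        """Return the interface carrying the default route (via `route -n get default`)."""
        proc = subprocess.run(
            ["route", "-n", "get", "default"],
            capture_output=True, text=True,
        )
        for line in proc.stdout.splitlines():
            line = line.strip()
            if line.startswith("interface:"):
                return line.split(":", 1)[1].strip()
        return ""   # no default route at all

    def vpn_is_up() -> bool:
        return default_route_interface().startswith(VPN_INTERFACE_PREFIX)

    if __name__ == "__main__":
        while True:
            if not vpn_is_up():
                # A real implementation would block traffic here (e.g. with
                # a pf firewall rule) instead of just printing a warning.
                print("WARNING: default route is outside the VPN tunnel")
            time.sleep(POLL_SECONDS)

It’s a band-aid, not a kill switch, but at least the silent leak stops being silent.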

People, security is not about show. Security is not just there to reassure everybody and pretend everything’s under control. Agreed, security is sometimes a heavy process and gives you more hassle than you really want to handle. But if security is what you mean, you have to think it through before you implement it.


Another go at electronic voting machines

A few weeks ago, this PDF (since deleted – it was titled “Stop the Presses: How Paper Trails Fail to Secure e-Voting”) was released by this group.

I’ll spare you the details, but in essence, they say that printing a paper ballot to verify an electronic vote is at best useless, and at worst counter-productive. Their premises are as follows:

  • Americans trust online payments
  • e-Voting could be made at least as secure as online banking
  • therefore, Americans should trust e-Voting.

First of all, online fraud and other cyber-atrocities aren’t targeted for the same reasons, don’t have the same fallout, and don’t affect nearly as many people. When someone grabs your credit card number and makes a payment online, it’s identity theft. From the server’s perspective, the pirate is you. If he succeeds in paying for something, it shows up on your account balance. You then call your bank and “prove” that you didn’t actually pay for that. After some time, the insurance companies step in and you are (hopefully) refunded. And the police chase the pirate, hoping to make the seller (or the seller’s insurance) whole as well, and to punish the pirate in the process.

What makes this chain secure is a handful of certainties: you know who you are. You have access to the list of your own purchases. You notice when something goes wrong. You alert the authority that something’s wrong. They know (for certain) who you are. They trust you (to a certain extent). Therefore a problem is established and handled.

Voting is a lot less simple. You know who you are. You know who you voted for. But there is no way to link that vote back to you, and there shouldn’t be. Therefore (unless you are the only voter for a candidate and no vote is recorded for them) you can’t know if something is wrong. Therefore YOU trust the Authority to make sure everything’s accounted for.

Now, if you get cyber-robbed or some such, the insurance companies investigate you to make sure you are honest. The crux is how the voter can investigate the validity of the result.

Some technologies presented in this paper are pretty clever ways of actually ensuring that. Who knows? It might even work. I agree that paper trails from electronic voting machines don’t count for much if you don’t trust the system. Maybe that’s because most of the real problems appear at more macroscopic levels.

When someone steals an election, he or she has to do it by manipulating relatively large numbers. We are talking about thousands, or millions, of votes. Why bother with individual votes, then? Let’s do the “half cent trick”: if very small amounts of data are manipulated at the local level, they can add up to huge shifts. The weakness of every system is the alert threshold. If I am a local (and honest) watchdog at a poll, will I trigger a general alert over a dozen miscast votes out of 3,000? I should. But who would, honestly? Yet if every local poll has a dozen miscast votes, that’s a 0.4% error margin (see the back-of-the-envelope below). If memory serves, that could have tipped several past elections.
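To see how little it takes, here is the back-of-the-envelope calculation; the precinct size comes from the example above, while the number of precincts is an illustrative assumption:

    # Back-of-the-envelope for the "half cent trick": a shift too small
    # to trigger a local alert, replicated everywhere, moves a lot of votes.
    # The number of precincts is an illustrative assumption.
    VOTES_PER_PRECINCT = 3_000
    MISCAST_PER_PRECINCT = 12       # "a dozen miscast votes"
    PRECINCTS = 10_000              # hypothetical nationwide figure

    local_error = MISCAST_PER_PRECINCT / VOTES_PER_PRECINCT
    total_shift = MISCAST_PER_PRECINCT * PRECINCTS
    total_votes = VOTES_PER_PRECINCT * PRECINCTS

    print(f"Local error rate: {local_error:.1%}")   # 0.4% -- below any alert threshold
    print(f"Aggregate shift:  {total_shift:,} votes out of {total_votes:,}")
    # 120,000 votes out of 30,000,000 -- plenty to tip a close election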

And the mistrust is right there: e-Voting leaves precious few traces. Who knows what someone with access to the central computer, and with enough skill to cover his tracks, could do? Would we notice that 0.4%? Locally, most certainly not. Catching it would mean tracing back every one of those miscast votes.

The alert threshold has to be lowered, and trust has to be earned. What if I (as a local watchdog) asked random (truly random) people to watch me perform my duties? At the very least, even if everyone in the group agreed to let the 0.4% error margin slide, someone would know. At best, I can’t be slack anymore, since these people can tell the world about it. A sketch of what such a selection could look like follows.
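What would “truly random” mean in practice? A minimal sketch, assuming the observers are drawn from some voter roll (the roll and the group size here are illustrative); the important part is using a cryptographically secure source, so the draw itself can’t be gamed:

    # Sketch of drawing poll watchdogs at random. secrets.SystemRandom
    # uses the OS entropy source, so the selection cannot be reproduced
    # or biased by seeding, unlike the default random module.
    import secrets

    def pick_watchdogs(voter_roll: list[str], group_size: int) -> list[str]:
        """Select observers uniformly at random from the roll."""
        rng = secrets.SystemRandom()
        return rng.sample(voter_roll, group_size)

    # Illustrative roll of 50,000 registered voters.
    roll = [f"voter-{i:05d}" for i in range(50_000)]
    print(pick_watchdogs(roll, group_size=12))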

The problem isn’t really to find a perfect model. Picking leaders at random, and taking turns, to avoid professional politicians is only as perfect a model as its selection mechanism is truly random. Nor is the problem to rule on whether the paper solution is better or worse than the electronic one at counting votes perfectly. The problem is that democracy is built on people, and people rely on democracy. To earn their trust, people have to know that their voice is heard, even if it’s just one voice against a million.

The authors of the article say that no count could really be perfect. Hell, yeah, we all know that. But if the choice is between a trusted, 99.4% accurate system and an untrusted (because opaque and witness-less), theoretically 99.6% accurate system, my money is on the former.

Ah, but… THAT’s the problem… Money. Electronic votes cost less (a lot less, agreed) and take less time (a lot less, too) than their paper counterparts. The thing is, from the voter’s point of view, the cost is the same: going into a room and casting a vote. And the voter wants a system they can trust. Time to remember who these elections are for: the voters, or the people being elected.