Disclosure Laws

At a conference recently, one of the panelists asserted that the California Disclosure Law (SB-1386) was the worst information security law in memory. I disagree. I think it is the best regulation around information security; even better than GLBA. Most information security regulations are about controls - that is, they specify how one should protect information assets. Sometimes, those are relevant controls - but in practice, every IT environment is a little different, and what works for one won't work for another.

What SB-1386 has done for the industry is create a very clear cost for an information security breach. Now, companies will, hopefully, think about what controls are relevant for their environment. And maybe, just maybe, that will lead to better security.

Invisibility Cloak

Invisibility gets closer.

It's a cool concept, and once the price comes down, it could be one of those disruptive technologies (it reminds me a lot of Shield, by Poul Anderson). I see some scary uses, and some cool uses:

In the scary category:
  • concealed guns
  • concealed bombs
  • traffic hazards. People drop bricks off overpasses - what about a cloaked piece of furniture?
Fortunately, the cloaking on each of these would be far and away the most expensive item, so I doubt we'll see them anytime soon.

In the cool category:
  • Urban renewal - what if, instead of creating a hole in space that one looked through, you shifted the light so that it appeared to pass over the object? Imagine a downtown parking garage with landscaping on top. From the side, it would appear to be - just a park. Because just making the garage invisible is ugly - you end up with strange sightlines and perspectives. But making it actually disappear? We could have saved a lot of money on the Big Dig.
  • Architectural features - imagine a building where every other floor is invisible. Or where the pillars aren't there.
  • Privacy umbrella - have your own portable changing station at the beach! Of course, I could see some uses for this that might best fit in the scary category, come to think of it....

Infosec - Failing or Succeeding?

Noam Eppel at Vivica asserts that Information Security is a total failure:

Today we have fourth and fifth generation firewalls, behavior-based anti-malware software, host and network intrusion detection systems, intrusion prevention systems, one-time password tokens, automatic vulnerability scanners, personal firewalls, etc., all working to keep us secure. Is this keeping us secure? According to USA Today, 2005 was the worst year ever for security breaches of computer systems. The US Treasury Department's Office of Technical Assistance estimates cybercrime proceeds in 2004 were $105 billion, greater than those of illegal drug sales. According to the recently released 2005 FBI/CSI Computer Crime and Security Survey, nearly nine out of 10 U.S. businesses suffered from a computer virus, spyware or other online attack in 2004 or 2005 despite widespread use of security software. According to the FBI, every day 27,000 people have their identities stolen.

Noam's article is a good read if you think the Internet is safe. But a lot of folks disagree with his conclusion, and I side with them. Noam's article is just a litany of all the doom and gloom statistics out there - like the ones you see on Slide 2 of every security vendor's pitch.

The enemy's gate is down

In the hi-tech business, it's worth following the money to see where our technologies are headed. And often, you can at least look at where the VCs are placing their bets:

Mark Kvamme's keynote at ad:tech:

Yet in spite of the changes in consumer behavior, the media spend still lags behind. Kvamme noted that while average household time spent with TV is 33 percent, the average ad spend on TV is 38 percent. In contrast, average household time spent on the internet is 33 percent, but the ad spend is a "minuscule" five percent.

And, Kvamme said, breaking it down by CPM makes the differences in media weight even more apparent. A $64 CPM on network TV is a bad buy when compared to a premium internet CPM of $30, and it looks downright terrible when compared to an internet ROS CPM of $10.

And really, those ratios are even worse. With the advent of DVRs, I think a lot of us are just skimming past the commercials (although I've noted a marked increase in the quality of advertising eye candy on TV, possibly to get folks like me to stop and watch the pretty pictures). But an Internet ad pretty much always catches your eye, and it can be better targeted than the brute demographics of TV ads - when I hit a handful of car sites, advertisers know that I'm a pretty good target for vehicular ads.
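To make Kvamme's CPM comparison concrete, here's a quick sketch. The CPM rates are the ones quoted in the talk; the one-million-impression buy is an assumption purely for illustration:

```python
# CPM = cost per mille, i.e. the price of a thousand ad impressions.
def campaign_cost(cpm_dollars, impressions):
    """Total spend for a campaign at a given CPM."""
    return cpm_dollars * impressions / 1000

impressions = 1_000_000  # hypothetical one-million-impression buy

for name, cpm in [("network TV", 64),
                  ("premium internet", 30),
                  ("internet run-of-site", 10)]:
    print(f"{name:20s} ${campaign_cost(cpm, impressions):>8,.0f}")
```

At these rates, the same million impressions cost more than twice as much on network TV as on premium internet placements - before you even account for DVR skipping.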

There's an interesting thing to watch out for here. I think there will be a lot more money moving into online advertising, and more into the streaming media space - good news for us - but I wonder if we're going to see more of the excesses of the advertising space. Blink tags. Popunders. Loud, bad music blaring inline from our browsers. Animated banners covering up that news article.

Either way, it's more media to move.

(hat tip: Craig Newmark)

False Positives

Driving in to work this morning, I discovered a wonderful failure mode of an alerting system. My car has a weight sensor in the passenger seat; if it detects a possible passenger in the seat, without a safety belt in use, it alerts you.

Now, our other car has had this, and it's just a little red light on the dash. But this car starts an audible dinging alarm, which then escalates to a very fast audible alarm. I'm not sure whether it would have turned itself off, because I quickly moved my backpack from the passenger seat to the floor. But what was my first thought?

Man, I've got to disable that alarm.

And that's where security systems can get it wrong. If you put in a control that annoys your end users, they will actively work to defeat it. And then you've reduced security, because their workaround is usually less safe than what you started with (in this case, disabling the audible alarm might also eliminate the red warning light - a light that's tolerable as a false positive, but would still have caught a genuinely unbelted passenger).
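One way to design around this failure mode is to make the annoying escalation depend on persistence: a backpack moved to the floor within a few seconds never reaches the audible stage, while a real unbelted passenger does. A minimal sketch - the class name, threshold, and alert levels are all invented for illustration:

```python
import time

# Assumed grace period before the silent warning light escalates
# to the audible alarm. Purely illustrative.
AUDIBLE_DELAY_S = 30

class SeatBeltAlert:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.since = None  # when the unbelted-occupant condition first appeared

    def update(self, occupied, belted):
        """Return the alert level for the current sensor readings."""
        if not occupied or belted:
            self.since = None
            return "off"
        if self.since is None:
            self.since = self.clock()
        if self.clock() - self.since >= AUDIBLE_DELAY_S:
            return "audible"   # persistent condition: probably a real passenger
        return "light"         # transient condition: warning light only
```

The false positive still gets flagged, but at a nuisance level low enough that nobody reaches for the fuse box.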


The Sledgehammer Argument

How do you perfectly secure data on a system? The hard drive should be encrypted, of course. Logging onto the system should require a one-time password, as well as an asymmetric identifier. You put the computer in a locked room. Make sure the computer isn't connected to the network, and, for good measure, power it down. The door should have multiple locks, so that you can enforce two-person access control, and each person needs to prove their identity with a physical token, biometrics, and a PIN.

And, of course, the last thing you should do is take a sledgehammer to the computer before leaving the room.

You wouldn't do that last step, would you? And, of course, depending on the value of the data, you probably aren't doing most of the other steps, either. And that's what security is really all about - finding the risk management balance where the protections are commensurate with the threats and value of the data.
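That balance can be put in simple arithmetic. A minimal sketch using the classic annualized loss expectancy (ALE) formula - expected loss per incident times incidents per year - with made-up figures for illustration:

```python
# Annualized loss expectancy: single loss expectancy x annual rate
# of occurrence. All figures below are invented illustrations.
def ale(single_loss, annual_rate):
    return single_loss * annual_rate

def control_is_worthwhile(ale_before, ale_after, annual_control_cost):
    """A control pays off if it cuts expected loss by more than it costs."""
    return (ale_before - ale_after) > annual_control_cost

before = ale(single_loss=50_000, annual_rate=0.2)    # $10,000/yr expected loss
after = ale(single_loss=50_000, annual_rate=0.05)    # control cuts the rate
print(control_is_worthwhile(before, after, annual_control_cost=5_000))
```

The sledgehammer fails this test trivially: destroying the computer drives the expected loss to zero, but the "control cost" is the entire value of the system.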

I've found that when someone doesn't want to implement a given security measure, they sometimes resort to the sledgehammer argument: find an extreme level of security that nobody is recommending, and assert that the absence of that extreme protection justifies skipping the lower level of protection actually on the table.

Autoturning headlights

We just bought a new car, and it has headlights that turn to the left or the right when the steering wheel is turned in that direction. It's a pretty neat feature, although I discovered an interesting "attack" you can do with it, even unintentionally. The steering angle needed to trigger the auto-adjust happens to match the curvature of the onramp from Storrow Drive to 93 North. So if you happen to be behind someone, and the steering wheel has any jitter in it (not that, you know, you'd jitter it intentionally), your headlamps will wash back and forth across the inside of their car.
As if being followed by a tall car wasn't already painful enough.


Pseudonymity

Pseudonymity, for those new to it, is the use of a semi-permanent but incomplete or false identity. For instance, in many online communities, I'll just go by my first name, with a specific Gmail address so that people can distinguish me from all the different Andys out there. I have different pseudonyms in different spaces; in some of them, people who already know me know my pseudonym, but strangers don't.

This is a pretty common practice. It's better than anonymity for community building, as I've noticed that when people feel truly anonymous, they tend to be less courteous and more inflammatory. But as the Michael Hiltzik furor has pointed out, people can still abuse pseudonymity; how do you know when 50 people are really only one person?

The answer is pretty simple. A pseudonym should be much like a nickname - you can have a different one for each group you hang out with, but it still doesn't let you pretend to be 4 or 5 different people at once.
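That nickname rule is easy to state as a data structure. Here's a toy registry (the class and all the names are invented for illustration) that allows one pseudonym per person per community - so you can be a different name in every group, but never five people in one:

```python
# Toy sketch: enforce one pseudonym per person per community.
class PseudonymRegistry:
    def __init__(self):
        self._by_person = {}   # (person, community) -> pseudonym
        self._by_name = {}     # (community, pseudonym) -> person

    def register(self, person, community, pseudonym):
        """Claim a pseudonym; rejects second identities and taken names."""
        if (person, community) in self._by_person:
            raise ValueError(f"{person} already has a pseudonym in {community}")
        if (community, pseudonym) in self._by_name:
            raise ValueError(f"{pseudonym} is already taken in {community}")
        self._by_person[(person, community)] = pseudonym
        self._by_name[(community, pseudonym)] = person

reg = PseudonymRegistry()
reg.register("andy", "forum-a", "AndyG")
reg.register("andy", "forum-b", "spacefan")  # fine: a different community
```

A second `register("andy", "forum-a", ...)` call would raise - which is exactly the sock-puppet case the Hiltzik furor was about.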

USENIX Security Symposium

The first week of August, you'll find the USENIX security symposium in Vancouver. The invited talks this year look great, but I'm not sure I'll be able to make it. If you go, don't miss Matt Blaze's talk on wiretapping - he gave it at ICNS 2006; I thought it was one of the best research talks I've seen in a while.