Unsolicited Advice on Self Improvement

Some thoughts on giving others advice about self-improvement. Often, advice comes in the form of “If you do X, Y will happen.” It’s worth unpacking that. What “If you do X, Y will happen” often really means is, “For some group, which I think is large and I believe you are in, doing X will increase their favorable outcomes in the direction of Y by some small amount.”
The first half of that framing can be really othering if you’re wrong. “I think this works for all humans!” might get heard as “It doesn’t work for me; do you think I’m not human?” I think of the latter half of the framing as the 1% effect: for some set of people, this advice might improve their expectations of good outcomes, but maybe by no more than 1 percent. How big is “some people”? It might be just the advocate (“I did this and it worked for me”), or it might apply to a large in-group. But it almost certainly doesn’t apply universally, to every possible listener. We should understand that the original framing (“Do X to get Y”) is incredibly othering to members of the out-group. That advice might not work at all for them. Or a 1% improvement in their outcomes might be outweighed by the odds stacked against them. So that’s the first tip in guiding: acknowledge that you might be wrong, and your advice might not apply. And that for some people, these topics are so sensitive and carry so much historical stress that you might hurt them more than you planned to help.
But what about the 1% effect?  Assume that any piece of guidance might help someone by 1%.  Is that a lot?  It depends!  The difference between the best NFL player and the worst is just a small collection of 1% effects.  But me?  1% isn’t getting me onto the Patriots roster.  Understand where there’s a cluster effect - maybe this advice is useful, but the biggest effect you see is for people who’ve invested in gaining a lot of advantage in a related area, and others might not see the same benefit. 
Consider this one: when people ask how I am, I say, “I’m fantastic!” It helps me keep my frame focused on positives, which increases my resilience to stress. I used to say, “Not bad,” and I noticed that I was looking for the bad. Now for me, that “one simple trick” sits on top of a lot of mindfulness, and care, and good fortune. Thanks to a colleague, I always remember I could have been born a nematode! I’m really fortunate to be a human in the 21st century. I suspect that for many people, a daily remembrance of good fortune will improve their condition - but I also know that for people with a wide range of circumstances, from chemical depression to trauma to many more, that advice rings hollow. So when you’re evangelizing something that works for you, and maybe others, recognize that you are almost certainly talking at someone with a different experience. And they might not appreciate unhelpful and/or unsolicited advice without caveats. (If you’re in the mistargeted group for whom this advice might even be harmful, recognize that this blind spot around inapplicable advice might be just a blind spot, and not explicit malice (we hope).)
That leads to a point about inclusion.  I think of inclusion as “reducing the energy cost of a person just to exist in a space.”  Recognize that your assumptions about what works for other people increase their existence cost when you’re wrong. 
So I conclude with hopefully near-universally applicable guidance, from the hallowed halls of San Dimas:  
Be excellent to each other. And, as always, thank you to a remarkable cast of humans who help me think about these ideas, and find ways to make the world better.  It’s truly a blessing to know you and have access to you.

Composing Defences

Often, in the information security community, we bandy about terms like “defence in depth” or “layered defences.”  Most of the time, it’s just a platitude for “buy more stuff.” It’s worth exploring the way these terms evolved, and how we should think about defensive architectures in the world defined not by physical space, but by network connectivity.

In the flat space of military defences in the pre-WWII era, defence in depth referred to one of two concepts. In the first mode, it was a set of defences which interlocked in some form -- consider a castle wall, a moat, and a set of guards atop the wall. Each of these defences, individually, was trivially defeatable, but together, they multiplied. While an adversary was busy crossing the moat, they were easy to shoot at. The moat made it hard to scale the wall. The wall gave defensive cover to the guards. In the second mode, it was about depth in distance - consider the depth of the Soviet terrain as they fell back in World War II, and the lengthening of the attacker’s supply lines as weather set in. “Never get involved in a land war in Asia” is good advice for a reason.

Integrating defences relies on some basic features of the physical world.  Adversaries occupy space across a period of time. Defenders can trivially observe adversaries - the Mark One eyeball is generally ubiquitous across history.  But when defences integrate, it may be easier to think of them as stacking – defence in height.

When defences fail to integrate, allowing an attacker to sequentially defeat them – consider a set of hurdles in a line – then depth may be the correct way to consider the dimension.  Consider a pair of identical, locked doors, with a small, unmonitored space between them.  While an attacker may take more time to defeat the doors (either using lockpicks, slides, or a purloined key), neither defence is actually made harder by the presence of the other.

Sometimes, defences don’t even stack.  Defence in breadth represents a set of defences that present a choice to an adversary, where they can opt not to engage in a defence, by going around it.  The postern gate provides an alternate path for a spy than the front gate; the Maginot Line could be gone around; any of a dozen servers in a network DMZ can be breached to provide access to an intranet.

The lesson for defenders is to understand both the system you’re defending, and how its defences work – or don’t – together.  Increased complexity may be an indicator of defences in breadth, often with “layered” defences where the defeat of one could go undetected.  Our goal should be to create defence in height, where we know how our defences work together towards defeating adversaries.

How do we approach improving our defences?  
One way is to flip our mental model, and consider ourselves as attackers, and the adversary as a defender.  In the same way an adversary might conduct surveillance on our defences, we need to surveil the adversary as they defeat our defences.  We should consider our boundary systems as the adversary’s, and ask, “How can we see the adversary conducting an operation?” While an adversary’s dwell time inside our perimeters might not need to be long to accomplish their goals, how can we observe artifacts of their presence?

Another approach is to understand that our perimeters are almost always wider than we understand.  When we try to govern our systems, we often start from the best maintained systems and work outward;  adversaries will start from our worst-maintained systems and work inward. We need to aim to operationalize the same visibility and maintenance practices across our entire perimeter stack, so that we understand our risks, and not bury them deep as a footnote in our assessments.

A third approach is to reduce our perimeter entirely.  Simplifying our defensive models makes them easier for us to understand, and reduces the possibilities for adversaries to penetrate through unknown ways.  This may involve partitioning our system clusters, so that lateral movement is restricted, and each network architecture becomes understandable.

All of these approaches have value in improving our defences, and restoring height to our walls in meaningful and helpful ways.

A Perimeter of One

Even before there were enterprises we thought of as carbon vs silicon, enterprises were graphite-and-paper.  In the graphite-and-paper enterprise, an organization had perceived control over all of its information assets – after all, they were written down, in hard copy, and often didn’t leave the building.  While humans came into the building, the information perimeter existed at the tip of the pencil.

As computers came into the enterprise, often the first use case was to displace existing systems, and replace the graphite-and-paper enterprise with a silicon enterprise – instead of doing accounting in double-entry on fold-out ledgers, accounting took place in a general ledger application.

Yet the security world still thought of computers as just quicker versions of the graphite-and-paper world.  Our perimeter still existed at the fingertips of the humans, only now those fingertips were typing on a keyboard instead of scribbling in a notebook.  But our security was still based on the models of a physical perimeter. Mostly. But with a very dangerous flaw.

A physical perimeter — at least the non-human parts of it — isn’t really designed to keep adversaries out.  It’s designed to slow adversaries down. To change the cost equation for adversaries, to make them risk their own safety until human guards notice their attempts to enter.

And when the silicon enterprise connected to other networks, we kept this very flawed model.  Because we’d always trusted the silicon — after all, it had evolved from graphite-and-paper, which only lied when humans told it to — we weren’t prepared for how untrustworthy our computers would become. And the rate of silicon communications far exceeded our expectations for monitoring, and adversaries had little personal risk.  So we relied on “securing our perimeter,” in a last-ditch attempt to keep adversaries out.

But the basis of our security controls was all about establishing perfect trust in our devices and networks.  We’d require the best endpoint security, no matter where our devices were, because our security models would rely on that trust to build a credible environment.  Even when our devices would travel the world in the hand of a user — the one thing we wouldn’t trust — and be used for official and personal use, we would still believe that we could trust those devices, and make them part of our enterprise.

But those devices aren’t part of our enterprise.

They’re part of the user’s perimeter, instead.

Around the turn of the millennium, enterprising CFOs realized that with the increased consumerization of the mobile phone market, there was no reason for enterprises to own and manage cellphones.  Instead, at best, a cellphone allowance could be issued to employees, and those humans could be responsible for their devices.

It was a smart move financially, but one with long-lasting repercussions for the security model of enterprises.  While most phones — and even early smartphones — acted as clients to some larger network, with the advent of the iPhone, the model shifted.  Smartphones are now an extension of the human who carries them, not of the network that they connect to.

And since the distance between a smartphone and a laptop isn’t that large, we should consider the laptop as also part of the human who carries it.  And, as a result, the enterprise really shouldn’t carry any implicit trust for it.

Just as a consumer-oriented enterprise doesn’t overly trust the security of the devices its users operate, the modern enterprise needs to treat the devices of its employees the same way.

Does this mean that we just abandon employees to the dangers of the Internet?  Of course not. The modern IT department has become a managed service provider, providing its clients — the human employees — with support and security services to protect that human’s cybernetic perimeter against adversaries.  But that service doesn’t mean that our enterprise applications should implicitly trust those devices.

Instead, our enterprise applications should give no more trust to the devices than necessary, and only as a proxy for the specific human who carries them.  This is hard work, because we’re so used to the belief of being able to trust everything on our network. But our network is the Internet now, and our mental perimeter needs to shrink to only encompass our applications.  Everything else outside those applications should have no implicit trust.

And the user’s devices?  They’re inside the user’s perimeter, and we should help them establish a safe perimeter of one.

Vendor Rebuff

I receive a lot of inbound messages from inside sales teams across the security (and other) industries. I used to just delete all of them, and sometimes make fun of them (without naming!) when particularly egregious practices happened (like four or five followups).

But that’s a dehumanizing practice. Lead generation is already a hard and thankless discipline, and a friend suggested sending a polite “no” was a better approach. So I put together a template response, stored it as a signature block, and have used it for quick responses. I present it to you here, and put it into the public domain - you may freely repurpose this text.

Good day!

I’m going to decline what I’m sure was a lovely invitation. I recognize you have a job to do — namely, get a qualified first appointment — but I am not a lead for you, and my answer is no. If I ignored your email, you might reasonably wonder whether I didn’t get it, or whether I was considering it. This might reasonably lead to a follow-up. Let me be clear: I received your email and thank you, but I am not interested. Please don’t follow up.

There are thousands of security companies (and many thousands when all of the VARs are included), and almost all of them would like some time on my calendar. If I accepted even a 15-minute appointment from each of them once a year, I wouldn’t have any time left to do my regular job, which is helping Akamai make wiser risk choices.

But in an effort to give you a response, I’ve drafted this form note that I can quickly send, to minimize the cost of getting you to a “no.” I’m sure you’ve got many questions, most of them aimed at converting my “no” to a “yes” (hint: not going to happen), so I’ve included a brief FAQ:

Q: Can I keep you on my mailing list?
A: Please remove me, unless you have documented evidence that I willfully opted into it. Odds are, you either got this list from a conference, or paid for it from Hoovers or equivalent. If you have a way to mark me as “don’t contact,” please do so.

Q: Can I send you a gift?
A: Please don’t. Either it’s not really small, in which case you run afoul of both my personal ethics & our corporate ethics policy, or it’s truly small, in which case it’s unlikely to be valuable, and it’ll just be disposed of, increasing the adverse environmental impact of our industry. Relatedly, offering me a gift if only I take a meeting is actually insulting. You’re basically trying to bribe me to take the meeting.

Q: But I see you’re a Patriots fan / oenophile / runner, and I’ve got this meaningful gift!
A: I have season tickets to the Patriots, my own wine cellar, and I don’t run as often as I’d like to. I’m happy to talk about those things, and you can find me on Twitter as @csoandy doing so - and occasionally talking security, as well.

Q: Can you refer me to someone else at Akamai?
A: No. As a standard practice, the information security professionals at Akamai don’t do blind references. If we make an introduction, we get permission first from the target, which means we’re investing our time and reputation. Cold intros are almost never going to earn that.

Q: How do you decide who to talk to?
A: Sometimes because I’m interested in a specific technology. Sometimes because a peer highly recommends a company. Sometimes because there’s a specific hazard I’m trying to determine how best to mitigate.

Q: How do I get on your radar for when you might be interested?
A: Be awesome as a company. I recognize that’s your overall marketing team’s job, but you’re a part of that. Did you send me a boilerplate blurb like, “We’re the market-leading provider of enterprise security services that enable businesses to serve their customers without fear of compromise”? That’s boilerplate that almost any security company could claim (hey, Akamai could use that, although it’s an overly strong claim, so we wouldn't!). In fact, that’s one of my litmus tests - if your boilerplate could describe my company, then I’m just going to stop. Use a brief technical explanation, like, “Akamai provides both security-enhanced CDN services, like DDoS mitigation, bot management, web application firewalls, and client reputation; and enterprise services like DNS-based malware filtering and simple-to-provision application VPNs to safely connect your third-parties into your network.” With a note like that, at least I can have your name in my mental map of solution providers.

Q: Great, can I call back next quarter?
A: No. In fact, be aware that I never suggest a lead-development rep call/email back at a set time, so if you start with, “Andy, I’m following back up as you suggested…” note that I’ll stop reading there.

Q: But I’m not a security company! Can’t you take time?
A: Then even less so. I’m almost certainly not even an appropriate target, which means you’re sending cold intros to people who aren’t appropriate targets.

Zombie Vivification

Poll a dozen security professionals, and you’re likely to hear most of them opine that cybersecurity is getting worse. By calling it “cybersecurity,” you’ll also get a dozen opinions about why we shouldn’t use the cyber- prefix, but that’s a story for another day.

By and large, I agree. Cybersecurity is getting worse. Breaches no longer even make major headlines. Cars and insulin pumps are the subject of recall and regulation. So many vulnerabilities are disclosed in a year that the Common Vulnerabilities and Exposures (CVE) framework had to go to a 5-digit numbering system for each year’s vulnerabilities.

But this is a good thing. When our net cybersecurity exposure starts going down, it probably means our pace of innovation and development around networked technologies will have also dropped. Why are these correlated? Zombies, and the Peltzman effect.

Much of our technology innovation comes from startups - businesses that already exist in a state of significant risk. While we might consider that established businesses can be modeled like humans, startups more closely resemble the walking dead. Like zombies, they are shambling to avoid death, unlike humans and corporations, which strive to perpetuate themselves.

Risk compensation — the Peltzman effect — teaches us that humans, when presented with a change in their perceived risk, will act in opposition to that change. When the world becomes riskier, humans play it safe. When the world becomes safer, humans take on more risk.

A startup, as a zombie, isn’t an entity at risk of failure. It’s an entity that is already, by definition, failing — once a startup is healthy, we no longer think of it as a startup. And since a startup already knows its date of demise, existential risks don’t matter anymore - and trying to play it safe would only make matters worse. Like a zombie, a startup’s best play is to ignore any risk to its life, and focus on risks to feeding itself. The nicks and bruises and technical debt that it accumulates can only matter if it first survives.

Zombies don’t feed if they play it safe; faster, more aggressive zombies get the brains first. Startups need to operate in the same model (hopefully pursuing revenue instead of brains). It’s only when startups (or zombies) become alive that the risky choices come back to haunt them.

It’s those risky choices made by successful startups that we inherit. Those risky choices become the cybersecurity risks that we all shake our heads and wonder, “why would anyone make these choices?” The startups that didn’t make those choices didn’t survive.

The Future of The Internet -- and how to secure it

Once, there was an Internet. And it was a happy place with no security concerns whatsoever, because only a dozen or so people got to use it.

That fairy tale is not the world we live in today, and thanks to high profile problems like Heartbleed and Shellshock, more people recognize it. Unfortunately, some of the design ethos from that fantasyland still impacts us. The web isn’t secure for the uses it sees today—and HTTP was never designed to be. SSL, intended to provide a secure connection layer between systems, has evolved through multiple versions into TLS, each attempting to reduce the vulnerabilities of the prior.
The vulnerabilities and problems of HTTPS, while not numberless, are legion. And each of these vulnerabilities presents an opportunity for an adversary to defeat the goals of Internet users—whether they seek financial security, privacy from government surveillance, or network agnosticism.

What is HTTPS, anyway?

HTTPS isn't a standalone protocol; HTTP over TLS is two separate protocols, isolated from one another. The effects of one protocol's actions on another are rarely studied as much as the actual protocols themselves. That isolation has led to vulnerabilities—using compression in HTTP to improve transfer speed is good, except that the secrecy goals of TLS can be subverted through variable-sized content, as in the BREACH security exploit.
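The compression side-channel is easy to demonstrate. Below is a minimal sketch in Python — using a made-up secret and page template, not real HTTP or TLS — of how DEFLATE back-references make the compressed size leak whether an attacker's guess matches a secret that shares the same response body:

```python
import zlib

# Hypothetical setup: a response body contains a secret cookie-like value
# AND attacker-controlled input, and the whole body is then compressed.
SECRET = "sessionid=7081f2a9"

def compressed_len(guess: str) -> int:
    body = f"<html>{SECRET} q={guess}</html>".encode()
    return len(zlib.compress(body))

# A guess sharing a long prefix with the secret gets folded into a single
# DEFLATE back-reference, so the compressed body comes out shorter.
matching = compressed_len("sessionid=7081")
wrong = compressed_len("sessionid=wxyz")
assert matching < wrong
```

An adversary who can inject guesses and observe ciphertext lengths can iterate a secret byte by byte; this is exactly why TLS-level secrecy doesn't survive HTTP-level compression of mixed secret and attacker data.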

Who do you trust?

TLS certificates are issued by certificate authorities (CAs); these CAs sign the certificates that a web site presents to its users to ‘prove’ who they are. You could almost consider them like a driver’s license—issued by some authority. But who are these authorities? They are the dozens of entities—some commercial, some governmental—who are trusted by our browsers. Unlike a driver’s license, any trusted CA can issue a certificate for any website—it’s like having your local library issue an ID card for a Pentagon employee or one government issue certificates for another government’s website.
Illegitimately gaining a trusted certificate can be achieved in at least three distinct paths:
  • compromise a CA publishing interface, either directly or by compromising a user’s credentials;
  • for Domain Validated certificates, have publication control of the website that the CA can observe (by compromising DNS, the publication interface, or the server directly); or
  • by modifying the browser’s list of trusted certificates. This is a common practice in many enterprises, to enable the enterprise to run a CA for their own websites, or to deploy a web filtering proxy. But these CAs are then able to issue certificates to any website.
Once an adversary has a certificate, they merely need to also become a ‘man in the middle’ (MITM), able to intercept and modify traffic between a client and a server. With this power set, they are able to read and modify all traffic on that connection.
Certificate Transparency (CT) is an initiative to begin monitoring and auditing the CAs to determine whether they have issued rogue certificates, and to provide browsers an interface to collectively validate certificates. This may lead to a reduction in the number of trusted CAs to only those that don’t behave in a rogue fashion. There is another possible mitigation called DANE (DNS-based Authentication of Named Entities), where information about the validity of certificates/authorities for hostnames/domains is published through DNS and signed with DNSSEC, reducing the number of trusted entities who can publish SSL keys.
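To make "dozens of entities" concrete, here is a small sketch using Python's standard-library `ssl` module to enumerate the certificate authorities the local platform trusts by default; every entry on this list can issue a certificate for any hostname. (The exact count and names depend entirely on your OS trust store.)

```python
import ssl

# Load the platform's default trust store and list the CAs in it.
ctx = ssl.create_default_context()
trusted = ctx.get_ca_certs()

print(f"{len(trusted)} trusted certificate authorities")
for ca in trusted[:5]:
    # 'subject' is a tuple of relative distinguished names.
    subject = dict(rdn[0] for rdn in ca["subject"])
    print(" -", subject.get("organizationName") or subject.get("commonName"))
```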

I can haz TLS?

Until recent versions of TLS that incorporate Server Name Indication (SNI), a server was required to first present the certificate declaring which hosts it could conduct an HTTPS session for. This meant that no IP address could have more than one certificate. In HTTP, a single IP address can, through virtual hosting, serve many hostnames, as the client signals to the server which hostname it would like a web page from. While the advent of multi-domain certificates has allowed multiple hostnames, it hasn’t provided the freedom to have ‘unlimited’ TLS-secured hostnames. SNI is an extension to TLS that provides this capability, allowing a browser to tell a server which certificate it would like presented.
But SNI isn’t supported by all browsers—most notably, Windows XP and early versions of Android. The former is on its way out, but the latter is still being deployed on lower-end feature phones, especially in the developing world. And unfortunately, there are no good strategies for supporting both SNI and non-SNI clients available today. Until either SNI is fully supported, or IPv6 adoption achieves critical mass, many websites will not be able to have HTTPS.
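For illustration, here is how a client supplies SNI with Python's standard-library `ssl` module (using `example.com` purely as a placeholder host). The `server_hostname` argument is what populates the SNI extension in the ClientHello, letting a server holding many certificates on one IP address pick the right one:

```python
import socket
import ssl

def leaf_certificate(host: str, port: int = 443) -> dict:
    """Connect over TLS and return the certificate the server presented."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        # server_hostname is sent as the SNI extension (and is also used
        # to verify the hostname on the returned certificate).
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Example (requires network access):
# print(leaf_certificate("example.com")["subject"])
```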

TLS is only Transport Layer Security

Often, a client isn’t talking directly to the content provider—there is some other entity in the middle. It might be an enterprise proxy; it might be a network operator gateway; it might be a content delivery network. In these cases, the TLS connection only provides secrecy on the first leg—the client has to hope that the secrecy is preserved across the public Internet. Few of the mid-point entities provide any assertions about how they’ll handle the security of the forward connections that were prompted from a TLS connection; some even advertise the convenience of having the ‘flexibility’ to downgrade from HTTPS to merely HTTP.
Until HTTP contains a signaling mechanism through which the mid-points can communicate about the TLS choices they’ve made, a client will not know whether a TLS connection is robust (or even exists!) across public links.

TLS isn’t privacy

TLS provides encryption for the information contained inside a request, thus hiding the specific content you’re engaging with. It’s useful for hiding the specific details of similarly shaped data, like social security numbers or credit cards; but very poor at hiding things like activism or research. The design of the system doesn’t conceal the ‘shape’ of your traffic—and the Wikipedia pages for Occupy Central have a different shape than the shape of the Wikipedia page for the Large Hadron Collider. It also doesn’t prevent traffic analysis—while the contents of a user-generated video may be secret, the identity of the systems (and hence the users) that uploaded and downloaded it aren’t. Some privacy systems like Tor may provide useful protections, but at the cost of performance.

Don’t trust the lock

Altogether, the architecture of TLS and HTTPS doesn’t provide enough safety against all adversaries in all situations. There are some steps underway that will improve safety, but many hazards will still remain, even absent the highly publicized implementation defects. But these steps will increase the cost for adversaries, sometimes in measurable and observable ways.
That icon lock in your browser is useful for securing your commerce and finances, but be cautious about trusting it with your life.

This article originally appeared in The Internet Monitor 2014: Reflections on the Digital World.

Dancing Poodles

SSL is dead, long live TLS

An attack affectionately known as “POODLE” (Padding Oracle On Downgraded Legacy Encryption) should put a stake in the heart of SSL, and move the world forward to TLS. There are two interesting vulnerabilities here: POODLE itself, and the SSL/TLS version fallback mechanism. Both are discussed in detail in the initial disclosure, and there’s a history lesson in Daniel Franke’s How POODLE Happened.


POODLE is a chosen-plaintext attack similar in effect to BREACH; an adversary who can trigger requests from an end user can extract secrets from the sessions (in this case, encrypted cookie values). This happens because the padding on SSLv3 block ciphers (to fill out a request to a full block size) is not verifiable - it isn’t covered by the message authentication code. This allows an adversary to alter the final block in ways that will slowly leak information (based on whether their alteration survives verification or not, leaking information about *which* bytes are interesting). Thomas Pornin independently discovered this, and published at StackExchange.
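A toy model (not real SSL record processing) can make the "padding is not verifiable" point concrete: SSLv3 checks only the final byte of a CBC block (the padding length), while TLS requires every padding byte to equal that length. A sketch:

```python
# Toy 8-byte "CBC block": 5 data bytes followed by padding.
def sslv3_padding_ok(block: bytes) -> bool:
    # SSLv3: only the final byte (the padding length) is checked;
    # the padding bytes themselves can be anything.
    return block[-1] < len(block)

def tls_padding_ok(block: bytes) -> bool:
    # TLS: the padding-length byte AND every padding byte must all
    # equal the padding length.
    pad = block[-1]
    return pad < len(block) and all(b == pad for b in block[-(pad + 1):])

original = bytes([0xAA] * 5 + [2, 2, 2])  # pad length 2, padding intact
tampered = bytes([0xAA] * 5 + [9, 7, 2])  # padding bytes altered in transit

assert sslv3_padding_ok(tampered)    # SSLv3 happily accepts it
assert not tls_padding_ok(tampered)  # TLS rejects the tampering
```

That accept-versus-reject signal, observed across many tampered records, is the oracle POODLE exploits to recover a cookie one byte at a time.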

On its own, POODLE merely makes certain cipher choices no longer as trustworthy. Unfortunately, these were the last ciphers that were even moderately trustworthy - the other ciphers available in SSLv3 having fallen into untrustworthiness due to insufficient key size (RC2, DES, Export ciphers), cryptanalytic attacks (RC4), or a lack of browser support (RC2, SEED, Camellia). The POODLE attack removes the remaining two (3DES and AES) from the trustworthy list (and covers SEED and Camellia as well, so we can’t advocate for those either).

One simple answer is for all systems to stop using these cipher suites, effectively deprecating SSLv3. Unfortunately, it isn’t that easy - there are both clients and servers on the Internet that still don’t support the TLS protocols or ciphersuites. To support talking to these legacy systems, an entity may not be able to just disable SSLv3; instead they’d like to be able to talk SSLv3 with those that only support SSLv3, but ensure that they’re using the best available TLS version. And that’s where the next vulnerability lies.

SSL/TLS Version Selection Fallback

We’ve probably all encountered - either in real life or in fiction - two strangers attempting to find a common language in which to communicate. Each one proposes a language, hoping to get a response, and, if they fail, they move on to the next. Historically, SSL/TLS protocol version selection behaved that way - a client would suggest the best protocol it could; but if it had an error - even as simple as dropped packets - it would try again, with the next best version. And then the next best … until it got to a pretty bad version state.

This is a problem if there’s an adversary in the middle, who doesn’t want you picking that “best” language, but would much prefer that you pick something that they can break (and we now know that since all of the ciphers available in SSLv3 are breakable, merely getting down to SSLv3 is sufficient). All the adversary has to do is block all negotiations until the client and server drop down to SSLv3.

There is a quick fix: merely disable SSLv3. This means that if an adversary succeeds at forcing a downgrade, the connection will fail - the server will think it’s talking to a legacy client, and refuse the connection. But that’s merely a solution for the short-term problem of POODLE, because there are other reasons an adversary might want to trigger a protocol version downgrade today (e.g., to eliminate TLS extensions) or in the future (when TLS 1.0 ciphers are all broken). A longer-term fix is the TLS Fallback Signaling Cipher Suite Value (TLS_FALLBACK_SCSV). This is a new “cipher suite” that encodes the best protocol version the client would have liked to use. Servers that support SCSV don’t actually treat it as a cipher to choose from (what cipher suites normally list); instead, if the value carried in the SCSV is *worse* than the best protocol version the server supports, the server treats the connection as one that has been attacked, and fails it. A client only sends an SCSV value if it has already been forced to version downgrade; it’s a way of signaling “I tried to connect with a better protocol than I think you support; if you did support it, then somebody is messing with us.”
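In today's terms, the "quick fix" looks like pinning a protocol floor on the connection context, so a downgrade has nothing to land on. A sketch with Python's standard-library `ssl` module (modern contexts already refuse SSLv3; the TLS 1.2 floor shown here is an illustrative policy choice, not the only valid one):

```python
import ssl

ctx = ssl.create_default_context()
# Refuse anything older than TLS 1.2: a downgrade attack that blocks
# negotiation can now only cause a failed connection, not a weak one.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```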

So POODLE should put a stake most of the way through SSL’s heart, and SCSV will help us keep it there. Long live TLS.

Crossposted on the Akamai Blog.

Ninja Management

Kacy Catanzaro is the first woman to qualify for the American Ninja Warrior Finals. Her qualifying run in Dallas is not merely an athletic marvel, but also demonstrates a useful set of skills and practices for anyone tackling large and complex tasks.

Consider the structure of the course: a set of challenges, each generally more difficult than the one before. When each challenge is completed, you move on to the next. The course isn’t a single challenge (although many competitors approached it as one), and in that it is a lot like management - especially incident-based management. We often work on an urgent project, complete it, and then move on to the next project. Or worse - the project stretches on for a long time, and we *don’t* treat it as a series of challenges. Sometimes we draw on the same skills, competencies, and people over and over, and wear them out. (Go watch some other competitors, and see how often their runs come to an end because an overused muscle group gives out.)

Let’s consider Kacy’s approach. After each challenge, she performs some subset of the following rituals: celebration, gratitude, recovery, and preparation.

Celebration

It’s important when we finish some task to celebrate - to acknowledge that we just did a hard thing, and defeated it. This gives us mental closure (“I totally beat that!”) and builds up our energy (“I beat this hard thing; the next hard thing can’t be that bad”). It gives us the mindset of winners (“I get things done”) instead of the oppressed (“I have a never-ending set of challenges”). You can see Kacy celebrate after clearing each challenge - even a little fist-pump acknowledges her success at the previous challenge and readies her to face the next one.

Gratitude

Even when we complete a project “on our own,” we often receive a lot of help that can be easy to overlook - people helped train us, took work off our plate, cheered us on. And when we don’t do the work on our own - when many people contributed to an accomplishment - we should express our gratitude. We should remind them that their work is valued, and that when they do work on our behalf, we appreciate it. There is a shortage of gratitude in the world; recipients of gratitude will react strongly and positively to the feedback. You see Kacy thanking the crowd and her boyfriend for their support after many of the challenges.

Recovery

Challenges are *hard*; if they were easy, we’d call them cakewalks. We use (and abuse) the resources at our disposal - our bodies, our coworkers, our families, and our systems. After drawing on them, we need to acknowledge the damage and take even small steps to repair it. That might mean taking a day off, reconnecting with people we’ve ignored, catching up on ongoing maintenance, or merely relaxing for a while. Though focused on finishing, Kacy takes the time to let overtaxed muscles rest and recover before asking more of them.

Preparation

After finishing a challenge, we will often face another. It’s rarely the same challenge, even if it looks a bit similar; or it may look wildly different, yet be addressable with strikingly similar strategies. Either way, we need to take the time to think through how we’re going to tackle this challenge, and then go execute. Watch how Kacy plans her approach to the next challenge before she tackles it, rather than jumping in blindly.

Life will present us with many sequences of challenges, some masked as single large challenges, others clearly separated. Taking the time to recharge ourselves and our fellow participants increases not only our effectiveness at any given task, but also our ability to operate efficiently over time. These four rituals are an easy rubric to apply in almost any situation, and, like Kacy, we can use them to overcome the obstacles in our path.

The Brittleness of the SSL/TLS Certificate System

Despite the time and inconvenience caused to the industry by Heartbleed, its impact does provide some impetus for examining the underlying certificate hierarchy. (As an historical example, in the wake of CA certificate misissuances, the industry looked at one set of flaws: how any one of the many trusted CAs can issue certificates for any site, even if the owner of that site hasn't requested them to do so; that link is also a quick primer on the certificate hierarchy.)

Three years later, one outcome of the uncertainty around Heartbleed - that any certificate on an OpenSSL server *might* have been compromised - is the mass revocation of thousands of otherwise valid certificates.  But, as Adam Langley has pointed out, the revocation process hasn't really worked well for years, and it isn't about to start working any better now.

Revocation is Hard

The core of the problem is that revocation wasn’t designed for an epochal event like this; it has never really had the scalability to deal with more than a small number of actively revoked certificates.  The original revocation model was organized around each CA publishing a certificate revocation list (CRL): the list of all non-expired certificates the CA would like to revoke.  In theory, a user’s browser should download the CRL before trusting the certificate presented to it, and check that the presented certificate isn’t on the list.  In practice, most don’t.  This is partly because HTTPS isn’t really a standalone protocol: it is the HTTP protocol tunneled over the TLS protocol.  The signaling between the two is limited, so the revocation check must happen inside the TLS startup, making it a performance challenge for the web: the browser waits for a CA response before it can continue communicating with the web server.
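The scalability problem is easy to see in miniature: to answer a yes/no question about one certificate, the client must first fetch the CA’s entire list. A minimal sketch (serial numbers and names are illustrative; real CRL entries are signed structures, not bare integers):

```python
# Sketch of a CRL check. The browser must obtain the CA's *entire* list
# of revoked, not-yet-expired certificates before it can check just one.
# Serial numbers and names are illustrative.

def is_revoked(cert_serial, crl_serials):
    """crl_serials: every serial number the CA has revoked, in full."""
    return cert_serial in set(crl_serials)

crl = [0x1A2B, 0x3C4D]           # grows with every revocation event
assert is_revoked(0x1A2B, crl)   # presented cert is on the list: reject
assert not is_revoked(0x9999, crl)
```

A mass-revocation event like Heartbleed makes that list balloon for every client on every connection, which is exactly the scaling failure described above.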

CRLs are a problem not only for the browser, which has to pull the entire CRL when it visits a website, but also for the CA, which has to deliver the entire CRL whenever a user visits any one of its sites.  This led to the development of the Online Certificate Status Protocol (OCSP).  OCSP allows a browser to ask a CA “Is this specific cert still good?” and get back “That certificate is still good (and you may cache this answer for 60 minutes).”  Unfortunately, while OCSP is a huge step forward from CRLs, it still requires the browser not only to trust *all* of the possible CAs, but also to make a real-time call to one during the initial HTTPS connection.  As Adam notes, the closest thing we have in the near term to operationally “revocable” certs might be OCSP Must-Staple, in which the OCSP response (signed by the CA) is sent to the browser by the HTTPS server itself, alongside the server’s certificate.
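The stapling idea reduces the browser’s job to checking a short, time-limited statement it already has in hand. A sketch of that client-side check, with illustrative field names, and with the CA signature verification (which a real client must do) elided into a comment:

```python
# Sketch of OCSP must-staple validation on the client: the HTTPS server
# delivers a CA-signed, time-limited "still good" statement with its
# certificate, so the browser never calls the CA directly.
# Field names are illustrative; real responses are signed DER structures.

import time

def accept_stapled_response(response, now=None):
    """response: dict with 'status', 'produced_at', 'max_age' (seconds),
    or None if the server stapled nothing."""
    now = now if now is not None else time.time()
    if response is None:
        return False  # must-staple: a missing staple fails the connection
    # (A real client also verifies the CA's signature over the response.)
    fresh = (now - response["produced_at"]) <= response["max_age"]
    return fresh and response["status"] == "good"

staple = {"status": "good", "produced_at": time.time(), "max_age": 3600}
assert accept_stapled_response(staple)
assert not accept_stapled_response(None)  # no staple, no connection
```

The “must” in Must-Staple is what gives this teeth: without it, an attacker could simply strip the staple and the browser would fail open.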

One Possible Future

A different option entirely might be to move to DANE (DNS-based Authentication of Named Entities).  In DANE, an enterprise publishes a record in its DNS zone file specifying the exact certificate (or set of certificates, or CA allowed to issue certificates) that is valid for a given hostname.  The record is signed with DNSSEC, and a client then trusts only that specific certificate for that hostname. (This is similar to, but somewhat more scalable than, Google’s certificate pinning initiative.)
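The matching step a DANE client performs can be sketched briefly. This assumes the “pin the exact certificate by digest” flavor of a TLSA record, and assumes DNSSEC validation of the record itself has already succeeded; names are illustrative:

```python
# Sketch of a DANE-style match: the DNSSEC-signed TLSA record pins the
# SHA-256 digest of the exact certificate a hostname may present.
# Illustrative only; DNSSEC validation of the record is assumed done.

import hashlib

def dane_matches(presented_cert_der, tlsa_digest_hex):
    """Trust the presented cert only if its digest equals the pinned one."""
    digest = hashlib.sha256(presented_cert_der).hexdigest()
    return digest == tlsa_digest_hex

cert = b"...DER-encoded certificate bytes..."   # placeholder bytes
pinned = hashlib.sha256(cert).hexdigest()       # value from the TLSA record
assert dane_matches(cert, pinned)
assert not dane_matches(b"some other certificate", pinned)
```

Note what is absent: no CRL fetch, no OCSP call, and no trust in any CA the site owner didn’t explicitly name - the trust decision rides entirely on the signed DNS record.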

DANE puts more trust into the DNSSEC hierarchy, but removes all trust from the CA hierarchy.  That might be the right tradeoff.  Either way, the current system doesn't work and, as Heartbleed has made evident, doesn't meet the web's current or future needs.

(Footnote:  No discussion here of Certificate Transparency or HSTS, both of which are somewhat orthogonal to this problem.)

This entry crossposted at blogs.akamai.com.

On Cognition

Context: Keynoting at ShowMeCon.

Here are my integrated slides (in PDF, each build as its own slide) for my combined talk on applying knowledge and research from the world of cognitive science to organizational thinking, especially in the context of the security profession. This reading list still applies; to it I’ve added Traffic, by Tom Vanderbilt.

In brief: We often model other humans as one-dimensional caricatures - because it’s efficient for our brain to do so; because it’s hard to think in a different mode than our own; because we don’t see ourselves as the villain in our own story. To be effective partners in our organizations, we have to understand not only how this affects other people, but how it affects ourselves; and then rewire our behavior to move past these caricatures and have dialogues that change behavior, not just reinforce stereotypes.