netsekure rng

random noise generator

Pass-The-Hash vs cookie stealing

I saw a few talks at the BlueHat conference at Microsoft, and the funniest of all was Joe McCray’s (@j0emccray) “You Spent All That Money And You Still Got Owned????”. At some point, he touched on Pass-The-Hash attacks and asked why those can’t be prevented. That struck me as an interesting question, and an analogy popped into my head:

“pass-the-hash attacks are functionally equivalent to cookie stealing attacks”

If you think about the pass-the-hash attack, it requires administrator privileges, which means you can get LocalSystem level privileges, at which point you own the operating system. You then extract the user’s hash out of memory or from the SAM database and inject it into the attacker’s machine. From there, you rely on single sign-on built on top of NTLM/Kerberos to authenticate to remote resources.

What if we assume the following mapping: OS -> Browser, LocalSystem code execution -> Browser code execution, User’s hash -> User’s cookie, Single Sign On -> HTTP session with cookies?

It can be easily observed that the pass-the-hash attack is equivalent to an attacker having code execution in the context of the browser, stealing the user’s cookies, injecting them into the attacker’s browser, and accessing remote resources. Actually, in the web world, one doesn’t even need code execution in the browser to steal the user’s cookies - it can be done through purely web based attacks.
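To make the analogy concrete, here is a minimal sketch (the session table and cookie values are hypothetical, purely for illustration) of why a stolen cookie works: the server resolves identity purely from the cookie value, so any client presenting it is indistinguishable from the victim.

```python
# Minimal sketch: a server-side session table keyed by cookie value.
# All names and values here are made up for illustration.

sessions = {"c9f0a1d2": "alice"}  # session cookie -> authenticated user

def whoami(cookie):
    """The server's view: whoever holds the cookie *is* the user."""
    return sessions.get(cookie, "anonymous")

# The victim's browser presents the cookie...
victim_view = whoami("c9f0a1d2")

# ...and an attacker replaying the same stolen cookie gets the same identity.
attacker_view = whoami("c9f0a1d2")

print(victim_view, attacker_view)  # the server cannot tell the two apart
```

The same lookup-by-bearer-token shape is what NTLM/Kerberos single sign-on does with the hash, which is the whole point of the analogy.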

Is it possible to defend against an attacker using your cookies? It is extremely hard, because to the remote server, your cookie is *you*. From that perspective, a Windows domain is no different than a web domain - remote resources have no way of telling apart the real you from someone holding your token, be it a password hash or a cookie. I haven’t gone through the thought experiment of mapping best practices for securing against cookie stealing attacks to see if they map nicely onto best practices for defending against pass-the-hash attacks, so I’ll leave that as an exercise for the reader.

How to approach fixing the TLS trust model

TLS is an exciting protocol and its wide deployment makes it even more interesting to work on. It has been said many times that the success of online commerce is due to the success of SSL/TLS and the fact that people felt safe in submitting their credit card information over the Internet. These days a lot of people have been speaking openly about how broken the TLS trust model is because of its reliance on Internet PKI and the Certificate Authorities infrastructure and there is some truth to that. We have seen two cases already where CAs trusted by all browsers have issued fraudulent certificates for high profile sites. Those incidents revealed two key problems in the existing TLS infrastructure today:

  • Any CA can issue a certificate for any web site on the Internet, which I call the “certificate binding” problem
  • Revocation checking as deployed is ineffective

To counteract these deficiencies, multiple proposals have either existed or emerged. The most notable two are using DNSSEC for certificate binding and Convergence, which is based on an independent system of notaries. While both have merits, I believe both have enough problems to prevent them from being deployed widely, which is required for any successful change.

Using DNSSEC for storing information about the server cert is actually very appealing. The admin is in control of which cert is deployed and controls the DNS zone for the site, so it makes sense to be able to use the DNS zone information to control trust. There is even a working group inside the IETF working on such a proposal. DNSSEC has multiple problems though; here I list just a few:
  • It is not widely deployed yet, but that is being fixed as we speak
  • Client stub resolvers don’t actually verify the DNSSEC signature, but rather rely on the DNS server to do the verification. This opens the client to attack if a man-in-the-middle sits between the client and its DNS server.
  • There is a non-zero number of corporate networks which do not allow DNS resolution of public Internet addresses. In such environments, clients rely on the proxy to do the public Internet DNS resolution. This breaks a DNSSEC based approach, as clients don’t have access to the DNS records.
  • DNSSEC is yet another trust hierarchy, not much different from the current PKI on the web, just a different instance.

Moxie Marlinspike has the right idea about trust agility and his proposal, which he calls Convergence, has a very good foundation. Where I believe it falls short is the fact that many corporate networks block outgoing traffic to a big portion of the Internet. Unless all notaries are white-listed for communication, traffic to those will be blocked, which will prevent Convergence from working properly. Also, the local caching creates a problem with timely revocation - if a certificate is found to be compromised, then until the cache expires, it will still be treated as a valid one.

My take
I actually don’t want to introduce any new methods of doing certificate validation. My goal is to point out a solution pattern that can be used to make any scheme actually deployable while satisfying most (if not all) cases. There are a few basic properties any scheme should have:
  1. All information needed for doing trust verification should be available if connectivity to the server is available
  2. The certificate should be bound to the site, such that there is a 1-to-1 mapping between the cert and the site.
  3. A fresh proof of validity must be supplied

There is an already existing, although rather rarely deployed, part of TLS called OCSP stapling. It does something very simple - the TLS server performs the OCSP request, receives the response, and then supplies that response as part of the TLS handshake, the last part being the most crucial. The inclusion of the OCSP response as a TLS message removes all of the network problems that the currently proposed solutions face. As long as the client can get a TLS connection to the server, trust validation data can be retrieved. This brings property 1 to the table. In addition, OCSP responses are short lived, which satisfies property 3 as well. So the only missing piece is the 1-to-1 property.
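Property 3 hinges on the validity window carried inside the OCSP response (its thisUpdate/nextUpdate fields). A rough sketch of the freshness check a client would perform - the window values below are made up, and real clients of course also verify the responder’s signature:

```python
from datetime import datetime, timedelta

def is_fresh(this_update, next_update, now):
    """Accept a stapled OCSP response only inside its validity window."""
    return this_update <= now <= next_update

# Hypothetical response valid for 4 days (OCSP responses are short lived)
produced = datetime(2011, 10, 1)
expires = produced + timedelta(days=4)

print(is_fresh(produced, expires, datetime(2011, 10, 3)))   # inside the window
print(is_fresh(produced, expires, datetime(2011, 10, 10)))  # stale - must be rejected
```

The short window is what makes a stapled response a “fresh proof of validity” rather than a cacheable assertion an attacker could replay indefinitely.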

So, there are two ways the problem can be approached - either bring certificate binding to OCSP somehow, or use any other method to provide certificate binding. The latter can actually be achieved rather easily with minimal changes to clients and servers. RFC 4366, Section 3.6 describes the Certificate Status Request extension, which is the underlying protocol messaging through which OCSP stapling is implemented. The definition of the request message is:

      struct {
          CertificateStatusType status_type;
          select (status_type) {
              case ocsp: OCSPStatusRequest;
          } request;
      } CertificateStatusRequest;

      enum { ocsp(1), (255) } CertificateStatusType;

The structure is extensible, allowing any other type of certificate status to be requested, as long as it is defined. I can easily see this message defining DNSSEC and Convergence as values of CertificateStatusType, along with the appropriate format of the request sent by the client. Conveniently, the response from the server is also very much extensible:

      struct {
          CertificateStatusType status_type;
          select (status_type) {
              case ocsp: OCSPResponse;
          } response;
      } CertificateStatus;

      opaque OCSPResponse<1..2^24-1>;

Currently, the only value defined is for the OCSP response, which is treated as an opaque value as far as TLS is concerned. Nothing prevents whatever information the above proposals return from being transmitted as opaque data to the client.

Just like Moxie explored in his presentation, using the server to do the work of retrieving trust verification data preserves the privacy of the client. It does put some extra burden on the server to have proper connectivity, but that is much more manageable and totally under the control of the administrator.

It is true that changes will have to be implemented by both clients and servers, and that will take time. I fully acknowledge that fact. I do believe, though, that the Certificate Status Request is the most logical piece of infrastructure to use to avoid all possible network related problems and provide inline, fresh, binding trust verification data from the server to the client.

One thing I have not yet answered for myself is how we make any new model fail safe. Hard failure that denies the user access has been a problem forever, but if we keep failing unsafe, we will keep chasing the same problem into the future.

So in conclusion, any solution to fixing the TLS trust model must satisfy the following:
  • Provide timely/fresh revocation information
  • Work in all network connectivity scenarios
  • Preserve the client privacy
Ideally, it will also fail safe :)

TLS Client Authentication and Trusted Issuers List

One of the common questions I’ve seen asked lately is related to TLS client authentication, which likely means more people are interested in stronger client authentication. The problem people are hitting is described in KB 933430, where the message the server sends to the client to request client authentication is being trimmed. Let’s look at why this occurs and what the possible solutions are, but first some background.

When a TLS server is configured to ask for client authentication, it sends the TLS CertificateRequest message as part of the handshake. The TLS 1.2 RFC defines the message as follows:

      struct {
          ClientCertificateType certificate_types<1..2^8-1>;
          SignatureAndHashAlgorithm supported_signature_algorithms<2..2^16-2>;
          DistinguishedName certificate_authorities<0..2^16-1>;
      } CertificateRequest;

where supported_signature_algorithms is an addition in the 1.2 version of the TLS protocol. The certificate_authorities part of the message is further defined as:

opaque DistinguishedName<1..2^16-1>;

When the server sends this message, it optionally fills in the certificate_authorities part of the message with a list of distinguished names of CAs acceptable to the server. The main reason for this list is for the server to help the client narrow down the set of acceptable certificates to choose from. For example, if the server only accepts certificates issued by the company’s private CA, there is no need for the client to send a certificate issued by a public CA, as the server won’t trust it. Nothing in the RFC prevents the client from sending any certificate, but it is in the best interest of the client to send an appropriate certificate.

On Windows, the way TLS is implemented, the server picks all the certificates that are present in the “Local Computer” “Trusted Root Certification Authorities” store (or in short the local machine root store). With Windows Server 2008 and later, the default list of trusted authorities is very small as I’ve described in a previous post, so including those distinguished names in the message does not pose a problem. However, if the server has most of the trusted roots installed or has additional root certificates, it is possible for the combined length of the distinguished names to exceed the limit of the TLS record size, which is 2^14 (16384) bytes. The TLS standard supports larger messages, as it breaks messages up into records and a single message can span multiple records - this is called record fragmentation. Windows does not implement this part of the RFC though, so it cannot send messages bigger than what a single TLS record can hold. In most cases this works just fine, but in this particular instance it is a problem.
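The arithmetic is easy to sketch. Each DistinguishedName on the wire costs its DER-encoded length plus a 2-byte length prefix, so a server with a few hundred roots can blow past the record ceiling. The store size and DN length below are made up but plausible:

```python
RECORD_LIMIT = 2**14  # 16384 bytes - maximum TLS record payload size

def issuer_list_size(dn_lengths):
    """Wire size of certificate_authorities: each DN carries a 2-byte length prefix."""
    return sum(2 + n for n in dn_lengths)

# Hypothetical store: 300 root certificates with ~90-byte DER-encoded subject names
dns = [90] * 300
size = issuer_list_size(dns)
print(size, size > RECORD_LIMIT)  # over the limit -> the message gets trimmed
```

Without record fragmentation, anything past the 16384th byte simply cannot be sent, which is exactly the trimming KB 933430 describes.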

Now, what can be done to solve this problem? There are two solutions - either decrease the list of root certificates in the message, or do not send the list at all (allowed by the RFC). The former approach is possible, but it is more error prone and I wouldn’t recommend it for most people. I would argue that the latter approach is the preferred one, but I always get backlash when I propose it. If one thinks about the original purpose of this message and the way Windows has implemented it, it is easier to understand why this is better. Remember:

  • the server wants to “help” the client make an informed decision on which certificate to send as its identity
  • Windows sends the contents of the local machine root store as the list

If we take those two factors together, the end result is that the server is not helping the client *at all* to make a good decision. A few sources - my own research, public SSL surveys, and the EFF SSL Observatory - point out that 10-15 root CAs issue the majority of the certificates seen on the web, so if we send 16k worth of these, the probability that any certificate the client has is *not* issued by someone in the list is close to zero. Therefore, in this configuration, the list the server presents doesn’t effectively filter down the set of certificates on the client. It is almost equivalent to sending an empty list and asking the client to choose randomly.

In short, if you are hitting this problem, you are better off using Method 3 described in KB 933430 and setting SendTrustedIssuerList to 0, which disables sending the list of CAs, than any other method.
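For reference, Method 3 boils down to a single SCHANNEL registry value; a .reg sketch of it is below (verify the exact path and value name against KB 933430 before applying):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL]
"SendTrustedIssuerList"=dword:00000000
```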

Automatic CA root certificate updates on Windows

I was recently listening to Chris Palmer talking about SSL on the PaulDotCom podcast and one thing caught my attention – the discussion on IE behavior with trusted root certificates. It was suggested that IE violates the “No-Write-Up” policy of the integrity level (IL) mechanism in Windows. While the end effect looks like that, the inner workings of how this is accomplished are more complicated and the behavior is not restricted to IE.

Let’s start with some preliminary background information:

  • Certificates are validated through a process of building a chain up to a trust anchor, which under the hood constructs a graph of nodes (certificates) and edges (issuance relationships) that is then traversed to find all possible complete chains (in most cases just one). The smaller the graph is, the quicker it is to find a complete chain. Once the chain is complete, the certificate at the root of the chain is checked for trust. If the chain ends in a certificate present in the list of trusted root certificates and all other verifications pass, the certificate validation is successful.
  • Microsoft has a specific program called the “Microsoft Root Certificate Program”, which is how certificate authorities (CAs) submit their root certificates for inclusion in Windows. The end result of this program is a *fixed* list of root certificates that Windows considers trusted. The entire list is available on TechNet and is updated whenever there are any changes. This list (or the equivalent of it at the time) was present in full in the root certificate store in Windows XP and earlier, but starting with Vista, the default list in the root certificate store is much smaller in order to increase performance while validating certificates. If a chain ends in a root certificate which is part of the Root Program but is not present in the list of trusted roots currently on the machine, Windows downloads the appropriate root certificate directly from Windows Update. The full process for all versions of the OS is described in a KB article. I’m not going to rehash the explanation of how it works, but the key point is that only those certificates accepted through the root program will be downloaded from Windows Update.
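As a toy illustration of the chain-building step described above (the certificate names and issuance relationships are invented), walking issuer links from the leaf up to a self-signed root and then checking that root against the trusted list:

```python
# Hypothetical issuance relationships: certificate -> its issuer
issuers = {
    "www.example.com": "Intermediate CA",
    "Intermediate CA": "Example Root CA",
    "Example Root CA": "Example Root CA",  # self-signed: the chain terminates here
}
trusted_roots = {"Example Root CA"}

def build_chain(leaf):
    """Walk issuer links until a self-signed certificate is reached."""
    chain = [leaf]
    while issuers[chain[-1]] != chain[-1]:
        chain.append(issuers[chain[-1]])
    return chain

def is_trusted(leaf):
    """Validation succeeds only if the completed chain ends in a trusted root."""
    return build_chain(leaf)[-1] in trusted_roots

print(build_chain("www.example.com"))
print(is_trusted("www.example.com"))
```

The automatic root update mechanism effectively extends `trusted_roots` on demand: if the chain ends in a Root Program certificate not yet in the local store, Windows fetches it before making the trust decision.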

The IE low integrity processes are not instructing the broker to do anything; it all happens under the hood in the crypto APIs. Now, here is why the IE behavior is observed. You go to a web site whose certificate is issued by a CA not yet in your trusted root list. Because IE uses the standard cryptography API that Windows provides, when certificate validation is performed, Windows itself (not IE, nor its broker process) fetches the root certificate from Windows Update *if* that certificate is part of the Root Program. The same behavior will be seen for any program that uses the same API to do certificate validation. As far as I know, Chrome uses the Windows crypto APIs for certificate validation and relies on the trusted roots list from Windows, so if you browse with Chrome, you will see the exact same thing happen (though I haven’t verified it personally).

Some people have asked me how they can prevent Windows from going to Windows Update in such cases. This can be achieved by disabling automatic root update through policy as described on TechNet. It should be noted that one should exercise caution in doing so, because disabling root update means that Windows will no longer manage certificate trust for you. You will have to manage the set of trusted root certificates on your own. To that end, import the full list of CAs that are part of the Root Program so that they are available in the list of trusted roots. Microsoft provides a small program – rootsupd.exe, part of KB931125 – which accomplishes this task. With this approach, you have control over trust management, but you need to keep the list updated whenever the set of roots in the root program changes. Microsoft has a wiki page which includes the information about new certificates that are part of an update. The page has an RSS feed available which you can subscribe to so that you are notified of new updates, which makes keeping track of them an easy job.
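The policy in question maps to a single registry value; a sketch of what disabling automatic root updates looks like is below (double-check the path and value name on TechNet for your OS version before applying):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot]
"DisableRootAutoUpdate"=dword:00000001
```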

I hope this helps explain how this all works and clarifies that there is no real violation of the No-Write-Up policy, even though it might seem like it from a high level.

Fraudulent SSL certificates

As many people are reporting today, a few SSL certificates have been issued to a fraudulent party. The Comodo CA had an RA account compromised and used to issue certificates for some of the top web sites on the net. Their advisory has the details.

All major browsers are releasing updates to blacklist those certificates, and I’d suggest you install updates as soon as you can to prevent possible attacks. Since none of the certificates have been seen in the wild, the chance of attack is very slim, but it doesn’t hurt to update.

It was very interesting to see Jacob Appelbaum correlate multiple sources of information to discover this independently of the actual announcement. I’ve been advocating that bad guys are already doing this, but very few people believed it. Now I hope this demonstrates that automated correlation can reveal lots of data. Furthermore, Adam Langley has a good discussion of why revocation has problems and why we should be looking into how to improve its state.


Windows SSL/TLS update for secure renegotiation

A couple of weeks ago Microsoft released an update to the SSL/TLS stack to implement secure renegotiation as described in RFC 5746. The Microsoft KB article describes the three settings controlling the behavior of the patch, but a bit more detail can be useful.

A bit of background first. TLS extensions are a method of extending the TLS protocol without having to change the specification of the core protocol and are described in RFC 4366. They are defined as arbitrary extra data appended to the ClientHello and/or ServerHello messages (the first messages sent by each side). Servers are supposed to ignore data following the ClientHello if they don’t understand it.

Since TLS extensions were not a formal RFC in the past, some server implementations were written to fail requests that have more data following the ClientHello message, which makes them non-interoperable with clients that send TLS extensions. This is the precise reason why RFC 5746 adopted the idea of a Signaling Cipher Suite Value (SCSV), to avoid breaking interoperability with servers not accepting TLS extensions. The recommended approach, though, is to use the TLS extension defined by the RFC.

Now here are the important details. By default, versions of Windows prior to Vista did not send TLS extensions when using the TLSv1.0 protocol. With the new update this has changed: if TLSv1.0 is enabled, the renegotiation indication extension will be sent as part of the TLS handshake, as recommended. So the small set of servers (no one I know of knows the actual percentage) which do not tolerate this behavior will cause interoperability problems. This is where the UseScsvForTls setting described in the Microsoft KB comes in. Setting the registry key to a non-zero value will cause the SSL/TLS stack to generate TLS ClientHello messages containing the SCSV and no extensions, so interoperability with such servers can be restored. As for the other two keys, AllowInsecureRenegoClients and AllowInsecureRenegoServers, they control compatible vs strict mode and make no difference to the structure of the messages on the wire. Their only effect is whether communication is allowed to continue with an unpatched party or not.
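The two signaling mechanisms are easy to see at the byte level. Here is a sketch of the relevant constants from RFC 5746 and how each would appear in an initial ClientHello; the surrounding handshake plumbing is omitted and the example cipher suite list is arbitrary:

```python
# RFC 5746 defines two equivalent ways to signal secure renegotiation support:
SCSV = bytes([0x00, 0xFF])            # TLS_EMPTY_RENEGOTIATION_INFO_SCSV cipher suite
RENEG_INFO_EXT = bytes([0xFF, 0x01])  # renegotiation_info extension type

def scsv_client_hello(cipher_suites):
    """UseScsvForTls-style hello: SCSV appended to the suite list, no extensions sent."""
    return cipher_suites + SCSV

def extension_client_hello(extensions):
    """Recommended form: empty renegotiation_info extension (2-byte type,
    2-byte extension length, 1-byte empty renegotiated_connection field)."""
    return extensions + RENEG_INFO_EXT + bytes([0x00, 0x01, 0x00])

suites = bytes([0x00, 0x2F])  # e.g. TLS_RSA_WITH_AES_128_CBC_SHA
print(scsv_client_hello(suites).hex())
print(extension_client_hello(b"").hex())
```

Because the SCSV is just another entry in the cipher suite list, it survives servers that choke on any data after the ClientHello, which is exactly why UseScsvForTls restores interoperability.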

PhoneFactor WordPress plugin

I have recently stumbled upon the plugin PhoneFactor has for WordPress and decided to give it a shot, knowing the idea behind the PhoneFactor authentication model. The install was smooth, since WordPress does a good job of integrating plugin installation into the admin panel. There were a few issues once it was installed, but they were mainly caused by my own customizations to WordPress to force SSL on the admin panel and some other security enhancements.

If you are like me and rely heavily on SSL for the admin area, you might already know and be frustrated by the busted defaults for most plugins. In their defense, it is a WordPress issue, since its built-in functions return http based URLs instead of https when hosted over SSL. I believe this is being addressed in 3.0, but so far I’ve had to patch each new release of WP to get this right, but I digress.

The only issue with the plugin itself was that it reported the username I picked as already in use, even though it was not. You can safely ignore this, or if you want to do it right, apply the following patch:

      @@ -171,6 +171,7 @@
                      CURLOPT_POST => true,
                      CURLOPT_POSTFIELDS => $post,
                      CURLOPT_RETURNTRANSFER => TRUE,
      +               CURLOPT_SSL_VERIFYPEER => false
              );
              $curl = curl_init();
              foreach ($curl_options as $option => $value) curl_setopt($curl, $option, $value);
With that taken care of, one can sign up for an account right from the plugin. Once it is all said and done, this plugin is AWESOME. Now I can actually authorize each login to my blog and ensure that even if someone were to get their hands on my password, they won’t be able to log in to the blog. And the best part is that the call feature is actually free!

Thanks PhoneFactor for a great plugin!