Blue Coat Intermediate CA Certificate Has Not Been Revoked

In a recent Internet security kerfuffle, Symantec issued the surveillance company Blue Coat Systems a powerful digital certificate that allows it to masquerade as any secure business or financial institution by impersonating their web servers.  See my original post for background, Blue Coat has Intermediate CA signed by Symantec.

In a statement, Symantec notes that companies often test with their own Intermediate CA.  While it's true companies test their PKI processes, it's very uncommon for Intermediate CA certificates in a test environment to anchor to trusted roots in popular web browsers.  Any Intermediate CA certificate anchoring to trusted roots is, by definition, a live production certificate.


Symantec goes on to note that certificates used in testing are "discarded" once tests are completed.  Unfortunately, this type of public communication is difficult to understand from a technical standpoint.  The standard practice to assure the public a certificate cannot be used is to revoke it.  In the PKI system, a revoked certificate triggers prominent warnings when users try to browse the affected web sites.  The assurance we want is that the certificate is revoked; whether Blue Coat holds the private key or not is immaterial.

To better understand Symantec's communication, I checked the Blue Coat CA revocation status.  The result is that the Blue Coat CA certificate has not been revoked.  While there is no evidence of inappropriate use, nothing about this incident, in the way it's been explained or handled, follows industry best practice or even normal practice.  This is not the first time Symantec's certificate management processes have been called into question by security researchers; see The Case of Symantec's Mysterious Digital Certificates.

You can test the Blue Coat CA certificate revocation status yourself with the following procedure.

Step 1 – Download Blue Coat CA Certificate
Download the Blue Coat CA Certificate to your computer.
 
Step 2 – Extract CRL host from Blue Coat Certificate
I'm using a work-in-progress tool I wrote, DeepViolet, to read the certificate, but openssl is a well-established alternative available on many operating systems.  If you're using openssl you can view the certificate with the following: openssl x509 -in bluecoat-cert.crt -text -noout
 
java -jar dvCMD.jar -rc ../Downloads/bluecoat-cert.crt
Starting headless via dvCMD
Trusted State=>>>UNKNOWN<<<
Validity Check=VALID, certificate valid between Wed Sep 23 17:00:00 PDT 2015 and Tue Sep 23 16:59:59 PDT 2025
SubjectDN=CN=Blue Coat Public Services Intermediate CA, OU=Symantec Trust Network, O="Blue Coat Systems, Inc.", C=US
IssuerDN=CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
Serial Number=108181804054094574072020273520983757507
Signature Algorithm=SHA256withRSA
Signature Algorithm OID=1.2.840.113549.1.1.11
Certificate Version =3
SHA256(Fingerprint)=AF:70:11:C3:EF:70:A7:96:26:B1:43:A7:14:99:96:FF:15:2F:75:62:85:1D:08:C3:AA:DC:DE:E8:29:9E:57:2B
Non-critical OIDs
CRLDistributionPoints=[http://s.symcb.com/pca3-g5.crl]
AuthorityInfoAccess=[ocsp=http://s.symcd.com]
CertificatePolicies=[2.23.140.1.2.2=qualifierID=http://www.symauth.com/cpsCPSUserNotice=http://www.symauth.com/rpa1.3.6.1.4.1.14501.4.2.1=CPSUserNotice=In the event that the BlueCoat CPS and Symantec CPS conflict, the Symantec CPS governs.1.3.6.1.4.1.14501.4.2.2=CPSUserNotice=In the event that the BlueCoat CPS and Symantec CPS conflict, the Symantec CPS governs.]
AuthorityKeyIdentifier=[7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33]
SubjectKeyIdentifier=[47:95:0A:0B:A7:A1:82:A2:6D:C9:9B:9C:CD:3E:F3:90:42:E4:6F:99]
ExtendedKeyUsages=[serverauth clientauth]
SubjectAlternativeName=[[[2.5.4.3, SymantecPKI-2-214]]]
Critical OIDs
KeyUsage=[nonrepudiation keyencipherment]
BasicConstraints=[TRUE0]
 
Processing complete, execution(ms)=784
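If you want to script this step, the CRL distribution point can be pulled straight out of the text dump.  Below is a minimal Python sketch; the regex and the sample string are my own, not part of DeepViolet or openssl:

```python
import re

def extract_crl_urls(cert_text):
    """Pull CRL distribution point URLs out of an openssl/DeepViolet text dump."""
    # Match http(s) URLs ending in .crl anywhere in the output.
    return re.findall(r"https?://[^\s\]\",]+\.crl", cert_text)

sample = "Non-critical OIDs\nCRLDistributionPoints=[http://s.symcb.com/pca3-g5.crl]\n"
print(extract_crl_urls(sample))  # ['http://s.symcb.com/pca3-g5.crl']
```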
Step 3 – Download CRL
Download the certificate revocation list from the server specified in the certificate.
 
wget -O bluecoat-symcb-crl.der http://s.symcb.com/pca3-g5.crl
Step 4 – Display CRL
Now that we have the certificate revocation list, we can view the list of revoked certificates.  As the output shows, there are no revoked certificates.
 
openssl crl -inform DER -text -in bluecoat-symcb-crl.der
Certificate Revocation List (CRL):
        Version 1 (0x0)
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
        Last Update: Mar 22 00:00:00 2016 GMT
        Next Update: Jun 30 23:59:59 2016 GMT
No Revoked Certificates.
    Signature Algorithm: sha1WithRSAEncryption
        18:32:9f:5a:ed:de:b4:e1:c0:4a:97:de:3b:81:7e:5e:0e:10:
        fa:1b:b4:4e:97:33:d4:88:67:2b:fc:d2:8c:9a:b4:cb:7f:27:
        c5:19:ae:14:73:e0:63:c0:35:ae:e5:ed:3f:8a:32:bf:e3:c1:
        51:84:2f:23:60:e2:86:d2:79:8d:f5:3b:a0:69:1d:bd:ca:c6:
        3f:49:ed:7b:f8:a4:d0:ae:fa:0f:3a:35:c4:b6:ad:1c:bd:7c:
        35:e0:8f:62:83:e1:db:c6:05:92:98:2c:3a:12:48:2b:c9:59:
        a7:c1:de:1f:d0:6e:4e:1f:1d:3b:cb:5e:d1:e2:79:8c:c0:64:
        35:14:b1:04:87:04:4c:8f:3b:6f:10:ac:e8:6c:b4:b0:fb:69:
        15:de:9c:70:1a:1b:e7:be:af:18:a8:29:7e:c5:aa:73:e9:c8:
        3c:79:a3:fc:23:9a:9f:16:55:34:9e:c1:5c:fd:68:51:4a:6f:
        7b:51:53:a7:a3:f4:c7:70:3c:03:58:e6:0a:8f:f1:44:e1:ad:
        c7:b0:a4:dc:e5:be:ba:92:84:93:ac:71:24:ba:70:e4:cf:ed:
        84:6b:c2:b3:a1:49:3f:55:10:1c:b9:90:51:32:ee:6a:3e:85:
        0a:83:a8:80:f2:60:c0:87:3f:7f:b3:fc:b1:49:d2:17:0e:3e:
        c7:74:e5:23
-----BEGIN X509 CRL-----
MIICETCB+jANBgkqhkiG9w0BAQUFADCByjELMAkGA1UEBhMCVVMxFzAVBgNVBAoT
DlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3Jr
MTowOAYDVQQLEzEoYykgMjAwNiBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3Jp
emVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJpU2lnbiBDbGFzcyAzIFB1YmxpYyBQ
cmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRzUXDTE2MDMyMjAwMDAw
MFoXDTE2MDYzMDIzNTk1OVowDQYJKoZIhvcNAQEFBQADggEBABgyn1rt3rThwEqX
3juBfl4OEPobtE6XM9SIZyv80oyatMt/J8UZrhRz4GPANa7l7T+KMr/jwVGELyNg
4obSeY31O6BpHb3Kxj9J7Xv4pNCu+g86NcS2rRy9fDXgj2KD4dvGBZKYLDoSSCvJ
WafB3h/Qbk4fHTvLXtHieYzAZDUUsQSHBEyPO28QrOhstLD7aRXenHAaG+e+rxio
KX7FqnPpyDx5o/wjmp8WVTSewVz9aFFKb3tRU6ej9MdwPANY5gqP8UThrcewpNzl
vrqShJOscSS6cOTP7YRrwrOhST9VEBy5kFEy7mo+hQqDqIDyYMCHP3+z/LFJ0hcO
Psd05SM=
-----END X509 CRL-----
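To turn the visual check above into a repeatable one, you can search the CRL text for the Intermediate CA's serial number.  This is a rough sketch of the idea; it string-matches `openssl crl -text` output rather than parsing ASN.1, and the sample text below is abbreviated:

```python
def is_serial_revoked(crl_text, serial_hex):
    """Scan `openssl crl -text` output for a certificate serial number.
    Note: string-matches the text dump; it does not parse ASN.1."""
    if "No Revoked Certificates" in crl_text:
        return False
    # openssl prints serials in hex, sometimes colon-separated
    needle = serial_hex.lower().replace(":", "")
    return needle in crl_text.lower().replace(":", "")

# Abbreviated sample of the output shown above:
crl_text = """Certificate Revocation List (CRL):
        Issuer: /C=US/O=VeriSign, Inc.
No Revoked Certificates."""

# The Blue Coat intermediate's serial, converted from decimal to hex:
serial = format(108181804054094574072020273520983757507, "x")
print(is_serial_revoked(crl_text, serial))  # False - not revoked
```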
 
 

Hacking 101 by Phineas Fisher

Updated May 25 2016
I located another copy of the video on the Internet, https://tune.pk/video/6528544/hack

Updated May 22, 2016
I noticed YouTube removed Phineas Fisher's video.  The reason listed: "This video has been removed for violating YouTube's policy on spam, deceptive practices, and scams".  I watched the video.  There was no spam, deceptive practice, or scam.  The material was somewhat embarrassing for the Catalan Police Union.  Even so, there's no short supply of inflammatory and embarrassing videos on YouTube, especially ones involving government officials.  It's difficult to understand why this particular video received extraordinary attention.

The instructional video by Phineas Fisher demonstrates his hack of the Catalan Police Union in 39 minutes.  Anything that could go wrong for the Police did go wrong, but here's the short list.

1) Police Using WordPress.  WordPress is amazing blog software but it has a long history of security problems.  WordPress provides a very rich extensibility framework of plugins written by almost anyone.  These plugins add many desirable features to WordPress, but there is little to no quality control over them, and it's vulnerability Disneyland for bad guys.  WordPress is great for running your personal blog but probably not the best choice if you're a big target like a government agency (or security professional).

2) Application's DB Account Running with MySQL Administrative Privileges.  Best practice is that the DB account used by the application run with the lowest privileges possible while still meeting the needs of the application.  In this case, the application designers were unaware or lazy and used an account with administrative privileges.

3) Twitter Password for Police Same as WordPress Account.  Once the attacker had the WordPress password he was able to sign into Twitter and deface the Police department's Twitter account.  Best practice is not to reuse the same password across different web applications.  If you are going to bend this rule, then at least don't use your shared password on sites you think could be hacked or that place less emphasis on security.  For example, don't reuse your Facebook or Google password on smaller, lesser-known sites that may invest less in security.  At least you're cutting your risk with this approach.

Weaknesses with Short-URLs

Recent research was presented[1] raising security and privacy concerns around URL shortening services like bit.ly, goo.gl, and others.  These services shorten lengthy URLs into more compact URLs suitable for online use.  Smaller URLs also provide an ancillary benefit: they are easier to remember.  My first impression of the research[1] was that it was specious, since URL shortening was never intended or designed as a security and privacy control in the first place.  Reading the research softened my initial opinion.  The seeming randomness of these short URLs gives the public unfounded confidence in their utility for security: specifically, the false idea that others will not discover the link since it appears difficult to guess.  Unfortunately, the part of the URI identifying the long URL is as few as 6 characters for some shortening services, far too small a space to be cryptographically secure, and easily brute-forced by attackers, as the researchers demonstrated.

The research paper was not the first crack in the short URL armor.  The following presents some concerns I gathered across different resources from other researchers.  I also share some personal thoughts about short URL weaknesses that I have not noticed elsewhere.  I don't stake any claim to these; I'm simply passing them along to raise awareness.  I'm betting we have not seen the last of the security and privacy concerns with short URLs.

1) Short URLs not secure
As the researchers show[1], these links are not secure and are easily brute-forced.  This may or may not be a concern for you depending on how you use them.
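Back-of-the-envelope math shows why the keyspace is so weak.  The figures below are my own assumptions for illustration (a 62-character alphabet, 6-character tokens, and a single scanner making 1,000 guesses per second), not numbers from the paper:

```python
# Assumptions: 62-character alphabet (a-z, A-Z, 0-9), 6-character tokens,
# and an attacker making 1,000 guesses per second.
keyspace = 62 ** 6
print(keyspace)  # 56800235584, about 5.7e10 identifiers

seconds = keyspace / 1000
years = seconds / 86400 / 365
print(round(years, 1))  # 1.8 - one scanner covers the space in under two years
```

Compare that with a 128-bit random token, where the same scan would take longer than the age of the universe.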

2) Short URLs target host unknown until clicked
Phishing is a problem for everyone, and short URLs exacerbate an already bad email phishing problem.  There are services like checkshorturl.com where email users can unwind these URLs, but most people will never do this.  People are trusting, and verification takes extra work.  Clicking a shortened URL is like hitchhiking in a stranger's car: you don't know where it's taking you.
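Unwinding a short URL doesn't require a third-party service: a request that refuses to follow the redirect exposes the Location header.  Here's a sketch using only the Python standard library; a local throwaway server stands in for the shortening service, and the long target URL is made up:

```python
import http.server
import threading
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # surface the redirect instead of following it

def unshorten(url):
    """Return the Location target of a short URL without visiting it."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        return opener.open(url).geturl()      # no redirect occurred
    except urllib.error.HTTPError as e:
        return e.headers.get("Location")      # the 301/302 target

# Local stand-in for a shortening service (the long URL is fictitious).
class FakeShortener(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", "http://example.com/long-target")
        self.end_headers()
    def log_message(self, *args):
        pass  # keep the demo quiet

srv = http.server.HTTPServer(("127.0.0.1", 0), FakeShortener)
threading.Thread(target=srv.serve_forever, daemon=True).start()
target = unshorten("http://127.0.0.1:%d/abc123" % srv.server_address[1])
srv.shutdown()
print(target)  # http://example.com/long-target
```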

3) Obfuscated redirects
Brian Krebs makes an interesting point[3]: attackers can leverage an open redirect on a government host and create a short branded URL.  The result is an authentic-looking URL that appears to navigate to a government web site but instead navigates to the attacker's malware site.

This URL
http://dss.sd.gov/scripts/programredirect.asp?url=http://krebsonsecurity.com

Becomes this branded URL (notice the .gov domain, ouch!)
http://1.usa.gov/1pwtneQ

The combination of an open redirect and short URL branding creates a situation of misplaced trust, a false sense of security.  Users think clicking will take them to a government site when in fact it takes them to another site entirely.  The moral of the tale: if you have any open redirects in your web site you're in trouble, but if you also use branded URL shorteners you're setting the public up for malware and phishing attacks.
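The server-side fix for the open redirect half of this problem is well known: never redirect to a caller-supplied URL unless its host is on an allowlist.  A minimal sketch of the idea (the allowlist contents are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

ALLOWED_HOSTS = {"dss.sd.gov", "www.sd.gov"}  # hypothetical allowlist

def safe_redirect_target(request_url, default="/"):
    """Only honor ?url= redirect targets whose host is on the allowlist."""
    qs = parse_qs(urlparse(request_url).query)
    target = (qs.get("url") or [default])[0]
    host = urlparse(target).netloc.lower()
    return target if host in ALLOWED_HOSTS else default

evil = "http://dss.sd.gov/scripts/programredirect.asp?url=http://krebsonsecurity.com"
print(safe_redirect_target(evil))  # '/' - external host rejected
```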

4) Obfuscate payloads
A spin on Krebs's idea I considered is that attackers can save an arbitrary payload in a long URL, hidden from prying eyes.  For example, on some services it's possible to create arbitrary URLs with invalid hosts and parameters so long as those URLs are syntactically correct.  Meaning, if I create a URL https://www.xyz.com/def, some shortening services do not check that xyz is a valid host.  Even if the host is valid, URI parameters may be crafted that legitimate hosts ignore entirely, like the following: https://www.xyz.com/def?a=b,b=c.  Some servers, like Blogger, ignore superfluous parameters such as a=b,b=c in the request if you pass them.  Attackers can create any URL they want.  I used the following in a quick test,

http://www.xx100kaass.com/0 (x10,000 zeros, for a 10k URL)

I created a bogus URL with a 10KB URI that consisted of a slash (/) followed by 10,000 zeros, and it was accepted.  Attackers can store payloads in these bogus URLs for a variety of purposes.  Outside of validating the syntax and the host, shortening services have no way of knowing whether these URIs are valid and, in their defense, there's probably no good way for them to validate.  Therefore, they must store the entire long URL.  This means an attacker can use URL shortening services to hide small chunks of arbitrary data for nefarious purposes like command and control for bot networks, torrent information, etc.  URL shortening sites undoubtedly apply security intrusion and content controls.  There are likely limits on URL size or the number of URLs per second they will accept; I'm not sure what they are, and they likely vary between shortening services.
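To illustrate the point, here is how an attacker might round-trip arbitrary bytes through a syntactically valid long URL.  The host name and payload are invented for the example, and real services would apply their own limits and abuse controls:

```python
import base64

def payload_to_url(data: bytes, host="www.example-dead-host.com"):
    """Hypothetical: smuggle bytes into a syntactically valid long URL."""
    token = base64.urlsafe_b64encode(data).decode().rstrip("=")
    return f"https://{host}/{token}"

def url_to_payload(url: str) -> bytes:
    """Recover the bytes from the URL's last path segment."""
    token = url.rsplit("/", 1)[1]
    pad = "=" * (-len(token) % 4)  # restore stripped base64 padding
    return base64.urlsafe_b64decode(token + pad)

u = payload_to_url(b"bot-c2: connect 203.0.113.9:8080")
print(url_to_payload(u))  # b'bot-c2: connect 203.0.113.9:8080'
```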

5) Multiple indirection
Some of the URL shortening services will not accept their own URLs as a long URL, but at least a few of them will accept shortened URLs from other services.  Therefore it's possible to create multiple levels of indirection: short URLs referring to other short URLs.  How many levels can be created?  I'm not sure.  It seems like browsers must enforce some practical limit on redirects, but I have no idea.  I'm not sure this serves a practical purpose yet, but at the very least it complicates organizational IT forensics.

6) Infinite loops
I wondered if I could create two or more short URLs referring to each other.  Getting this to work requires understanding the shortening algorithm well enough that the attacker can determine the shortened URI before it's created, or perhaps a shortening service that allows changing the long URL after the short URL has been created.  Either would allow an attacker to create short URLs that directly or indirectly refer to each other.  I didn't spend much time looking at this.  I tried to find some code online to see if there were any standard algorithms, thinking everyone might be leveraging the same open source project so I could determine the algorithm easily.  Nothing was obvious; I was not successful.  Perhaps someone else may want to take this up.  I'm not sure whether the browser is smart enough to detect these types of infinite redirects.  If not, it seems plausible this could be used to hang or crash the browser.  Even if possible, I'm not sure it has any practical value for attackers anyway.
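Whether or not a real pair of mutually referring short URLs can be minted, a client can defend itself with a hop limit and loop detection, the way browsers cap redirect chains.  In this sketch the redirect graph is simulated as a plain dict, and the sho.rt URLs are fictitious:

```python
def follow(url, redirects, max_hops=10):
    """Follow a chain of short-URL redirects (simulated as a dict),
    stopping on a loop or a hop limit the way a browser would."""
    seen = []
    while url in redirects:
        if url in seen:
            return seen + [url], "loop detected"
        if len(seen) >= max_hops:
            return seen, "too many redirects"
        seen.append(url)
        url = redirects[url]
    return seen + [url], "resolved"

# Two hypothetical short URLs pointing at each other:
chain = {"http://sho.rt/a": "http://sho.rt/b",
         "http://sho.rt/b": "http://sho.rt/a"}
print(follow("http://sho.rt/a", chain)[1])  # 'loop detected'
```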

7) XSS in URLs
I tried to see if I could get JavaScript inside a long URL and then shorten it to bypass browser security controls.  No success.  I tried using the javascript URI scheme.  Some URL shorteners allowed it, but at least Chrome and Safari were smart enough to treat the redirect as an ordinary http navigation regardless of the scheme I provided.  I also tried the data scheme with no positive result.  Data URIs work when pasted directly into the browser URL bar, but not as a redirect; again they were handled like an ordinary navigation regardless of the specified scheme.  Browsers are a battle-hardened environment, which is good news for us.

8) Shortener Service Unavailability
If a shortening service goes away temporarily or permanently, it impacts every service where its shortened links are embedded.  What happens to Twitter if bit.ly goes away?  Not good.  DDoSing bit.ly is essentially the same as DDoSing Twitter, since a better part of Twitter's content would be unreachable if bit.ly cannot respond.  Bit.do maintains a big list of shortening services[2].  Bit.do also tracks shortening services that are no longer available, and there are many more of them than I was aware of.  If shortening is part of your business strategy, or your users are using it, you may want to consider all your available options and weigh the risks: reliable services, hosting your own, etc.

Keep in mind my tests were not comprehensive or exhaustive.  I didn't want to do anything that could be considered offensive.  So if I noted a test was successful, it may not be successful across all services; conversely, if a test was unsuccessful, it may not be unsuccessful everywhere.  An important consideration: while there are some problems with URL shorteners, there's no good immediate option for avoiding them.  If you're going to participate in social media, you're going to be using short URLs, like it or not, until improvements are made.

[1] Gone in Six Characters: Short URLs Considered Harmful for Cloud Services
[2] Bit.do list of URL Shorteners
[3] Spammers Abusing Trust in US .Gov Domains

* Landmines image from World Nomads

Forget Ninjas and Pirates, Application Security is Like This!

Photo 1: exploded thumbnail

Today I was using LinkedIn and noticed a post about the upcoming Black Hat and DEFCON security conferences in Las Vegas.  At the bottom of the person's post there are a bunch of thumbnail images of contacts we both have in common.  If you have browsed a few articles on LinkedIn you have probably seen these thumbnails before.

Photo 1 is the result of hovering my mouse over one of the contacts at the bottom of the author's post.  These are the contacts we have in common.  Again, nothing new here; you have probably seen this before.  I noticed in the exploded view that the HTML entity for ampersand, circled in red, looked out of place.  At first, I thought perhaps this person entered the entity directly.  Some people online enter strange things to get your attention, especially security people.

When I opened the person's profile, Photo 2, I noticed the ampersand was shown, not the entity.  What can we do with this knowledge?  Probably not much, at least not yet.  The point is there is a bug in LinkedIn application code that is mishandling the escaping of entity references.  The code is getting confused between HTML markup and the characters the user types from the keyboard.

Photo 2: profile view

Why is the confusion between the characters we type and HTML markup important?  It's precisely in the area of escaping and character encoding where we find Cross-Site Scripting (XSS) vulnerabilities.  XSS is nothing new, and it's listed on the OWASP Top 10 for good reason: it's pervasive.

In this LinkedIn example, the ampersand is likely a programming bug and nothing more.  We can't do much with an ampersand that's changed to an entity reference.  However, if it were possible to include code within our tag lines, it might not be properly escaped or might be improperly rendered.  Of course, the code would have to be short since there is a limit on the number of characters a tag line can store.  If a vulnerability could be found here, the benefit to an attacker is the ability to hijack the browsers of LinkedIn users who view the exploded thumbnails, Photo 1.  On a site like LinkedIn, that is probably a lot of users.
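Python's standard library makes it easy to see what a double-escape bug looks like, which is one plausible (and entirely speculative) explanation for the symptom in Photo 1:

```python
import html

tagline = "Security & Privacy"

# Correct: escape user input exactly once when rendering it into HTML,
# so the browser displays 'Security & Privacy'.
print(html.escape(tagline))               # Security &amp; Privacy

# The bug symptom looks like a double-escape: the entity itself gets
# escaped again, so the user sees the literal text '&amp;' on screen.
print(html.escape(html.escape(tagline)))  # Security &amp;amp; Privacy
```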

In closing, I am not showing you LinkedIn vulnerabilities.  I have no idea whether there is a vulnerability in this code.  In fact, I don't want to know.  I have conducted no testing against these interfaces and used no tools.  All I have shown is that there's a programming bug, and that we can write blog posts about bugs safely online.  Security begins by noticing what's around you.

See you at DEFCON next week!

Security Leadership Vacuum at Lenovo

The February 2015 Superfish security incident at Lenovo is evidence of the ever-increasing vacuum in top executive security leadership.  A security leadership vacuum matters strategically because without proper leadership it's extremely difficult to effect a positive outcome: secure products and services.  If we were sick we would not dream of diagnosing our own medical conditions, yet this is exactly what is happening in security programs across the world.  Top leaders of corporations and governments are making decisions that are, quite frankly, wrong.  Poor strategic decisions carry dire consequences for us all.  Unlike a software bug or a poor tactical decision, a poor strategic decision creates an unfavorable environment for security, resulting in highly vulnerable products and services that are difficult to remedy.  Poor security strategy is a systemic industry problem, not unique to Lenovo.  But using Lenovo as a convenient example, let's examine the concerns more closely.

A quick check of Lenovo's management page reveals the company has no top security executive.  Consider this a subtle warning sign.  Security at Lenovo is one of many responsibilities assigned to the Chief Information Officer (CIO), Xiaoyan WANG.  We can learn much about a company's emphasis on security by reviewing its leadership structure on its web site or in financial reports.  If you review Lenovo's management page you will notice that 1) security is not a primary role but instead one of many CIO responsibilities, and 2) other responsibilities of the CIO present a direct conflict of interest with security.

First, let's look at security as a primary responsibility.  Security is like other areas of an organization: the more resources you invest, the better results you can expect.  I don't mean to imply blindly pumping cash into your security program is helpful; understanding how to apply resources to the problem is where partnership between business and security executives matters.  A poor security leader, or no leader at all, is the surest way to kill a security effort before it even begins.  To be clear, no company says "do a bad job on security."  The problem is that without proper security leadership and resource allocation, a good job is next to impossible.  Companies with the best chance of success in their security programs place security on at least equal footing with other top business priorities.  In a security-conscious company I expect to see at least one top security executive, such as a Chief Security Officer (CSO) or Chief Information Security Officer (CISO).  Ideally, I want to see others, like a Chief Privacy Officer (CPO), as well.  This tells me the company really understands the impact of the digital age on its products and services.  Of course, Lenovo may have a CSO that reports to the CIO, or to a leader that reports to the CIO, as many companies do, but in the end this is a conflict of interest because CIOs are focused on delivery.  Product delivery may trump security, and without an independent advocate to argue on the side of product quality, productivity will win every time, which may not be best for the organization.

Next, the conflict of interest issue.  It may not be obvious, but WANG's many responsibilities include "information service delivery and security".  For years, IT organizations and software developers have been accustomed to the idea of a Test group that performs independent quality assessments of products and services before customer delivery.  Independent assessment is an essential quality control measure for producing consistently high-quality products and services.  All too often, security is lumped into the same bucket as other technical product quality reviews.  I believe this is a mistake.  Placing application security responsibility in the same group responsible for product delivery is like putting the fox in charge of the hen house.  Security product quality is a business concern, not a concern for a technology group.  Few CEOs were ever fired over a software bug, but many more CEOs will be fired in the future over software vulnerabilities.  Additionally, vulnerabilities are unique among bugs since they can shake the very foundations of your organization's credibility with customers, which may take years to reestablish.  In today's highly optimized world of software development, leaders often don't have the resources to deliver products on time and on schedule.  In such a climate, it's too tempting to focus limited resources on tangible features customers can see.  With security, however, it's far too easy to make bold claims of a strong security posture; without specialized tools and testing, such claims must be accepted at face value.  I see security differently: security is a top business concern, not a technology concern.  As a top business concern, security must answer through its own leadership, which ideally terminates at a security executive accountable to the board.  This allows security to be considered on equal footing with other business priorities and risks.

A final note on security responsibility for C-level readers.  The days of blaming breaches on the ingenuity of hackers are coming to an end.  Overestimating hacker abilities to infiltrate systems is a convenient way of shifting public scrutiny away from poor leadership and security practices and back onto attackers.  Increasingly, the broader public and regulatory agencies are becoming less accepting of such excuses.  If you don't make security a top priority in your board room, with proper funding and with security leaders leveled like other leaders, you will be accountable on breach day.  Leaders of America's largest corporations are learning the painful lesson that security responsibility can be delegated but blame cannot; see Target CEO Fired – Can You Be Fired If Your Company Is Hacked?

For those interested, in a previous post, So You Want to be a Security Professional, I cover some background on security positions and ways to organize security duties.  For full background on the Lenovo incident, I refer readers to Bruce Schneier's article, Man-in-the-Middle Attacks on Lenovo Computers.

[1] Superfish cover by Anelis, DeviantArt