Updated May 25 2016
I located another copy of the video on the Internet, https://tune.pk/video/6528544/hack

Updated May 22, 2016
I noticed YouTube removed Phineas Fisher's video.  The reason listed: "This video has been removed for violating YouTube's policy on spam, deceptive practices, and scams".  I watched the video.  There was no spam, deceptive practice, or scam.  The material was somewhat embarrassing for the Catalan Police Union.  Even so, there's no shortage of inflammatory and embarrassing videos on YouTube, especially ones involving government officials.  It's difficult to understand why this particular video received extraordinary attention.

The instructional video by Phineas Fisher demonstrated his hack of the Catalan Police Union in 39 minutes.  Almost everything that could go wrong for the Police did go wrong, but here's the short list.

1) Police Using WordPress.  WordPress is amazing blog software, but it has a long history of security problems.  WordPress provides a very rich extensibility framework of plugins written by almost anyone.  These plugins add many desirable features to WordPress, but there is little to no quality control over them, and it's a vulnerability Disneyland for bad guys.  WordPress is great for running your personal blog but probably not the best choice if you're a big target like a government agency (or security professional).

2) Application's DB Account Running with MySQL Administrative Privileges.  Best practice is that the DB account used by the application run with the lowest privileges possible while still meeting the needs of the application.  In this case, the application designers were unaware or lazy and used an account with administrative privileges.

3) Twitter Password for Police Same as WordPress Account.  Once the attacker had the WordPress password, he was able to sign into Twitter and deface the Police department's Twitter account.  Best practice is not to reuse the same password across different web applications.  If you are going to bend this rule, then at least don't use your shared password on sites you think could be hacked or sites that place less emphasis on security.  For example, don't use your Facebook or Google password with smaller, lesser-known sites that may invest less in security.  At least you're cutting your risk with this approach.

Recent research[1] raises security and privacy concerns around URL shortening services like bit.ly, goo.gl, and others.  These services shorten lengthy URLs into compact URLs better suited for online use.  Smaller URLs also provide an ancillary benefit: they are easier to remember.  My first impression of the recent research[1] was that it was specious, since URL shortening was never intended or designed as a security and privacy control in the first place.  Reading the research softened my initial opinion.  The seeming randomness of these short URLs gives the public unfounded confidence in their utility for security, specifically the false idea that others will not discover the link since it appears difficult to guess.  Unfortunately, the part of the URI identifying the long URL is as few as 6 characters for some shortening services, far too small a space to be cryptographically secure, and easily brute forced by attackers, as the researchers demonstrated.
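To see how small that space is, a quick back-of-the-envelope calculation helps (assuming the common 62-character alphabet of letters and digits; real services vary):

```shell
#!/bin/bash
# Back-of-the-envelope math on a 6-character short-URL token.
# Most shorteners draw from [a-zA-Z0-9], i.e. 62 symbols per position.
keyspace=$((62 ** 6))
echo "6-char keyspace: $keyspace tokens"

# Even a single modest scanner at 1,000 requests/second covers it all in:
days=$((keyspace / 1000 / 86400))
echo "full enumeration at 1k req/s: ~$days days"
```

Roughly 57 billion tokens sounds like a lot, but it is well within reach of a distributed scanner, and an attacker sampling the space (as the researchers did) needs far less than full enumeration to harvest live links.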

The research paper was not the first crack in the short URL armor.  The following presents some concerns I gathered across resources from other researchers.  I also share some personal thoughts about short URL weaknesses that I have not noticed elsewhere.  I don't stake any claim to these; I'm simply passing them along to raise awareness.  I'm betting we have not seen the last of the security and privacy concerns with short URLs.

1) Short URLs not secure
As researchers note[1], these links are not secure and are easily brute forced.  This may or may not be a concern for you depending on how you use them.

2) Short URLs target host unknown until clicked
Phishing is a problem for everyone, and short URLs exacerbate an already bad email phishing problem.  There are services like checkshorturl.com where email users can unwind these URLs, but most people will never do this.  People are trusting, and verification takes extra work.  Clicking a shortened URL is like hitchhiking in a stranger's car: you don't know where it's taking you.
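You can also unwind a short URL yourself without visiting the destination: a HEAD request returns the shortener's redirect in a Location header, and no page is ever fetched.  A minimal sketch using curl and awk (the short link in the comment is a made-up example, not a real one):

```shell
#!/bin/bash
# Reveal where a short URL points WITHOUT following it: issue a HEAD
# request (-I) and print only the Location header's value.
unwind() {
  curl -sI "$1" | awk 'tolower($1) == "location:" { print $2 }' | tr -d '\r'
}

# Example (hypothetical link):
#   unwind "https://bit.ly/3xample"   # prints the hidden destination URL
```

This shows only the first hop; a cautious reader may need to repeat it if the target is itself another short URL.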

3) Obfuscated redirects
Brian Krebs makes an interesting point[3]: attackers can leverage an open redirect on a government host and create a short branded URL.  The result is an authentic-looking URL that appears to navigate to a government web site but instead navigates to the attacker's malware site.

This URL

Becomes this branded URL (notice the .gov domain, ouch!)

The combination of an open redirect and short URL branding creates misplaced trust, a false sense of security.  Users think clicking will take them to a government site when in fact it takes them to another site entirely.  The moral of the tale: if you have any open redirects in your web site you're in trouble, but if you also use branded URL shorteners you're setting the public up for malware and phishing attacks.
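To see the mismatch for yourself, follow the full redirect chain and compare the final landing host with the branded host the user thinks they clicked.  A rough sketch; the branded .gov link in the comment is a placeholder, not a real open redirect:

```shell
#!/bin/bash
# Follow a full redirect chain and report where it actually lands.
final_url() {
  curl -sL -o /dev/null -w '%{url_effective}' "$1"
}

host_of() {
  # strip the scheme, then everything after the first slash
  echo "$1" | sed -e 's|^[a-z]*://||' -e 's|/.*||'
}

# Example (hypothetical branded link riding an open redirect):
#   host_of "$(final_url 'https://1.usa.gov/example')"
# The link *looks* like a .gov; the printed host may be anywhere.
```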

4) Obfuscated payloads
A spin on Krebs's idea I considered: attackers can save any arbitrary payload in a long URL, hidden from prying eyes.  For example, on some services it's possible to create arbitrary URLs with invalid hosts and parameters so long as those URLs are syntactically correct.  Meaning, if I create the URL https://www.xyz.com/def, some shortening services do not check to ensure host xyz.com is valid.  Even if the host is valid, URI parameters may be crafted that legitimate hosts ignore entirely, like the following: https://www.xyz.com/def?a=b,b=c.  Some servers, like Blogger, ignore superfluous parameters like a=b,b=c in the request if you pass them.  Attackers can create any URL they want.  I used the following in a quick test,

http://www.xx100kaass.com/0 (x10,000 zeros, for a 10k URL)

I created a bogus URL with a 10KB URI that consisted of a slash (/) followed by 10,000 zeros, and the shortening succeeded.  Attackers can store payloads in these bogus URLs for a variety of purposes.  Outside of validating the syntax and the host, shortening services have no way of knowing whether these URIs are valid and, in their defense, there's probably not a good way for them to validate.  Therefore, they must store the entire long URL.  This means an attacker can use URL shortening services to hide small chunks of arbitrary data for nefarious purposes like command and control for bot networks, torrent information, etc.  URL shortening sites undoubtedly provide security intrusion and content controls.  There are likely limits on the size or number of URLs per second they will accept, etc.  I'm not sure what they are, but they likely vary between shortening services.
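Recreating that oversized URL is a one-liner.  The host below is deliberately bogus, on the assumption (true in my quick test) that the shortener validates syntax only:

```shell
#!/bin/bash
# Build a syntactically valid URL whose path is 10,000 zeros.
# printf reuses its format string once per argument, emitting one '0' each.
path=$(printf '0%.0s' $(seq 1 10000))
url="http://www.example-bogus-host.com/$path"
echo "URL length: ${#url} characters"
```

Any byte sequence that survives URL encoding could stand in for the zeros, which is what makes the stored long URL usable as a covert data channel.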

5) Multiple indirection
Some URL shortening services will not accept their own URLs as a long URL, but at least a few of them will accept shortened URLs from other services.  Therefore it's possible to create multiple levels of indirection: short URLs referring to other short URLs.  How many levels can be created?  I'm not sure.  It seems like browsers must enforce some practical limit on redirects, but I have no idea.  I'm not sure this serves a practical purpose yet, but at the very least it complicates organizational IT forensics.
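One way to measure the depth of such a chain is to follow one Location header at a time with a hop cap, so a loop can't run forever.  A sketch, assuming HEAD requests expose each redirect (the link in the comment is hypothetical):

```shell
#!/bin/bash
# Count redirect hops in a chain of nested short URLs, capped at 20.
hops() {
  local url="$1" n=0 next
  while [ "$n" -lt 20 ]; do
    next=$(curl -sI "$url" | awk 'tolower($1) == "location:" { print $2 }' | tr -d '\r')
    [ -z "$next" ] && break
    url="$next"
    n=$((n + 1))
  done
  echo "$n"
}

# Example (hypothetical link):
#   hops "https://bit.ly/3xample"   # prints redirects before real content
```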

6) Infinite loops
I wondered if I could create two or more short URLs referring to each other.  Getting this to work requires understanding the shortening algorithm well enough that the attacker can determine the shortened URI before it's created, or a shortening service that allows changing the long URL after the short URL has been created.  Either would allow an attacker to create short URLs that directly or indirectly refer to each other.  I didn't spend much time looking at this.  I tried to find some code online to see if there were any standard algorithms, thinking everyone might be leveraging a common open source project so I could determine the algorithm easily.  Nothing was obvious; I was not successful.  Perhaps someone else may want to take this up.  I'm not sure whether browsers are smart enough to detect these types of infinite redirects.  If not, it seems plausible this could be used to hang or crash the browser.  Even if possible, I'm not sure it has any practical value for attackers anyway.

7) XSS in URLs
I tried to see if I could get JavaScript inside a long URL and then shorten it to bypass browser security controls.  No success.  I tried the javascript: URI scheme.  Some URL shorteners allowed it, but at least Chrome and Safari were smart enough to handle the redirect as an ordinary HTTP navigation regardless of the scheme I provided.  I also tried the data: scheme with no positive result.  Data URLs work when pasted directly into the browser URL bar but were not successful as a redirect; again, they were handled as ordinary navigations regardless of the specified scheme.  Browsers are a battle-hardened environment, good news for us.

8) Shortener Service Unavailability
If a shortening service goes away, temporarily or permanently, it impacts every service where its shortened links are embedded.  What happens to Twitter if bit.ly goes away?  Not good.  DDoSing bit.ly is essentially the same as DDoSing Twitter, since a good part of Twitter's content would be unreachable for users if bit.ly cannot respond.  Bit.do maintains a big list of shortening services[2].  Bit.do also tracks shortening services that are no longer available, and there are many more of them than I was aware of.  If shortening is part of your business strategy, or your users are using it, you may want to consider all your available options and weigh the risks: reliable services, hosting your own, etc.

Keep in mind my tests were not comprehensive or exhaustive.  I didn't want to do anything that could be considered offensive.  So if I noted a test was successful, it may not be successful across all services.  Conversely, if a test was unsuccessful, it may not be unsuccessful everywhere.  An important consideration: while there are some problems with URL shorteners, there's no good immediate option for avoiding them.  If you're going to participate in social media, you're going to be using short URLs, like it or not, until improvements are made.

[1] Gone in Six Characters: Short URLs Considered Harmful for Cloud Services
[2] Bit.do list of URL Shorteners
[3] Spammers Abusing Trust in US .Gov Domains

* Landmines image from World Nomads

Photo 1: exploded thumbnail

Today I was using LinkedIn and noticed a message posted about the upcoming Black Hat and DEFCON security conferences in Las Vegas.  At the bottom of the person's post there are a bunch of thumbnail images of contacts we both have in common.  If you have browsed a few articles on LinkedIn you have probably seen these thumbnails before.

Photo 1 is the result of hovering my mouse over one of the contacts at the bottom of the author's post.  These are the contacts we have in common.  Again, nothing new here; you have probably seen this before.  I noticed in the exploded view that the HTML entity tag for ampersand, circled in red, looked out of place.  At first, I thought perhaps this person entered the entity tag directly.  Some people online enter strange stuff to get your attention, especially security people.

When I opened the person's profile, Photo 2, I noticed the ampersand was shown, not the entity tag.  What can we do with this knowledge?  Probably not much, at least not yet.  The point is that there is a bug in LinkedIn's application code that is botching the escaping of entity references.  The code is getting confused between HTML code and the characters the user types at the keyboard.

Photo 2: profile view

Why is the confusion between the characters we type and HTML code important?  It's precisely in the area of escaping and character encoding that we find Cross-Site Scripting (XSS) vulnerabilities.  XSS is nothing new, and it has a standing spot on the OWASP Top 10 for good reason: it's pervasive.

In this LinkedIn example, the ampersand is likely a programming bug and nothing more.  We can't do much with an ampersand that's changed to an entity reference.  However, if it were possible to include code within our tag lines, it might not be properly escaped or might be improperly rendered.  Of course, the code would have to be short, since there are limits on the number of characters that can be stored in a tag line.  If a vulnerability could be found here, the benefit to an attacker is that they could hijack the browsers of LinkedIn users who view the exploded thumbnails, Photo 1.  On a site like LinkedIn, that is probably a lot of users.

In closing, I am not showing you LinkedIn vulnerabilities.  I have no idea if there is a vulnerability in this code; in fact, I don't want to know.  I have conducted no testing against these interfaces and used no tools.  All I have proven is that there's a programming bug, and that we can write blog posts about bugs safely online.  Security begins by noticing what's around you.

See you at DEFCON next week!

Information about this breaking SSL attack is coming in from a variety of sources.  I will share some better links.

A couple of articles to get you started, sent to me by Jan Schaumann (Twitter: @jschauma).  The Errata article describes browser settings you can apply to stop POODLE dead in its tracks.

Errata Security: Some POODLE Notes
Matthew Green: Attack of the Week, POODLE

Next, a link from Oona Räisänen (Twitter: @windyoona) for a POODLE test tool to check if your browser is vulnerable.


For OS X users who would like to run Chrome or Firefox with command line options from the desktop read-on.

To easily click and open from your desktop, create a bash script like the following.  Use vi, TextEdit, TextMate, TextWrangler, or your favorite text editor.

#!/bin/bash
open -a "Google Chrome" --args --ssl-version-min=tls1 &

Save the preceding to a file named chrometls.command.  Open the directory where chrometls.command is stored; on my system I store scripts in ~/bin.  Next, make sure chrometls.command is executable by running the following.

chmod +x chrometls.command

Now open Finder and drop a copy of the chrometls.command file you created onto your desktop.  Double-click the file and OS X will launch Chrome – bada bing, bada boom, you're done!

If the terminated shell is messing with your OCD, there is an option to automatically close shell windows once the command or script terminates.  Open Terminal, and in the Terminal preferences on the profile tab you will see a drop-down option, "When the shell exits".  Change the value to "Close if the shell exited cleanly".  After you launch the browser, the shell window will close automagically.  I write shell scripts on occasion, but not usually under OS X, so I thought I would pass this along for those in need.

When I run Chrome in this way I see the Springfield Terrier, indicating I'm not vulnerable; the command line arguments from Errata work for me.