Weaknesses with Short-URLs

Recent research was presented[1] raising security and privacy concerns around URL shortening services like bit.ly, goo.gl, and others.  These services shorten lengthy URLs into compact URLs better suited for online use.  Smaller URLs also provide an ancillary benefit: they are easier to remember.  My first impression of the recent research[1] on URL shorteners was that it was specious, since URL shortening was never intended or designed as a security and privacy control in the first place.  Reading the research softened my initial opinion.  The seeming randomness of these short URLs gives the public unfounded confidence in their utility for security; specifically, the false idea that others will not discover the link since it appears difficult to guess.  Unfortunately, the part of the URI providing the identity for the long URL is as few as 6 characters for some shortening services, far too small a space to be cryptographically secure, and easily brute-forced by attackers, as demonstrated by the researchers.

The research paper was not the first crack in the short URL armor.  The following presents some concerns I gathered across different resources from other researchers.  I also share some personal thoughts about short URL weaknesses that I have not seen noted elsewhere.  I don’t stake any claim to these; I’m simply passing them along to raise awareness.  I’m betting we have not seen the last of the security and privacy concerns with short URLs.

1) Short URLs not secure
As researchers mention[1], these links are not secure and are easily brute-forced.  This may or may not be a concern for you depending on how you use them.  A quick back-of-envelope sketch follows.
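For illustration, here is a minimal sketch of how small a 6-character token space really is.  The base-62 alphabet and probe rate below are assumptions for illustration, not figures from the paper:

# Why a 6-character token is not a security control: assuming a
# base-62 alphabet [a-zA-Z0-9], the keyspace is fully enumerable.
keyspace = 62 ** 6
print(f"{keyspace:,} possible tokens")          # 56,800,235,584
# At an assumed 1,000 probes per second, enumerating everything takes
# under two years, and any single URL is found far sooner on average.
rate = 1_000
print(f"{keyspace / rate / 86400 / 365:.1f} years to scan all")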

2) Short URLs target host unknown until clicked
Phishing is a problem for everyone.  Short URLs exacerbate an already bad email phishing problem.  There are services like checkshorturl.com where email users can unwind these URLs, but most people will never do this.  People are trusting, and verification takes extra work.  Clicking a shortened URL is like hitchhiking in a stranger’s car: you don’t know where it’s taking you.
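For the cautious, a minimal sketch of unwinding a short URL yourself without visiting the target: issue a HEAD request with redirects disabled and read the Location header.  The short path below is hypothetical:

import http.client

# Ask the shortener where the link points without following it.
conn = http.client.HTTPSConnection("bit.ly", timeout=10)
conn.request("HEAD", "/example")        # hypothetical short path
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))
conn.close()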

3) Obfuscated redirects
Brian Krebs makes an interesting point[3]: attackers can leverage an open redirect on a government host and create a short branded URL.  The result is an authentic URL that looks like it navigates to a government web site but instead navigates to the attacker’s malware site.

This URL
http://dss.sd.gov/scripts/programredirect.asp?url=http://krebsonsecurity.com

Becomes this branded URL (notice the .gov domain, ouch!)
http://1.usa.gov/1pwtneQ

The combination of an open redirect and short URL branding creates a situation of misplaced trust, a false sense of security.  Users think clicking will take them to a government site when in fact it takes them to another site entirely.  The moral of the tale: if you have any open redirects in your web site you’re in trouble, but if you also use branded URL shorteners you’re setting the public up for malware and phishing attacks.
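A minimal sketch of the standard mitigation, never redirecting to a caller-supplied URL unless its host is on an explicit allowlist.  The function and allowlist below are hypothetical:

from urllib.parse import urlparse

ALLOWED_HOSTS = {"dss.sd.gov", "www.sd.gov"}   # hypothetical allowlist

def safe_redirect_target(url):
    # Return the URL only if its host is allowlisted, else None.
    host = urlparse(url).hostname or ""
    return url if host in ALLOWED_HOSTS else None

print(safe_redirect_target("http://krebsonsecurity.com"))  # None: rejected
print(safe_redirect_target("http://dss.sd.gov/benefits"))  # allowed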

4) Obfuscate payloads
A spin on Krebs’ idea I considered is that attackers can save an arbitrary payload in a long URL, hidden from prying eyes.  For example, on some services it’s possible to create arbitrary URLs with invalid hosts and parameters so long as those URLs are syntactically correct.  Meaning, if I create a URL https://www.xyz.com/def, some shortening services do not check that xyz.com is a valid host.  Even if the host is valid, URI parameters can be crafted that legitimate hosts ignore entirely, like the following: https://www.xyz.com/def?a=b,b=c.  Some servers like Blogger ignore superfluous parameters like a=b,b=c in the request if you pass them.  Attackers can create any URL they want.  I used the following in a quick test,

http://www.xx100kaass.com/0 (the zero repeated 10,000 times, for a ~10KB URL)

I created a bogus URL with a 10KB URI that consisted of a slash (/) followed by 10,000 zeros and was successful.  Attackers can store payloads in these bogus URLs for a variety of purposes.  Outside of validating the syntax and host, shortening services have no way of knowing whether these URIs are valid and, in their defense, there’s probably not a good way for them to validate.  Therefore, they must store the entire long URL.  This means an attacker can use URL shortening services to hide small chunks of arbitrary data for nefarious purposes like command and control for bot networks, torrent information, etc.  URL shortening sites undoubtedly employ security intrusion and content controls.  There are likely limits on URL size or the number of URLs per second they will accept; I’m not sure what they are, but they likely vary between shortening services.
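A minimal sketch of the idea.  The payload, host, and encoding below are all hypothetical, and real services may enforce length or content limits:

import base64

# Smuggle arbitrary attacker data inside a syntactically valid URL.
payload = b"c2:connect 203.0.113.7:443"   # hypothetical bot instruction
encoded = base64.urlsafe_b64encode(payload).decode().rstrip("=")
long_url = "https://www.example-bogus-host.com/" + encoded
print(long_url)
# A shortener that validates only syntax stores the whole URL; a bot
# can later expand the short link and decode the path segment.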

5) Multiple indirection
Some of the URL shortening services will not accept their own URLs as a long URL, but at least a few of them will accept shortened URLs from other services.  Therefore it’s possible to create multiple levels of indirection: short URLs referring to other short URLs.  How many levels can be created?  I’m not sure.  Browsers do impose a practical limit on redirect chains (commonly around 20 hops), though I have not tested it here.  I’m not sure if this serves a practical purpose yet, but at the very least it complicates organizational IT forensics.
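A minimal sketch of unwinding such a chain hop by hop, with redirects disabled so each intermediate URL is recorded.  This is an illustrative approach, not a tested forensics tool, and the hop cap mirrors typical browser behavior:

import urllib.request, urllib.error

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None   # surface 3xx responses instead of following them

def redirect_chain(url, max_hops=20):
    # Record every intermediate Location header along the chain.
    opener = urllib.request.build_opener(NoRedirect)
    chain = [url]
    for _ in range(max_hops):
        try:
            opener.open(url, timeout=10)
            break                           # 2xx response: chain ends
        except urllib.error.HTTPError as e:
            url = e.headers.get("Location")
            if not url:
                break
            chain.append(url)
    return chain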

6) Infinite loops
I was wondering if I could create two or more short URLs referring to each other.  Getting this to work requires an understanding of the shortening algorithm, such that the attacker can determine the shortened URI before it’s created, or perhaps a shortening service that allows changing the long URL after the short URL has been created.  This would allow an attacker to create short URLs that either directly or indirectly refer to each other.  I didn’t spend much time looking at this.  I tried to find some code online to see if there were any standard algorithms; I was thinking everyone may be leveraging an open source project so I could determine the algorithm easily.  Nothing was obvious, and I was not successful.  Perhaps someone else may want to take this up.  Browsers cap the number of redirects they will follow, which should turn such a loop into a "too many redirects" error rather than a hang or crash, and even if a hang were possible I’m not sure it has any practical value for attackers anyway.
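A minimal sketch of loop detection over a recorded redirect chain, essentially what a careful client does before hitting its hop limit.  The URLs below are hypothetical:

def find_loop(chain):
    # Return the first URL seen twice, i.e., the start of the cycle.
    seen = set()
    for url in chain:
        if url in seen:
            return url
        seen.add(url)
    return None

print(find_loop(["sho.rt/a", "sho.rt/b", "sho.rt/a"]))  # sho.rt/a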

7) XSS in URLs
I tried to see if I could get JavaScript inside a long URL and then shorten it to bypass browser security controls.  No success.  I tried using the javascript: URI scheme.  Some URL shorteners allowed it, but at least Chrome and Safari were smart enough to treat the redirect target as an ordinary web navigation regardless of the scheme I provided.  I also tried the data: scheme with no positive result; data: URIs work when pasted directly into the browser URL bar, but not as a redirect target, where they were again handled like an ordinary navigation regardless of the specified scheme.  Browsers are a battle-hardened environment, which is good news for us.

8) Shortener Service Unavailability
If the shortening service goes away, temporarily or permanently, it impacts every service where its shortened links are embedded.  What happens to Twitter if bit.ly goes away?  Not good.  DDoSing bit.ly is essentially the same as DDoSing Twitter, since a good part of Twitter’s content would be unreachable for users if bit.ly cannot respond.  Bit.do maintains a big list of shortening services[2].  Bit.do also tracks shortening services that are no longer available, and there are many more of those than I was aware of.  If shortening is part of your business strategy, or your users are using it, you may want to consider all your available options and weigh the risks: reliable services, hosting your own, etc.

Keep in mind my tests were not comprehensive or exhaustive; I didn’t want to do anything that could be considered offensive.  So if I noted a test was successful, it may not be successful across all services.  Conversely, if a test was unsuccessful, it may not be unsuccessful everywhere.  An important consideration: while there are some problems with URL shorteners, there’s not a good immediate option for avoiding them.  If you’re going to participate in social media you’re going to be using short URLs, like it or not, until improvements are made.

[1] Gone in Six Characters: Short URLs Considered Harmful for Cloud Services
[2] Bit.do list of URL shorteners
[3] Spammers Abusing Trust in US .Gov Domains

* Landmines image from World Nomads

FBI vs. Apple iPhone, The Real Story

Are you confused over the battle between the FBI and Apple over the iPhone?  On the surface it seems un-American that Apple does not wish to provide[2] the FBI information it requires for a terrorism investigation.  A deeper review shows the FBI’s interests are broader than a terrorist’s iPhone.  The FBI and the court[1] are demanding Apple weaken strong iPhone security features used on all iPhones.  Let’s review the court and FBI demands.

“…bypass or disable the auto-erase function…”: this is a security feature on the iPhone that wipes data if there are too many failed password/PIN attempts to unlock the phone.  It’s disabled by default and optionally enabled by iPhone owners.

“…enable FBI to submit passcodes to the SUBJECT DEVICE for testing electronically…”: the FBI desires to attempt many passcodes/PINs rapidly to unlock a device.  In security parlance this is known as a brute-force attack.  The FBI wants to be able to brute force iPhones.
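Some back-of-envelope arithmetic shows why electronic submission matters.  The guess rate below is an assumption for illustration:

# A 4-digit PIN has only 10,000 combinations.  Without rate limits,
# even a modest assumed 10 guesses per second exhausts the keyspace
# in under 17 minutes.
keyspace = 10 ** 4
rate = 10                                  # guesses/sec, assumed
print(f"worst case: {keyspace / rate / 60:.1f} minutes")   # 16.7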

“…device will not purposefully introduce any additional delay between passcode attempts…”: this security feature introduces an increasing delay between successive failed passcode attempts, adding a growing penalty to the attacker for bad passcode/PIN guesses.  This is another Apple security feature designed to prevent brute-force attacks.  The FBI wants it removed.
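An illustrative sketch of the escalating-delay idea.  This is not Apple’s actual schedule; the numbers below are assumptions:

# Each failed attempt grows the wait before the next guess is allowed,
# making sustained brute force impractically slow.
DELAYS = [0, 0, 0, 0, 60, 300, 900, 3600]   # seconds, assumed schedule

def delay_after(failed_attempts):
    return DELAYS[min(failed_attempts, len(DELAYS) - 1)]

for n in range(1, 9):
    print(f"after {n} failed attempts: wait {delay_after(n)}s")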

“…SIF [Software Image File] will load and run from the Random Access Memory (“RAM”) and will not modify the iOS on the actual phone…”: this change helps the FBI avoid detection of its iPhone monitoring activities while preventing unintentional tampering with forensic evidence that may be used in a trial.

If the FBI had requested only the information on the terrorist’s phone, their motives would appear more credible.  Instead they requested that security features, used across all iPhones, be purposefully weakened.

The order includes provisions to limit or lock the request to only the SUBJECT DEVICE.  On the surface it appears as though this demand is applicable to only a single named phone used by terrorists.  Weakening security on a single iPhone is the government’s method of eating an elephant one piece at a time.  Initially the FBI compels Apple to make code changes supporting its agenda.  As time passes, the FBI along with other government agencies will make increasingly more demands that use the previous assistance as a leverage point, opening a Pandora’s box.  The public can only assume this court order is the FBI’s attempt to gauge the tech industry’s reaction for future information requests and continue its crusade for security backdoors.

[1] California District Court Order compelling Apple to assist FBI
[2] A Message to Our Customers, letter from Apple to customers on security

Facebook Hijacks Your Camera Roll

I generally don’t comment much on privacy these days.  Why?  I’m equipped to fight security but privacy, sigh, that’s an entirely different battle.  However, I’m making an exception since Facebook’s new feature is a hot mess from a personal privacy perspective.  The new mobile application feature presents your private camera roll photos to you for, I presume, easy publishing.  There are a number of points that concern me.

1) Unauthorized processing of the mobile camera roll.  Let me explain: I did provide Facebook authorization to my camera roll, but that was so I could select and upload the individual photos I chose for publishing.  Processing the entire camera roll and offering up private photos without end-user consent is begging for an accident.  The application could have a bug, or the end-user could click the wrong button.  People take a lot of photos, and there’s no good reason to assume they want recent photos published.

2) Mixing private photos with public photos.  Facebook provides a lock icon and notes only I can see my private photos.  This is terrible design!  Your private photos are a heartbeat from accidental publishing.  It’s already easy to publish photos from your camera roll using Facebook’s mobile application.  Whatever the reason for the new feature, it provides Facebook an opportunity to data mine private offline camera rolls.  We need Apple to create better sandboxes for personal data and how applications can use it.  We also need all operating system vendors to provide better controls to increase transparency into applications running on their operating systems and platforms.  The all-or-none approach to our personal data (e.g., camera roll, contacts, etc.) is no longer good enough.  We need to design our application environments from the perspective that every application is hostile.

3) Increased potential for data leakage and exfiltration.  We have no idea how Facebook’s mobile application works.  It’s possible it could be holding images, reprocessed thumbnails, or similar in private caches.  Any vulnerability in the application (and every application has them) could lead to data leakage and exfiltration.  Without access to the closed source and testing, we don’t know if a vulnerability exists.  All we know for sure is that the risk of data leakage and exfiltration is greater with more data within the application’s tendrils.

4) Abused trust of Facebook’s end-user community.  Software vendors wield tremendous power.  In running their applications on our systems, we place our sensitive personal information under their care.  Most assume mobile application vendors handle personal information with care and more or less in accordance with end-user expectations.  There is no basis in fact for this belief.  End-users have been led to believe they must give up personal information for continued use of free software.  Perhaps end-users need to give up something, but there should be much more transparency around how our information is used.  Free software is no justification for betraying public trust.

I don’t keep much in the way of private information on my mobile, but it’s a matter of principle.  Facebook continually surprises me, and I’m betting others, with how it uses personal information.  I’m seriously considering deleting all my Facebook mobile applications until the privacy climate improves.  I will continue to use Facebook, but only through the web browser, and only where I can tightly control the diet of personal information I feed to the beast.

Forget Internet of Things, You Already Have Spies In Your Home!

First things first, what the hell is the Internet of Things (IoT)?  Very simply, the IoT movement intends to connect a wide variety of electronics, embedded devices, and sensors to the Internet.  As a practical example, some makers of city street lights have Internet-enabled their bulbs.  On the surface, Internet lightbulbs appear about as useful as Internet-connected refrigerators, but a distinct advantage is that these bulbs alert a central office when replacement is necessary.  In a city with hundreds or thousands of street lights, a proactive message about an inoperable light eliminates significant effort driving around to check bulbs.  Even in the mundane case of the refrigerator, if Internet enabled, new water filters could be ordered before needed, saving homeowners some trouble.

Popcorn Time is Back but How Long?

Popcorn Time is a streaming movie player similar to Netflix and Vudu.  Like its big brothers, Popcorn Time is easy to use, but unlike its big brothers, it’s free.  I covered Popcorn Time’s run-in with the movie industry in two posts last year.  Apparently Popcorn Time is back for more bludgeoning.
Previous Popcorn Time Posts