OWASP security tips to defend your WordPress server.
DeepViolet (DV), an open source TLS/SSL DAST tool, has been updated to Beta 4. The major improvement for Beta 4 is the addition of an API so Java developers can implement DV features in their own projects.
Following is a summary of improvements for Beta 4.
- Added API support for those who want to use DeepViolet features in their own Java projects. See package com.mps.deepviolet.api
- Added samples package with sample code to demonstrate new API
- Refactored existing code for the command line support and UI to use the new API.
- 2 new command line options for debugging added, -d and -d2. -d turns on Java SSL/TLS debugging. -d2 assigns DV debug logging priority.
- Generated JavaDocs for Public APIs, see com.mps.deepviolet.docs
- javadoc.xml added to generate JavaDocs
- Support for dock icon on OSX for the UI
UPDATE, March 10, 2018: computing technology update, Google’s Bristlecone Quantum Processor.
Throughout the week of April 11th, 2016 Stanford held its annual affiliates Computer Forum on the campus. Participation in the forum is available to affiliate members. If you're interested in becoming an affiliate, send a note to me, see the About page. The Stanford security forum is a great place to unplug from day-to-day business and consider broader security challenges. The campus is beautiful and the projects are interesting. Attending the forum is always uplifting; I usually meet industry leaders I know and university staff, and I always learn something new from their research.
The forum is a week long, but attendees can sign up for individual days depending upon their interests. I attended 2 days of the week-long forum. Monday was dedicated to security. Thursday was dedicated to IoT. Research projects and themes change from year to year. This year cryptography and IoT were the broad themes. Full media from the week-long forum trails the post.
A Few Thoughts or Impressions
Following are some of the more important points I learned or points that captured my interests, not in any particular order of importance.
Why are quantum computers fast?
Traditional computers process information in bits. A bit is either "on" or "off", a 1 or a 0 respectively, but quantum computers also provide an amplitude property associated with each quantum bit. Remember Schrödinger's Cat? The cat was in a superposition of states where the cat is both alive and dead. Amplitude is the measurement of the superposition, the probability the cat is in one state or the other. A point of some utility is that amplitude is not a simple percentage but instead a complex number. The bit value combined with its amplitude forms a quantum computational unit known as the qubit.

In a traditional computer, increasing the number of bits increases the computer's word size and address space, which increases processing power polynomially. Increasing the number of qubits in a quantum computer increases processing power exponentially. Unlike a traditional computer, doubling the size of a quantum computer more than doubles its computational power. The increase in computational power is due to two major factors: 1) the unique superposition properties of the qubit, 2) higher dimensional algorithms applicable to specific problem spaces.

Quantum computers provide a different operational computing model when compared to a traditional computer. Rather than a serialized approach to computing using logic gates, lasers and radio waves interfere with each other and operate across many qubits simultaneously. In some qubits interference is constructive and in others interference is destructive. The design of the quantum computer and its algorithms seeks to reinforce constructive interference patterns that produce the desired results. I realize this answer is not satisfactory for everyone. Take a look at the presentation materials in the links at the end of the post. Also take a look at The Limits of Quantum article.
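The exponential growth described above can be made concrete with a tiny sketch (my own illustration, not from the talks): describing the state of an n-qubit register requires 2^n complex amplitudes, so every added qubit doubles the state a classical simulator must track.

```java
// Illustrative sketch: an n-qubit register is described by 2^n complex
// amplitudes, so adding one qubit doubles the state space, while adding
// one classical bit only adds one more value to a register.
public class QubitStateSpace {

    // Number of complex amplitudes needed to describe n qubits.
    static long amplitudes(int n) {
        return 1L << n; // 2^n
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 10, 30}) {
            System.out.println(n + " qubits -> " + amplitudes(n) + " amplitudes");
        }
        // A classical n-bit register holds exactly one of 2^n values at a time;
        // n qubits hold a superposition weighted across all 2^n values at once.
    }
}
```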
Quantum computers not likely to replace traditional computers
Quantum computers are fast at solving specific problems where an algorithm exists. Quantum computers are not necessarily fast at solving all problems. It's unlikely a quantum computer will replace your desktop; however, if a quantum computer could be made small enough it could make a nice addition to your desktop for specialized functions (e.g., 3D graphics).
Implications for web browser security
A quantum algorithm exists for factoring large numbers, Shor's Algorithm. Web browser security is predicated on the fact that numbers with large prime factors are difficult to factor. A quantum computer along with Shor's Algorithm can factor these numbers fast. However, the state of the art in quantum computers today is about 9 qubits. According to Professor Dan Boneh, we don't need to be concerned about quantum computers cracking browser security until quantum computers reach around 100 qubits.
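To see why factoring hardness matters, here is a toy sketch (my own, not from the forum): classically recovering p and q from a semiprime n = p*q takes trial division, which is trivial for a toy modulus but infeasible for the 2048-bit moduli browsers rely on. Shor's Algorithm on a large enough quantum computer is what would change that cost.

```java
// Hypothetical illustration: classical trial-division factoring of a toy
// RSA-style modulus. For real 2048-bit moduli this loop is infeasible,
// which is exactly the hardness assumption Shor's Algorithm breaks.
public class FactorDemo {

    // Returns the smallest prime factor of n, or -1 if n is prime.
    static long smallestFactor(long n) {
        for (long p = 2; p * p <= n; p++) {
            if (n % p == 0) return p;
        }
        return -1;
    }

    public static void main(String[] args) {
        long n = 3233; // toy modulus: 53 * 61
        long p = smallestFactor(n);
        System.out.println(n + " = " + p + " * " + (n / p));
    }
}
```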
Browser security in a post-quantum computing world
Professor Boneh elaborated that post-quantum encryption algorithms remain an area of interest. Algorithms that are useful in a post-quantum world favor smaller primes within higher dimensional number spaces (>1024). A research paper, Post-Quantum Key Exchange – A New Hope, provides details.
TLS-RAR for auditing/monitoring SSL/TLS connections
A new protocol has been developed to monitor SSL/TLS. TLS-RAR does not require terminating the SSL/TLS connection and establishing a new connection to the end-point. Instead, TLS-RAR works by dividing TLS connections into multiple epochs. As a new epoch is established between client and server, a new TLS session key is negotiated. Meanwhile, the TLS session keys for old epochs are provided to the observer, which may be an auditor or monitoring tool. In this way the observer has access to view old TLS epoch information. The observer cannot view or alter information from the current epoch. Data integrity and confidentiality between client and server are maintained. Some of the advantages: no changes to the client are required (no new roots to add), and current TLS/SSL libraries are supported. This means TLS-RAR is compatible with a host of IoT technologies and components already deployed.
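The epoch idea can be sketched in a few lines. This is only a conceptual illustration of the key-release rule as I understood it (class and method names are mine, not from the protocol specification): keys are released to the observer only for epochs that have already ended, never for the current one.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of TLS-RAR style key release (names are illustrative,
// not from the spec): each epoch gets its own session key, and the observer
// can only obtain keys for epochs that are already over.
public class EpochKeyRelease {

    private final Map<Integer, String> epochKeys = new HashMap<>();
    private int currentEpoch = 0;

    // Client and server rotate to a new epoch with a freshly negotiated key.
    void newEpoch(String sessionKey) {
        currentEpoch++;
        epochKeys.put(currentEpoch, sessionKey);
    }

    // The observer may only obtain keys for epochs that have ended.
    String keyForObserver(int epoch) {
        if (epoch >= currentEpoch) {
            throw new IllegalStateException("current epoch key is never released");
        }
        return epochKeys.get(epoch);
    }

    public static void main(String[] args) {
        EpochKeyRelease session = new EpochKeyRelease();
        session.newEpoch("key-epoch-1");
        session.newEpoch("key-epoch-2"); // epoch 1 is now over
        System.out.println("observer sees: " + session.keyForObserver(1));
    }
}
```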
Session Media from the Forum
The following links provide access to session materials from throughout the forum.
DeepViolet updated to Beta 2. A number of bugs have been fixed and new features added. The tool can be run from the command line or alternatively as a desktop GUI application. Refer to the GitHub DeepViolet documentation for more detail. Following is an overview of the command line options for quick reference.
Or alternatively use the desktop application.
Recent research was presented raising security and privacy concerns around URL shortening services like bit.ly, goo.gl, and others. The services are used to shorten lengthy URLs to more compact URLs suitable for online use. Smaller URLs also provide an ancillary benefit since they are easier to remember. My first impression of the recent research on URL shorteners was that it was specious, since URL shortening was never intended or designed as a security and privacy control from the start. Reading the research softened my initial opinion. The seeming randomness of these short URLs provides the public unfounded confidence in their utility for security. Specifically, the false idea that others will not discover the link since it appears secure – difficult to guess. Unfortunately, the part of the URI providing the identity for the long URL is as few as 6 characters for some shortening services, far too small a space to be cryptographically secure, and easily brute forceable by attackers, as demonstrated by researchers.
The research paper was not the first crack in the short URL armor. The following presents some concerns I gathered across different resources from other researchers. I also share some personal thoughts about short URL weaknesses that I have not noticed elsewhere. I don't stake any claim to these; I'm simply passing them along to raise awareness. I'm betting we have not seen the last of the security and privacy concerns with short URLs.
1) Short URLs not secure
As researchers mention, these links are not secure and are easily brute forced. This may or may not be a concern for you depending on how you use them.
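A quick back-of-the-envelope check shows why brute forcing is feasible. Assuming a 6-character token over the common base-62 alphabet (a-z, A-Z, 0-9), the whole keyspace is 62^6, about 5.7 x 10^10, which is tiny compared to cryptographic key spaces and enumerable by a distributed scanner.

```java
// Keyspace arithmetic for short URL tokens: a 6-character base-62 token
// gives 62^6 ≈ 5.7e10 possibilities, far too small to resist enumeration.
public class ShortUrlKeyspace {

    // alphabetSize^length possible tokens.
    static long keyspace(int alphabetSize, int length) {
        long total = 1;
        for (int i = 0; i < length; i++) total *= alphabetSize;
        return total;
    }

    public static void main(String[] args) {
        System.out.println("6-char base62 tokens: " + keyspace(62, 6));
        // Compare: a 128-bit key has 2^128 ≈ 3.4e38 possibilities.
    }
}
```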
2) Short URLs target host unknown until clicked
Phishing is a problem for everyone. Short URLs exacerbate an already bad email phishing problem. There are some services like checkshorturl.com where email users can unwind these URLs, but most people will never do this. People are trusting and verification takes extra work. Clicking a shortened URL is like hitchhiking in a stranger's car: you don't know where it's taking you.
3) Obfuscated redirects
Brian Krebs makes an interesting point: attackers can leverage an open redirect on a government host and create a short branded URL. The result is an authentic URL that looks like it navigates to a government web site but instead navigates to the attacker's malware site.
Becomes this branded URL (notice the .gov domain, ouch!)
The combination of an open redirect and short URL branding creates a situation of misplaced trust or a false sense of security. Users think clicking will take them to a government site when in fact it takes them to another site entirely. The moral of the tale: if you have any open redirects in your web site you're in trouble, but if you also use branded URL shorteners you're setting the public up for malware and phishing attacks.
4) Obfuscated payloads
A spin on Krebs' idea I considered is that any arbitrary payload can be saved in a long URL by attackers and hidden from prying eyes. For example, on some services it's possible to create arbitrary URLs with invalid hosts and parameters so long as those URLs are syntactically correct. Meaning, if I create a URL https://www.xyz.com/def, some shortening services do not check to ensure host xyz is a valid host. Even if the host is valid, URI parameters may be crafted that legitimate hosts ignore entirely, like the following: https://www.xyz.com/def?a=b,b=c. Some servers like Blogger ignore superfluous parameters like a=b,b=c in the request if you pass them. Attackers can create any URL they want. I used the following in a quick test,
http://www.xx100kaass.com/0 (x10,000 zeros, for a 10k URL)
I created a bogus URL with a 10KB URI that consisted of a slash (/) followed by 10,000 zeros, and it was accepted. Attackers can store payloads in these bogus URLs to use for a variety of purposes. Outside of validating the syntax and host, shortening services have no way of knowing if these URIs are valid and, in their defense, there's probably not a good way for them to validate. Therefore, they must store the entire long URL. This means an attacker can use URL shortening services to hide small chunks of arbitrary data for nefarious purposes like command and control for bot networks, torrent information, etc. URL shortening sites undoubtedly provide security intrusion and content controls. There are likely some limits on the size or number of URLs per second they will accept, etc. I'm not sure what they are, but it's likely they vary between shortening services.
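The trick above can be sketched in a few lines. This is only an illustration of the idea (the xyz.com host and the parameter name are made up, as in the examples earlier): pack arbitrary bytes into a syntactically valid long URL that a shortener would store without being able to validate.

```java
import java.util.Base64;

// Sketch of hiding an arbitrary payload inside a syntactically valid long
// URL. The host and parameter name are illustrative; a shortener cannot
// tell this apart from a legitimate long URL.
public class PayloadUrl {

    // Encode arbitrary bytes as a URL-safe parameter on a bogus host.
    static String buildUrl(byte[] payload) {
        String data = Base64.getUrlEncoder().withoutPadding().encodeToString(payload);
        return "https://www.xyz.com/def?a=" + data;
    }

    public static void main(String[] args) {
        String url = buildUrl("command-and-control blob".getBytes());
        System.out.println(url.length() + " chars: " + url);
    }
}
```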
5) Multiple indirection
Some of the URL shortening services will not accept their own URLs for a long URL, but at least a few of them will accept shortened URLs of other services. Therefore it's possible to create multiple levels of indirection: short URLs referring to other short URLs. How many levels can be created? I'm not sure. It seems like browsers must have some practical level of redirect control, but I have no idea. I'm not sure if this serves a practical purpose yet, but at the very least it complicates organizational IT forensics.
6) Infinite loops
I was wondering if I could create two or more short URLs referring to each other. To get this working requires an understanding of the shortening algorithm such that the attacker can determine the shortened URI before it's created. Or perhaps a shortening service that allows changing a long URL after the short URL has been created. This would allow an attacker to create short URLs that either directly or indirectly refer to each other. I didn't spend much time looking at this. I tried to find some code online to see if there were any standard algorithms. I was thinking everyone may be leveraging an open source project so I could determine the algorithm easily. Nothing was obvious; I was not successful. Perhaps someone else may want to take this up. I'm not sure if the browser is smart enough to detect these types of infinite redirects or not. If not, it seems plausible it could be used to hang or crash the browser. Even if possible, I'm not sure this has any practical value for attackers anyway.
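Whether or not a browser detects such loops, the defense is straightforward to sketch: follow the redirect chain, remember every URL seen, and stop the moment one repeats or a hop limit is hit. The in-memory map below stands in for real shortener lookups; the short hostnames are made up.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy resolver showing loop detection for chained short URLs: follow the
// chain, but stop as soon as a URL repeats or a hop limit is exceeded.
public class RedirectLoopDetector {

    static String resolve(Map<String, String> redirects, String start, int maxHops) {
        Set<String> seen = new HashSet<>();
        String current = start;
        for (int hop = 0; hop < maxHops; hop++) {
            if (!seen.add(current)) return "LOOP";
            String next = redirects.get(current);
            if (next == null) return current; // final destination reached
            current = next;
        }
        return "TOO_MANY_HOPS";
    }

    public static void main(String[] args) {
        Map<String, String> r = new HashMap<>();
        r.put("sho.rt/a", "sho.rt/b");
        r.put("sho.rt/b", "sho.rt/a"); // the two short URLs refer to each other
        System.out.println(resolve(r, "sho.rt/a", 10));
    }
}
```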
7) XSS in URLs
8) Shortener Service Unavailability
If the shortening service goes away temporarily or permanently, it impacts the services anywhere shortened links are embedded. What happens to Twitter if bit.ly goes away? Not good. DDOSing bit.ly is essentially the same as DDOSing Twitter, since the better part of Twitter's content would be unreachable for users if bit.ly cannot respond. Bit.do has a big list of shortening services. Bit.do also tracks shortening services no longer available, and there are many more of them than I was aware of. If shortening is part of your business strategy, or your users are using it, you may want to consider all your available options and weigh the risks: reliable services, hosting your own, etc.
Keep in mind my tests were not comprehensive or exhaustive. I didn't want to do anything that could be considered offensive. So if I noted a test was successful, it may not be successful across all services. Conversely, if a test was unsuccessful, it may not be unsuccessful everywhere. An important consideration: while there are some problems with URL shorteners, there's not a good immediate option for avoiding them. If you're going to participate in social media you're going to be using short URLs, like it or not, until improvements are made.
* Landmines image from World Nomads