Web App Security – an intro
In modern web applications there is an alphabet soup of acronyms to keep in mind when writing your code: SQL injection, XSS, XSRF, SSL, just to name the common ones. SQL injection attacks tend to make big news, but thanks to that publicity they are also the most commonly secured vulnerabilities. There is a wealth of documentation on preventing SQL injection, but significantly less on properly handling XSRF and XSS attacks. While these kinds of vulnerabilities can be spotted by an experienced developer looking carefully over the code, there are very few automated tools for the job. Tools like Nikto and Nessus are great at scanning the underlying web server platform (IIS, Apache, etc.), and in some cases can identify commonly known exploits, but they aren’t designed to scan a running web application for unique attack vectors.
According to OWASP, XSS is defined as:
Cross-site Scripting (XSS) attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user in the output it generates without validating or encoding it.
In other words, XSS attacks happen whenever a site displays un-sanitized user data directly. This is without question the most common type of attack on the internet. Any application which takes data from the user is potentially vulnerable to this class of attack. Most major sites have suffered from at least a limited XSS vulnerability at some point. While these vulnerabilities are extremely common, they aren’t easy to predict or find, and even solid tools for auditing your own applications have been hard to come by until recently.
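To make that concrete, here is a minimal sketch (in shell, since that’s what we’ll be using below) of the core defense: HTML-escaping user input before echoing it into a page. The function name and payload are illustrative, not from any particular framework.

```shell
# html_escape is an illustrative helper: it encodes the characters that let
# user input break out of an HTML context (escape & first, or it double-escapes)
html_escape() {
  sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' -e 's/"/\&quot;/g'
}

# a classic XSS probe, rendered harmless by escaping before output
printf '%s' '<script>alert(1)</script>' | html_escape
# emits: &lt;script&gt;alert(1)&lt;/script&gt; -- displayed as text, never executed
```

The key point is that the escaping happens at output time, in the HTML context, rather than trying to blacklist “dangerous” input on the way in.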
The other class of attacks I want to look at is the even less well known XSRF (sometimes listed as CSRF) vulnerability. Again, to OWASP for a definition:
CSRF is an attack which forces an end user to execute unwanted actions on a web application in which he/she is currently authenticated. With a little help of social engineering (like sending a link via email/chat), an attacker may force the users of a web application to execute actions of the attacker’s choosing. A successful CSRF exploit can compromise end user data and operation in case of normal user. If the targeted end user is the administrator account, this can compromise the entire web application.
Again, simplified: the idea is to pick a link like http://yourapp.com/site/delete?confirm=yes, target a user who you suspect is already logged into yourapp.com as an administrator, and find a method of getting that user to click the link. There are numerous methods for accomplishing this, which I won’t even begin to cover here. Done correctly, this causes the user to execute an action, with valid credentials, that they aren’t aware they are performing.
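The standard defense against this is a secret, per-session token that an attacker’s forged link cannot know. Here’s a minimal sketch in shell; the variable names are illustrative, and any real framework will manage this for you:

```shell
# generate a random anti-XSRF token when the session is created
csrf_token=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')

# embed the token in every form; a forged cross-site request can't include it.
# simulate checking a legitimate submission vs. a forged one:
legit_post="$csrf_token"   # a real form echoes the token back
forged_post=""             # the attacker's link has no way to supply it
[ "$legit_post" = "$csrf_token" ] && echo "legit request: accepted"
[ "$forged_post" = "$csrf_token" ] || echo "forged request: rejected"
```

Because the token is tied to the session and never appears in a bare URL the attacker can construct, the forged delete link above would be rejected.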
As you can see, these types of attacks are not specific to any particular web platform and are therefore potentially possible in all web applications. So now that you’ve heard the bad news, it’s time for some good news! A new tool has been developed that makes identifying these kinds of vulnerabilities easier: skipfish. I’ll let you read the description yourself, but in summary skipfish is capable of performing filename fuzzing attacks, analyzing your application and altering its dictionary based on keywords from your site, handling authentication cookies, and filling out and validating form data. That’s cool.
Here’s more good news: skipfish is entirely open source. Here’s the bad news: there are not (yet) any pre-compiled binaries or official Windows support. It should be possible to compile skipfish under Cygwin on Windows, but for the sake of this article we’re going to assume you have access to some sort of Debian-based distro (Ubuntu, Knoppix, BackTrack, etc.). Now, let’s get to it!
# grab the skipfish-1.32b.tgz release tarball from the project site first
sudo apt-get install libidn11-dev
tar zxvf skipfish-1.32b.tgz
cd skipfish-1.32b    # or wherever the tarball extracted
cp dictionaries/default.wl skipfish.wl
make && ./skipfish
That should fetch skipfish and its dependency (libidn), then compile and run it. Obviously we haven’t asked it to do much yet, so you shouldn’t see a lot of useful output at the end of this. Now it’s time to get to work! I’m using skipfish to test an application I’m currently developing, and I recommend you have a local application to test against as well, since it’s significantly (almost an order of magnitude) faster to test locally than against an internet-based site. All error reports posted from here on out relate to my application; yours will obviously show different data.
Testing with skipfish
We’ve got skipfish downloaded and installed, and we’ve picked the application to test; now it’s time to actually hit it and see what happens! My test application is available at http://localhost; substitute your URL where necessary. For starters let’s just hit the public-facing portion of our app. It’s also possible to provide skipfish cookie data for an authorized session and have it look at the internal pages of your app, which we’ll get to later.
./skipfish -o output -U -b i http://localhost
Now skipfish is off and running. Let’s look at the arguments: -o output tells skipfish to put the results into a directory named output; -U tells it to log any external URLs and emails it finds (these might be targets for further auditing); and -b i tells it to use a valid MSIE User-Agent string when making requests.
Depending on the speed of your test machine, the performance and size of your application, and probably a dozen other factors, it might take a few seconds, or several hours. Watch the dialog for a few minutes, gauge the amount of time you have, and then go get a soda, watch some TV, or whatever it is you do while waiting for things to finish. We’ll move on to the next step once this has finished.
My scan finished, and in record time (about half an hour; there are a lot of pages!). Skipfish has generated an awesome report on what it found and how it ranks the severity of those findings. To open it, browse to the output directory we specified and open the index.html file in either Firefox or IE (there is a known issue in WebKit browsers that makes opening heavily scripted local files difficult).
In my case it found nothing severe, but there was no shortage of interesting things to look at. Under each category it provides a link to the URL where the issue was found, as well as a “show trace” button that will display the HTTP request/response for that request. I’m not going to get into an analysis of the results in this article, as there is a large variety of potential outputs and they will vary greatly with the application being scanned. I’ll leave it as an exercise for the reader to analyze their individual results.
There is, though, a secret and amazingly powerful bit of data provided with each scan’s output. One of the most interesting aspects of skipfish is that it runs in a non-deterministic manner, meaning each unique run of skipfish can lead to a unique set of results. While this is great from an initial testing perspective, it makes it difficult to perform follow-up tests to confirm that issues have been fixed. Now, that secret bit of data? In the top right of each output page is a field labelled Random Seed. You can feed this value back into skipfish via the -q parameter to perform the exact same run again.
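For example, if your report shows a Random Seed of 0x79f5f061 (an illustrative value; copy the one from your own report header), the deterministic follow-up run looks like:

```shell
# repeat the earlier scan exactly, using the seed from the previous report
./skipfish -o output2 -U -b i -q 0x79f5f061 http://localhost
```

Run this after fixing an issue and the scan should exercise the same paths, making it easy to confirm the finding is gone.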
Now let’s take a look at giving it an authenticated session. For starters I’m going to log into my local app in Firefox and look at the cookies. Your application’s login cookies will most likely look vastly different from my own, but I’ve simulated those from my application below.
./skipfish -o authed -U -b i -C authed=true -C userid=12 -X action=logout -N http://localhost/admin
This time we specify a new output directory with -o authed, and two cookies with -C authed=true and -C userid=12; these need to be replaced with the cookies from your application, and there can be as many of them as necessary. We also specify a path to exclude with -X action=logout, which tells skipfish to ignore any URL containing action=logout and in this case prevents skipfish from being automatically logged out. Just to be doubly sure, we also specify -N, which tells skipfish to ignore any attempts to delete cookies.
Just like before, once this scan completes we open the output directory in Firefox to review the results. Luckily for me there are again no high-impact vulnerabilities to worry about, just some warnings and medium issues.
So there we have it, a brief run-through of a few of the stickier web app vulnerabilities, and an overview of a brand new tool to look for them! I haven’t used skipfish extensively yet, but it’s definitely a tool I plan to keep in my belt for application testing from here on out.