just a dude abiding

Using httpriot on iOS

One of the things that surprised me when I first started with iPhone development is the verbosity of the built-in classes for making HTTP requests (see: Using NSURLConnection). So I quickly started looking for a wrapper library that simplifies this process. I found two candidates: ASIHTTPRequest and httpriot. For no reason other than some familiarity with the Ruby library that inspired it, I chose httpriot.

It turned out to require a BIT more code than I had expected, so I’ve documented my setup, and hopefully some of my reasoning here. I’m probably wrong in some of this, but there’s no easier way to know than to share it with the world.

After making a few requests, I noticed some repeated code which seemed ripe for abstraction. I started my abstraction by creating a simple subclass of httpriot’s main class, HRRestModel, named RestRequest. From what I gathered, this is the recommended way of using httpriot to begin with.

What this allows me to do is specify some basic authorization parameters that I want to use for most every request. The other thing I found is that my actual View Controllers didn’t end up needing to use all of the different failure and success cases that the HRResponseDelegate provides. So I went ahead and created my own delegate protocol, RestRequestDelegate. This protocol requires only two methods: one for success, and one for failure.
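Roughly, the protocol and the RestRequest interface look like this (restRequestSuccess: matches the method used later in this post; restRequestFailed: and the exact signatures should be treated as illustrative rather than gospel):

#import "HRRestModel.h" // adjust to however httpriot is added to your project

@protocol RestRequestDelegate <NSObject>
- (void)restRequestSuccess:(id)results;
- (void)restRequestFailed:(NSError *)error;
@end

@interface RestRequest : HRRestModel
// convenience wrapper that also turns on the network activity indicator
+ (void)get:(NSString *)path withOptions:(NSDictionary *)options object:(id<RestRequestDelegate>)object;
// override points for subclasses
+ (void)processResult:(id)result object:(id<RestRequestDelegate>)object;
+ (void)processFailure:(NSError *)error object:(id<RestRequestDelegate>)object;
@end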

This is the class that shows the real duplication of effort required to use httpriot. Notice that we have to implement five different methods to handle all of the success/failure cases. Using this subclass method, we only have to implement two per request.
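Here’s a sketch of that base implementation, built around httpriot’s five HRResponseDelegate callbacks (the base URL is obviously a stand-in for your own):

#import <UIKit/UIKit.h>
#import "RestRequest.h"

@implementation RestRequest

+ (void)initialize {
    // +initialize also runs for each subclass, so every request class picks up these defaults
    [self setBaseURL:[NSURL URLWithString:@"http://example.com/api"]]; // your BaseURL here
    [self setDelegate:self];
}

+ (void)get:(NSString *)path withOptions:(NSDictionary *)options object:(id<RestRequestDelegate>)object {
    [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
    [self getPath:path withOptions:options object:object];
}

// Default implementations; subclasses override these to massage the data as needed.
+ (void)processResult:(id)result object:(id<RestRequestDelegate>)object {
    [object restRequestSuccess:result];
}

+ (void)processFailure:(NSError *)error object:(id<RestRequestDelegate>)object {
    [object restRequestFailed:error];
}

// The five HRResponseDelegate methods we'd otherwise repeat for every request:

+ (void)restConnection:(NSURLConnection *)connection didReceiveResponse:(NSHTTPURLResponse *)response object:(id)object {
    // nothing to do here in the common case
}

+ (void)restConnection:(NSURLConnection *)connection didReturnResource:(id)resource object:(id)object {
    [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
    [self processResult:resource object:object];
}

+ (void)restConnection:(NSURLConnection *)connection didReceiveError:(NSError *)error response:(NSHTTPURLResponse *)response object:(id)object {
    [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
    [self processFailure:error object:object];
}

+ (void)restConnection:(NSURLConnection *)connection didReceiveParseError:(NSError *)error responseBody:(NSString *)body object:(id)object {
    [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
    [self processFailure:error object:object];
}

+ (void)restConnection:(NSURLConnection *)connection didFailWithError:(NSError *)error object:(id)object {
    [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
    [self processFailure:error object:object];
}

@end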

Some notes on the actual implementation of RestRequest: You’ll want to set your own BaseURL. You can do this per request class or, as I do here, in the main base class. At a glance, the processResult and processFailure methods may seem superfluous, and as of right now, they are. But their purpose is to be overridden in subclasses to do any data processing or reorganization. Also note that I go ahead and enable/disable the network activity indicator in this base class. I decided that I wanted the indicator to show for all HTTP requests, so I just went ahead and handled it here.

Now on to a sample class that subclasses our new, simpler, RestRequest class. There’s not really much to explain in the interface, so I’ll show it together with the implementation.
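Both halves look something like this (GetWhateverRequest and the /whatever path are placeholder names):

@interface GetWhateverRequest : RestRequest
+ (void)fetch:(id<RestRequestDelegate>)object;
@end

@implementation GetWhateverRequest

+ (void)fetch:(id<RestRequestDelegate>)object {
    // no HTTP parameters needed in this example, hence the nil options
    [self get:@"/whatever" withOptions:nil object:object];
}

+ (void)processResult:(id)result object:(id<RestRequestDelegate>)object {
    // nothing to massage here; hand the result straight to the delegate
    [object restRequestSuccess:result];
}

+ (void)processFailure:(NSError *)error object:(id<RestRequestDelegate>)object {
    [object restRequestFailed:error];
}

@end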


The fetch method just takes in an object (which is our view controller in this case) and then calls the correct URL. It could also take in any HTTP parameters you need to pass, but for this example I’m not using any.

We then have a simple processResult and processFailure. Again, this is a simple example with no data processing to do, so we just call the delegate method restRequestSuccess and pass it our result. Simple as can be.

Now here is where we can finally see the real fruits of our labor. All of those subclasses, and delegates, and protocols, for…this. Simple, clean View controller code. Note that we call the fetch method of the GetWhateverRequest class, and pass in self as the acting delegate.
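Something like this (WhateverViewController and the “whatever” key are again placeholders):

@interface WhateverViewController : UIViewController <RestRequestDelegate> {
    IBOutlet UILabel *whateverLabel;
}
@end

@implementation WhateverViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    [GetWhateverRequest fetch:self]; // self is the acting delegate
}

- (void)restRequestSuccess:(id)results {
    whateverLabel.text = [results objectForKey:@"whatever"];
}

- (void)restRequestFailed:(NSError *)error {
    // in a real app, show a UIAlertView or otherwise tell the user
}

@end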

In our success method we take the results, look up a specific value based on a key, and set it into a label. On failure we’d probably want to show an alert or something.

By this point you’re probably thinking that this is a lot of work for a single HTTP request, and you’re right. Where I found this set of abstractions to make sense is when you have a handful of requests that may be called from several View Controllers, so it makes sense to push as much of the setup as possible into the request subclasses. Obviously if your app has a different usage pattern, this may not apply. Either way, using this particular set of abstractions, my View Controllers are almost entirely free of boilerplate code and my requests are highly reusable and portable.

Hopefully this serves as a nice, albeit brief, tutorial into httpriot.

iPhone Development Surprises

I’ve recently started working on a yet-to-be-announced iPhone application. This is my first serious foray into mobile development. I’ve written the occasional script for ASE or, way back in the day, for my Sharp Zaurus, but nothing serious for the new breed of smartphones. This post is a simple list of surprises that I’ve stumbled across thus far. I’m sure many of these are documented elsewhere, but I felt like documenting them all in one place.

The default buttons suck

Every app you’ve ever used on an iOS device uses very little of the default graphical resources. I fully expected things like custom backgrounds, icons and non-standard buttons. What I hadn’t realized is that the default buttons are hideous, so hideous in fact that they just aren’t used. At all. People have either written their own UIButton subclasses or used lots of custom button background images. It’s still surprising that Apple doesn’t provide more common styles by default. A minor problem, but certainly a big surprise when you first get going.

The network activity spinner is a lie

The network activity spinner in the status bar is not triggered by network activity. Each application must trigger the activity spinner as necessary. Apple’s HIG recommends that you only use it for long network requests. This is a great recommendation, but thanks to the mobile nature of iOS devices, there is no practical way to know how long any request is going to take. This means the only safe thing to do is assume that all network requests are potentially long, and manage the spinner for all your requests. It’s all of one line of code to start or stop the spinner, so it’s not a major investment, just something to be taken care of.
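For reference, the one line in question (YES to show the spinner, NO to hide it):

[UIApplication sharedApplication].networkActivityIndicatorVisible = YES;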

Everything blocks

Coming from a background in web development, I’m very used to the idea that everything is synchronous unless I go to great lengths to make it otherwise. Objective-C and the iOS SDK are built around delegation, which looks and feels a lot like asynchronous operation. With all of the async-looking method calls, it’s really easy to forget that you are still sharing a single thread with the UI. Luckily, it’s not an incredibly hard problem to solve when necessary, thanks to the pervasive use of delegation.

It’s only a few extra lines of code to turn a blocking delegate call into a non-blocking async operation (look into NSOperation and NSOperationQueue). The more difficult part is handling this sort of thing in the UI. Should I have a modal popup with a spinner? A progress bar? Leave the UI reactive and just update it as necessary? Apple leaves this entirely up to you to handle, which is probably for the best, but still leaves some work on your plate.
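As a minimal sketch (doExpensiveWork is a stand-in for whatever blocking call you need to move off the main thread):

// keep the queue around (an ivar works well) rather than creating one per request
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
NSInvocationOperation *operation = [[NSInvocationOperation alloc]
    initWithTarget:self
          selector:@selector(doExpensiveWork)
            object:nil];
[queue addOperation:operation];
[operation release]; // the queue retains the operation, so we can release our reference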

EXIF data? We don’t need no stinking EXIF data

This one is HUGE. So you want to write an application that allows users to upload photos via the camera or photo gallery. Excellent, me too! You want to collect some interesting usage statistics, or add an EXIF comment to the image for tracking purposes? Great idea! Too bad you can’t (easily). The only supported method for accessing a user’s photos, UIImagePickerController, returns a UIImage object that is completely devoid of metadata. Yes, that means all of the nice EXIF data (time, GPS location, flash, shutter settings) is gone. The best you could do is get an EXIF library and insert a few relevant headers yourself. This is NOT a technical problem, as there is code out there that can get the raw file itself, EXIF data and all. The problem is that it uses private APIs and is therefore verboten and will keep you out of the App Store. This has already caused me a ton of heartache, and I foresee a lot of extra code and workarounds to get functionality similar to what would already be provided in the EXIF headers. So it goes.

Web Security testing with skipfish

Web App Security – an intro

In modern web applications there is an alphabet soup of acronyms to keep in mind when writing your code: SQL injection, XSS, XSRF, SSL, just to name the common ones. SQL injection attacks tend to make big news, but thanks to that publicity they are also the most commonly secured vulnerabilities. There is tons of documentation on preventing SQL injection, but significantly less on properly handling XSRF and XSS attacks. While these kinds of vulnerabilities can be spotted by an experienced developer looking carefully over the code, there are very few automated tools for the job. Tools like Nikto and Nessus are great at scanning the underlying web server platform (IIS, Apache, etc.), and can in some cases identify commonly known exploits. But they aren’t designed to scan a running web application for unique attack vectors.

Some definitions

According to OWASP, XSS is defined as:

Cross-site Scripting (XSS) attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user in the output it generates without validating or encoding it.

In other words, XSS attacks happen whenever a site displays unsanitized data directly. This is without question the most common type of attack on the internet. Any application which takes data from the user is potentially vulnerable to this class of attack. Most major sites have suffered from at least a limited XSS vulnerability at some point. While they are extremely common, they aren’t easy to predict or find. Even finding solid tools for auditing your own applications has been difficult until recently.

The other class of attacks I want to look at are the even less well known XSRF (sometimes listed as CSRF) vulnerabilities. Again, to OWASP for a definition:

CSRF is an attack which forces an end user to execute unwanted actions on a web application in which he/she is currently authenticated. With a little help of social engineering (like sending a link via email/chat), an attacker may force the users of a web application to execute actions of the attacker’s choosing. A successful CSRF exploit can compromise end user data and operation in case of normal user. If the targeted end user is the administrator account, this can compromise the entire web application.

Again, simplified, the idea is to pick a fictional link like http://yourapp.com/site/delete?confirm=yes and find a method of getting a user, who you suspect is already logged into yourapp.com as an administrator, to click it. There are numerous methods for accomplishing this, which I won’t even begin to cover here. If done correctly, this will cause the user to execute an action, with valid credentials, that they are not aware they are performing.

As you can see, these types of attacks are not specific to any particular web platform and are therefore potentially possible in all web applications. So now that you’ve heard the bad news, it’s time to get to some good news! A new tool has been developed that makes identifying these kinds of vulnerabilities easier. That tool is called skipfish. I’ll let you read the description yourself, but in summary skipfish is a tool capable of doing filename fuzzing attacks, analyzing your application and altering its dictionary based on keywords from your site, handling authentication cookies, and filling out and validating form data. That’s cool.

Introducing skipfish

Here’s more good news: skipfish is entirely open source. Here’s the bad news: there are not (yet) pre-compiled binaries or official Windows support. It should be possible to compile skipfish under Cygwin on Windows, but for the sake of this article we’re going to assume you have access to some sort of Debian-based distro (Ubuntu, Knoppix, BackTrack, etc). Now, let’s get to it!

Installing skipfish

wget http://skipfish.googlecode.com/files/skipfish-1.32b.tgz
tar zxvf skipfish-1.32b.tgz
sudo apt-get install libidn11-dev
cd skipfish
make
cp dictionaries/default.wl skipfish.wl
./skipfish

That should download skipfish and its dependency (libidn), then compile and run it. Obviously we haven’t asked it to do much yet, so you shouldn’t see a lot of useful output at the end of this. Now it’s time to get to work! I’m using skipfish to test an application I’m currently developing. I recommend you have a local application to test against, as it’s significantly (almost an order of magnitude) faster to test locally than against an internet-based site. All error reports posted from here on out relate to my application; yours will obviously show different data.

Testing with skipfish

We’ve got skipfish downloaded and installed, and we’ve picked the application to test; now it’s time to actually hit it and see what happens! My test application is available at http://localhost; substitute your URL where necessary. For starters let’s just hit the public-facing portion of our app. It’s possible to provide skipfish cookie data for an authorized session and have it look at the internal pages of your app, which we’ll look at later.

./skipfish -o output -U -b i http://localhost

Now skipfish is off and running. Let’s look at the arguments: -o output tells skipfish to put the results into a directory named output; -U tells it to log any external URLs and emails found (these might be targets for further auditing); and -b i tells it to use a valid MSIE user agent string when making requests.

Depending on the speed of your test machine, the performance and size of your application, and probably a dozen other factors, it might take a few seconds or several hours. Watch the dialog for a few minutes, gauge the amount of time you have, and then go get a soda, watch some TV, or whatever it is you do while waiting for things to finish. We’ll move on to the next step once this has finished.

My scan finished, and in record time (about half an hour; there are a lot of pages!). Now, skipfish has generated us an awesome report on what it found, and how it ranks the severity of those findings. To open it, browse to the output directory we specified, and open the index.html file in either Firefox or IE (there is a known issue in WebKit browsers that makes opening heavily scripted local files difficult).

In my case it found nothing severe, but no shortage of interesting things to look at. Under each category it provides a link to the URL it found the issue on, as well as a “show trace” button that will provide the HTTP request/response for that request. I’m not going to get into an analysis of the results in this article, as there are a large variety of potential outputs and they will vary greatly with the application being scanned. I’ll leave it as an exercise for the reader to analyze their individual results.

There is, though, a secret and amazingly powerful bit of data provided with each scan’s output. One of the most interesting aspects of skipfish is that it runs in a non-deterministic manner. This means that each unique run of skipfish can lead to a unique set of results. While this is great from an initial testing perspective, it makes it difficult to perform follow-up tests to confirm that issues have been fixed. Now, that secret bit of data? In the top right of each output page is a field labeled Random Seed. You can feed this back into skipfish via the -q parameter to perform the exact same run again.
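For example (the seed below is made up; substitute the one from your own report):

./skipfish -o output2 -q 0x79f2a1c4 -U -b i http://localhost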

Now let’s take a look at giving it an authenticated session. For starters I’m going to log into my local app in Firefox and look at the cookies. Your application’s login cookies will most likely look vastly different from my own, but I’ve simulated those from my application below.

./skipfish -o authed -U -b i -C authed=true -C userid=12 -X action=logout -N http://localhost/admin

This time we specify a new output directory with -o authed, and two cookies with -C authed=true and -C userid=12; these need to be replaced with the cookies from your application, and there can be as many of them as necessary. We also specify a path to exclude, -X action=logout, which tells skipfish to ignore any URL that contains action=logout and in this case prevents skipfish from automatically being logged out. Just to be doubly sure, we also specify -N, which tells skipfish to ignore any attempts to delete cookies.

Just like before, once this scan completes we need to open our output directory in Firefox to review the results. Lucky for me there are again no high-impact vulnerabilities to worry about, just some warnings and medium issues.

Conclusion

So there we have it, a brief run-through of a few of the stickier web app vulnerabilities, and an overview of a brand new tool to look for them! I haven’t used skipfish extensively yet, but it’s definitely a tool I plan to keep in my belt for application testing from here on out.

Jekyll setup and modifications

Now that we’ve established that this blog is now running on Jekyll, let’s get down to the business of looking at the setup of Jekyll, and the customizations that I’ve made.

For starters I took an existing published setup, and used it as my base instead of a vanilla Jekyll install. The particular setup I used was iruel.net, by Bruno Antunes. You can check out his repo for the list of changes over vanilla Jekyll, but they’re fairly basic. The majority of his enhancements revolve around Rakefile tasks to fit his deployment system. I wanted a different setup, so I ended up removing most of it.

The next step was to remove the files and posts that were already there (ignore the adds for now). After that, it’s time to get to work. I didn’t want to just start from scratch, I wanted to import my existing blog posts first. So I went into the Jekyll repo to look at the converter options. Jekyll currently supports importing from CSV, Mephisto, MT, Textpattern, Typo and WordPress. That’s quite a few options, and certainly should cover a ton of folks, but not me. As previously mentioned, I need to be able to import from Google Blogger, which means I need to get busy. Blogger luckily provides an XML-based export file of an entire blog. I just needed to import posts, so I ignored all of the stored settings and comments. I spent a couple of hours reading through the export file and hacking up some really simple code to handle the import.

Currently the code supports importing all blog posts, their published date, permalink (which is stripped to just the path), and the post’s tags. All of this is able to be imported cleanly into Jekyll, which is awesome. Next I decided to make a few tweaks to my workflow.

I’m a slow writer. Really slow. I’ve rewritten this post at least twice by the time you read it. If you look into my repo, you’ll probably see that it’s been in a draft status for longer than I’d care to know. Because of that I need to be able to easily manage draft posts. I decided that the easiest way to handle this would be through a few simple Rake tasks. I went ahead and modified the existing Rakefile to add two tasks (drafts and publish), and to modify the ‘post’ task. First I modified the ‘post’ task to create the post with ‘published’ set to ‘false’, which prevents Jekyll from generating an HTML file for that particular Textile file. Next, the ‘publish’ task goes in, removes this flag, and changes the file name to the current date. This way the post’s date is current. The ‘drafts’ task is really simple: it just lists all the posts that are still unpublished.

The last, but to me most important, part of my customization is my deployment process. To deploy my blog, I simply have to push to my GitHub repository, nothing more. In GitHub I set up a post-receive hook that calls a script on my site, which has permission to run a Ruby deployment script server-side. That script uses my deployment script DSL to move the current site to a backup folder (just in case), create a new directory, clone the GitHub repository, and then run Jekyll. This sounds complicated, but with the deployscript DSL it’s only a few lines of code.

This means in one evening I was able to set up Jekyll, customize it, write an importer for my previous blog, import it, and deploy it all to my server. Not bad, not bad at all.

Now, with more Jekyll

Apparently Google is abandoning FTP support for Blogger blogs next month. As I’m sure none of you were aware, this blog was hosted via that service. Instead of waiting until the service went away, cursing loudly, and flailing my way onto a new blog platform, I got proactive and made the move over a month ahead of time.

Being a geek of epic proportions, I couldn’t just use WordPress or something similar. No, I needed to find something esoteric, complex, hackerish. And I found exactly what I was looking for in Jekyll. It’s a Ruby-based static blog engine with a strong preference for being under version control. This means the same tools I use to code are the same ones I use to blog. Kick. Ass.

Coming up next will be a post detailing how I’ve got this setup (hint, checkout my github projects)!