Django performance testing – a real world example

28 April 2008

About a week ago Andrew and I launched a new Django-powered site called Hey! Wall. It’s a social site along the lines of “the wall” on social networks and gives groups of friends a place to leave messages, share photos, videos and links.

We wanted to gauge performance and try some server config and code changes to see what steps we could take to improve it. We tested using httperf and doubled performance by making some optimisations.

Server and Client

The server is a Xen VPS from Slicehost with 256MB RAM running Debian Etch. It is located in the US Midwest.

For testing, the client is a Xen VPS from Xtraordinary Hosting, located in the UK. Our normal Internet access is via ADSL, which makes it difficult to generate enough requests to stress the server. Using a well-connected VPS as the client means we can really hammer it.

Server spec caveats

It’s hard to say exactly what the server specs are. The VPS has 256MB RAM and is hosted alongside similar VPSes, probably on a quad-core server with 16GB RAM. That’s a maximum of 64 VPSes on the physical server, assuming it is full of 256MB slices. If the four cores run at 2.4GHz, that’s 9.6GHz in total, which divided by 64 gives a guaranteed minimum of about 150MHz of CPU per slice.

On a Xen VPS, you get a fixed allocation of memory and a guaranteed share of CPU without contention, but any spare CPU on the machine can usually be used as well. If other VPSes on the same box are idle, your VPS can make use of more of the CPU. This probably means more CPU was available during testing, and perhaps more for some tests than for others.

Measuring performance with httperf

There are various web performance testing tools around including ab (from Apache), Flood and httperf. We went with httperf for no particular reason.

An httperf command looks something like:

httperf --hog --server=example.com --uri=/ --timeout=10 --num-conns=200 --rate=5

In this example, we’re requesting http://example.com/ 200 times in total, opening 5 new connections per second.

Testing Plan

Some tools support sessions and try to emulate users performing tasks on your site. We went with a simple brute-force test to get an idea of how many requests per second the site could handle.

The basic approach is to make a number of requests and see how the server responds: a status 200 is good, a status 500 is bad. Increase the rate (the number of requests made per second) and try again. When it starts returning lots of 500s, you’ve reached a limit.
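If you want to script the stepping, something along these lines works (a rough sketch; the rates, connection counts and URL are placeholders, and you still read httperf’s reply-status output yourself):

# Rough sketch: run httperf at increasing rates and watch the output
# for non-2xx replies and errors. All values here are placeholders.
import subprocess

for rate in (5, 10, 20, 50, 100, 200):
    print '=== rate: %d connections/second ===' % rate
    subprocess.call(['httperf', '--hog', '--server=example.com', '--uri=/',
                     '--timeout=10', '--num-conns=%d' % (rate * 20),
                     '--rate=%d' % rate])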

Monitoring server resources

The other side is knowing what the server is doing in terms of memory and CPU use. To track this, we run top and log the output to a file for later review. The top command is something like:

top -b -d 3 -U www-data > top.txt

In this example we’re logging information on processes running as user www-data every three seconds. If you want to be more specific, instead of -U username you can use -p 1,2,3, where 1, 2 and 3 are the pids (process ids) of the processes you want to watch.

The web server is Lighttpd with Python 2.5 running as FastCGI processes. We didn’t log information on the database process (PostgreSQL), though that could be useful.

Another useful tool is vmstat, particularly the swap columns (si and so), which show how much memory is being swapped in and out. Swapping means you don’t have enough memory and is a performance killer. To run vmstat repeatedly, specify the number of seconds between checks, e.g.

vmstat 2

Authenticated requests with httperf

httperf makes simple GET requests to a URL and downloads the HTML (but none of the media). Requesting public/anonymous pages is easy, but what if you want to test a page that requires login?

httperf can pass request headers. Django authentication (from django.contrib.auth) uses sessions which rely on a session id held in a cookie on the client. The client passes the cookie in a request header. You see where this is going.

Log in to the site and check your cookies. There should be one like sessionid=97d674a05b2614e98411553b28f909de. To pass this cookie using httperf, use the --add-header option. e.g.

httperf ... --add-header='Cookie: sessionid=97d674a05b2614e98411553b28f909de\n'

Note the \n after the header. If you miss it, you will probably get timeouts for every request.

Which pages to test

With this in mind we tested two pages on the site:

  1. home: anonymous request to the home page
  2. wall: authenticated request to a “wall” which contains content retrieved from the database

Practically static versus highly dynamic

The home page is essentially static for anonymous users and just renders a template without needing any data from the database.

The wall page is very dynamic, with the main data retrieved from the database. The template is rendered specifically for the user with dates set to the user’s timezone, “remove” links on certain items, etc. The particular wall we tested has about 50 items on it and before optimisation made about 80 database queries.

For the first test we had two FastCGI backend processes running, ready to accept requests for Django.

Home: 175 req/s (i.e. requests per second).
Wall: 8 req/s.

Compressed content

The first config optimisation was to enable gzip compression of the output using GZipMiddleware. Performance improved slightly, but not a huge difference. Worth doing for the bandwidth savings in any case.
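For reference, enabling it is just a settings change; a minimal sketch (the rest of the middleware list will differ from project to project):

# settings.py (sketch): GZipMiddleware compresses responses for clients
# that advertise gzip support in their Accept-Encoding header.
MIDDLEWARE_CLASSES = (
    'django.middleware.gzip.GZipMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
)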

Home: 200 req/s.
Wall: 8 req/s.

More processes, shorter queues

Next we increased the number of FastCGI backends from two to five. This was an improvement: there were fewer 500 responses because more of the requests could be handled by the extra backends.

Home: 200 req/s.
Wall: 11 req/s.

Mo processes, mo problems

The increase from two to five was good, so we tried increasing FastCGI backends to ten. Performance decreased significantly.

Checking with vmstat on the server, I could see it was swapping. Too many processes, each needing memory for its own Python interpreter, had caused the VPS to run out of memory and start swapping to and from disk.

Home: 150 req/s.
Wall: 7 req/s.

At this point we set the FastCGI backends back down to five for further tests.

Profiling – where does the time go

The wall page had disappointing performance, so we started to optimise. The first thing we did was profile the code to see where time was being spent.

Using some simple profiling middleware, it was clear that the time was being spent in database queries. The wall page ran a lot of queries, and the number grew linearly with the number of items on the wall; on the test wall that meant around 80 queries per request. No wonder its performance was poor.
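Something along these lines (a sketch using cProfile from the standard library, not the exact middleware we ran) is enough to see where a view spends its time:

# Sketch of profiling middleware using cProfile. Not thread-safe and not
# the exact code we used; only enable something like this while testing.
import cProfile
import pstats
import StringIO

class ProfileMiddleware(object):
    def process_view(self, request, view_func, view_args, view_kwargs):
        self._profiler = cProfile.Profile()
        # Run the view under the profiler and return its response.
        return self._profiler.runcall(view_func, request,
                                      *view_args, **view_kwargs)

    def process_response(self, request, response):
        if hasattr(self, '_profiler'):
            out = StringIO.StringIO()
            stats = pstats.Stats(self._profiler, stream=out)
            stats.sort_stats('cumulative').print_stats(20)
            print out.getvalue()  # or write it to a log file
        return response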

Optimise this

By optimising how media attached to items is handled, we were able to drop one query per item straight away. This slightly reduced how long each request took and so increased the number of requests handled per second.

Wall: 12 req/s.
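The right change depends on your models, but the general idea is to pull related data in with the initial query rather than once per item. For example, if each item had a forward foreign key to its media (hypothetical names, not the real Hey! Wall models), select_related() fetches both in one go:

# Hypothetical sketch: items and their media fetched in a single query.
# Without select_related(), accessing item.media in the template would
# cost one extra query per item.
from myapp.models import Item  # hypothetical app and model

def wall_items(wall):
    return Item.objects.filter(wall=wall).select_related()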

Another inefficiency was the way several filters were applied to the content of each item every time the page was requested. We changed it so the HTML output of the filtered content is stored on the item, saving that processing each time the page is viewed. This gave another small increase.

Wall: 13 req/s.
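In practice that means rendering the filters once when an item is saved and storing the result in an extra field, roughly like this (model and field names are made up, and the real filters will differ):

# Sketch: render the filtered HTML at save time instead of on every view.
from django.db import models
from django.utils.html import linebreaks, urlize

class Item(models.Model):
    body = models.TextField()
    body_html = models.TextField(editable=False, blank=True)

    def save(self, *args, **kwargs):
        # Stand-in for whatever filters the site applies to item content.
        self.body_html = linebreaks(urlize(self.body))
        super(Item, self).save(*args, **kwargs)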

Back to reducing database queries, we were able to eliminate one query per item by changing how user profiles were retrieved (used to show who posted the item to the wall). Another worthwhile increase came from this change.

Wall: 15 req/s.
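The exact change isn’t important, but the usual fix for this kind of per-item lookup is to fetch all the profiles in one query and match them up in Python. A sketch with made-up model and field names:

# Sketch: one query for all the profiles shown on a wall, instead of one
# query per item. UserProfile and posted_by are hypothetical names.
from myapp.models import UserProfile  # hypothetical profile model

def attach_profiles(items):
    user_ids = set(item.posted_by_id for item in items)
    profiles = UserProfile.objects.filter(user__id__in=user_ids)
    by_user = dict((p.user_id, p) for p in profiles)
    for item in items:
        item.poster_profile = by_user.get(item.posted_by_id)
    return items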

The final optimisation for this round of testing was to further reduce the queries needed to retrieve media attached to items. Again, we shed some queries and slightly increased performance.

Wall: 17 req/s.

Next step: caching

Having reduced the queries as much as we could, the next step would be to add some caching. Retrieving cached data is usually much quicker than hitting the database, so we’d expect a good increase in performance.

Caching the output of complete pages is not useful because each page is heavily personalised to the user requesting it. It would only be a cache hit if the user requested the same page twice with nothing changing on it in the meantime.

Caching data such as lists of walls, items and users is more useful. The cached data could be used for multiple requests from a single user and shared to some degree across walls and different users. It’s not necessarily a huge win because each wall is likely to have a very small number of users, so the data would need to stay in cache long enough to be retrieved by others.
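Django’s low-level cache API makes this kind of data caching straightforward. A sketch (the key format, timeout and query are placeholders):

# Sketch: cache a wall's items for a short time using Django's cache API.
# Anything that adds or removes an item would also need to delete the key.
from django.core.cache import cache
from myapp.models import Item  # hypothetical app and model

def cached_wall_items(wall):
    key = 'wall-items-%d' % wall.id
    items = cache.get(key)
    if items is None:
        items = list(Item.objects.filter(wall=wall).select_related())
        cache.set(key, items, 60)  # keep for 60 seconds
    return items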

Our simplistic httperf tests would be very misleading in this case. Each request is made as the same user so cache hits would be practically 100% and performance would be great! This does not reflect real-world use of the site, so we’d need some better tests.

We haven’t made use of caching yet as the site can easily handle its current level of activity, but if Hey! Wall becomes popular, it will be our next step.

How many users is 17 req/s?

Serving 17 req/s still seems fairly low, but it would be interesting to know how this translates to actual users of the site. Obviously, this figure doesn’t include serving any media such as images, CSS and JavaScript files. Media files are relatively large but should be served fast as they are handled directly by Lighttpd (not Django) and have Expires headers to allow the client to cache them. Still, it’s some work the server would be doing in addition to what we measured with our tests.

It’s too early to tell what the common usage pattern would be, so I can only speculate. Allow me to do that!

I’ll assume the average user has access to three walls and checks each of them in turn, pausing for 10 or 20 seconds on each to read new comments and perhaps view some photos or open links. The user does this three times per day.

Looking specifically at the wall page and ignoring media, that means each user makes 9 wall page requests per day. Each user only makes one request at a time, so at 17 req/s the server can be dealing with 17 users in any given second. Within a minute, though, a user only makes three requests, so they only occupy one of those 17 slots for about 3 seconds out of 60 (or 1 in 20).

If the distribution of user requests over time were perfectly balanced (hint: it won’t be), that means 340 users (17 * 20) could be using the site each minute. To continue with this unrealistic example, we could say there are 1440 minutes in a day and each user is on the site for three minutes per day, so the site could handle about 163,000 users. That would be very good for a $20/month VPS!
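Spelling the same back-of-the-envelope arithmetic out (the assumptions are exactly as above, nothing more rigorous):

# The same rough arithmetic as above, nothing more rigorous.
max_rate = 17                  # wall requests per second
busy_seconds_per_minute = 3    # each active user occupies ~3 seconds per minute
users_per_minute = max_rate * 60 / busy_seconds_per_minute    # 340
minutes_per_day = 24 * 60
minutes_on_site_per_user = 3
print users_per_minute * minutes_per_day / minutes_on_site_per_user  # 163200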

To rein in those numbers a bit, let’s say we handle 200 concurrent users per minute for 6 hours per day, 100 concurrent users for another 6 hours and 10 concurrent users for the remaining 12 hours. That’s still around 115,000 users the site could handle in a day, given the maximum load of 17 requests per second.

I’m sure these numbers are somewhere between unrealistic and absurd. I’d be interested in comments on better ways to estimate or any real-world figures.

What we learned

To summarise:

  1. Testing the performance of your website may yield surprising results
  2. Having many database queries is bad for performance (duh)
  3. Caching works better for some types of site than others
  4. An inexpensive VPS may handle a lot more users than you’d think
Filed under: Django — Scott @ 2:58 pm

Serving websites from svn checkout considered harmful

22 April 2008

Serving from a working copy

A simple way to update sites is to serve them from Subversion working copies. Check out the code on the server, develop and commit changes, then run svn update on the server when you’re ready to release.

Security concerns

There’s a potential security problem with this. Subversion keeps track of meta-data and original versions of files by storing them in .svn directories in the working copy. If your web server allows requests that include these .svn directories, anything within them could be served to whoever requests it.

Requests would look like:

http://example.com/stuff/.svn/entries
http://example.com/stuff/.svn/text-base/page.php.svn-base
http://example.com/stuff/.svn/text-base/settings.py.svn-base

The first one would reveal some meta-data about your project, such as file paths, repository URLs and usernames.

The second one may be interpreted as a PHP script, in which case there’s little risk. Or it may return the PHP source file, which is a much bigger risk.

The third one (assuming a Django project) should never work at all. Requests can only be served from files within the web server’s document root, and with Django the code itself doesn’t need to be there, only the media files do.

Alternatives

Instead of serving sites from a working copy, you can use svn export to get a “clean” copy of the site which does not include the .svn directories. If you svn export straight from the repository, you must export the complete site rather than just the changed files, which could mean transferring a lot more data.

However, you can svn export from a working copy on the server. It’s still a complete export, but you don’t have to trouble the repository, so it’s typically much quicker.

An alternative is to keep the working copy on the server but outside the web document root, update it there, then use rsync or some other file copying to update the “clean” copy in the web document root. In this case, only changed files are affected.
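As a rough sketch of that last approach (the paths are placeholders), the whole release step can be a couple of commands driven from a script:

# Rough sketch (placeholder paths): update a working copy kept outside the
# document root, then rsync only the changed files into the document root,
# leaving the .svn directories behind.
import subprocess

WORKING_COPY = '/home/deploy/sites/example/'  # placeholder
DOCROOT = '/var/www/example/'                 # placeholder

subprocess.check_call(['svn', 'update', WORKING_COPY])
subprocess.check_call(['rsync', '-a', '--delete', '--exclude=.svn',
                       WORKING_COPY, DOCROOT])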

Protection through web server config

If you do serve from working copies, you should configure the web server to block all requests which include .svn in the URL. Here’s how to do it for some popular web servers:

Apache

<LocationMatch ".*\.svn.*">
    Order allow,deny
    Deny from all
</LocationMatch>

Lighttpd

$HTTP["url"] =~ ".*\.svn.*" {
  url.access-deny = ("")
}

Nginx

Use the location directive, which must appear in the context of a server block:

server {
    location ~ \.svn { deny all; }
    ...
}
Filed under: Hosting,Security — Scott @ 9:48 pm