Django performance testing – a real world example
About a week ago Andrew and I launched a new Django-powered site called Hey! Wall. It’s a social site along the lines of “the wall” on social networks and gives groups of friends a place to leave messages, share photos, videos and links.
We wanted to gauge performance and try some server config and code changes to see what steps we could take to improve it. We tested using httperf and doubled performance by making some optimisations.
Server and client
The server is a Xen VPS from Slicehost with 256MB RAM running Debian Etch. It is located in the US Midwest.
For testing, the client is a Xen VPS from Xtraordinary Hosting, located in the UK. Our normal Internet access is via ADSL which makes it difficult to make enough requests to the server. Using a well-connected VPS as the client means we can really hammer the server.
Server spec caveats
It’s hard to say exactly what the server specs are. The VPS has 256MB RAM and is hosted alongside similar VPSes, probably on a quad-core server with 16GB RAM. That’s a maximum of 64 VPSes on the physical server, assuming it is full of 256MB slices. If the four processors are 2.4GHz, that’s 9.6GHz in total; divided by 64, that gives each VPS a minimum of 150MHz of CPU.
On a Xen VPS, you get a fixed allocation of memory and CPU without contention, but usually any spare CPU on the machine can be used as well. If other VPSes on the same box are idle, your VPS can make use of more of the CPU. This probably means we had more than the minimum CPU during testing, and perhaps more for some tests than for others.
Measuring performance with httperf
There are various web performance testing tools around including ab (from Apache), Flood and httperf. We went with httperf for no particular reason.
An httperf command looks something like:
httperf --hog --server=example.com --uri=/ --timeout=10 --num-conns=200 --rate=5
In this example, we’re requesting http://example.com/ 200 times, at a rate of 5 requests per second.
Testing plan
Some tools support sessions and try to emulate users performing tasks on your site. We went with a simple brute-force test to get an idea of how many requests per second the site could handle.
The basic approach is to make a number of requests and see how the server responds: a status 200 is good, a status 500 is bad. Increase the rate (the number of requests made per second) and try again. When it starts returning lots of 500s, you’ve reached a limit.
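A few lines of Python can automate the rate stepping. Something like this sketch, where the server name and the rates tried are placeholders, runs httperf at each rate and pulls out its “Reply status” summary line:
# step_rate.py -- sketch: step up the httperf request rate and report
# how many responses came back as 2xx versus 5xx.
# Assumes httperf is on the PATH; server and rates are placeholders.
import os
def run_httperf(rate, num_conns=200):
    cmd = ('httperf --hog --server=example.com --uri=/ '
           '--timeout=10 --num-conns=%d --rate=%d' % (num_conns, rate))
    output = os.popen(cmd).read()
    for line in output.splitlines():
        # httperf prints e.g. "Reply status: 1xx=0 2xx=200 3xx=0 4xx=0 5xx=0"
        if line.startswith('Reply status:'):
            return line
    return 'no reply summary found'
for rate in (5, 10, 20, 40, 80):
    print('rate=%d -> %s' % (rate, run_httperf(rate)))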
Monitoring server resources
The other side is knowing what the server is doing in terms of memory and CPU use. To track this, we run top and log the output to a file for later review. The top command is something like:
top -b -d 3 -U www-data > top.txt
In this example we’re logging information on processes running as user www-data every three seconds. If you want to be more specific, instead of -U username you can use -p 1,2,3 where 1, 2 and 3 are the pids (process ids) of the processes you want to watch.
The web server is Lighttpd with Python 2.5 running as FastCGI processes. We didn’t log information on the database process (PostgreSQL), though that could be useful.
Another useful tool is vmstat, particularly the swap columns, which show how much memory is being swapped. Swapping means you don’t have enough memory and is a performance killer. To run vmstat repeatedly, specify the number of seconds between checks, e.g.
vmstat 2
Authenticated requests with httperf
httperf makes simple GET requests to a URL and downloads the HTML (but not any of the media). Requesting public/anonymous pages is easy, but what if you want a page that requires login?
httperf can pass request headers. Django authentication (from django.contrib.auth) uses sessions, which rely on a session id held in a cookie on the client. The client passes the cookie in a request header. You see where this is going.
Log in to the site and check your cookies. There should be one like sessionid=97d674a05b2614e98411553b28f909de. To pass this cookie using httperf, use the --add-header option, e.g.
httperf ... --add-header='Cookie: sessionid=97d674a05b2614e98411553b28f909de\n'
Note the \n after the header. If you miss it, you will probably get timeouts for every request.
Which pages to test
With this in mind we tested two pages on the site:
- home: anonymous request to the home page
- wall: authenticated request to a “wall” which contains content retrieved from the database
Practically static versus highly dynamic
The home page is essentially static for anonymous users and just renders a template without needing any data from the database.
The wall page is very dynamic, with the main data retrieved from the database. The template is rendered specifically for the user with dates set to the user’s timezone, “remove” links on certain items, etc. The particular wall we tested has about 50 items on it and before optimisation made about 80 database queries.
For the first test we had two FastCGI backends running, able to accept requests for Django.
Home: 175 req/s (i.e. requests per second).
Wall: 8 req/s.
Compressed content
The first config optimisation was to enable gzip compression of the output using Django’s GZipMiddleware. Performance improved slightly, though it didn’t make a huge difference. Worth doing for the bandwidth savings in any case.
Home: 200 req/s.
Wall: 8 req/s.
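For reference, enabling GZipMiddleware is just a settings change: add it near the top of the middleware list so it compresses the final response. Something like this, with the rest of the list purely illustrative:
# settings.py -- GZipMiddleware goes near the top so it runs last
# on the response and compresses the final output.
MIDDLEWARE_CLASSES = (
    'django.middleware.gzip.GZipMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.middleware.common.CommonMiddleware',
)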
More processes, shorter queues
Next we increased the number of FastCGI backends from two to five. This was an improvement: there were fewer 500 responses, as more of the requests could be handled by the extra backends.
Home: 200 req/s.
Wall: 11 req/s.
Mo processes, mo problems
The increase from two to five was good, so we tried increasing FastCGI backends to ten. Performance decreased significantly.
Checking with vmstat on the server, I could see it was swapping. Too many processes, each using memory for Python, had caused the VPS to run out of memory and start swapping to and from disk.
Home: 150 req/s.
Wall: 7 req/s.
At this point we set the FastCGI backends back down to five for further tests.
Profiling – where does the time go?
The wall page had disappointing performance, so we started to optimise. The first thing we did was profile the code to see where time was being spent.
Using some simple profiling middleware it was clear the time was being spent in database queries. The wall page had a lot of queries and they increased linearly with the number of items on the wall. On the test wall this caused around 80 queries. No wonder its performance was poor.
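Such middleware can be very small. A minimal sketch relies on django.db.connection.queries, which is only populated when DEBUG is True, so it’s strictly a development tool:
# Minimal profiling middleware (sketch) -- times each request and counts
# its database queries. Requires DEBUG = True for connection.queries.
import time
from django.db import connection
class ProfileMiddleware(object):
    def process_request(self, request):
        request._start_time = time.time()
        request._start_queries = len(connection.queries)
    def process_response(self, request, response):
        if hasattr(request, '_start_time'):
            elapsed = time.time() - request._start_time
            queries = len(connection.queries) - request._start_queries
            print('%s took %.3fs and %d queries' % (request.path, elapsed, queries))
        return response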
Optimise this
By optimising how media attached to items is handled we were able to drop one query per item straight away. This slightly reduced how long each request took and so increased the number of requests handled per second.
Wall: 12 req/s.
Another inefficiency was the way several filters were applied to the content of each item whenever the page was requested. We changed it so the html output from the filtered content was stored in the item, saving some processing each time the page was viewed. This gave another small increase.
Wall: 13 req/s.
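In code terms this sort of change is small. A hypothetical Item model, with made-up field names and filters, might render the html once at save time instead of on every view:
# models.py (sketch) -- store the filtered html when the item is saved.
# The real model and the filters applied on Hey! Wall differ.
from django.db import models
from django.utils.html import linebreaks, urlize
class Item(models.Model):
    content = models.TextField()
    content_html = models.TextField(editable=False, blank=True)
    def save(self, *args, **kwargs):
        # Apply the text-to-html filters once, at write time.
        self.content_html = linebreaks(urlize(self.content))
        super(Item, self).save(*args, **kwargs)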
Back to reducing database queries, we were able to eliminate one query per item by changing how user profiles were retrieved (used to show who posted the item to the wall). Another worthwhile increase came from this change.
Wall: 15 req/s.
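The profile change is the classic one-query-per-row trap: looking up each poster’s profile separately costs a query per item. Fetching all the profiles up front in a single query avoids that. A sketch, with hypothetical model names:
# Fetch every poster's profile in one extra query (sketch; UserProfile
# and the wall/item relations are hypothetical names).
items = list(wall.item_set.all())
user_ids = set(item.posted_by_id for item in items)
profiles = dict((p.user_id, p)
                for p in UserProfile.objects.filter(user__in=user_ids))
for item in items:
    item.posted_by_profile = profiles.get(item.posted_by_id)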
The final optimisation for this round of testing was to further reduce the queries needed to retrieve media attached to items. Again, we shed some queries and slightly increased performance.
Wall: 17 req/s.
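The media queries shrink the same way: one query for all the media on the wall, grouped in Python, instead of one query per item. Continuing the sketch above, again with made-up names:
# One query for all media on the wall (sketch; Media is hypothetical).
media_by_item = {}
for media in Media.objects.filter(item__in=items):
    media_by_item.setdefault(media.item_id, []).append(media)
for item in items:
    item.media_list = media_by_item.get(item.id, [])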
Next step: caching
Having reduced queries as much as we could, the next step would be to do some caching. Retrieving cached data is usually much quicker than hitting the database, so we’d expect a good increase in performance.
Caching the output of complete pages is not useful because each page is heavily personalised to the user requesting it. It would only be a cache hit if the user requested the same page twice with nothing changing on it in the meantime.
Caching data such as lists of walls, items and users is more useful. The cached data could be used for multiple requests from a single user and shared to some degree across walls and different users. It’s not necessarily a huge win because each wall is likely to have a very small number of users, so the data would need to stay in cache long enough to be retrieved by others.
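With Django’s cache framework that might look something like this sketch, where the key format and the 60-second timeout are arbitrary choices:
# Cache the item list per wall (sketch; key and timeout are arbitrary).
from django.core.cache import cache
def get_wall_items(wall):
    key = 'wall-items-%d' % wall.id
    items = cache.get(key)
    if items is None:
        # Cache miss: hit the database and store the result.
        items = list(wall.item_set.all())
        cache.set(key, items, 60)
    return items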
Our simplistic httperf tests would be very misleading in this case. Each request is made as the same user, so cache hits would be practically 100% and performance would be great! This does not reflect real-world use of the site, so we’d need some better tests.
We haven’t made use of caching yet as the site can easily handle its current level of activity, but if Hey! Wall becomes popular, it will be our next step.
How many users is 17 req/s?
Serving 17 req/s still seems fairly low, but it would be interesting to know how this translates to actual users of the site. Obviously, this figure doesn’t include serving any media such as images, CSS and JavaScript files. Media files are relatively large but should be served fast as they are handled directly by Lighttpd (not Django) and have Expires headers to allow the client to cache them. Still, it’s some work the server would be doing in addition to what we measured with our tests.
It’s too early to tell what the common usage pattern would be, so I can only speculate. Allow me to do that!
I’ll assume the average user has access to three walls and checks each of them in turn, pausing for 10 or 20 seconds on each to read new comments and perhaps view some photos or open links. The user does this three times per day.
Looking specifically at the wall page and ignoring media, that means our user is making 9 requests per day for wall pages. Each user only makes one request at a time, so 17 users can be making a request in any given second. Within a minute, though, a user makes only three requests, so they occupy one of those 17 slots for just 3 seconds out of 60 (or 1 in 20).
If the distribution of user requests over time was perfectly balanced (hint: it won’t be), that means 340 users (17 * 20) could be using the site each minute. To continue with this unrealistic example, we could say there are 1440 minutes in a day and each user is on the site for three minutes per day, so the site could handle about 163,000 users. That would be very good for a $20/month VPS!
To rein in those numbers a bit, let’s say we handle 200 concurrent users per minute for 6 hours per day, 100 concurrent users for another 6 hours and 10 concurrent users for the remaining 12 hours. That’s 115,200 user-minutes, which at three minutes per user works out to around 38,000 users a day at our maximum load of 17 requests per second.
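If you want to play with these estimates, the arithmetic is easy to redo:
# Back-of-the-envelope capacity arithmetic from the estimates above.
req_per_sec = 17
concurrent_per_minute = req_per_sec * 20   # each user is active 3s in every 60
minutes_per_day = 1440
minutes_per_user = 3                       # time on site per user per day
print(concurrent_per_minute * minutes_per_day // minutes_per_user)   # 163200
user_minutes = 200 * 6 * 60 + 100 * 6 * 60 + 10 * 12 * 60            # 115200
print(user_minutes // minutes_per_user)                              # 38400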
I’m sure these numbers are somewhere between unrealistic and absurd. I’d be interested in comments on better ways to estimate or any real-world figures.
What we learned
To summarise:
- Testing the performance of your website may yield surprising results
- Having many database queries is bad for performance (duh)
- Caching works better for some types of site than others
- An inexpensive VPS may handle a lot more users than you’d think