A fun bug

Posted on 28 March 2014 in Programming, PythonAnywhere

While I'm plugging the memory leaks in my epoll-based C reverse proxy, I thought I might share an interesting bug we found today on PythonAnywhere. The following is the bug report I posted to our forums.

So, here's what was happening.

Each web app someone has on PythonAnywhere runs on a backend server. We have a cluster of these backends, and the cluster is behind a loadbalancer. Every backend server in the cluster is capable of running any web app; the loadbalancer's job is to spread things out between them so that each one at any given time is only running an appropriately-sized subset of them. It has a list of backends, which we can update in realtime as we add or remove backends to scale up or down, and it looks at incoming requests and uses the domain name to work out which backend to route a request to.

That's all pretty simple. The twist comes when we add the code that reloads web apps to the mix.

Reloading a PythonAnywhere web app is simply a case of making an authenticated request to a specific URL. For example, right now (and this might change, it's not an official API, so don't do anything that relies on it) to reload www.foo.com owned by user fred, you'd hit the URL http://www.pythonanywhere.com/user/fred/webapps/www.foo.com/reload

Now, the PythonAnywhere website itself is just another web app running on one of the backends (a bit recursive, I know). So most requests to it are routed based on the normal loadbalancing algorithm. But calls specifically to that "reload" URL need to be routed differently -- they need to go to the specific backend that is running the site that needs to be reloaded. So, for that URL, and that URL only, the loadbalancer chooses the backend using the domain name embedded in the URL's path (the second-to-last path segment) instead of the hostname at the start of the URL.

So, what happened here? Well, the clue was in the usernames of the people who were affected by the problem -- IronHand and JoeButy. Both of you have mixed-case usernames. And your web apps are ironhand.pythonanywhere.com and joebuty.pythonanywhere.com.

But the code on the "Web" tab that specifies the URL for reloading the selected domain specifies it using your mixed-case usernames -- that is, it specifies that the reload calls should go to the URL for IronHand.pythonanywhere.com or JoeButy.pythonanywhere.com.

And you can probably guess what the problem was -- the backend selection code was case-sensitive. So requests to your web apps were going to one backend, but reload messages were going to a different one. The fix I just pushed made the backend selection code case-insensitive, as it should have been.
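The post doesn't show the real selection code, so here's a minimal hedged sketch of the shape of the fix, assuming the loadbalancer picks a backend by hashing the domain name (the hash and the function name are invented for illustration):

    #include <ctype.h>
    #include <stddef.h>

    /* Hypothetical sketch only -- not PythonAnywhere's actual routing code.
     * Picks a backend index from a domain name, lowercasing it first so that
     * IronHand.pythonanywhere.com and ironhand.pythonanywhere.com agree. */
    static size_t pick_backend(const char *domain, size_t num_backends)
    {
        unsigned long hash = 5381;  /* djb2-style string hash */
        for (const char *p = domain; *p != '\0'; p++) {
            hash = hash * 33 + (unsigned char) tolower((unsigned char) *p);
        }
        return hash % num_backends;
    }

Without the tolower, the two spellings only land on the same backend by luck -- and the more backends there are, the less likely that luck becomes, which is exactly what the next couple of paragraphs describe.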

The remaining question -- why did this suddenly crop up today? My best guess is that it's been there for a while, but it was significantly less likely to happen, and so it was written off as a glitch when it happened in the past.

The reason it's become more common is that we actually more than doubled the number of backends yesterday. Because of the way the backend selection code works, when there's a relatively small number of backends it's actually quite likely that the lower-case version of your domain will, by chance, route to the same backend as the mixed-case one. But the doubling of the number of servers changed that, and suddenly the probability that they'd route differently went up drastically.

Why did we double the number of servers? Previously, backends were m1.xlarge AWS instances. We decided that it would be better to have a larger number of smaller backends, so that problems on one server impacted a smaller number of people. So we changed our system to use m1.large instances instead, spun up slightly more than twice as many backend servers, and switched the loadbalancer across.

So, there you have it. I hope it was as interesting to read about as it was to figure out :-)

...just resting...

Posted on 12 December 2013 in Linux, Programming

Just a quick note to say that I'm still here! Using rsp as a front-end for this site has usefully shown up some weird bugs, and I'm tracking them down. I'll do a new post about it when there's something useful to say...

Writing a reverse proxy/loadbalancer from the ground up in C, part 4: Dealing with slow writes to the network

Posted on 10 October 2013 in Linux, Programming

This is the fourth step along my road to building a simple C-based reverse proxy/loadbalancer, rsp, so that I can understand how nginx/OpenResty works -- more background here. Here are links to the first part, where I showed the basic networking code required to write a proxy that could handle one incoming connection at a time and connect it with a single backend, to the second part, where I added the code to handle multiple connections by using epoll, and to the third part, where I started using Lua to configure the proxy.

This post was unplanned; it shows how I fixed a bug that I discovered when I first tried to use rsp as a reverse proxy in front of this blog. The bug is now fixed, and you're reading this via rsp. The problem was that when the connection from the browser to the proxy was slower than the connection from the proxy to the backend (that is, most of the time), then when new data arrived from the backend and we tried to send it on to the client, we sometimes got an error telling us that the client was not ready. That error was being ignored, so a block of data would be skipped and the pages you got back would be missing chunks. There's more about the bug here.

[ Read more ]

A brief sidetrack: Varnish

Posted on 2 October 2013 in Linux, Programming

In order to use this blog as a decent real-world test of rsp, I figured that I should make it as fast as possible. The quickest way to do that was to install Varnish, which is essentially a reverse proxy that caches stuff. You configure it to say what is cacheable, and then it runs in place of the web server and proxies anything it can't serve from cache back to it.

I basically used the instructions from Ewan Leith's excellent "10 Million hits a day with Wordpress using a $15 server" post.

So now, this server has:

  • rsp running on port 80, proxying everything to port 83.
  • varnish running on port 83, caching what it can and proxying the rest to port 81.
  • nginx running on port 81, serving static pages and sending PHP stuff to php5-fpm on port 9000.

I've also got haproxy running on port 82, doing the same as rsp -- proxying everything to varnish -- so that I can do some comparative speed tests once rsp does enough for such tests to give interesting results. Right now, all of the speed differences seem to be in the noise, with a run of ab pointed at varnish actually coming out slower than the two proxies.

Writing a reverse proxy/loadbalancer from the ground up in C, pause to regroup: fixed it!

Posted on 29 September 2013 in Linux, Programming

It took a bit of work, but the bug is fixed: rsp now correctly handles the case where it can't write as much as it wants to the client side. I think this is enough for it to work properly as a front-end for this website, so it's installed and running here. If you're reading this (and I've not had to switch it off in the meantime) then the pages you're reading were served over rsp. Which is very pleasing :-)

The code needs a bit of refactoring before I can present it, and the same bug still exists on the communicating-to-backends side (which is one of the reasons it needs refactoring -- this is something I should have been able to fix in one place only) so I'll do that over the coming days, and then do another post.

Writing a reverse proxy/loadbalancer from the ground up in C, pause to regroup: non-blocking output

Posted on 28 September 2013 in Linux, Programming

Before moving on to the next step in my from-scratch reverse proxy, I thought it would be nice to install it on the machine where this blog runs, and proxy all access to the blog through it. It would be useful dogfooding and might show any non-obvious errors in the code. And it did.

I found that while short pages were served up perfectly well, longer pages were corrupted and interrupted halfway through. Using curl gave various weird errors, e.g. curl: (56) Problem (3) in the Chunked-Encoded data, which is a general error saying that it's receiving chunked data and the chunking is invalid.

Doubly strangely, these problems didn't happen when I ran the proxy on the machine where I'm developing it and got it to proxy the blog; only when I ran it on the same machine as the blog. They're different versions of Ubuntu, the blog server being slightly older, but not drastically so -- and none of the stuff I'm using is that new, so it seemed unlikely to be a bug in the blog server's OS. And anyway, select isn't broken.

After a ton of debugging with printfs here, there, and everywhere, I tracked it down. You'll remember that our code to transfer data from the backend to the client looks like this:

void handle_backend_socket_event(struct epoll_event_handler* self, uint32_t events)
{
    struct backend_socket_event_data* closure = (struct backend_socket_event_data*) self->closure;

    char buffer[BUFFER_SIZE];
    int bytes_read;

    if (events & EPOLLIN) {
        bytes_read = read(self->fd, buffer, BUFFER_SIZE);
        if (bytes_read == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            return;
        }

        if (bytes_read == 0 || bytes_read == -1) {
            close_client_socket(closure->client_handler);
            close_backend_socket(self);
            return;
        }

        write(closure->client_handler->fd, buffer, bytes_read);
    }

    if ((events & EPOLLERR) | (events & EPOLLHUP) | (events & EPOLLRDHUP)) {
        close_client_socket(closure->client_handler);
        close_backend_socket(self);
        return;
    }
}

If you look closely, there's a system call there where I'm not checking the return value -- always risky. It's this:

        write(closure->client_handler->fd, buffer, bytes_read);

The write function returns the number of bytes it managed to write, or an error code. The debugging code revealed that sometimes it was returning -1, and errno was set to EAGAIN, meaning that the operation would have blocked on a non-blocking socket.

This makes a lot of sense. Sending stuff out over the network is a fairly complex process. There are kernel buffers of stuff to send, and as we're using TCP, which is connection-based, I imagine there's a possibility that the client being slow or transmission of data over the Internet might be causing things to back up. Possibly sometimes it was returning a non-error code, too, but was still not able to write all of the bytes I asked it to write, so stuff was getting skipped.

So that means that even for this simple example of an epoll-based proxy to work properly, we need to do some kind of buffering in the server to handle cases where we're getting stuff from the backend faster than we can send it to the client. And possibly vice versa. It's possible to get epoll events on an FD when it's ready to accept output, so that's probably the way to go -- but it will need a bit of restructuring. So the next step will be to implement that, rather than the multiple-backend handling stuff I was planning.
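To make that concrete, here's a rough hedged sketch of what "check the return value and buffer whatever didn't fit" might look like -- it's not the code that ended up in rsp, and queue_unsent and register_for_epollout are invented placeholders for whatever bookkeeping the real fix does:

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Hypothetical helpers -- declarations only for this sketch. */
    void queue_unsent(int fd, const char *data, size_t len);
    void register_for_epollout(int fd);

    /* Try to write everything; if the socket's buffer fills up, stash the
     * unsent tail and wait for epoll to say the fd is writable again. */
    void send_or_buffer(int fd, const char *data, size_t len)
    {
        size_t written = 0;

        while (written < len) {
            ssize_t result = write(fd, data + written, len - written);
            if (result == -1) {
                if (errno == EAGAIN || errno == EWOULDBLOCK) {
                    queue_unsent(fd, data + written, len - written);
                    register_for_epollout(fd);
                    return;
                }
                return;  /* a real error: the proxy would close both sockets here */
            }
            written += (size_t) result;
        }
    }

When EPOLLOUT eventually fires, the handler would drain the queued data the same way, and stop listening for EPOLLOUT once the queue is empty.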

This is excellent. Now I know a little more about why writing something like nginx is hard, and have a vague idea of why I sometimes see stuff in its logs along the lines of an upstream response is buffered to a temporary file. Which is entirely why I started writing this stuff in the first place :-)

Here's a run-through of the code I had to write to fix the bug.

Writing a reverse proxy/loadbalancer from the ground up in C, part 3: Lua-based configuration

Posted on 11 September 2013 in Linux, Programming

This is the third step along my road to building a simple C-based reverse proxy/loadbalancer so that I can understand how nginx/OpenResty works -- more background here. Here's a link to the first part, where I showed the basic networking code required to write a proxy that could handle one incoming connection at a time and connect it with a single backend, and to the second part, where I added the code to handle multiple connections by using epoll.

This post is much shorter than the last one. I wanted to make the minimum changes to introduce some Lua-based scripting -- specifically, I wanted to keep the same proxy with the same behaviour, and just move the stuff that was being configured via command-line parameters into a Lua script, so that just the name of that script would be specified on the command line. It was really easy :-) -- but obviously I may have got it wrong, so as ever, any comments and corrections would be much appreciated.
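For anyone who hasn't used the Lua C API before, the basic pattern is small enough to sketch here. This is a hedged, minimal example rather than rsp's actual code, and the variable names (listen_port, backend_host, backend_port) are made up for illustration:

    #include <stdio.h>
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    /* Minimal sketch: read settings from a Lua file that looks like
     *   listen_port = 8000
     *   backend_host = "127.0.0.1"
     *   backend_port = 8080
     * (the names are invented for this example). */
    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "Usage: %s config.lua\n", argv[0]);
            return 1;
        }

        lua_State *L = luaL_newstate();
        luaL_openlibs(L);

        if (luaL_dofile(L, argv[1]) != 0) {
            fprintf(stderr, "Error in config: %s\n", lua_tostring(L, -1));
            return 1;
        }

        lua_getglobal(L, "listen_port");
        lua_getglobal(L, "backend_host");
        lua_getglobal(L, "backend_port");

        printf("listen on %d, proxy to %s:%d\n",
               (int) lua_tointeger(L, -3),
               lua_tostring(L, -2),
               (int) lua_tointeger(L, -1));

        lua_close(L);
        return 0;
    }

Because the config file is just Lua, it can compute values rather than only list static ones, which is what makes this approach more flexible than command-line parameters.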

[ Read more ]

Writing a reverse proxy/loadbalancer from the ground up in C, part 2: handling multiple connections with epoll

Posted on 7 September 2013 in Linux, Programming

This is the second step along my road to building a simple C-based reverse proxy/loadbalancer so that I can understand how nginx/OpenResty works -- more background here. Here's a link to the first part, where I showed the basic networking code required to write a proxy that could handle one incoming connection at a time and connect it with a single backend.

This (rather long) post describes a version that uses Linux's epoll API to handle multiple simultaneous connections -- but it still just sends all of them down to the same backend server. I've tested it using the Apache ab server benchmarking tool, and over a million requests, 100 running concurrently, it adds about 0.1ms to the average request time as compared to a direct connection to the web server, which is pretty good going at this early stage. It also doesn't appear to leak memory, which is doubly good going for someone who's not coded in C since the late 90s. I'm pretty sure it's not totally stupid code, though obviously comments and corrections would be much appreciated!

[UPDATE: there's definitely one bug in this version -- it doesn't gracefully handle cases where we can't send data to the client as fast as we're receiving it from the backend. More info here.]
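The full walkthrough is in the post itself; as a taster, the heart of any epoll-based server is a loop along these lines. This is a hedged sketch, and handle_event is a stand-in for rsp's real per-connection dispatch:

    #include <stdint.h>
    #include <sys/epoll.h>

    #define MAX_EVENTS 256

    /* Placeholder for whatever per-connection handler the proxy registers. */
    void handle_event(void *handler_data, uint32_t events);

    void run_event_loop(int epoll_fd)
    {
        struct epoll_event events[MAX_EVENTS];

        while (1) {
            int num_ready = epoll_wait(epoll_fd, events, MAX_EVENTS, -1);
            if (num_ready == -1) {
                continue;  /* e.g. interrupted by a signal -- just retry */
            }
            for (int i = 0; i < num_ready; i++) {
                /* data.ptr is whatever we stored when we registered the fd
                 * with epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &event). */
                handle_event(events[i].data.ptr, events[i].events);
            }
        }
    }

Everything interesting -- accepting connections, reading from one socket and writing to the other -- happens inside the handlers that the loop dispatches to.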

[ Read more ]

Writing a reverse proxy/loadbalancer from the ground up in C, part 1: a trivial single-threaded proxy

Posted on 12 August 2013 in Programming

This is the first step along my road to building a simple C-based reverse proxy/loadbalancer so that I can understand how nginx/OpenResty works -- more explanation here. It's called rsp, for Really Simple Proxy. This version listens for connections on a particular port, specified on the command line; when one is made it sends the request down to a backend -- another server with an associated port, also specified on the command line -- and sends whatever comes back from the backend back to the person who made the original connection. It can only handle one connection at a time -- while it's handling one, it just queues up others, and it handles them in turn. This will, of course, change later.
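The full code is behind the "Read more" link below; as a hedged sketch of that flow (listen, accept one connection, connect to the backend, shuttle the bytes, repeat), with error handling stripped out and the addresses hard-coded rather than taken from the command line:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define BUFFER_SIZE 4096

    /* Not the actual rsp code: a one-connection-at-a-time sketch that reads
     * one chunk of request data, forwards it to the backend, then streams
     * the response back until the backend closes the connection. */
    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in listen_addr;
        memset(&listen_addr, 0, sizeof(listen_addr));
        listen_addr.sin_family = AF_INET;
        listen_addr.sin_addr.s_addr = htonl(INADDR_ANY);
        listen_addr.sin_port = htons(8000);      /* hard-coded for the sketch */
        bind(listener, (struct sockaddr *) &listen_addr, sizeof(listen_addr));
        listen(listener, 10);                    /* waiting connections queue up here */

        char buffer[BUFFER_SIZE];
        while (1) {
            int client = accept(listener, NULL, NULL);

            int backend = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in backend_addr;
            memset(&backend_addr, 0, sizeof(backend_addr));
            backend_addr.sin_family = AF_INET;
            inet_pton(AF_INET, "127.0.0.1", &backend_addr.sin_addr);
            backend_addr.sin_port = htons(8080); /* hard-coded backend */
            connect(backend, (struct sockaddr *) &backend_addr, sizeof(backend_addr));

            ssize_t bytes = read(client, buffer, BUFFER_SIZE);
            if (bytes > 0) {
                write(backend, buffer, bytes);
                while ((bytes = read(backend, buffer, BUFFER_SIZE)) > 0) {
                    write(client, buffer, bytes);
                }
            }
            close(backend);
            close(client);
        }
    }

A single read of the request is enough for the small GET requests a sketch like this needs to handle; the real thing, and everything later in the series, has to be much more careful.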

I'm posting this in the hope that it might help people who know Python, and some basic C, but want to learn more about how the OS-level networking stuff works. I'm also vaguely hoping that any readers who code in C day to day might take a look and tell me what I'm doing wrong :-)

[ Read more ]

Writing a reverse proxy/loadbalancer from the ground up in C, part 0: introduction

Posted on 8 August 2013 in Programming

We're spending a lot of time on nginx configuration at PythonAnywhere. We're a platform-as-a-service, and a lot of people host their websites with us, so it's important that we have a reliable load-balancer to receive all of the incoming web traffic and appropriately distribute it around backend web-server nodes.

nginx is a fantastic, possibly unbeatable tool for this. It's fast, reliable, and lightweight in terms of CPU resources. We're using the OpenResty variant of it, which adds a number of useful modules -- most importantly for us, one for Lua scripting, which means that we can dynamically work out where to send traffic as the hits come in.

It's also quite simple to configure at a basic level. You want all incoming requests for site X to go to backend Y? Just write something like this:

    server {
        server_name X;
        listen 80;

        location / {
            proxy_set_header Host $host;
            proxy_pass Y;
        }
    }

Simple enough. Lua scripting is pretty easy to add -- you just put an extra directive before the proxy_pass that provides some Lua code to run, and then variables you set in the code can be accessed from the proxy_pass.
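The post doesn't show the exact directive, but as a hedged illustration, OpenResty's set_by_lua is one way to do it (the backend address here is just a placeholder):

    location / {
        set_by_lua $backend 'return "127.0.0.1:8080"';
        proxy_set_header Host $host;
        proxy_pass http://$backend;
    }

In real use the Lua snippet would look the backend up dynamically -- that's the whole point -- but the shape is the same.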

But there are many more complicated options. worker_connections, tcp_nopush, sendfile, types_hash_max_size... Some are reasonably easy to understand with a certain amount of reading, some are harder.

I'm a big believer that the best way to understand something complex is to try to build your own simple version of it. So, in my copious free time, I'm going to start putting together a simple loadbalancer in C. The aim isn't to rewrite nginx or OpenResty; it's to write enough equivalent functionality that I can better understand what they are really doing under the hood, in the same way as writing a compiler for a toy language gives you a better understanding of how proper compilers work. I'll get a good grasp on some underlying OS concepts that I have only a vague appreciation of now. It's also going to be quite fun coding in C again. I've not really written any since 1997.

Anyway, I'll document the steps I take here on this blog; partly because there's a faint chance that it might be interesting to other experienced Python programmers whose C is rusty or nonexistent and want to get a view under the hood, but mostly because the best way to be sure you really understand it is to try to explain it to other people.

I hope it'll be interesting!

Here's a link to the first post in the series: Writing a reverse proxy/loadbalancer from the ground up in C, part 1: a trivial single-threaded proxy