Fun with network namespaces, part 1

Posted on 13 March 2021 in Linux, TIL deep dives |

Linux has some amazing kernel features to enable containerization. Tools like Docker are built on top of them, and at PythonAnywhere we have built our own virtualization system using them.

One part of these systems that I've not spent much time poking into is network namespaces. Namespaces are a general abstraction that allows you to separate out system resources; for example, if a process is in a mount namespace, then it has its own set of mounted disks that is separate from those seen by the other processes on a machine -- or if it's in a process namespace, then it has its own cordoned-off set of processes visible to it (so, say, ps auxwf will just show the ones in its namespace).

As you might expect from that, if you put a process into a network namespace, it will have its own restricted view of what the networking environment looks like -- it won't see the machine's main network interface, just whatever (virtual) interfaces have been created inside its own namespace.

This has some security advantages, but the consequence I thought was most interesting is that because two processes inside different namespaces have separate networking environments, they can both bind to the same port -- and each can then be reached from outside via port forwarding.

To put that in more concrete terms: my goal was to be able to start up two Flask servers on the same machine, both bound to port 8080 inside their own namespace. I wanted to be able to access one of them from outside by hitting port 6000 on the machine, and the other by hitting port 6001.
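The post itself goes through the lab-notes details, but as a flavour of the underlying mechanism, here's a minimal C sketch -- not the code from the post, and the port number is just the 8080 from my goal above -- that uses unshare(2) to move into a fresh network namespace and bind there. Run it twice (as root) and the two copies won't clash, because each sees only its own namespace:

/* Minimal sketch, not the code from the post: unshare into a fresh
 * network namespace and bind port 8080 there.  Needs root (CAP_SYS_ADMIN).
 * Two copies can run at once, each happily bound to "its own" 8080;
 * reaching them from outside is what the port-forwarding part is for. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    if (unshare(CLONE_NEWNET) == -1) {   /* brand-new, empty network namespace */
        perror("unshare");
        return 1;
    }
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) == -1) {
        perror("bind");
        return 1;
    }
    printf("bound to port 8080 inside my own network namespace\n");
    pause();   /* hang around so you can inspect it, e.g. with lsns */
    return 0;
}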

Here is a run-through of how I got that working; it's a lightly-edited set of my "lab notes".

[ Read more ]


Installing the unifi controller on Arch

Posted on 20 August 2019 in Linux |

This is more of a note-to-self than a proper blog post. I recently got a new Ubiquiti access point, and needed to reinstall the unifi controller on my Arch machine in order to run it.

There's no formal package for unifi, so you have to install it from the AUR. I use yaourt for that, and if you do a simple

yaourt -S unifi

...then it will try to install MongoDB from source. According to the Arch Wiki, this requires "180GB+ free disk space, and may take several hours to build (i.e. 6.5 hours on Intel i7, 1 hour on 32 Xeon cores with high-end NVMe.)". So not ideal.

The trick is to install MongoDB from binary first:

yaourt -S mongodb-bin

And only after that:

yaourt -S unifi

Finally, activate the service:

sudo systemctl enable unifi
sudo systemctl start unifi

...and then go to https://localhost:8443/, accept the self-signed cert, and you're all set.


pam-unshare: a PAM module that switches into a PID namespace

Posted on 15 April 2016 in Linux, C, PythonAnywhere, TIL deep dives |

Today in my 10% time at PythonAnywhere (we're a bit less lax than Google) I wrote a PAM module that lets you configure a Linux system so that when someone su's, sudos, or sshes in, they are put into a private PID namespace. This means that they can't see anyone else's processes, either via ps or via /proc. It's definitely not production-ready, but any feedback on it would be very welcome.

In this blog post I explain why I wrote it, and how it all works, including some of the pitfalls of using PID namespaces like this and how I worked around them.
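To give a rough idea of the shape of the thing -- this is a simplified sketch, not pam-unshare's actual code, and it glosses over the pitfalls the post goes into -- the core is a PAM session hook that calls unshare(2) with CLONE_NEWPID, so the shell that su/sudo/sshd subsequently forks for the user starts off in a fresh PID namespace:

/* Simplified sketch of the core idea -- not pam-unshare's actual code.
 * Build as a shared object (gcc -shared -fPIC) and wire it into a file
 * under /etc/pam.d/ as a session module.  unshare(CLONE_NEWPID) only
 * affects children created after the call, which works out nicely here:
 * su/sudo/sshd fork the user's shell after opening the session. */
#define _GNU_SOURCE
#define PAM_SM_SESSION
#include <sched.h>
#include <security/pam_modules.h>

int pam_sm_open_session(pam_handle_t *pamh, int flags,
                        int argc, const char **argv) {
    (void)pamh; (void)flags; (void)argc; (void)argv;
    if (unshare(CLONE_NEWPID) == -1)
        return PAM_SESSION_ERR;   /* needs CAP_SYS_ADMIN */
    return PAM_SUCCESS;
}

int pam_sm_close_session(pam_handle_t *pamh, int flags,
                         int argc, const char **argv) {
    (void)pamh; (void)flags; (void)argc; (void)argv;
    return PAM_SUCCESS;
}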

[ Read more ]


SHA-1 sunset in Chromium, and libnss3

Posted on 6 August 2015 in Cryptography, Linux, TIL deep dives |

This post is a combination of a description of a Chrome bug (fixed in May), a mea culpa, and an explanation of the way HTTPS certificates work. So there's something for everyone! :-)

Here's the situation -- don't worry if you don't understand all of this initially, a lot of it is explained later. Last year, the Chromium team decided that they should encourage site owners to stop using HTTPS certificates signed using the SHA-1 algorithm, which has security holes. The way they are doing this is by making the "padlock" icon in the URL bar show that a site is not secure if its certificate expires after the end of 2015 and either the certificate itself, or any certificate in its chain, is signed with SHA-1. I encountered some weird behaviour related to this when we recently got a new certificate for PythonAnywhere. Hopefully by posting about it here (with a bit of background covering the basics of how certificates work, including some stuff I learned along the way) I can help others who encounter the same problem.

tl;dr for people who understand certificates in some depth -- if any certificate in your chain, including your own cert, is signed with multiple hashing algorithms, and you're using Chrome with a version of libnss < 3.17.4 installed, then Chrome's check for SHA-1 signatures will look at the least-secure signature on each cert instead of the most-secure one. So your certificate will look like it's insecure even when it's not. Solution for Ubuntu (at least for 14.04 LTS): sudo apt-get install libnss3. Thank you so much to Vincent G on Server Fault for working out the fix.

Here's the background. It's simplified a bit, but I think is largely accurate -- any corrections from people who know more about this stuff than I do would be much appreciated!

[ Read more ]


Writing a reverse proxy/loadbalancer from the ground up in C, part 4: Dealing with slow writes to the network

Posted on 10 October 2013 in C, Linux, TIL deep dives |

This is the fourth step along my road to building a simple C-based reverse proxy/loadbalancer, rsp, so that I can understand how nginx/OpenResty works -- more background here. Here are links to the first part, where I showed the basic networking code required to write a proxy that could handle one incoming connection at a time and connect it with a single backend, to the second part, where I added the code to handle multiple connections by using epoll, and to the third part, where I started using Lua to configure the proxy.

This post was unplanned; it shows how I fixed a bug that I discovered when I first tried to use rsp to act as a reverse proxy in front of this blog. The bug is fixed, and you're now reading this via rsp [update, later: sadly, no longer true]. The problem was that when the connection from a browser to the proxy was slower than the connection from the proxy to the backend (that is, most of the time), then when new data was received from the backend and we tried to send it to the client, we sometimes got an error telling us that the client was not ready. This error was being ignored, so a block of data would be skipped and the pages you got back would be missing chunks. There's more about the bug here.

This post describes the fix.
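The post describes the fix properly; as a rough illustration of the general technique (this is a sketch, not rsp's actual code), the idea is that when a non-blocking write to the client can't take everything, you buffer the unsent tail and ask epoll to wake you up when the socket becomes writable again:

/* Sketch of the general shape of the fix -- not rsp's actual code.  It
 * assumes client_fd is non-blocking and already registered with epoll
 * for EPOLLIN, and that there's no earlier unsent data in 'buf'. */
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>

struct write_buffer {
    char  *data;   /* bytes still waiting to go to the client */
    size_t len;
};

int write_or_buffer(int epoll_fd, int client_fd, struct write_buffer *buf,
                    const char *data, size_t len) {
    ssize_t written = write(client_fd, data, len);
    if (written == (ssize_t)len)
        return 0;                                /* everything sent */
    if (written < 0) {
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                           /* a real error */
        written = 0;                             /* client not ready: keep it all */
    }
    /* Stash the unsent tail, then wait for EPOLLOUT before sending more
     * (and before taking more data from the backend for this client). */
    buf->len = len - (size_t)written;
    buf->data = malloc(buf->len);
    memcpy(buf->data, data + written, buf->len);

    struct epoll_event ev = { .events = EPOLLIN | EPOLLOUT,
                              .data.fd = client_fd };
    return epoll_ctl(epoll_fd, EPOLL_CTL_MOD, client_fd, &ev);
}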

[ Read more ]


A brief sidetrack: Varnish

Posted on 2 October 2013 in Blogkeeping, Linux |

In order to use this blog as a decent real-world test of rsp, I figured that I should make it as fast as possible. The quickest way to do that was to install Varnish, which is essentially a reverse proxy that caches stuff. You configure it to say what is cacheable, and then it runs in place of the web server and proxies any requests it can't serve from its cache back to the real web server.

I basically used the instructions from Ewan Leith's excellent "10 Million hits a day with Wordpress using a $15 server" post.

So now, this server has Varnish caching in front of the blog's web server, with rsp sitting in front of that and proxying everything through to Varnish.

I've also got haproxy running on port 82, doing the same as rsp -- proxying everything to Varnish -- so that I can do some comparative speed tests once rsp does enough for such tests to give interesting results. Right now, all of the speed differences seem to be in the noise, with a run of ab pointed at Varnish actually coming out slower than the two proxies.


Writing a reverse proxy/loadbalancer from the ground up in C, pause to regroup: fixed it!

Posted on 29 September 2013 in C, Linux, TIL deep dives |

It took a bit of work, but the bug is fixed: rsp now correctly handles the case where it can't write as much as it wants to the client side. I think this is enough for it to properly work as a front-end for this website, so it's installed and running here. If you're reading this (and I've not had to switch it off in the meantime) then the pages you're reading were served over rsp. Which is very pleasing :-)

The code needs a bit of refactoring before I can present it, and the same bug still exists on the communicating-to-backends side (which is one of the reasons it needs refactoring -- this is something I should have been able to fix in one place only) so I'll do that over the coming days, and then do another post.


Writing a reverse proxy/loadbalancer from the ground up in C, pause to regroup: non-blocking output

Posted on 28 September 2013 in C, Linux, TIL deep dives |

Before moving on to the next step in my from-scratch reverse proxy, I thought it would be nice to install it on the machine where this blog runs, and proxy all access to the blog through it. It would be useful dogfooding and might show any non-obvious errors in the code. And it did.

I found that while short pages were served up perfectly well, longer pages were corrupted and interrupted halfway through. Using curl gave various weird errors, e.g.

curl: (56) Problem (3) in the Chunked-Encoded data

...which is a general error saying that it's receiving chunked data and the chunking is invalid.

Doubly strangely, these problems didn't happen when I ran the proxy on the machine where I'm developing it and got it to proxy the blog; they only happened when I ran it on the same machine as the blog. The two machines run different versions of Ubuntu, the blog server's being slightly older, but not drastically so -- and none of the stuff I'm using is that new, so it seemed unlikely to be a bug in the blog server's OS. And anyway, select isn't broken.

After a ton of debugging with printfs here there and everywhere, I tracked it down.
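The full story is behind the link, but the offending pattern -- sketched here hypothetically, not as rsp's actual code -- is an easy one to fall into when you first switch a socket to non-blocking mode:

/* Hypothetical sketch of the failure mode, not rsp's actual code.  On a
 * non-blocking socket, write() can fail with EAGAIN ("client not ready")
 * or send fewer bytes than asked for; ignore the return value and those
 * bytes are silently dropped, which is what mangles a chunked response. */
#include <unistd.h>

void forward_to_client(int client_fd, const char *buf, size_t len) {
    /* BUG: return value ignored -- if the client's socket buffer is full,
     * some or all of 'buf' never reaches the browser. */
    write(client_fd, buf, len);
}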

[ Read more ]


Writing a reverse proxy/loadbalancer from the ground up in C, part 3: Lua-based configuration

Posted on 11 September 2013 in C, Linux, TIL deep dives |

This is the third step along my road to building a simple C-based reverse proxy/loadbalancer so that I can understand how nginx/OpenResty works -- more background here. Here's a link to the first part, where I showed the basic networking code required to write a proxy that could handle one incoming connection at a time and connect it with a single backend, and the second part, where I added the code to handle multiple connections by using epoll.

This post is much shorter than the last one. I wanted to make the minimum changes to introduce some Lua-based scripting -- specifically, I wanted to keep the same proxy with the same behaviour, and just move the stuff that was being configured via command-line parameters into a Lua script, so that just the name of that script would be specified on the command line. It was really easy :-) -- but obviously I may have got it wrong, so as ever, any comments and corrections would be much appreciated.
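As a flavour of the technique (a standalone sketch with made-up setting names -- not rsp's actual code or config format), this is roughly what reading a couple of globals from a Lua config file looks like with the Lua C API:

/* Standalone sketch of the technique -- hypothetical setting names, not
 * rsp's actual code.  Build with something like: gcc config.c -llua -lm */
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "Usage: %s config.lua\n", argv[0]);
        return 1;
    }
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    if (luaL_dofile(L, argv[1]) != 0) {           /* run the config script */
        fprintf(stderr, "Config error: %s\n", lua_tostring(L, -1));
        return 1;
    }
    lua_getglobal(L, "listenPort");               /* e.g. listenPort = 8000 */
    lua_getglobal(L, "backendAddress");           /* e.g. backendAddress = "127.0.0.1" */
    int listen_port = (int)lua_tointeger(L, -2);
    const char *backend = lua_tostring(L, -1);
    printf("would listen on %d and proxy to %s\n", listen_port, backend);
    lua_close(L);
    return 0;
}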

[ Read more ]


Writing a reverse proxy/loadbalancer from the ground up in C, part 2: handling multiple connections with epoll

Posted on 7 September 2013 in C, Linux, TIL deep dives |

This is the second step along my road to building a simple C-based reverse proxy/loadbalancer so that I can understand how nginx/OpenResty works -- more background here. Here's a link to the first part, where I showed the basic networking code required to write a proxy that could handle one incoming connection at a time and connect it with a single backend.

This (rather long) post describes a version that uses Linux's epoll API to handle multiple simultaneous connections -- but it still just sends all of them down to the same backend server. I've tested it using the Apache ab server benchmarking tool, and over a million requests, 100 running concurrently, it adds about 0.1ms to the average request time as compared to a direct connection to the web server, which is pretty good going at this early stage. It also doesn't appear to leak memory, which is doubly good going for someone who's not coded in C since the late 90s. I'm pretty sure it's not totally stupid code, though obviously comments and corrections would be much appreciated!

[UPDATE: there's definitely one bug in this version -- it doesn't gracefully handle cases where we can't send data to the client as fast as we're receiving it from the backend. More info here.]
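The post walks through the real thing; as a quick orientation for anyone who hasn't met epoll before, here's the bare bones of the pattern (a simplified sketch, not rsp's actual code) -- one loop waiting on the listening socket and every accepted connection at once:

/* Bare-bones sketch of the epoll pattern -- not rsp's actual code.
 * Assumes listen_fd is already bound, listening and non-blocking. */
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 64

void event_loop(int listen_fd) {
    int epoll_fd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epoll_fd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(epoll_fd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* New client: accept it and start watching it too. */
                int client_fd = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN,
                                           .data.fd = client_fd };
                epoll_ctl(epoll_fd, EPOLL_CTL_ADD, client_fd, &cev);
            } else {
                /* Existing connection has data: in rsp this is where bytes
                 * get shuttled between the client and the backend. */
                char buf[4096];
                ssize_t got = read(fd, buf, sizeof buf);
                if (got <= 0)
                    close(fd);    /* closed or errored; epoll drops it */
                /* ...otherwise forward 'got' bytes to the other side... */
            }
        }
    }
}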

[ Read more ]