Giving up on the AI chatbot tutorial (for now)
I'm a big fan of learning in public, and early last year I started trying to do that by writing an AI chatbot tutorial as I learned the technology myself. But somehow it just wasn't working -- perhaps because my understanding was evolving so quickly that each time I sat down to write, I spotted dozens of errors in the previous posts, and felt I should fix those first. So I've decided to give up on that one, at least for now.
So, back to something a bit more achievable! Some lab notes will be coming on things I've been working on, including -- later on this evening -- a post about an oddity I found the other day.
In the meantime, here's a blog post I wrote late last year for the PythonAnywhere blog: Five steps to create your own PythonAnywhere AI guru.
Building an AI chatbot for beginners: part 2
[Note that this series kind of dried up; when I started it, I knew that I knew very little about the subject, and was hoping that learning in public would help. As time went by, though, it turned out that this wasn't working. There are a lot of better tutorials out there!]
Welcome to the second part of my tutorial on how to build a chatbot using OpenAI's interface to their Large Language Models (LLMs)! You can read the introduction here, and the first part here. As a reminder, I'm writing this not because I'm an expert, but because I'm learning how to do it myself, and writing about it helps me learn faster. Caveat lector :-)
In this post, we'll give the bot some memory of the conversation so far.
At the end of the first part, we had a program that accepted input from a user, combined it with some static text to make a prompt that an LLM would complete in the character of a chatbot (stopping at the point where the chatbot's reply should end, rather than trying to carry on the conversation by itself), sent that prompt to OpenAI's API specifying an LLM model, and printed out the result.
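To give a flavour of where we're heading, here's a minimal sketch of the memory trick -- my own illustration rather than the tutorial's actual code, with the model name, prompt wording and stop sequence all placeholder assumptions:

```python
import openai

openai.api_key = "sk-..."  # your own API key goes here

# Each entry is one line of the transcript so far, e.g. "User: hi there".
history = []

while True:
    history.append("User: " + input("You: "))
    # Rebuild the whole prompt from the transcript on every turn, so the
    # model can see the earlier exchanges.
    prompt = (
        "The following is a conversation between a helpful chatbot and a user.\n"
        + "\n".join(history)
        + "\nBot:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        stop=["User:"],  # stop before the model writes the user's next line
    )
    reply = response["choices"][0]["text"].strip()
    history.append("Bot: " + reply)
    print("Bot: " + reply)
```

The thing to notice is that the prompt is rebuilt from the full transcript on every turn, so the model can refer back to earlier exchanges -- that's all the "memory" amounts to.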
Building an AI chatbot for beginners: part 1
[Note that this series kind of dried up; when I started it, I knew that I knew very little about the subject, and was hoping that learning in public would help. As time went by, though, it turned out that this wasn't working. There are a lot of better tutorials out there!]
Welcome to the first part of my tutorial on how to build a chatbot using OpenAI's interface to their Large Language Models (LLMs)! You can read the introduction here.
If you're reading this and want to get the best out of it, I strongly recommend that you run the code on your own machine as you go along: trust me, it will stick in your mind much better if you do that.
The goal in this post is to write a basic bot script that accepts user input, and just bounces it off an OpenAI LLM to generate a response.
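As a rough idea of what that looks like, here's a minimal single-exchange sketch -- again mine rather than the tutorial's exact code, using the old openai package's Completion endpoint, with a placeholder model name and prompt wording:

```python
import openai

openai.api_key = "sk-..."  # your own API key goes here

# Static text framing the user's input as one side of a conversation; the
# wording here is a placeholder of mine, not the tutorial's.
PROMPT_TEMPLATE = (
    "The following is a conversation between a helpful chatbot and a user.\n"
    "User: {message}\n"
    "Bot:"
)

user_input = input("You: ")
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=PROMPT_TEMPLATE.format(message=user_input),
    max_tokens=256,
    stop=["User:"],  # don't let the model carry on and write the user's reply
)
print("Bot:" + response["choices"][0]["text"])
```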
Building an AI chatbot for beginners: part 0
[Note that this series kind of dried up; when I started it, I knew that I knew very little about the subject, and was hoping that learning in public would help. As time went by, though, it turned out that this wasn't working. There are a lot of better tutorials out there!]
Like a lot of people, I've been blown away by the capabilities of Large Language Model (LLM) based systems over the last few months. I'm using ChatGPT regularly for all kinds of things, from generating basic code to debugging errors to writing emails.
I wanted to understand more about how these tools worked, and feel strongly that there's no better way to learn something than by doing it. Building an LLM is, at least right now, super-expensive -- in the millions of dollars (although maybe that will be coming down fast?). It also requires a lot of deep knowledge to get to something interesting. Perhaps something to try in the future, but not right now.
However, using LLMs to create something interesting -- that's much easier, especially because OpenAI have a powerful API, which provides ways to do all kinds of stuff. Most relevantly, they provide access to a Completion API. That, as I understand it, is the lowest-level way of interacting with an LLM, so building something out of it is probably the best bang for the buck for learning.
Over the last few weeks I've put together a bunch of things I found interesting, and have learned a lot. But it occurred to me that an even better way to learn stuff than building it is to build it and then explain it to someone else, even if that person is an abstract persona for "someone out there on the Internet". So: time for an LLM chatbot tutorial!
Acquired!
As those of you who know me (and probably a fair few that don't) will already know, PythonAnywhere was acquired by Anaconda, Inc back in June of this year. We're still the same team, and I'm still leading it, but now we're part of a larger company.
It's been quite a ride. Due diligence and negotiation in the months up to the close were just as tough as I'd always been told they would be (and that's despite the fact that, according to our lawyers, it was a pretty smooth one as these things go). And now I have to get used to having a boss again, which is weird... but it's helped by the fact that said boss is a great guy, and is aligned with us (you can tell from the lingo that I work for a larger company now, right?) on keeping the platform up and running as it was, while investing in it so that it can get better and grow faster.
So, all good news :-)
I've been vaguely considering putting together a few blog posts outlining what happens during an acquisition -- just a general discussion of the steps and what they involve. I wouldn't be putting anything in about this particular deal, of course -- there are strict non-disclosures about the terms and so on -- but a description of what happens in general might be useful for other people in the position I was in earlier this year. I had to learn a lot of stuff very quickly, and while our lawyers were awesome and explained things brilliantly, it would have been useful to have some kind of layman's background information.
What do you think -- worth posting?
A somewhat indirect way of reporting stolen cards to the bank
One of the interesting things about having a business that accepts cards on the Internet is seeing what odd things people do when trying to use your site. A case in point is someone we've noticed over the last few months, who appears to be using our site as a rather indirect way to report stolen cards.
The behaviour we see is that they run some kind of script that signs up for a bunch of accounts with randomly-generated usernames, and then tries to upgrade them all using stolen card numbers.
Naturally, our fraud-prevention systems pick that up pretty much immediately, and we run our own script that identifies every account they've created, finds the card details used for them, and reports every transaction and attempted transaction as fraudulent. That means our payment processor, Stripe, can flag the card numbers as stolen, so they can't be used elsewhere without triggering fraud alerts for other merchants. And if a charge actually goes through (most of the cards tend to be pre-paid with no money on them, so most charges fail), we refund it as fraudulent, which not only notifies Stripe but, I believe, also notifies the bank that the card number is circulating amongst card fraudsters.
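For the curious, the reporting step boils down to something like this -- a hedged sketch rather than our actual script, using the stripe Python library:

```python
import stripe

stripe.api_key = "sk_live_..."  # your Stripe secret key

def report_fraudulent_charge(charge_id: str) -> None:
    # Refunding with reason="fraudulent" flags the transaction to Stripe,
    # which can then use that signal to protect other merchants from the
    # same card number.
    stripe.Refund.create(charge=charge_id, reason="fraudulent")
```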
Now, the fact that we do this should be obvious to them. Every time they run their scripts, it causes us only minor inconvenience (the scripts we have for handling the problem are getting ever-simpler to use), and it means that every card they tried on our site is now significantly less valuable to them as an asset. They're essentially paying money for lists of stolen card numbers, and then burning them up.
Given that we're doing this, and they must know that we're doing it, the only explanation I can think of is that they're actually running some kind of strange public service where they buy lists of stolen card details and then get them blocked. It does seem a very roundabout way to do it, though. Surely it would be easier to just tell the banks directly?
But perhaps there's something I'm missing.
Or perhaps they really are dim enough to be using us to check stolen cards for validity, and haven't yet noticed that doing so against a site that reports every fraudulent transaction to the card processor is not a terribly good idea...
COVID-19 breakthrough / re-infection: a personal tale
I'm just recovering from (PCR-confirmed) covid after (I believe) having had it in 2020, and having been double-jabbed with AstraZeneca over the course of the last year. I'm completely fine, and listening to people moaning about their health is rather dull, so I won't bore you by posting at length here. But a number of people I know were really surprised to hear about it, thinking that re-infection and breakthrough infections were rare. Given that I, my partner Sara, and a close friend have all had it again (PCR tested in each case) over the last month, it seems that it might be more common than generally suspected -- so I figured that a first-person account might be of some interest.
Fun with network namespaces, part 1
Linux has some amazing kernel features to enable containerization. Tools like Docker are built on top of them, and at PythonAnywhere we have built our own virtualization system using them.
One part of these systems that I've not spent much time poking into is network namespaces. Namespaces are a general abstraction that allows you to separate out system resources; for example, if a process is in a mount namespace, then it has its own set of mounted disks, separate from those seen by the other processes on a machine -- or if it's in a process namespace, then it has its own cordoned-off set of processes visible to it (so, say, ps auxwf will just show the ones in its namespace).
As you might expect from that, if you put a process into a network namespace, it will have its own restricted view of what the networking environment looks like -- it won't see the machine's main network interface, for example.
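A quick way to see that restricted view for yourself is something like this (a demo of my own, not from the original notes -- it needs root, and os.unshare only arrived in Python 3.12):

```python
import os
import subprocess

os.unshare(os.CLONE_NEWNET)  # move this process into a fresh network namespace
subprocess.run(["ip", "addr"])  # now shows only a loopback interface, and it's down
```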
Isolation like that obviously has security advantages, but the consequence I found interesting is that, because two processes in different namespaces have separate networking environments, they can both bind to the same port -- and each can then be reached from the outside via port forwarding.
To put that in more concrete terms: my goal was to be able to start up two Flask servers on the same machine, both bound to port 8080 inside their own namespace. I wanted to be able to access one of them from outside by hitting port 6000 on the machine, and the other by hitting port 6001.
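The server side of that is trivially small -- something like this minimal sketch, with a placeholder route and message:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from inside this namespace!\n"

if __name__ == "__main__":
    # Both copies bind to 8080; that only works because each one runs
    # inside its own network namespace.
    app.run(host="0.0.0.0", port=8080)
```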
Here's a run-through of how I got that working; it's a lightly-edited set of my "lab notes".
Comments are back!
Comments are now back up and running. They were interesting to put together; as a concept they don't play well with a static site, as they are by their very nature dynamic.
I was considering using Disqus, but I do want to try to keep my data to myself with this blog. I wound up putting together a separate site, comments.gilesthomas.com, which is non-static and handles all of the comments -- some simple JavaScript injects them into each post page. It uses Akismet -- the one external dependency I feel I can allow myself -- to filter spam.
Should be interesting to see how it works! I'll give the new system a few days to bed in, and for a spot of code-tidying, then I'll post on the design of the new blog as a whole. I feel that I have Things To Say.
A new beginning
If you're reading this, you're seeing my new and shiny blog :-)
Blogging has been quite light here over the last few years; as PythonAnywhere has taken off, life has become ever-busier, leaving less time to post.
But I also feel that one of the reasons I've not been posting is that I was using a WordPress blog. Not that there's anything wrong with WordPress, mind, but every time I logged on to it there was a pile of security updates to download and install, which was very demotivating. So often I'd think, "oh, I should post about that", but just never get round to it.
(There's also the faint embarrassment factor of running one of the most popular Python hosting platforms, and having a blog based on PHP...)
For a long time I'd been vaguely planning to switch over to some kind of static site generator like Hugo or Sphinx. Both are well-regarded, but our experience porting the PythonAnywhere blog over to the former gave me some pause: while Hugo is hugely configurable, it always seemed to be hard to configure it in the specific way we wanted.
And then I thought, wait a minute. I'm meant to be a programmer. How hard can it be to write a simple static site generator?
That's the kind of sentence that feels like it should be followed by, "it was actually really hard". But it wasn't, because all of the pieces have been coded by generous people already and it was just a case of plugging them together.
With the help of wpparser to parse an export of my old blog (which I fed into a little script that spat out the articles in a Hugo-like format), markdown2 to format the markdown-based posts, Pygments to highlight my code blocks, Jinja2 to let me bung the results into some templates, and feedgen to write out an RSS file, it was pretty easy to put together something that replicated the URL structure of the old blog.
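To give an idea of how little glue is involved, here's a hedged sketch of that pipeline -- all the paths, template names and URLs are made up for illustration:

```python
from pathlib import Path

import markdown2
from feedgen.feed import FeedGenerator
from jinja2 import Environment, FileSystemLoader

# Jinja2 wraps each rendered post in a page template.
env = Environment(loader=FileSystemLoader("templates"))
template = env.get_template("post.html")

# feedgen accumulates entries and writes out the RSS file at the end.
fg = FeedGenerator()
fg.title("My blog")
fg.link(href="https://www.example.com/", rel="alternate")
fg.description("Posts from my blog")

Path("output").mkdir(exist_ok=True)
for source in sorted(Path("posts").glob("*.md")):
    # markdown2 renders the post; the "fenced-code-blocks" extra gets
    # Pygments to syntax-highlight any code blocks.
    html = markdown2.markdown(source.read_text(), extras=["fenced-code-blocks"])
    Path("output", source.stem + ".html").write_text(
        template.render(title=source.stem, body=html)
    )
    entry = fg.add_entry()
    entry.title(source.stem)
    entry.link(href=f"https://www.example.com/{source.stem}/")

fg.rss_file("output/rss.xml")
```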
To be honest, I've spent significantly more time fiddling with the CSS to make it all look pretty. I doubt that bit shows.
Anyway, now I have something where I can knock together a quick post in markdown, run a command, and have it published. Welcome to my new blog!
I'll be scanning through the old posts over the coming days and fixing any formatting issues I find.
The next step will be to work out some way of bringing the comments over, as they (of course) don't really fit in with the whole "static site" side of things. I have some ideas, though... But if you'd like to leave a comment in the meantime, @ me on Twitter.
(Update 2021-02-22: comments are back!)