Written by Andy Croxall, Hello Lamp Post’s Senior Developer

AI – and GPT in particular – is very much in the zeitgeist right now. It’s no exaggeration to suggest, as many have, that AI as we have come to know and use it in the last few years will go down as the biggest technological breakthrough since the emergence of the internet in the early-to-mid 90s.

If you’re hazy on GPT, it stands for Generative Pre-trained Transformer. It’s a form of AI that has been “trained” on millions of web pages, books, Wikipedia articles and more. Through that training it has “learned” about, well, almost everything – and how to use language and construct sentences. Debate rages among philosophers and technologists as to whether GPT (and AI in general) can be said to “think” in any meaningful sense.

So that’s the background. Not that it’s gone smoothly, of course. This being humans, we like to take a while to iron out the kinks. As with the emergence of the internet, a lot of problems presented themselves pretty quickly. Back then, we were ill-equipped to deal with spam, scammers, e-commerce fraud, harmful content, dancing hamsters and all sorts of other problem areas.

And so it is with AI; questions about neutrality, ideological bias and content relating to harmful activity are not even close to being resolved just yet – and never will be satisfactorily. OpenAI, the company behind GPT, has a strict content policy and has programmed GPT with all sorts of failsafes (it will typically refuse to engage in conversation promoting violence, for example). Moreover, they’re constantly tweaking its guardrails to stop it doing silly things that weren’t predicted when it was built.

But that hasn’t stopped people “jailbreaking” (gaming) it to, for example, generate ideologically charged content, or provide instructions on how to make Molotov cocktails. Sheesh. Even when it’s taking time off from advising on bomb construction, it can raise eyebrows; in some of our testing it responded to a question on religion by asserting that god was a figment of the imagination. As an atheist, I loved this – until I quickly realised that, just because GPT accorded with my own view on that occasion, it may well accord with someone else’s (potentially hateful) view some other time.

So these are the early days of popular AI, then. But even at this early stage, the barriers to entry have never been lower for a company (or just an enterprising bedroom programmer) to set up a chatbot service. And sure enough, such services have been springing up everywhere.

Suddenly, anyone with access to a web server and a modicum of basic programming knowledge can provide a chatbot service, simply farming out the user input to GPT via its API (a means for one system to talk to another in a structured way). You could spin up such a service in less than a day. Seriously – here’s what the (simplified) code would look like to send user input to GPT and get a response back:

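What follows is a minimal sketch in Python using OpenAI’s official client library. The model name is illustrative and the exact call shape varies between library versions, so treat it as a sketch rather than production code.

```python
# Minimal sketch: pass a user's message to GPT and return the reply.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY
# environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def ask_gpt(user_input: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content
```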

That’s it – well under 20 lines of code to send a user’s input to GPT and get a response back. The only parts missing from the above are the code to capture the user’s input and the code to send output back to them, but it’s amazing how little code such an app generally involves.

We’ve come a long way since this sort of thing was the reserve of very large companies. AI isn’t new, after all; IKEA had an AI-driven help avatar, “Anna”, on its website as long as 10 years ago. In one of those “seemed hilarious at the time” moments, I remember trying to chat it up. It steadfastly rejected my overtures, and insisted on talking only about shelving and delivery charges. Realising there was little hope for the relationship, I moved on.

At Hello Lamp Post (HLP), we use GPT (and other AI) to supplement our service, but will never outsource content generation to it wholesale. Our approach is, and will remain, that we need to be the gatekeepers of any AI-driven content.

HLP uses a three-pronged approach to content generation:

  • Prescriptive, scripted content saved in our system for a given project, and selected based on user input
  • “Knowledgebase” content using an AI service trained on content relating to a given project
  • GPT – mostly as a fallback if the above have failed to generate a satisfactory answer (the sketch after this list shows roughly how the chain fits together).
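
As an illustration only – the function names and the scripted example below are hypothetical, not our actual implementation – the fallback chain looks something like this in Python:

```python
from typing import Optional

def match_scripted_content(user_input: str) -> Optional[str]:
    """1. Prescriptive, scripted content keyed on the user's input (stubbed)."""
    scripts = {"opening hours": "The park is open from dawn until dusk."}
    return next((reply for key, reply in scripts.items()
                 if key in user_input.lower()), None)

def query_knowledgebase(user_input: str) -> Optional[str]:
    """2. An AI service trained on this project's content (stubbed)."""
    return None  # no confident answer found

def ask_gpt(user_input: str) -> str:
    """3. GPT as a last resort (see the earlier sketch)."""
    return "A GPT-generated answer."

def answer(user_input: str) -> str:
    # Try each source in turn; fall through to GPT only if the others fail.
    return (match_scripted_content(user_input)
            or query_knowledgebase(user_input)
            or ask_gpt(user_input))
```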

We are also in the early stages of training our own “closed-model” AI, i.e. an AI model used solely for our purposes and trained only on our own content.

We use this paradigm because we, not GPT, are the experts on our clients’ deployments. In this sense the chatbot is only the end result of our work; the hidden part that comes before it is gaining insight into, and generating human-written content about, our clients’ deployments. It’s that content that forms the vast majority of what ultimately gets used in our chatbots, not AI-generated content.

Even where we do use AI, though, we have checks in place. We have real-time software that scans AI-generated responses for certain “stop words”, i.e. words that might suggest inappropriate content. An obvious example is swear words, but it goes deeper than that, looking out for words relating to politics, religion and more.
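
As a rough illustration, such a screen can be as simple as the Python sketch below. The stop-word list and the fallback message are invented for the example – they’re not our production lists:

```python
import re

# Hypothetical stop-word list covering swearing, politics, religion and
# so on. (Invented for this example; not HLP's actual list.)
STOP_WORDS = {"damn", "election", "scripture"}

def is_safe(reply: str) -> bool:
    """Return True if the AI-generated reply contains no stop words."""
    words = set(re.findall(r"[a-z']+", reply.lower()))
    return words.isdisjoint(STOP_WORDS)

def screen_reply(reply: str) -> str:
    """Serve the reply only if it passes the stop-word check."""
    return reply if is_safe(reply) else "Sorry, I can't help with that one."
```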

This is just as well. As noted above, AI is capable of being “gamed”, and content is king. If the content we serve to users is jeopardised, we and our clients suffer.