Transformers and the future of conversational AI
An illustration of bat-filled Deer Cave at Gunung Mulu, Sarawak, Malaysia
As a full-time resident of the Internet, you’ve probably had a moment when AI sent a chill down your spine. A moment when you thought: “Hang on, are machines getting a bit too smart?” Maybe it was when Facebook started tagging your photos automatically (in which case, fret not, it’s shutting down… for now). Or perhaps the first time you read that one Guardian article written entirely by a robot.
For the latter, you can thank the ever-powerful Transformer: a machine learning model (read: a syllabus, but for our computers) that has taken the Natural Language Processing (NLP) world by storm since it came on the scene four years ago, in the 2017 paper “Attention Is All You Need”. It wouldn’t be a stretch to say that nearly every recent breakthrough in getting machines to understand language builds on some variation of the Transformer — it’s the ‘T’ in BERT and GPT-3. And in the spotlight, it looks poised to remain.
Machine learning for all
While the Transformer is every bit as awesome as the papers make it out to be, it doesn’t magically teach a computer to read. Training a Transformer to achieve groundbreaking results comes at a cost. An estimate by cloud infrastructure company Lambda Labs puts the computational cost of training the aforementioned GPT-3 at US$4.6 million (or a total of 355 GPU years).

Fortunately for the rest of us who do not rake in Elon Musk dollars, there’s open source. On the Hugging Face Model Hub, individuals and organizations alike band together to provide a multitude of models, each pre-trained to address a specific machine learning problem so you don’t have to. Hopelessly lost in translation? Try Google’s T5. Feeling lonely? Try a conversation with Microsoft’s DialoGPT. When it comes to pre-trained NLP models, there is no better resource.

Making your machine that much smarter
But how can we make use of these models? While putting a fully-fledged conversational AI model front of house for your company is still probably not a good idea, there are plenty of other ways to use existing models to give your devices some extra conversational smarts. In this article, we’ll use a simple implementation of the Question Answering (QA) model to turn your computer into a trivia genius.
Choosing a model
When you filter for the “Question Answering” task on the Model Hub, you’re going to end up with quite a large selection of pre-trained models. One way to go about it is to pick the most downloaded model, and certainly, you can’t go wrong with doing just that. However, here are some extra considerations for when you really want to dive into things:


  1. Model size — You will notice that most models follow a particular naming convention, e.g. albert-xlarge-v2-squad-v2, with the ‘xlarge’ portion intended to give you a sense of the model size. A bigger model size equals more resources needed to draw answers from the model, so for those of us running a tight ship, the smaller the better.
  2. Model accuracy — When browsing a model page, you will often see some indicator of accuracy, commonly an F1 score. This indicator will tell you how a model performed during some test, and a higher score generally means a better model. However, not all tests are created equal, and comparing two F1 scores for different models on different tests is meaningless. The current standard for testing QA models in English is the SQuAD2.0 dev set: For accurate comparisons, you will want to look out for models evaluated using this.

Alas, model size and model accuracy tend to increase in tandem, so it will ultimately be up to you to choose the best balance of speed and accuracy for your own uses. Fortunately, switching out one model for another is easy, so you don’t need to commit to any one model just yet. For now, let’s go with electra-base-squad2 (published on the Model Hub by deepset as deepset/electra-base-squad2).
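As an aside, the F1 mentioned above is a token-overlap score: the predicted answer’s words are compared against the reference answer’s, and F1 is the harmonic mean of precision and recall over that overlap. Here is a minimal illustrative sketch (not the official SQuAD evaluation script, which also normalizes case, punctuation and articles before comparing):

```python
from collections import Counter

def f1_score(prediction: str, truth: str) -> float:
    """Token-overlap F1, the core of SQuAD-style QA scoring (simplified)."""
    pred_tokens = prediction.split()
    truth_tokens = truth.split()
    # Multiset intersection: how many predicted tokens also appear in the truth
    common = sum((Counter(pred_tokens) & Counter(truth_tokens)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(f1_score("16 September 1963", "on 16 September 1963"))  # ~0.857
```

Note how a near-miss still earns partial credit, which is why F1 is a more forgiving yardstick than exact-match accuracy.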
Setting up your environment
Before we get down to business, let’s lay down a few requirements you’ll need to follow along with this part of the article:

  1. You are running Windows 8 or later.
  2. You have Python installed on your machine.
  3. You have Visual Studio Code (or your preferred text editor) installed on your machine.

Now that we have that out of the way, let’s get down to it. We are first going to use the command prompt to (1) get to the Desktop, (2) make a folder called ‘qa-demo’, (3) navigate into the folder, and (4) create a Python virtual environment inside it. We can start by launching the Command Prompt and entering the following commands, line by line:

cd Desktop
mkdir qa-demo
cd qa-demo
python -m venv venv


If you’ve completed the steps correctly, you should be able to see the ‘qa-demo’ folder on your desktop, with a folder ‘venv’ inside it. Now, go back to your command prompt and enter the following to launch the virtual environment:

venv\Scripts\Activate

Once it’s all done, you should see (venv) in front of your command line — and if you do, congratulations, you’re in!

Installing the necessary packages
Don’t close the Command Prompt just yet, though. Now that you’re in the environment, you will need to install the FARM package — this makes working with Transformer models a little easier. To do that, enter:

pip install farm

You might have to wait a little while as it downloads: There should be loading bars and text flashing by on your screen. While it is tempting at this stage to start mashing your keyboard as they do in the movies, I wouldn’t advise it. Once it’s done, open the folder in your preferred text editor. To launch Visual Studio Code via the Command Prompt, key in:

code .
Creating the Python program
It’s time to dive into the code. In Visual Studio Code, create a new file and name it ‘qa.py’ (the name doesn’t actually matter, other than that it must end in .py, denoting a Python file). Then, paste the following code into the file.

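A minimal qa.py along those lines might look like the sketch below, built on FARM’s Inferencer (the exact API can shift between FARM versions, and the context paragraph and question here are stand-ins matching the description that follows; the Model Hub publishes the model as deepset/electra-base-squad2):

```python
# qa.py -- a minimal sketch using FARM's Inferencer
# (API names as of FARM at the time of writing; they may differ across versions)
from farm.infer import Inferencer

# Load the QA model chosen earlier, published by deepset on the Model Hub.
model = Inferencer.load("deepset/electra-base-squad2",
                        task_type="question_answering")

# The context paragraph and the question -- replace both with your own.
QA_input = [{
    "qas": ["When was the Malayan Union established?"],
    "context": (
        "Malaysia was formed on 16 September 1963 through the merger of the "
        "Federation of Malaya with North Borneo, Sarawak, and Singapore. "
        "The Federation of Malaya was itself preceded by the Malayan Union, "
        "established in 1946 and replaced by the Federation in 1948."
    ),
}]

result = model.inference_from_dicts(dicts=QA_input)
print(result)
```

The first run will download the model weights, so expect a short wait before any answers appear.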
If you read the code, you will notice that there is currently a paragraph on the formation of Malaysia (thanks, Wikipedia) and a question about the Malayan Union. You can replace both of these with a context and question of your liking. Once you are ready, save the file (CTRL + S) and return to your Command Prompt (if you haven’t closed it by now). To run the program, type in ‘python ’ followed by your filename:

python qa.py

If you have closed the Command Prompt at this point, fret not. You need only (1) navigate back to your ‘qa-demo’ folder, (2) start the virtual environment, and then (3) run the file:

cd Desktop\qa-demo
venv\Scripts\Activate
python qa.py


Et voilà, we’re done! Now all that’s left is to wait for your machine to crunch some numbers and perhaps download a file or two. Once that’s finished, you should see your question and answer on-screen.

Looking to the future
Admittedly, even after all that work, your computer isn’t exactly ready to take on the world. For that, it would probably have to ingest far more than just a few paragraphs’ worth of content (you could try Haystack). And it isn’t even on the Internet yet (but Flask could make that happen)!

While we haven’t created an amazing, sci-fi-machine-from-the-future conversationalist, we do have access to the tools. And it’s a great time to start building.

XIMNET is a digital solutions provider with a two-decade track record, specialising in web application development, AI chatbots and system integration. XIMNET is launching a brand new way of building AI chatbots with XYAN. Get in touch with us to find out more.
about the article
This article was first published on XIMNET Medium. More resources available here.

XIMNET is one of the leading tech agencies for AI chatbots in Malaysia.

Copyright 2021 © XIMNET MALAYSIA SDN BHD (516045-V). All rights reserved.