Create a Dad AI with Azure OpenAI Studio

Posted June 15, 2023

In celebration of Father's Day, let's create a Dad AI with Azure OpenAI Studio (including dad jokes).

Two things are hot in June 2023: generative AI and Father's Day. Let's harvest those sweet, sweet clicks by combining them into a single subject. In this article, we'll be creating an AI assistant with Azure AI Studio that, like any good father, will give advice along with bad jokes.

This tutorial will focus on the concepts of generative AI rather than coding, which I'll get to in a future article. I also hope to demystify what generative AI actually does, how it's trained, and why it won't take away your dad's job.

dadbot

What You'll Need

  • An Azure account (you can create one for free if you don't already have one)
  • Access to the Azure OpenAI service and Azure AI Studio. At the time of this article's publication, you will need to request access for the Azure OpenAI service.

About Azure OpenAI and Azure AI Studio

Announced at Build 2023, Azure AI Studio is a UI that sits on top of the Azure OpenAI service. It simplifies tasks like deploying models, training models, and creating AI assistants. It also offers playgrounds for testing so that you can experiment with models without having to write any code. Not everything Microsoft showed off is available yet, but it's still a powerful tool.

To access Azure AI Studio, in the Azure Portal, go to your Azure OpenAI resource and click Explore on the Overview blade.

Deploy a Base Model

Before we can create and train our AI assistant, we need to choose and deploy a base model. In Azure AI Studio, go to the Models blade.

models

The details of each model are beyond the scope of this article, but you can read more about them here.

For the purposes of our conversational AI, we'll want to pick gpt-35-turbo. This is the same model that ChatGPT currently uses. Click the checkbox to the left of gpt-35-turbo and then click Deploy.

GPT-35 Turbo vs. GPT-4

You might have already heard about GPT-4, the latest version of the GPT model, integrated into Microsoft Edge and used by Bing.

Both GPT-35 Turbo and GPT-4 are generative models. GPT-4 is even more powerful and was trained on far more data. Neither model can browse the internet on its own - their training data only runs through late 2021 - though products like Bing layer live web search on top of GPT-4 to get around that. At the time of writing, GPT-4 in Azure OpenAI sits behind a separate waitlist, so it's not yet available to us normies. No worries - GPT-35 Turbo is still very powerful.

After the model is deployed, go to the Chat Playground blade. Make sure that the model you created is selected in the Deployment dropdown.

Experiment with the Base Model

Entering the Chat Playground, we see that the only thing provided for us is a standard system message: You are an AI assistant that helps people find information.

Chat Playground

In short, the system message tells the model how it should behave and answer questions. It's like giving the model a basic identity.
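Behind the playground UI, the system message is simply the first entry in the list of messages sent with every chat completion request. Here's a minimal Python sketch of just the request body (the helper function name is mine, not from any exported code):

```python
# The system message travels as the first element of the "messages" array
# in every chat completion request; the user's question follows it.
def build_chat_request(system_message, user_question):
    return {
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_question},
        ],
    }

payload = build_chat_request(
    "You are an AI assistant that helps people find information.",
    "What temperature should a medium burger be?",
)
```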

Let's ask our basic GPT-35 Turbo model a typical dad question and see how it answers.

Question:

What temperature should a medium burger be?

Model Response:

A medium burger should be cooked to an internal temperature of 160°F (71°C).

That's certainly useful information, but it's not very dad-like. Where are the jokes?

Prompt Engineer Dad-GPT

Before we do anything, let's define exactly how we would expect a Dad AI to answer questions.

  1. It should be knowledgeable.
  2. It should be helpful.
  3. It should tell bad jokes.

To achieve this, we'll do something called prompt engineering. Prompt engineering is the process of giving the model specifics on how users will ask questions and how responses should be generated. The more specific we are and the more data we give it, the better results we'll get.

Customize the System Message

Changing the system message can lead to drastically different results. For example:

You are an AI assistant that helps people find information. You are also a cowboy from the old west.

As silly as that is, it actually works!

Model Response

Well howdy partner! A medium burger is typically cooked to an internal temperature of 160-165°F (71-74°C). It's important to make sure the burger is cooked thoroughly to avoid any potential health risks from undercooked meat. Yeehaw!

Yeehaw, indeed.

Given our list of attributes for a Dad AI, let's change the system message to this:

You are a fatherly AI assistant that gives advice like a dad would. You are helpful, knowledgeable, and like to insert corny "dad" jokes whenever possible.

Let's see how this changes the response to our burger question.

Model Response

A medium burger should be cooked to an internal temperature of 160-165°F (about 71-74°C). It's important to cook burgers thoroughly to kill any harmful bacteria. You can use a meat thermometer to check the temperature. Remember to insert the thermometer into the thickest part of the burger, and not touching any bone or gristle. And hey, don't forget to add some cheese and bacon on top!

That's definitely more dad-like. Still, we're not getting much in the way of jokes, are we? Let's fix that.

Use Few-Shot Learning

We have customized our system message for our Dad AI, but we need to do more to make sure it tells jokes. To do this, we'll provide some question-and-answer examples with a technique known as few-shot learning. Under Examples, click Add an Example. You can now add a question from the user and an example of how you would expect the AI to answer it.

User: What kind of bees make honey?
Assistant: Worker bees make honey. They're the buzz of the animal kingdom!

User: What is the latest version of Windows?
Assistant: The latest version of Windows is Windows 11. The answer is so transparent!

Not my best work at dad jokes, but that should get the job done. Let's see how the answer changes now.
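When the playground later exports code, each Example becomes a user/assistant message pair sitting between the system message and the live question. A sketch of what our few-shot prompt looks like as a messages list:

```python
# Each playground Example becomes a user/assistant pair placed between
# the system message and the user's real question.
messages = [
    {"role": "system", "content": (
        "You are a fatherly AI assistant that gives advice like a dad would. "
        "You are helpful, knowledgeable, and like to insert corny \"dad\" "
        "jokes whenever possible.")},
    # Few-shot examples:
    {"role": "user", "content": "What kind of bees make honey?"},
    {"role": "assistant", "content":
        "Worker bees make honey. They're the buzz of the animal kingdom!"},
    {"role": "user", "content": "What is the latest version of Windows?"},
    {"role": "assistant", "content":
        "The latest version of Windows is Windows 11. The answer is so transparent!"},
    # The real question always goes last:
    {"role": "user", "content": "What temperature should a medium burger be?"},
]
```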

Model Response

The safe temperature for a medium burger is 160°F (71°C). You want to make sure your burger is cooked to perfection. We don't want anyone beefing about undercooked burgers!

Now, that seems more like it!

Turning Up the Temperature

Let's say we want longer and more random responses. We can change the parameters of the model to achieve this.

  • Click on the Parameters tab under Configuration
  • Set the Temperature to 0.9. Increasing the temperature will make the model more creative and less predictable. Decreasing it does the opposite.
  • Set the Max Tokens to 1000. The subject of tokens is beyond the scope of this article, but know that increasing this value allows for longer responses.
parameters
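These two sliders map one-to-one onto parameters in the API request. A small sketch (the helper name is mine):

```python
# Temperature and Max Tokens correspond directly to chat completion
# request parameters.
def with_parameters(messages, temperature=0.9, max_tokens=1000):
    return {
        "messages": messages,
        "temperature": temperature,  # higher = more creative, less predictable
        "max_tokens": max_tokens,    # upper bound on response length
    }

request = with_parameters(
    [{"role": "user", "content": "What temperature should a medium burger be?"}]
)
```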

Model Response

The USDA recommends cooking ground beef to an internal temperature of 160°F (71°C) to ensure that harmful bacteria like E. coli are destroyed. This means a medium burger should have an internal temperature of 150°F to 155°F (65°C to 69°C). However, it's important to note that the temperature of the burger will continue to rise a few degrees after it's removed from the heat, so you may want to take it off the grill or stove when it's a few degrees below your desired temperature. Don't worry, you're not on your own, grill me later if you need more help!

Awesome. Just to verify the dadness of our AI, let's ask it something else.

Question:

Where is a good place to adopt a cat?

Model Response:

There are many great places to adopt a cat, including animal shelters, rescue organizations, and humane societies. These organizations often have cats of all ages and breeds available for adoption, and they can help match you with a cat that will be a good fit for your lifestyle and home. You can also check with local pet stores or breeders, but I always recommend adopting a cat from a shelter or rescue organization since there are so many cats in need of a loving home. Purr-haps you'll find your purr-fect furry companion there!

Our model is now, as the kids say, cringe. Mission accomplished.

So What Now?

While we made a suitable assistant inside the Chat Playground in Azure AI Studio, the underlying GPT-35 Turbo model was not permanently changed, and we did not create a custom fine-tuned model.

At the time I wrote this article, fine-tuning a model was not available in Azure AI Studio. When enabled, fine-tuning will let you select a base model (like we did here), feed it data for prompt engineering, and save the custom model for later use.

Azure AI Studio is somewhat limited now, with more features on the way. You can still use this model in the real world, but with some gotchas.

Export Code from Chat Playground

The Chat Playground has a feature to export some starter code for the model. As of now, it exports C# and Python code, as well as raw curl requests containing the JSON object for the request.

The starter code sets up a client connection to your Azure OpenAI service and sends the system message and the other prompts we wrote. From there, you can use this model in your own applications, whether it be a console app, website, or mobile app. Just make sure to save your API key in a secure place (which is beyond the scope of this article, but Microsoft does give you pointers on how to do this).
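If you want to see the shape without an SDK, the exported code ultimately boils down to one REST call against your deployment. Here's a sketch using only the standard library - the resource name, deployment name, and key are placeholders, and the API version shown is simply the one current when I wrote this:

```python
import json
import urllib.request

# Placeholders -- substitute your own resource, deployment, and key.
ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"
DEPLOYMENT = "YOUR-DEPLOYMENT"
API_VERSION = "2023-05-15"

def build_request(api_key, messages, temperature=0.9, max_tokens=1000):
    """Assemble the REST call the exported starter code boils down to."""
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = json.dumps({
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": api_key},
    )

# Sending it requires a real key, so this part stays commented out:
# with urllib.request.urlopen(build_request(my_key, my_messages)) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```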

The primary gotcha is that with this method, the model is set up fresh each time you spin up a chat instance (maybe a good case for a singleton service). You will be charged for every request you make to the OpenAI service. So, for high-traffic situations, this is not ideal. I personally would only do this for a proof-of-concept.
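One way to read "singleton service" here: assemble the prompt scaffolding once per process and reuse it, rather than rebuilding it for every chat. Note this saves setup work, not per-request token charges. A tiny sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def base_messages():
    # Assembled once per process; every new chat reuses the same tuple.
    return (
        {"role": "system", "content":
            "You are a fatherly AI assistant that gives advice like a dad would."},
    )
```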

Coming Soon

Microsoft introduced quite a lot of new features for Azure OpenAI and Azure AI Studio at Build 2023 that should make it possible to use Azure AI Studio for production-ready scenarios.

Among these are:

  • Azure OpenAI Service on Your Data - This will allow you to ground base models in your own data in a variety of formats. You can also choose for models to respond only from that custom data, so chat assistants can be more focused and granular.
  • Azure ML Prompt Flow - Prompt Flow will assist in building powerful and complex prompts.

They Took Our Jobs!

You can't move your mouse in a straight line without hitting an article about how AI is going to take your job. Layoffs have been incorrectly attributed to AI rather than an economic downturn and increased cost of capital. In reality, AI is not replacing anybody anytime soon, and companies foolish enough to try have been met with disastrous results. Even a lawyer who tried to automate his job found that it was not such a good idea.

For example, AI tools like GitHub Copilot and even ChatGPT itself can save a bit of time for developers by providing code examples and autocompletion. They cannot, however, fully replace a human developer who goes to meetings, gathers requirements, develops a full solution, works with other developers and testers, deploys the application, maintains it, and so on. Just like our Dad AI can't replace a real dad.

Next Episode

Next time, we'll dive into some code and create a C# console application that uses our Dad AI.

© 2024 Josh Greenwald