How to build a GPT-3 powered chatbot in Next.js

Awa Dieudonne
11 min read · Feb 9, 2023

This tutorial will guide you through building a chatbot with Next.js and OpenAI’s GPT-3 API that can understand and respond to user queries, similar to the way ChatGPT does.

Check out the full article originally published on Awa Creates

If you don’t know what GPT-3 is, head over to the OpenAI GPT-3 documentation.

By the end of this tutorial, you should have a fully functional chatbot that can understand and respond to a wide range of user queries.

Whether you’re a developer looking to add a new tool to your skill set [just like me], or a business owner looking to improve customer engagement, this tutorial is for you.

Set up a Next.js and Chakra UI project

The following prerequisites are required for us to start building our GPT bot:

  1. A Next.js application. If you don’t have one, you can create a new Next.js app using the following command:
    `npx create-next-app my-chatbot`
    Or you can simply clone one of the Next.js starter templates from Vercel.
  2. Chakra UI configured on the Next.js application. You can follow this link to set it up: Next.js Chakra installation guide.
  3. Configure a Chakra UI theme for the application. You can do that by creating a theme.ts file in the src folder and pasting the following code:
// theme.ts
import { extendTheme, theme as defaultTheme } from "@chakra-ui/react";

const fonts = {
  heading:
    "-apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Oxygen, Ubuntu, Cantarell, Fira Sans, Droid Sans, Helvetica Neue, sans-serif",
  body: "-apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Oxygen, Ubuntu, Cantarell, Fira Sans, Droid Sans, Helvetica Neue, sans-serif",
};

export const colors = {
  ...defaultTheme.colors,
  blue: {
    500: "#5686f5",
    400: "#1c2433",
  },
  "button-color": "#222730",
  "border-color": "#2b303b",
  "main-bg": "#16181d",
  "main-color": "#bbc3d3",
};

const theme = extendTheme({
  ...defaultTheme,
  colors,
  fonts,
});

export default theme;

We’ve extended the default Chakra theme with some custom colors and fonts for our application.
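If your pages/_app.tsx doesn’t already pass this theme to ChakraProvider, a minimal wiring looks roughly like this (adjust the import path to wherever your theme.ts lives):

// pages/_app.tsx
import type { AppProps } from "next/app";
import { ChakraProvider } from "@chakra-ui/react";
import theme from "../src/theme";

export default function App({ Component, pageProps }: AppProps) {
  return (
    <ChakraProvider theme={theme}>
      <Component {...pageProps} />
    </ChakraProvider>
  );
}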

What is GPT-3?

GPT-3 (Generative Pre-training Transformer 3) is a cutting-edge language processing model developed by OpenAI. It uses machine-learning techniques to generate human-like text.

How was GPT-3 trained?

Training GPT-3 involved the Transformer deep-learning architecture and unsupervised learning on a massive amount of text data obtained from the internet.

The text data is pre-processed and tokenized into individual words or chunks of characters and then fed into the Transformer network, which uses self-attention mechanisms to generate a representation of the input text. This representation is then used to predict the next word in the sequence based on previous words.

The text data used to train GPT-3 is obtained from the internet, specifically from websites, books, and other digital documents. OpenAI uses web scraping techniques to collect a large amount of text data, which is then pre-processed to clean and filter the data for quality and relevance.

The pre-processing step involves removing irrelevant information such as HTML tags, special characters, and non-text elements. The text data that remains after removing the irrelevant information is then tokenized into individual words or sub-words, and the resulting sequence of tokens is used as input for the training process.
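To make the tokenization idea concrete, here is a purely illustrative sketch. It is not the tokenizer OpenAI uses (GPT-3 relies on byte-pair encoding, which splits rarer words into sub-word pieces), but it shows the general idea of chunking text before it is fed to the model:

// Toy example only: not OpenAI's BPE tokenizer.
function toyTokenize(text: string): string[] {
  // Split the text into runs of letters, plus single non-letter, non-space characters.
  return text.toLowerCase().match(/[a-z]+|[^a-z\s]/g) ?? [];
}

toyTokenize("Chatbots aren't new!");
// => ["chatbots", "aren", "'", "t", "new", "!"]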

Set up OpenAI in Next.js and generate an API key

To set up OpenAI in our Next.js project, we are going to install the openai Node.js library from npm (version 3.x at the time of writing, which exposes the Configuration and OpenAIApi classes used below) using the command:

yarn add openai

Important note: this library is meant for server-side usage only, as using it in client-side browser code will expose your secret API key. Visit the API reference for more details.

To configure the library, we need an API key. Head over to the OpenAI platform, sign up, and generate an API key if you don’t have one yet.

Once you’ve generated the API key, create a .env.local file at the root of the project and set the OpenAI API key as follows:

# .env.local
OPENAI_API_KEY="sk-your-api-key"

Now replace sk-your-api-key with the key you generated on the OpenAI platform. Next.js will expose it to our API route as process.env.OPENAI_API_KEY on the server only.

Create Next.js API route with GPT-3 configuration

Next, we are going to create a Next.js API route inside the pages/api directory.

In the pages/api directory, create a new file named generate.ts and paste in the following code:

// pages/api/generate.ts
import { NextApiRequest, NextApiResponse } from "next";
import { Configuration, OpenAIApi } from "openai";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  if (!configuration.apiKey) {
    res.status(500).json({
      error: {
        message:
          "OpenAI API key not configured, please follow instructions in README.md",
      },
    });
    return;
  }

  const message = req.body.message;

  try {
    const completion = await openai.createCompletion({
      model: "text-davinci-003",
      prompt: message,
      temperature: 0.6, // higher values make the output more random
      max_tokens: 3500, // upper bound on the number of tokens generated for the reply
    });
    res.status(200).json({ result: completion.data.choices[0].text });
  } catch (error: any) {
    // Consider adjusting the error handling logic for your use case
    if (error.response) {
      console.error(error.response.status, error.response.data);
      res.status(error.response.status).json(error.response.data);
    } else {
      console.error(`Error with OpenAI API request: ${error.message}`);
      res.status(500).json({
        error: {
          message: "An error occurred during your request.",
        },
      });
    }
  }
}

The initial lines in the file configure the openai library: we create a Configuration object with our API key and pass it to the OpenAIApi constructor to get the openai client instance.

Then, we have our exported async handler function. The function starts by checking if the configuration.apiKey is present. If not, it sets the response status to 500 (Internal Server Error) and returns an error message, indicating that the OpenAI API key has not been configured.

If the API key is present, the function extracts message from the request body, and uses the openai object to create a completion by calling the createCompletion method. The method takes in an object that specifies the parameters for the completion request, including the name of the model to use, the prompt text, the temperature, and the maximum number of tokens to generate.

If the completion request is successful, the response status is set to 200 (OK) and the JSON result is returned with the first choice’s text.

If there’s an error with the API request, the error handling logic checks if the error has a response property. If so, the error status and data are logged to the console and returned as the response. If not, a 500 error status with a generic error message is returned.
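Once the dev server is running, you can sanity-check the route without any UI, for example from the browser console (the prompt text here is just an example):

// Quick manual test of the /api/generate route.
const res = await fetch("/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Say hello in three languages." }),
});
const data = await res.json();
console.log(data.result); // completion text on success, data.error on failure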

Create a client that will consume the API route

To kick off, let’s install react-markdown, a React component that renders Markdown content. We’ll use it because the text returned by the OpenAI API often contains Markdown formatting such as lists and code blocks.

yarn add react-markdown
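As a quick illustration of what react-markdown gives us, the sketch below renders a Markdown string as real HTML elements; this example component is not part of the app itself:

// Illustrative only: ReactMarkdown turns a Markdown string into React elements,
// here a paragraph containing a <code> element followed by a <ul> with two items.
import ReactMarkdown from "react-markdown";

const MarkdownExample = () => (
  <ReactMarkdown>{"Here is `inline code` and a list:\n\n- one\n- two"}</ReactMarkdown>
);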

Subsequently, we will create a chat component in the components folder within the src directory, specifically at src/components/Chat.tsx.

// src/components/Chat.tsx
import { Avatar, Flex } from "@chakra-ui/react";
import { useEffect, useRef, useState } from "react";
import { ReactMarkdown } from "react-markdown/lib/react-markdown";
import { motion } from "framer-motion";

const Chat = ({ message, user }: { message: string; user: "me" | "gpt" }) => {
  const chatStringIndex = useRef(0);
  const [chatMessage, setChatMessage] = useState("");

  function appendChar() {
    setChatMessage((prev) => prev + message[chatStringIndex.current]);
    chatStringIndex.current++;
  }

  useEffect(() => {
    if (chatStringIndex.current < message.length) {
      const appendCharInterval = setInterval(appendChar, 50);
      return () => clearInterval(appendCharInterval);
    }
  }, [chatMessage, chatStringIndex.current]);

  return (
    <motion.div
      style={{
        alignSelf: user === "gpt" ? "flex-start" : "flex-end",
        width: "auto",
        maxWidth: "90%",
      }}
      initial={{
        opacity: 0,
        translateY: "100%",
      }}
      animate={{ opacity: 1, translateY: 0, transition: { duration: 0.3 } }}
      exit={{ opacity: 0, translateY: 0 }}
    >
      <Flex gap="5px" w="full" flexDir={user === "gpt" ? "row" : "row-reverse"}>
        <Avatar
          name={user === "me" ? "Me" : "GPT"}
          size="sm"
          bg="border-color"
          mt="-6px"
        />
        <Flex
          borderWidth={1}
          borderColor="blue.400"
          bg="main-bg"
          p="0.5rem 1rem"
          w="auto"
          rounded={user === "gpt" ? "0 20px 20px 20px" : "20px 0 20px 20px"}
          fontSize="18px"
          flexDir="column"
        >
          {user === "gpt" && (
            <Flex
              alignSelf="flex-end"
              fontStyle="italic"
              opacity={0.4}
              fontSize="10px"
              as="small"
              fontWeight={500}
            >
              GPT
            </Flex>
          )}
          {user === "me" && (
            <Flex
              alignSelf="flex-start"
              fontStyle="italic"
              opacity={0.4}
              fontSize="10px"
              as="small"
              fontWeight={500}
            >
              Me
            </Flex>
          )}
          <ReactMarkdown rehypePlugins={[]}>
            {user === "gpt" ? chatMessage || "" : message || ""}
          </ReactMarkdown>
        </Flex>
      </Flex>
    </motion.div>
  );
};

export default Chat;

In this component, we render a chat bubble that contains the message text and a label for the entity (me or GPT) posting it.

Within the component, we have the code snippet below. It animates the display of the message string by gradually appending characters from the string to the chatMessage state, giving us a typewriter effect similar to the way ChatGPT renders responses.

// src/components/Chat.tsx

...
const chatStringIndex = useRef(0);
const [chatMessage, setChatMessage] = useState("");

function appendChar() {
  setChatMessage((prev) => prev + message[chatStringIndex.current]);
  chatStringIndex.current++;
}

useEffect(() => {
  if (chatStringIndex.current < message.length) {
    const appendCharInterval = setInterval(appendChar, 50);
    return () => clearInterval(appendCharInterval);
  }
}, [chatMessage, chatStringIndex.current]);
...

That gives us a typewriter-style animation on the chat component.

Let’s also create an `InputField` component still in the `components` directory.

// src/components/InputField.tsx
import { Box, Flex, Input, InputProps } from "@chakra-ui/react";
import React from "react";
import PlaneIcon from "../icons/PlaneIcon";
import { colors } from "../theme";

type Props = {
  onSubmit: (e: React.MouseEvent<HTMLElement>) => Promise<void>;
  inputProps: InputProps;
};

const InputField = (props: Props) => {
  const { onSubmit, inputProps } = props;

  return (
    <Flex
      maxW="800px"
      as="form"
      w="full"
      mx="auto"
      borderWidth={1}
      borderColor="transparent"
      _focusWithin={{ borderColor: "blue.500" }}
      bg="border-color"
      rounded="md"
      align="center"
      pr="8px"
      boxShadow="md"
    >
      <Input
        p="1rem"
        h="60px"
        borderColor="border-color"
        _hover={{ outline: "unset" }}
        _focus={{ outline: "unset" }}
        _focusVisible={{ borderColor: "transparent" }}
        resize="none"
        {...inputProps}
      />
      <Box
        p="8px"
        as="button"
        rounded="md"
        transition="all ease 0.2s"
        _hover={{ bg: "main-bg" }}
        onClick={onSubmit}
      >
        <PlaneIcon fill={colors["main-color"]} />
      </Box>
    </Flex>
  );
};

export default InputField;

This component expects just two props — the `onSubmit` and `inputProps`. The `onSubmit` handler gets called when the `PlaneIcon` icon is clicked or the user hits the `enter` or `return` key.

And `inputProps` are just properties of the input element.
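For instance, with a value state and a handleSubmit handler like the ones we will define on the home page, the component can be wired up roughly like this (the placeholder text is just an example prop):

<InputField
  inputProps={{
    value,
    onChange: (e) => setValue(e.target.value),
    placeholder: "Send a message...",
  }}
  onSubmit={handleSubmit}
/>

The `PlaneIcon` used for the send button is a simple SVG component: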

// src/icons/PlaneIcon.tsx
import React from "react";

const PlaneIcon = ({ fill = "#2b303b", ...rest }: any) => {
  return (
    <svg
      width="20"
      height="20"
      viewBox="0 0 20 20"
      fill="none"
      xmlns="http://www.w3.org/2000/svg"
      {...rest}
    >
      <path
        fillRule="evenodd"
        clipRule="evenodd"
        d="M8.80494 11.8178L12.4619 17.7508C12.6219 18.0108 12.8719 18.0078 12.9729 17.9938C13.0739 17.9798 13.3169 17.9178 13.4049 17.6228L17.9779 2.17777C18.0579 1.90477 17.9109 1.71877 17.8449 1.65277C17.7809 1.58677 17.5979 1.44577 17.3329 1.52077L1.87695 6.04677C1.58394 6.13277 1.51994 6.37877 1.50594 6.47977C1.49194 6.58277 1.48794 6.83777 1.74695 7.00077L7.74794 10.7538L13.0499 5.39577C13.3409 5.10177 13.8159 5.09877 14.1109 5.38977C14.4059 5.68077 14.4079 6.15677 14.1169 6.45077L8.80494 11.8178ZM12.8949 19.4998C12.1989 19.4998 11.5609 19.1458 11.1849 18.5378L7.30794 12.2468L0.951945 8.27177C0.266945 7.84277 -0.091055 7.07877 0.019945 6.27577C0.129945 5.47277 0.680945 4.83477 1.45494 4.60777L16.9109 0.0817716C17.6219 -0.126228 18.3839 0.0707716 18.9079 0.592772C19.4319 1.11977 19.6269 1.88977 19.4149 2.60377L14.8419 18.0478C14.6129 18.8248 13.9729 19.3738 13.1719 19.4808C13.0779 19.4928 12.9869 19.4998 12.8949 19.4998Z"
        fill={fill}
      />
    </svg>
  );
};

export default PlaneIcon;

Next, we’re also going to create the loader for the chats. This loader is rendered while the API request to OpenAI is in progress.

// src/icons/ThreeDotsLoader.tsx
import React from "react";

const ThreeDotsLoader = () => {
  return (
    <svg
      xmlns="http://www.w3.org/2000/svg"
      xmlnsXlink="http://www.w3.org/1999/xlink"
      style={{ margin: "auto", background: "none", display: "block", shapeRendering: "auto" }}
      width="30px"
      height="30px"
      viewBox="0 0 100 100"
      preserveAspectRatio="xMidYMid"
    >
      <circle cx="84" cy="50" r="10" fill="rgba(255, 255, 255, 0.2)">
        <animate
          attributeName="r"
          repeatCount="indefinite"
          dur="0.7352941176470588s"
          calcMode="spline"
          keyTimes="0;1"
          values="10;0"
          keySplines="0 0.5 0.5 1"
          begin="0s"
        ></animate>
        <animate
          attributeName="fill"
          repeatCount="indefinite"
          dur="2.941176470588235s"
          calcMode="discrete"
          keyTimes="0;0.25;0.5;0.75;1"
          values="rgba(255, 255, 255, 0.4);rgba(255, 255, 255, 0.9999);rgba(255, 255, 255, 0.8);"
          begin="0s"
        ></animate>
      </circle>
      <circle cx="16" cy="50" r="10" fill="rgba(255, 255, 255, 0.4)">
        <animate
          attributeName="r"
          repeatCount="indefinite"
          dur="2.941176470588235s"
          calcMode="spline"
          keyTimes="0;0.25;0.5;0.75;1"
          values="0;0;10;10;10"
          keySplines="0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1"
          begin="0s"
        ></animate>
        <animate
          attributeName="cx"
          repeatCount="indefinite"
          dur="2.941176470588235s"
          calcMode="spline"
          keyTimes="0;0.25;0.5;0.75;1"
          values="16;16;16;50;84"
          keySplines="0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1"
          begin="0s"
        ></animate>
      </circle>
      <circle cx="50" cy="50" r="10" fill="rgba(255, 255, 255, 0.6)">
        <animate
          attributeName="r"
          repeatCount="indefinite"
          dur="2.941176470588235s"
          calcMode="spline"
          keyTimes="0;0.25;0.5;0.75;1"
          values="0;0;10;10;10"
          keySplines="0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1"
          begin="-0.7352941176470588s"
        ></animate>
        <animate
          attributeName="cx"
          repeatCount="indefinite"
          dur="2.941176470588235s"
          calcMode="spline"
          keyTimes="0;0.25;0.5;0.75;1"
          values="16;16;16;50;84"
          keySplines="0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1"
          begin="-0.7352941176470588s"
        ></animate>
      </circle>
      <circle cx="84" cy="50" r="10" fill="rgba(255, 255, 255, 0.8)">
        <animate
          attributeName="r"
          repeatCount="indefinite"
          dur="2.941176470588235s"
          calcMode="spline"
          keyTimes="0;0.25;0.5;0.75;1"
          values="0;0;10;10;10"
          keySplines="0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1"
          begin="-1.4705882352941175s"
        ></animate>
        <animate
          attributeName="cx"
          repeatCount="indefinite"
          dur="2.941176470588235s"
          calcMode="spline"
          keyTimes="0;0.25;0.5;0.75;1"
          values="16;16;16;50;84"
          keySplines="0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1"
          begin="-1.4705882352941175s"
        ></animate>
      </circle>
      <circle cx="16" cy="50" r="10" fill="rgba(255, 255, 255, 0.9999)">
        <animate
          attributeName="r"
          repeatCount="indefinite"
          dur="2.941176470588235s"
          calcMode="spline"
          keyTimes="0;0.25;0.5;0.75;1"
          values="0;0;10;10;10"
          keySplines="0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1"
          begin="-2.205882352941176s"
        ></animate>
        <animate
          attributeName="cx"
          repeatCount="indefinite"
          dur="2.941176470588235s"
          calcMode="spline"
          keyTimes="0;0.25;0.5;0.75;1"
          values="16;16;16;50;84"
          keySplines="0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1;0 0.5 0.5 1"
          begin="-2.205882352941176s"
        ></animate>
      </circle>
    </svg>
  );
};

export default ThreeDotsLoader;

Lastly, we’re going to update the `pages/index.tsx` file with the code below. The component has three state variables — `chats`, `value`, and `isSubmitting`.

The `chats` state stores the list of conversations within the chatroom in an array of `ChatMessage` objects (the type is named `ChatMessage` so it doesn’t clash with the imported `Chat` component).
The `value` state holds the text the user is currently typing (the query/prompt).
And the `isSubmitting` state is a boolean indicating that the query/prompt is being processed.

We’re going to use the `fetch` function to handle the API call to our `/api/generate` endpoint.

/* eslint-disable react/no-unescaped-entities */
// pages/index.tsx
import { Flex } from "@chakra-ui/react";
import type { NextPage } from "next";
import React, { useState } from "react";
import ThreeDotsLoader from "../src/icons/ThreeDotsLoader";
import { colors } from "../src/theme";
import { AnimatePresence } from "framer-motion";
import Chat from "../src/components/Chat";
import InputField from "../src/components/InputField";

// Named ChatMessage to avoid clashing with the imported Chat component.
type ChatMessage = {
  user: "me" | "gpt";
  message: string;
  originalIndex: number;
};

const Home: NextPage = () => {
  const [chats, setChats] = useState<ChatMessage[]>([]);
  const [value, setValue] = useState("");
  const [isSubmitting, setIsSubmitting] = useState(false);

  const handleSubmit = async (e: React.MouseEvent<HTMLElement>) => {
    e.preventDefault();

    try {
      setIsSubmitting(true);
      setValue("");
      setChats((prev) => [
        { user: "me", message: value, originalIndex: prev.length },
        ...prev,
      ]);

      const response = await fetch("/api/generate", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          message: [{ user: "me", message: value }, ...chats]
            .reverse()
            .map((d) => d.message)
            .join(""),
        }),
      });

      const data = await response.json();
      if (response.status !== 200) {
        throw (
          data.error ||
          new Error(`Request failed with status ${response.status}`)
        );
      }
      setChats((prev) => [
        { user: "gpt", message: data.result, originalIndex: prev.length },
        ...prev,
      ]);
    } catch (error) {
      console.log(error);
    } finally {
      setIsSubmitting(false);
    }
  };

  return (
    <Flex bg="main-bg" h="100vh" color="main-color">
      <Flex
        maxW="1000px"
        flexDir="column"
        justify="space-between"
        bg={colors["button-color"]}
        maxH="100vh"
        rounded="xl"
        w="full"
        mx="auto"
        pb="1rem"
        pt="2rem"
        px="1rem"
      >
        <Flex
          flexDir="column-reverse"
          justify="flex-start"
          align="flex-start"
          maxW="800px"
          h="full"
          w="full"
          mx="auto"
          gap="2rem"
          overflowY="auto"
          px={[0, 0, "1rem"]}
          py="2rem"
        >
          {isSubmitting && (
            <Flex alignSelf="flex-start" justify="center" px="2rem" py="0.5rem">
              <ThreeDotsLoader />
            </Flex>
          )}
          <AnimatePresence>
            {chats.map((chat, index) => {
              return (
                <Chat
                  key={chat.originalIndex}
                  message={chat.message}
                  user={chat.user}
                />
              );
            })}
          </AnimatePresence>
        </Flex>
        <InputField
          inputProps={{
            onChange: (e) => setValue(e.target.value),
            autoFocus: true,
            value,
          }}
          onSubmit={handleSubmit}
        />
      </Flex>
    </Flex>
  );
};

export default Home;
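With everything in place, start the development server and open http://localhost:3000 in your browser to try the chatbot:

yarn dev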

You can access the complete source code here: Mini-ChatGPT

Conclusion

In conclusion, building a chatbot using GPT-3’s API and Next.js can be a fun and challenging project. By following the steps outlined in this blog post, you should now have a good understanding of how to use GPT-3’s API to create a chatbot with Next.js. Whether you want to experiment with the API or build a chatbot for a production-ready application, the skills you have learned here should serve as a solid foundation.

Finally, if you found this article helpful, consider signing up for our newsletter to receive tips on web development. You can also check out my YouTube channel, FullStack Mastery, for more full-stack development content.

Now it’s your turn — what do you think of the article and the chatbot? Let me know in the comments section.

Thank you!
