Read through this guide to set up OpenAI & get familiar with how it is used in the different demo apps.

Set up OpenAI

First, create an OpenAI account or sign in. Next, navigate to the API keys page and click “Create new secret key”, optionally giving the key a name. Make sure to save the key somewhere safe and do not share it with anyone.

Once you have your API key, paste it in your .env file:

OPENAI_API_KEY=your_openai_api_key
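
As a minimal sketch of how the key is picked up at runtime (assuming the official openai Node SDK is installed; the exact client setup in the codebase may differ):

example
// Read the key from the environment and create a client.
// Assumes the official `openai` Node SDK; your setup may differ.
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // loaded from your .env file
});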

Demo apps

Below is a list of demo apps using OpenAI. Chat applications use OpenAI through LangChain, while the other demo apps call the OpenAI API directly.

For detailed information on each application, please check their respective sections.

Chat with PDF

Upload a PDF and use OpenAI to chat with it

Structure

Under lib/services/openai/* you’ll find, for each model, the functions used to make API calls to OpenAI.

This makes it easy to call the AI models from anywhere in the app. For example, you can call GPT directly with the following logic:

example
import { generateOpenAIResponse } from "@/lib/services/openai/gpt";

const responseData = await generateOpenAIResponse(
  prompt, // your prompt
  functionCall, // your JSON schema defining the output structure
  toolConfig.aiModel, // the model to use, e.g. GPT-3 or GPT-4
  toolConfig.systemMessage // the system message that gives the AI model its role
);
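
For illustration, here is a hypothetical functionCall argument in OpenAI’s function-calling style. The exact shape generateOpenAIResponse expects depends on the implementation in lib/services/openai/gpt, so treat this purely as a sketch:

example
// Hypothetical example only: a JSON schema in OpenAI's function-calling style.
// The exact shape generateOpenAIResponse expects may differ.
const functionCall = {
  name: "extract_summary",
  description: "Return a short summary and a list of key points",
  parameters: {
    type: "object",
    properties: {
      summary: { type: "string" },
      keyPoints: { type: "array", items: { type: "string" } },
    },
    required: ["summary", "keyPoints"],
  },
};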

You’ll notice that each API route follows a similar structure (sketched after this list):

  1. Takes in input from the front-end
  2. Generates a new prompt based on the user inputs
  3. Calls the API using the prompt + JSON schema
  4. Extracts the JSON response
  5. Stores the JSON response in Supabase
  6. Returns a UUID to the end user, which the frontend uses to fetch and display the response from Supabase
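
As a rough sketch of that flow, a Next.js route handler might look like the following. The route path, table name, environment variable names, and Supabase client setup here are illustrative assumptions, not the exact code in the repo:

example
// Illustrative sketch of the six steps above; table names, env variables,
// and the Supabase client setup are assumptions, not the repo's actual code.
import { NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";
import { generateOpenAIResponse } from "@/lib/services/openai/gpt";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY! // assumed env variable names
);

export async function POST(request: Request) {
  // 1. Take in input from the front end
  const { topic, toolConfig, functionCall } = await request.json();

  // 2. Generate a new prompt based on the user inputs
  const prompt = `Write a short analysis of: ${topic}`;

  // 3. Call the API using the prompt + JSON schema
  const output = await generateOpenAIResponse(
    prompt,
    functionCall,
    toolConfig.aiModel,
    toolConfig.systemMessage
  );

  // 4 & 5. Extract the JSON response and store it in Supabase
  //        (the "generations" table is an assumption)
  const id = crypto.randomUUID();
  const { error } = await supabase.from("generations").insert({ id, output });
  if (error) {
    return NextResponse.json({ error: error.message }, { status: 500 });
  }

  // 6. Return a UUID the frontend can use to fetch & display the result
  return NextResponse.json({ id });
}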

More information on the structure of the codebase can be found here:

Structure

Understand the project structure of the codebase