# Convex AI Workshop
Welcome and thanks for joining!
To learn more about Convex, check out https://www.convex.dev.
In this workshop we'll build an AI-powered app; you can play with a deployed version of it here: https://ai-workshop-demo.vercel.app.
As you follow the steps below, mark whether you've completed a given step.
## Step 1: Set up your dev environment
Make sure you have the following installed on your computer:
- a text editor (VS Code is a good option)
- a POSIX-like terminal (VS Code includes one)
- Git (https://git-scm.com/download)
- Node (https://nodejs.org/en)
## Step 2: Install the template

In your terminal run:

```sh
npm create convex@latest ai-workshop -- -t get-convex/ai-workshop
```

Then navigate to the newly created directory, as the command suggests:

```sh
cd ai-workshop
```
## Step 3: Launch the template

In the newly created directory, run:

```sh
npm run dev
```
This will first ask you to log in to Convex. You'll create a free account and a new project for your backend.
Afterwards it should open two pages in your browser:

- One is the Convex dashboard for your backend
- The other is the template web app running on `localhost`

In the web app you can type a prompt and get a text response.

Check the dashboard: on the `Data` page, as you add prompts, they show up in the `prompts` table.
## Step 4: Get your own OpenAI API key (optional)
To get the web app working, you'll need an OpenAI API key.
1. Open your Convex dashboard deployment's `Settings` page and find the `Environment Variables` section.
2. Click on `+ Add` and paste in `OPENAI_API_KEY` as the name.
3. Go to this page: https://platform.openai.com/api-keys.
4. If you don't have a free OpenAI account, create one.
5. Click on the `+ Create new secret key` button and create a key.
6. Finally, paste the key as the value for the environment variable on the Convex dashboard and hit `Save`.
You can now test your app again. It should work and generate text.
## Step 5: Unblock image generation
Right now your version of the app cannot generate images. Let's fix that!
First, we'll add the Image option in the UI.

Open the `src/App.tsx` file and, below this line:

```tsx
<SelectItem value="text">Text</SelectItem>
```

add an option for images:

```tsx
<SelectItem value="image">Image</SelectItem>
```
Hit save and check that the web app now lets you select Image as the output type.
Back in the `src/App.tsx` file, if you scroll up a little you can see that this will enable the client to set the `outputType` to `"image"`. This state is passed to the call to `addPrompt`, which is a call to a "mutation" called `api.ai.addPrompt`. If your editor supports Jump to Definition, click on the second `addPrompt`.
## Step 6: Understand the `addPrompt` mutation
You should now have the `convex/ai.ts` file open (if not, open it and find `export const addPrompt`). You should see the `addPrompt` mutation definition.

This is a public endpoint exposed by your backend, which your web client calls when you hit the `Generate` button in your frontend.
You can see that the mutation specifies three `args`: `sessionId`, `prompt` and `outputType`. The `v.` syntax allows Convex to precisely validate that the arguments to your endpoints are what you expect, and lets TypeScript infer the argument types (if you're familiar with Zod, you're right at home).
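To build some intuition for how one value can do both jobs, here's a minimal, hand-rolled sketch of the idea in plain TypeScript. This is not Convex's actual implementation — just the pattern: a validator object that checks data at runtime and also carries a type that TypeScript can infer from.

```typescript
// A hand-rolled sketch of the validator idea (NOT Convex's implementation):
// each validator checks a value at runtime AND carries its TypeScript type.
type Validator<T> = { check: (x: unknown) => x is T };

const vString: Validator<string> = {
  check: (x): x is string => typeof x === "string",
};

// Infer a plain object type from a record of validators.
type Infer<A extends Record<string, Validator<any>>> = {
  [K in keyof A]: A[K] extends Validator<infer T> ? T : never;
};

const argSpec = { sessionId: vString, prompt: vString };
type Args = Infer<typeof argSpec>; // { sessionId: string; prompt: string }

function validateArgs(raw: Record<string, unknown>): Args {
  for (const [key, validator] of Object.entries(argSpec)) {
    if (!validator.check(raw[key])) {
      throw new Error(`invalid argument: ${key}`);
    }
  }
  return raw as Args;
}

const args = validateArgs({ sessionId: "abc", prompt: "a red panda" });
console.log(args.prompt.toUpperCase()); // TS knows prompt is a string
```

The single source of truth (`argSpec`) drives both the runtime check and the static type, which is the same trick Convex's `v.` validators (and Zod schemas) rely on.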
This mutation does only two things:

1. It writes (`insert`s) a document into the database that represents the prompt we submitted. This document has `result` set to `null` because, well, we don't have any result yet.
2. It schedules an action, `internal.ai.generate`, and passes it the `id` of the document we just created.
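The shape of this pattern — insert a pending document, then schedule slow work to fill it in — can be sketched in plain TypeScript, with a `Map` standing in for the database and `setTimeout` standing in for the scheduler (illustrative names, not the actual workshop code):

```typescript
// Analogy for the addPrompt pattern: write a document with result: null,
// then schedule background work that fills the result in later.
type PromptDoc = { prompt: string; result: string | null };

const db = new Map<number, PromptDoc>();
let nextId = 0;

function addPrompt(prompt: string): number {
  const id = nextId++;
  db.set(id, { prompt, result: null }); // 1. insert with no result yet
  setTimeout(() => generate(id), 0);    // 2. schedule the slow work
  return id;
}

async function generate(id: number): Promise<void> {
  const doc = db.get(id)!;
  // stand-in for the real OpenAI call
  db.set(id, { ...doc, result: `echo: ${doc.prompt}` });
}

const id = addPrompt("hello");
console.log(db.get(id)?.result); // null — the mutation returns immediately
setTimeout(() => console.log(db.get(id)?.result), 10); // "echo: hello"
```

This is why the mutation itself stays fast: the caller gets an immediate response, and the result shows up in the document once the scheduled work completes.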
## Step 7: Implement image generation
If you scroll down, you can see that the `generate` action currently does one thing: it calls `generateText`.
You can read the source of `generateText` below. It uses `fetch` to get a text completion result from OpenAI. I used `fetch` here to show that you can call any API from an action, although we could have used the OpenAI TypeScript library as well.

`generateText` then calls an internal mutation to write the result to the database, or to delete the prompt in case an error happened.
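That "write the result, or delete the prompt on error" behavior is worth internalizing, because `generateImage` below follows it too. Here's the pattern in isolation, as plain TypeScript with illustrative names (a `Map` stands in for the table; this is not the actual workshop code):

```typescript
// Sketch of the write-result-or-delete pattern that generateText follows.
type PromptDoc = { prompt: string; result: string | null };
const promptsTable = new Map<number, PromptDoc>();

async function completeOrDelete(
  id: number,
  work: () => Promise<string>
): Promise<void> {
  try {
    const result = await work();
    const doc = promptsTable.get(id)!;
    promptsTable.set(id, { ...doc, result }); // success: persist the result
  } catch {
    promptsTable.delete(id); // failure: remove the dangling pending prompt
  }
}

async function demo() {
  promptsTable.set(1, { prompt: "ok", result: null });
  promptsTable.set(2, { prompt: "boom", result: null });
  await completeOrDelete(1, async () => "a completion");
  await completeOrDelete(2, async () => {
    throw new Error("API error");
  });
  console.log(promptsTable.get(1)?.result); // "a completion"
  console.log(promptsTable.has(2)); // false — failed prompt was deleted
}

demo();
```

Deleting on failure keeps the UI from showing prompts that will never get a result.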
Let's add image generation. Replace:

```ts
await generateText(ctx, args.prompt, args.id);
```

with:

```ts
switch (args.outputType) {
  case "text":
    await generateText(ctx, args.prompt, args.id);
    return;
  case "image":
    await generateImage(ctx, args.prompt, args.id);
    return;
}
```
And add the `generateImage` function anywhere in the file:
```ts
async function generateImage(
  ctx: ActionCtx,
  prompt: string,
  id: Id<"prompts">
) {
  const response = await fetch("https://api.openai.com/v1/images/generations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${await getOpenAIKey()}`,
    },
    body: JSON.stringify({
      model: "dall-e-3",
      prompt,
      n: 1,
      size: "1024x1024",
    }),
  });
  if (!response.ok) {
    await generateFailed(ctx, id, response, `OpenAI API error`);
    return;
  }
  const json = await response.json();
  const imageResponse = await fetch(json.data[0].url);
  if (!imageResponse.ok) {
    await generateFailed(ctx, id, imageResponse, `Image download error`);
    return;
  }
  const result = await ctx.storage.store(await imageResponse.blob());
  await ctx.runMutation(internal.ai.setImageResult, { id, result });
}
```
This function is similar to `generateText`, but uses the OpenAI image generation API.

All the way at the bottom, it does something a bit different: it calls `ctx.storage.store` to save the file into Convex file storage.
## Step 8: Show generated images
Give the app a try now, and you'll find that it still doesn't work!
If you wait about 15 seconds after submitting an image prompt, you'll see that the box changes to show "Image URL is invalid" (feel free to open your browser developer tools and see what the img `src` attribute is).

This is because we saved a `_storage` ID into our table, but the frontend needs a URL. Luckily, Convex can serve files directly from its file storage!

We'll need to amend the `listPrompts` "query" function in `ai.ts` to return URLs instead of IDs.
Replace:

```ts
return prompts;
```

with:

```ts
return await Promise.all(
  prompts.map(async (prompt) => {
    if (prompt.result?.type === "image") {
      return {
        ...prompt,
        result: {
          type: "image",
          value: await ctx.storage.getUrl(prompt.result.value),
        },
      };
    }
    return prompt;
  })
);
```
As soon as you save the `ai.ts` file, you can see that the UI starts showing your images. This is because, after we push a new version of our code, the backend reruns all existing queries.
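The mechanism behind that automatic refresh can be illustrated with a toy subscription model in plain TypeScript. This is not Convex's implementation — just the idea: when the data (or code) behind a query changes, the query reruns and fresh results are pushed to every subscriber.

```typescript
// Toy illustration of reactive queries (NOT Convex's implementation).
class ReactiveQuery<T> {
  private subscribers: Array<(value: T) => void> = [];
  constructor(private run: () => T) {}

  subscribe(cb: (value: T) => void): void {
    this.subscribers.push(cb);
    cb(this.run()); // deliver the initial result immediately
  }

  // Called whenever the underlying data (or the query code) changes.
  invalidate(): void {
    const fresh = this.run();
    for (const cb of this.subscribers) cb(fresh);
  }
}

const prompts: string[] = [];
const listPrompts = new ReactiveQuery(() => prompts.length);

const seen: number[] = [];
listPrompts.subscribe((count) => seen.push(count));

prompts.push("a cat wearing a hat");
listPrompts.invalidate(); // subscriber gets the new count without refetching

console.log(seen); // [0, 1]
```

In the real system the client never calls anything like `invalidate` itself; the backend tracks what each query read and pushes updates automatically, which is why the UI refreshed the moment you saved the file.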
And that's it: we've got image generation working!
## Step 9: Deploy to production (optional)
It's good practice to make sure your app works in production, and it lets you share the app with others and get feedback! Let's use GitHub and Vercel to deploy this app to a public URL, though you could use any hosting service of your choice.
Let's push the code to GitHub. Run:

```sh
git init && git add . && git commit -m "initial commit"
```
Then create a new GitHub repo at https://github.com/new, pick a name, and choose whether you want the repo to be public or private. Leave the rest of the options at their defaults.
Now follow the "push an existing repository from the command line" instructions that GitHub shows you.
From here follow the instructions in the Convex docs: https://docs.convex.dev/production/hosting/vercel.
While you're in the Convex dashboard, set `OPENAI_API_KEY` in your Prod deployment settings as well.
If you got all these right and you hit `Deploy`, you will get a public `vercel.app` URL hosting your app. This app talks to your production backend, so it won't have any data initially. This way, changes you make while developing won't break your public app.
## Next steps
If Convex piqued your interest, I encourage you to expand the capabilities of the app:
- Add real user login with Convex's authentication integration
- Add speech as an output type with https://platform.openai.com/docs/guides/text-to-speech
- Enable searching prompts via text search
- Enable semantic search via vector search
- Allow loading more than a fixed number of results with pagination
Or build a completely different AI app, for example one based around chat: https://stack.convex.dev/ai-chat-with-convex-vector-search
Find out more about Convex at https://www.convex.dev.