What if you could build an app that builds itself? That's the question Bryant seeks to answer in one hour on this AI-themed episode. Follow along as he builds a custom Directus extension that connects to the OpenAI GPT-4 API and updates the project's underlying data model based on a simple prompt.
Speaker 0: Hi guys. Welcome back to another episode of 100 Apps, 100 Hours. I'm your host, Bryant Gillespie, developer advocate at Directus, and today we are tackling the elephant in the room: AI. So we're gonna be building an AI app generator. By that, I mean an app that can update itself with new data models and new interesting ways to work.
So what's the inspiration for this? Over the last couple weeks, I've been seeing all these videos on X or Twitter, whatever it is, that show an application called tldraw, where you can go in and sketch something out inside this quick little application, click a button, and it will use the OpenAI Vision API and some of their other tools, I guess, to actually generate real code that makes whatever you sketch come to life. So I'm really excited by that. We're gonna try to replicate something similar here. I'm calling this the Mighty Morphin' App Ranger because I grew up with Power Rangers.
I think it was one of my first Halloweens. So if you're new to the series, let's dive in. There are two rules. We have 60 minutes to plan and build this application, no more, no less, and the second rule is just use whatever you have at your disposal. So in this AI episode, it should be pretty interesting to see how far we can get with this.
One of the ways that I wanna start here is just by charting things out, and we'll try to upload that chart to ChatGPT or OpenAI and spit something back out that we can use. So without further ado, let's dive in and start building. Like I said, I like to plan everything first. So let's just draw some stuff on our artboard here and kind of sketch out what we want this actual application to do. At the very beginning, we're gonna draw a diagram of an app.
We'll upload it and, what, send that to the application. Let's shrink this down. That's really large. Alright, you can tell even though I use Figma quite a bit, it's still not my strong suit. So we'll draw a diagram of an app.
We will use OpenAI Vision to create a data model based on the diagram, and then we're going to update our app based on that code or data model automatically. So we want the app to automatically update itself, but before we dive into that, let's just try to prove the concept out. Right? So let's draw an app. Let's start with something simple like a CRM.
Here's our CRM app. What are the different data models or the different tables, the different collections as we call them inside Directus, which is what we're gonna be using for our back end and what we're gonna build our application with? So we've got contacts. We've probably got some organizations. We've got a relationship between those.
We probably have some deals, and we can use some arrows for the relationships. And if we just look at contacts and orgs, we probably have maybe, like, a junction table or something here. I don't think I left enough room, but let's call this, what, organization contacts. Give that a little more room.
Okay. Alright. So there's our basic diagram. Maybe we go in and inspect some fields out for this as well. We got a first name, last name, email, phone, title.
K. We've got a name for the organization. We've probably got an address. We've got, what... what else is on the address? I got lost in my Figma file here.
So address, name, we got some contacts. We probably got organizations that those contacts belong to. That's probably good enough. For the deal, we probably have a deal name, deal value. And what else?
Close date maybe? Alright. So that is a pretty simple CRM app. Let's make our diagram pretty. And now let's just take a screenshot of this and let's just prove this concept out.
We can use just the regular ChatGPT for this because it has that underlying Vision API that we're gonna use once we start building our app. But I can upload this into ChatGPT and say something like this: you are a SQL, Postgres, and Directus master. Please create a SQL query that will create the data model for this CRM application as designed in the, what, in the diagram attached. Please fill in and make the data model more robust.
So, honestly, I'm just curious to see if this will actually work to begin with. What happens? Oh, no. Did I lose all of that? Did that really happen?
Open this up. Oh, no. Okay. Well, those are the hazards of the job. Right?
Restore recently closed. Please... you are a Postgres and Directus master. No hidden camera tricks on this show. Just fun here. Sometimes you fat-finger one of these.
Please create a SQL query that will create the data model for our app as in the diagram. No, the conversation is not helpful so far. Let's do a new chat. Restore recently closed.
Alright. Sometimes the technology doesn't work out so well. Please create a SQL query for Postgres, that's the database we're using, that will create the data model for an app based on the diagram. Alright. I'm gonna copy this just so I don't have to... what in the world is going on?
Alright. Let's just try this again. Holy moly. Wow. Okay.
So we've eaten up a lot of time there just messing with ChatGPT. That's a lot of fun, but let's see what this comes up with. Looks like it's starting to create the SQL data model, the SQL code. So if we just look at this, we've got a contacts table. That's good.
We've got the organizations table. We've got an associative table. That's a junction table. We have a table for our deals, and then we have some kind of statement that is altering the table. Okay.
Please rewrite to add a primary key of id for all the tables and use UUIDs. Just want to make sure we have a primary key for each one. Alright. Cool. Okay.
So now we've got the IDs. We're using the UUID instead of just an integer there. Alright. Great. Okay.
So looking at this, we can test it out. Right? How are we gonna test this out? What I've got set up for this app is basically a Docker Compose file that I'm using to generate my back end, which is Directus. And if I pull up my Directus instance and just log in... so we go to admin@example.com.
We do the password. We'll log in, and if I zoom in just a little bit, you can see this is a completely blank app. Nothing here. So what Directus does behind the scenes: it sits alongside your Postgres database and it will introspect that database, which basically means any changes that you make inside that database, or any SQL database, it doesn't have to just be Postgres, it will mirror those changes in real time. Sounds great.
How does it work in application? Alright. So we go in, let's just pull up this database inside TablePlus. Right? So I can see all of my Directus tables here.
There are no other tables. We've got some items in here for, like, PostGIS, just so we can do geo data inside here. But let's just copy and paste this inside the application, or inside TablePlus, and we're gonna run this SQL query. The only thing that I'm gonna do here is change the organization contacts table. I'm just gonna change this so that there is an id and that's the primary key instead of a composite key.
But other than that, let's just run this thing. So hit run all. Okay. The table has been altered. If I refresh, we can see that we have created those different tables and, you know, the associated columns.
Great. Now if I load up Directus, I can see that Directus, just refreshing the screen, has recognized that those tables are inside our database. So I can go through and click to configure these tables. We can see that my different fields are coming through, and if we click on those we can configure what the actual UI would look like for these things. So if I just go through and click on a few of them, maybe we want to hide this specific field.
Directus really easily allows you to update this data and the presentation of it. So basically now we're just controlling the form that displays when we add a new contact. So let's just edit that a little bit, put the title down there. Great. Yeah.
So if we go in, we refresh, we've got our contacts, and now I can go in and add my contact. Great. 555-555-5555. Looking great. bryant@directus.io, and I'm a developer advocate.
Cool. So now if I go into TablePlus here, we can also see there's my record. Right? So Directus is really nice in that it mirrors everything, but it also gives me a REST-based API. So I could go into any of those collections, and if I were to just give read access to the general public... we could also create different user roles here.
But if I were to go in and just copy my address, I could do something like this where we do /items/contacts, and boom, I get a ready-to-use REST or GraphQL API that we can use to build our back end with; a quick sketch of what that call looks like is below.
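To make that concrete, here's a rough sketch of hitting that items endpoint from code; the URL and token are placeholders for your own instance, and the response shape assumes the standard Directus data envelope.

```typescript
// Minimal sketch: read the contacts collection over Directus's REST API.
// The URL and token are placeholders for your own instance and credentials.
const response = await fetch('http://localhost:8055/items/contacts', {
  headers: { Authorization: 'Bearer YOUR_STATIC_TOKEN' },
});

// Directus wraps results in a `data` property
const { data: contacts } = await response.json();
console.log(contacts); // array of contact records with the fields we just created
```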
That's great. This is really cool. How can we take this further? How can we make this application able to adjust itself? What does that actually look like? So Directus has built-in automation using flows. With flows, you can set up any type of simple or complex automation for your data. You can make third-party API calls, receive incoming webhooks, all of those great things. And I'm thinking we can use a combination of flows and ChatGPT, or the OpenAI API, that's a mouthful, to actually be able to send this a prompt and have it update the data model automatically. Now, there are multiple ways to adjust the schema or data model of a Directus application. If we look at the documentation, Directus actually has some schema endpoints where you can go and snapshot your data model. So you make a request, you get your schema... Let's see if we can see the sample response.
This is the diff, so it actually takes a snapshot of the existing schema and then compares that with what you've got and comes up with a difference. But it's basically a bunch of JSON data that represents the data model inside our database, inside our Directus instance. It is a fairly specific and complex syntax, though. So I'm not sure about OpenAI being able to actually parse this. You know, I'm really concerned about the accuracy for this because it's kind of a niche syntax that it has to adhere to.
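For reference, this is roughly how those schema endpoints get used; the URL and admin token are placeholders, and the exact payload shapes are worth checking against the Directus docs for your version.

```typescript
// Sketch of the schema endpoints: snapshot the current data model, then ask
// Directus to diff a snapshot against the running instance. Placeholders throughout.
const headers = {
  'Content-Type': 'application/json',
  Authorization: 'Bearer ADMIN_TOKEN',
};

// GET /schema/snapshot returns the whole data model as JSON
const snapshot = await fetch('http://localhost:8055/schema/snapshot', { headers }).then((r) => r.json());

// POST /schema/diff compares a snapshot against the current schema
const diff = await fetch('http://localhost:8055/schema/diff', {
  method: 'POST',
  headers,
  body: JSON.stringify(snapshot.data),
}).then((r) => r.json());

console.log(diff); // the changes Directus would need to apply
```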
Right? So maybe we could go the SQL route, because it seems like OpenAI understands SQL pretty well. Just guessing. I'm not sure if it does or not. I guess we'll find out.
Right? So how can we actually do this? Directus doesn't allow you to run raw SQL, rightfully so. It's an easy way to blow up your database. But I think that's how we're gonna have to do this.
So what can we do? Let's reach for a custom extension. So I'm just gonna go to the docs. We're going to create an extension, and so let's pull up our application. I'll just open this up.
Alright, so we've got the npx command. Yes, we will create a Directus extension. And there are a couple different types of Directus extensions that you can create. I say a couple, I mean there's a lot of Directus extension types, but what we're going to do is create an endpoint that we can call with a SQL query provided by OpenAI and actually run that query. So first and foremost, Nat, maybe we can get, like, a big disclaimer on here.
Do not do this sort of thing in production. Our engineering and security teams will probably both be mad at me for this, but we're going to do the custom endpoint extension. What are we gonna call this? Let's just call it raw-sql; that's the name of our extension. We can use TypeScript, that's fine.
And boom. So Directus will go through and scaffold out this extension. I can see it here inside my raw-sql folder. Cool. Alright.
So now what I wanna do is just drag and drop this into our extensions/endpoints directory, and I'm gonna do one other thing here as well. When we actually build this extension, I'm just gonna drop it into the root of that raw-sql directory instead of a distribution or dist directory, because it has to be inside the root directory for Directus to pick up on it. Alright. So now we're gonna go to extensions/endpoints/raw-sql. Okay.
We're gonna do npm run dev, or it says npm run dev, so let's run that. And one of the other things that I've got: if you are working locally with Directus and you're building extensions, probably one of the things you wanna do inside your config is set up extensions auto-reload, which basically means anytime you rebuild that extension, which, you know, we're doing here, it will reload it without having to restart the Directus instance, which is nice when you are building. Alright. So if we take a look at our existing extension here, we hit save.
It's gonna build that endpoint for us, and it looks like Directus is reloading extensions. So now if I go to our Directus URL, localhost:8055, and I go to /raw-sql, we get this hello world. Right? Great.
But what do we want to actually do now? We want to actually run a raw SQL query. So how do we actually make that work? How do we get that done? Underneath the hood, Directus is currently using a query builder called Knex.
I never know how to say this thing, whether you pronounce the K or not. But it does have a raw method that you can call to actually run a raw statement against your database. Again, giant disclaimer, the lawyers say don't do this in production because it is super easy to blow up your database, and also don't trust AI fully to build your application for you. Lots of red flags on this one, but it should be interesting nonetheless. Right?
So what are we gonna do? We are going to look at our custom endpoint, and Directus gives us access to that underlying database instance. The endpoint receives the router and the context. So if we do something like this, we've got our context, if I can actually type, and we're gonna do... let's make this a post route, and let's just call it raw-sql/run. Alright.
Cool. So we'll wrap this in, like, a try-catch. And what are we gonna do? We are going to pick up the query from the body. Yeah.
Okay. And we are going to do what? We're going to do something like this where we have context.database.raw, and then we're gonna run that query, and then we are going to return the result. Alright. Let's do our catch with error.message.
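If you're following along, here's roughly what that endpoint ends up looking like; treat it as a sketch rather than an exact copy of my file, and, again, please don't ship this to production.

```typescript
import { defineEndpoint } from '@directus/extensions-sdk';

// Sketch of the raw-sql endpoint extension. Directus mounts it at /raw-sql,
// so the route below answers POST /raw-sql/run.
export default defineEndpoint((router, context) => {
  router.post('/run', async (req: any, res: any) => {
    try {
      // Pick the SQL string up from the request body
      const { query } = req.body;

      // context.database is the underlying Knex instance; raw() runs the statement as-is
      const result = await context.database.raw(query);

      return res.json(result);
    } catch (error: any) {
      return res.status(400).json({ error: error.message });
    }
  });
});
```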
Now let's see what happens if I refresh. /raw-sql, nothing happens on this, but let's go to /run. Raw-sql/run does not exist. That's great. We want it to not work when we hit it like that, but let's just open up something like Postman so we can quickly test this.
I'm not really a curl guy. I always struggle with the command line. Alright. So we will pop this in. We've got raw-sql/run, and we should be passing what?
A raw JSON body. K. And we've got our query. And let's just say select everything from, what, contacts. Alright.
We hit post. Raw-sql/run does not exist. Why is that? Raw-sql/run. Why do we not exist?
Great question. We are 36 minutes in. Raw-sql/run does not exist. Alright. Let's see.
We're getting a... wait... oh, this needs to be, what, async. Okay. Boom. Alright.
So we just ran a raw database query. So you could see that query here, and it is returning the result from that query. Alright. Great. One of the other things that we could do here just to make this a little more robust would be to use the accountability object that gets included inside our services here.
Where is this at? So on each request we can take a look at the accountability. So we do something like this where only an admin user can do this. So if req.accountability.user... is admin? Is that what it is?
I think that's it. Only admins can run raw SQL queries. Did I spell that right? req.accountability. Let's see.
Only admins can run raw SQL queries, so now we have to be logged in to actually run that. And, cool. So now if I go to our admin user and just generate an access token, and if I pass that in our authentication here as a bearer token. Okay. Cool.
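As a hedged aside, I believe the admin flag lives on the accountability object itself rather than on the user object, so the guard I'd sketch looks like this; double-check the property names against your Directus version's types.

```typescript
// Sketch of the /run handler with the auth guard added. req.accountability is
// populated by Directus and carries the user, role, and an admin boolean.
router.post('/run', async (req: any, res: any) => {
  if (!req.accountability?.user || !req.accountability?.admin) {
    return res.status(403).json({ error: 'Only admins can run raw SQL queries.' });
  }

  // ...same try/catch and context.database.raw(query) logic as before
});
```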
So we've added a little bit of auth to this endpoint to, you know, run our SQL queries against the production database. Great. Alright. So now we've got our application here, and if I refresh, I can see we've got contacts, deals, organizations.
How we doing on time? Let's go in and let's add a new collection. We're just gonna call it, what, app, transformations, mighty morphin' power rangers. You know, we could probably even call it, like, AI migrations or something like that. Right?
We'll generate a type for it. Do we really need these extra fields? Maybe we can do that. Alright. Alright.
So let's have a... here's the prompt. And I'm trying to think of ways that we could make this more robust. Let's see, the SQL. Let's call this query, and let's add another field. Maybe we could ask OpenAI to give us a way to undo.
Okay. Alright. So now if we do this... yeah, maybe we should have that status field as well. Has it been applied or not?
So let's just do a dropdown. So status: we'll do applied and unapplied. Okay. And then the default value here will be unapplied. Cool.
Alright. Great. Okay. So now we've got our AI migrations. You know, I could go with the chart thing, but we could also make this fun where anybody can create new data for those, or new applications.
So let's start by building our flow first. Alright. Flows are how we automate data inside Directus. So let's create a new flow, and we'll call it call OpenAI, and we could trigger this a couple of different ways. We could trigger it manually, or we could do it on an event hook, in this case.
So when a certain event happens inside the platform, we trigger this flow. Let's do the action non-blocking, and then anytime we create a new item in AI migrations, we'll trigger this flow. So I'm just gonna stop here. We'll go to AI migrations and we'll say: create a Postgres data model for an LMS. Right?
So I save this and if I go back to my flow and if I actually make this a little larger where you could see it, I can see my logs over here on the right hand side. Here's our payload. There's the prompt, and then we could see other things like our accountability. Is this a logged in user or not? Great.
Alright. So let's move on to the next step. Right? Once we've got that prompt, we wanna pass that to OpenAI. So let's call the... the what?
OpenAI? I don't even know what to call this. Call AI. We're gonna call the API, and we are looking for the webhook and request URL operation. So we'll go in, and I'm pretty sure this will be a post request, but let's just open up OpenAI.
Let's open up their platform. Got our API keys. So we're gonna need a new API key. This will be deleted by the time you guys watch, so please don't try to steal my API keys here. But if we look at their documentation, right, we've got the different models like GPT-4.
That's probably the best one that we wanna use. GPT-4 Turbo, improved instruction following. So maybe that's the model we wanna use. Let's do this. Chat completions.
So if we're doing this with curl... Alright. Here's the endpoint that we're probably going to hit. Great. Okay.
Alright. So if I just separate these two, we'll drag that over. Get our text generation. So let's copy this URL. There's our chat completions endpoint.
For our headers, we're gonna have Content-Type: application/json. Alright. And then we have our authorization. We have Bearer, and I've got my API key.
I'll just copy paste that there. Alright. So in the body of the request is where the meat and potatoes are here. Alright. So for our model, let's use the most expensive one.
Right? Where are our models at? GPT-4 Turbo. That one has vision. Okay.
Great. Let's just use this guy here, 1106. Fancy, fancy. Alright. So we've got our model that we're gonna use, and then we give it a prompt.
Right? You are a helpful assistant; user; assistant. Let me just edit this in my text editor real quick. Alright. So we're gonna do something like this where this is gonna be our trigger.payload.prompt.
We'll delete the rest of these, and then let's give it some system instructions. Right? But let's use ChatGPT for that as well. Right? So: for the GPT-4 model, write some system instructions that will always make it generate SQL queries that create well-rounded data models for apps as described by users.
Let's see what it comes back with. Really weird kind of having a conversation with AI in this case. Blah blah blah. Normalization. Include comments.
Is this actually going to give me something or not? Let's take a look at where we're at. We've got 25 minutes left. ChatGPT is, like, killing me with the details here. Alright.
So I guess we could just go with this. Let's just type something out. You are a Postgres and Directus expert full-stack developer. Users will describe an application, and you will write a SQL query that will build the data model for that application. Use your expert knowledge to create a more robust application than the user has described.
Do not use composite keys. Use only UUIDs for primary keys. Call the primary key fields, what, id.
Use your best judgment to create the data model. Okay. Sounds good. There's our prompt. Okay.
We'll just paste this into the body. Hit save. And let's just look at this real quick. There's our payload. Trigger payload prompt.
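For clarity, the body pasted into that request operation looked roughly like this; the model string is just what was current at recording time, and the double-curly template is how Directus flows interpolate the trigger payload, so verify both against current docs.

```typescript
// Sketch of the JSON body for the flow's request operation that calls
// POST https://api.openai.com/v1/chat/completions.
const requestBody = {
  model: 'gpt-4-1106-preview',
  messages: [
    {
      role: 'system',
      content:
        'You are a Postgres and Directus expert full-stack developer. ' +
        'Users will describe an application and you will write a SQL query that builds its data model. ' +
        'Do not use composite keys. Use only UUIDs for primary keys and call the primary key field id.',
    },
    // Directus flow templating pulls the prompt off the item that triggered the flow
    { role: 'user', content: '{{$trigger.payload.prompt}}' },
  ],
};
```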
Okay. Alright. Let's see what we get back from here, and then maybe I just want to go in and actually update that. So we'll say update migration. And maybe we can make this a two-step process, or maybe we'll just run it.
We'll see. So we've got update data. Let's go into our migration, and then we have trigger.key. And the payload here is going to be query, and what did we call that? Let's just do this.
Just leave that blank for now. Let's give it a shot. Right? Let's see what happens. Please help me build an LMS.
There are many courses. Each course has several modules and each module has many lessons. Okay. So if I save this, that should call the OpenAI API and return some data or return something. Right?
Why are we not getting any logs here? Did this actually happen? What's going on here? Alright. Let's try it one more time.
Or maybe it's still building. What's going on? Not sure. We are at 21 minutes remaining. Okay.
Yeah. It just takes a little while. Alright. So here's our payload from OpenAI. Okay.
So we can already see there's a problem here. Right? It's returning additional stuff, which kinda sucks. So we need to adjust our prompt for that. Right.
You will only return the SQL query. And what was that I saw about JSON mode? Maybe that is something we can use here. Text generation, JSON mode. Okay.
Yeah. That's probably what we need to enable. How do we do that? Response format, type JSON object. Okay.
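Going by the OpenAI docs, enabling JSON mode means adding a response_format field to that same body, and the docs also note that the prompt itself has to ask for JSON or the API rejects the request. A sketch of the adjusted body:

```typescript
// Same chat completions body with JSON mode enabled. The system message now
// explicitly asks for a JSON object, which JSON mode requires.
const requestBody = {
  model: 'gpt-4-1106-preview',
  response_format: { type: 'json_object' },
  messages: [
    {
      role: 'system',
      content:
        'You are a Postgres and Directus expert. Respond with a JSON object of the form ' +
        '{ "query": "<raw SQL that builds the data model>" } and nothing else.',
    },
    { role: 'user', content: '{{$trigger.payload.prompt}}' },
  ],
};
```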
Let's give this a shot, shall we? Alright. So save this. And let me just delete these other prompts that we've got. And, actually, we've probably got enough data now that we should be able to pick this up and get the choices back.
Okay. So we would just have to access this, but let's run it one more time just to see. Let me build an LMS. There are many courses. Each course has several modules.
Blah blah blah blah blah blah. So we are waiting for chat GPT to come back. But while we do that, oh, less than a minute. That's quick. Bad request.
The dumb dumb. Okay. Response format, JSON, JSON object, messages. Goofed something up. We'll just leave that there.
This is valid JSON. Looks like it. Let's try again. Alright. Delete.
Okay. Prompt. Flows. Logs. Okay.
So now it's doing its thing. What could we do next? Right? We're gonna run that SQL that it returns against that endpoint automatically. Kind of a scary thing, but let's see what we've got.
Has it returned yet? Nope. Still hasn't returned. Alright. We'll wait on that.
There we go. Okay. Alright. So was the payload coming back? Okay.
So there's the SQL statement. Okay. And... alright. So it's just returning the query. When using JSON mode, you must instruct the model to produce JSON via a message in the conversation.
Response format. Yeah. I feel like we need to set this response format. I'm not sure what I did wrong the last time. Model response format type JSON object.
This is invalid JSON or something? Not sure. It looks fine to me. Let's try it one more time. Otherwise, we'll just have to do something more interesting.
I would just have to, like, replace those last two or three characters. Clipboard. Help me build an LMS. Cool. Alright.
If this bricks, we'll just go back to what it was. Yep. Bad request. We cannot parse the JSON body of your request. Yeah.
I don't know why it's doing that. Alright. Okay. So, anyway, there's our data. Alright.
So what are we getting back from the API when this actually works? We're getting something like this. So I'm just gonna copy and paste this. So here's our response. K.
Great. Alright. So let's do an intermediate step where we're just gonna clean this data up. We'll go into run script, clean up query, and we're gonna do what? We're gonna get the data.
So the query equals data dot call_ai dot, what? Dot data, dot choices, first item in the array, dot message, dot content. Okay. And then we are going to, what, remove the first three characters. Right?
Okay. And let's just use AI again. Right? Fun. Fun.
Fun. Where did OpenAI go? Alright. ChatGPT. Let's create a new conversation.
Maybe we could use 3.5 for this because it's faster. Write a JavaScript function to remove the three backticks and the "sql". Okay. Cool.
So remove the backticks in the SQL code. Cool. Alright. There we go. There's our cleaned SQL code function, and then we're just gonna return that.
Return cleanedSQLCode. Alright. So let's make sure that's gonna get what we need. We've got data, we've got the replace, the content, choices, message, content. Okay.
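Pieced together, the Run Script operation ends up looking something like the sketch below; call_ai is just what I keyed the request operation in my flow, and the regex is one reasonable way to strip the markdown fences, so adjust both to your setup.

```typescript
// Sketch of the "clean up query" Run Script operation in the flow.
// `data` holds the results of previous operations, keyed by their operation keys.
module.exports = async function (data) {
  // The OpenAI response body lives under the request operation's key (here: call_ai)
  const content = data.call_ai.data.choices[0].message.content;

  // Strip the triple-backtick sql fences ChatGPT wraps around the statement
  const cleanedSQLCode = content.replace(/```sql|```/g, '').trim();

  return cleanedSQLCode;
};
```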
Let's give it a shot. Okay. And now for the next trick, we are going to post that to our endpoint. Right? So we will grab our localhost URL.
We're gonna do raw-sql/run. It's gonna be a post. We're gonna do authorization, Bearer, and this is gonna be the token that we used here. Alright. So this is the token for our Directus user.
What else do we need? Do we need anything else? Authorization... we probably ought to add the content type: application/json.
Great. Alright. Okay. So what are we looking at? What this should do is anytime we create a migration it will call OpenAI, ask it to generate that SQL, clean up that SQL, and then run that SQL command against our database.
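In effect, that last request operation does something like the following; the URL, token, and the clean_up_query key are placeholders from my flow.

```typescript
// Sketch of what the final request operation posts back to our own raw-sql
// endpoint. In the flow UI the body is a template string rather than real code.
await fetch('http://localhost:8055/raw-sql/run', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer ADMIN_STATIC_TOKEN',
  },
  // {{clean_up_query}} resolves to the result of the run-script operation above
  body: JSON.stringify({ query: '{{clean_up_query}}' }),
});
```

One gotcha worth flagging: interpolating a multi-line SQL string straight into a JSON template can easily produce an invalid body, which is likely what's behind the invalid payload errors coming up in a minute; escaping or stringifying the query inside the run script first is one way around that.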
So if this goes right, it could basically be an application that could build itself. If it goes wrong, it's gonna blow this thing up entirely. Alright. So we got about 11 minutes left. Let's just test this out.
Right? We've got our LMS prompt. Let's do it and see. Help me build an LMS. Each course has several modules.
Each module has several lessons. Alright. Flows. Okay. So it hasn't come back yet.
It is running. If I pop open Directus, let's open our Docker container. Go to the dashboard. We got Docker running here. We got our 100 apps.
I can see the activity here, but we're not sure what's going on. Alright. So still running still running. We take a look at our data model. Nothing's happening yet.
Here we go. Okay. So we got the data back from OpenAI. SQL code is not defined. Okay.
So some issues, and it's still not returning what we need. Return the raw SQL query; do not return anything else except for the raw SQL query. So sometimes AI is unreliable, and then we have... what did I goof? SQL code dot replace... oh, duh. That should be query.
Okay. Alright. Let's try again. Just delete this guy. Now this could get really fun where, like, you have AI itself, like on a cron job or something, just manually creating these prompts or automatically creating these prompts.
So k. That kicks everything off. What are we gonna do in the meantime while we wait? Right? I don't know.
I've got a list of dad jokes that we could go through here if you guys want. I struggled with Roman numerals until I got to 159, then it just clicked: CLIX. Yeah. Okay.
So if I look, here is my raw SQL that ran. If I look at the data model, no. Close. Alright. So let's look at our logs.
Here's a minute ago. Internal server error cannot read properties of replace. Okay. So there's our raw SQL. Did I forget to actually call the SQL?
What an idiot. Sometimes you have brain farts. Alright. So I forgot to actually paste in the query there, so it wasn't running anything. So this is gonna be cleanup query.
Alright. So if you remember, we are passing a query string into this. So we'll do this, clean up query, and that should solve the problem. Alright. Last time.
Right? Hopefully this goes through. Help me build the LMS. Fun with AI and code. How we doing on time?
Got 7 minutes left to prove this concept. All the engineers on our team are probably very upset if they're watching this. They put a lot of hard work into security and making sure people don't do silly things like this. Alright. So webhook request: bad request, invalid payload.
Why is it an invalid payload? JSON. Body, query. Should we just wrap that in a... what do we need to do here? Alright.
So we reformat the replace. Should we... that bit? I don't know if we need to JSON.stringify that. I guess we could. Return JSON.stringify.
Not sure what that's gonna do. And then maybe even let's at least save that query so we can pull it up later. Right? So we'll update that. We'll do this with the, what, cleanup request.
Cleanup query. Alright. Working on a time crunch here. What's going on? Fields to resolve.
Save. Edit. Some kind of weird behavior happening. We save... JSON, unexpected token. I don't know.
Let's try it again. Help me build an LMS. Flows. Flows. Flows.
Alright. So we'll just wait a minute. Also, think about these AI calls. They take a little time, especially the really fancy models. Right?
Logs less than a minute ago. Bad request. Why can't we get this to run? Unexpected token. Payload.
Did it update? Let's just look at the okay. So if we were to pop this in there, assuming we remove this, if we were to pop this inside table plus. Right? So if we open up our database, we go in, we run this SQL query.
Does this actually work? Run all. Okay. So it doesn't really jive. Okay.
So whatever we had previously should be what we go back to. Alright. When in doubt, turn to AI? I don't know, man. This one is challenging.
I thought we could totally get this done. We are not able to pass this. Let's just do this where we go into our query. We have return query.
That's gonna be our cleaned SQL code. Alright. So we're returning that, and then we're gonna pass that cleanup query directly. So if I go into edit raw value, clean up query. Okay.
And then I'm just gonna disconnect this piece for now. Man. Okay. Cutting it close against the clock. I don't know why I'm always up against the clock on this particular show.
Alright. Let's try something else. Right? Help me build a CRM. No.
Build an ecommerce platform for selling watches. Products, categories, prices. Let's just see what it comes back with. Flows less than a minute. Call OpenAI.
Bad request. Okay. Why is that? Let's just try the standby here. Help me build.
Alright. Okay. So that's doing its thing. We will try this as well, and otherwise, if it doesn't come back in the next 50 seconds, it feels like a public fail to me. Man, I was really hoping we could get this one to work properly.
Sometimes it doesn't work out though. This one would be one to do a follow-up on though. Right? Oh, boy. Okay.
Okay. Okay. Let's see what we got here. Right? Did this actually do what it wanted to do?
Oh, there we go. How about that? Right? So now we've got our courses. Let's take a look.
We've got a title and description. If we look at our lessons, what do we have? We've got title and content. We've got our modules that has our course relationship all ready to go for us. Right?
Woah. Look at that, guys. At the buzzer, we came through. This is now a Mighty Morphin' App Ranger that will build itself. Right?
What can we do next with this? We could go in and create these prompts manually, but I could also go in and do this, like, on a cron job. Let's just explore it further. I just wanted to take this one more stop. So if we do create new prompts and set the schedule to, what, let's just make it whatever.
I forget the syntax. What's the cron syntax? I wanna say we use the six-field syntax. Okay. Seconds, minutes, hours.
Okay. So, again, ChatGPT. Just for fun: create a cron syntax statement that will run every 2 minutes. Okay. Alright.
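For reference, a sketch of that expression in both common forms; which one the Directus schedule trigger expects depends on the cron library behind it, so check the flows docs.

```typescript
// Every two minutes, standard 5-field cron: minute hour day-of-month month day-of-week
const everyTwoMinutes = '*/2 * * * *';

// The same schedule with a leading seconds field, for 6-field cron parsers
const everyTwoMinutesWithSeconds = '0 */2 * * * *';
```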
So running every 2 minutes. There we go. Got that. So I'm just gonna go back into my Directus application. Where are you?
Where did it go? Got too much going on. Is it over here? Okay. Alright.
So we've got our cron syntax. That's gonna trigger and then but what if we call OpenAI again? Okay. I tell you what, let's do this on a follow-up. We're gonna eat up a lot of time here, but, just to prove that this is working again, let's do please help me build an ecommerce platform for selling watches.
Pricing, let's just leave it at that. Right? Let's see what it does. Call create new prompts 3 minutes ago. Okay.
This is coming back. And, again, I can watch my Directus instance just to see if it is running that SQL statement. I don't see it running just yet. OpenAI is still thinking. But, wow, guys.
This one was really fun. Pretty crazy what you can generate with AI. I hope you guys enjoyed this one. Stay tuned for more episodes in the series. Did it actually work?
500 internal server error. What happened? Please make sure... okay. It tried to kick something else out to that.
So it requires some fine tuning on the AI, but I think this is totally viable and totally an interesting project to build an app that can update and change itself. So that's it for this episode. Thanks for sticking around. Catch you on the next episode.