Bryant takes on his most ambitious challenge yet: building an AI Copilot that integrates directly with Directus data. Watch as he races against the clock to create a custom chat interface that can query your database, implement tool calling functionality, and (attempt to) persist conversations. Things get meta when Bryant uses AI to help build his AI assistant—with some hilariously chaotic results along the way.
Speaker 0: Welcome back to another episode of 100 Apps, 100 Hours. I'm your host, Bryant Gillespie, for Directus. And today is a very ambitious hour. I've got no clue how this is actually gonna go, but we are going to try and build an AI copilot that sits right alongside our data, inside our back end, which is Directus. Alright.
If you're new to the show, 100 Apps, 100 Hours: we have sixty minutes to plan and build an application, a clone of Airbnb, or something we can actually ship and pat ourselves on the back for at the end of the day. Or, you know, more often than not, we struggle and then we fail publicly. You know, sometimes that stinks, but it's all in good fun. The second rule of 100 Apps, 100 Hours is use whatever you have at your disposal. So, AI tools, past projects, you know, if my kids could code at this point, I'd probably leverage them.
But, it should be a fun episode today. Let's dive in and try and figure out what we're actually doing. Let's put sixty minutes on the clock. Boom. AI Copilot go.
So, at this stage of the game, AI is impossible to ignore, and, you know, chatbots are everywhere, very commonplace. You know, lots of folks are using Claude and ChatGPT on a daily basis. Even my kids are aware of what it is. So, what we're gonna do is try to build an AI Copilot into our Directus instance. So I've got this blank Directus instance.
I'm imagining this as a module. Directus is super flexible as a back end and CMS because we've got full control over what's going on. I'm imagining it as a module with, like, a standard chat interface. I don't know if we'll get there or not, but let's just discuss functionality. Right?
We want to be able to chat with AI models, specifically LLMs. Cat. You have to actually learn to spell, Bryant. Alright. We wanna be able to chat with the LLMs.
We want to be able to use tool calling. I think that's the technical term, to fetch Directus data. Data stored in Directus. And we wanna do all this through a beautiful chat interface.
Alright. That's our functionality. Right? As far as our data model, we're gonna probably have, like, a conversations collection to, like, persist these. We've also got our data, for which I'm going to leverage some of our existing, like, starter templates.
Great. Okay. So there's the arrows drawn in the sand. Let's dive into this thing. The first thing I'm gonna do, I'm gonna load up some sample data.
So what I wanna do is go in here and create a token. I could just use my standard password as well, but we're just gonna copy that out. This is set up at localhost:8055. I'm gonna come into the terminal, open up a new terminal instance here, and once that spins up, I'm gonna run npx. I guess I can make this big so we can actually see it.
Right? npx directus-template-cli@latest apply is the command I'm gonna run, and this should load up some templates for me, or the ability to load a template. Fetching. Okay.
So we're gonna apply a community template. Looks great. Let's just do our Simple CRM for this one. You know, I could do the Simple Website CMS. Super excited about that template. I love that one.
For, like, an assistant, maybe CRM makes more sense. So we're defaulting to 8055. I'm just gonna add my token in there, and you could probably steal this token. This instance is gonna be destroyed, and I'm sure somebody could figure out how to get access to my localhost. I know we got a lot of really great hackers out there in the community.
That template was applied successfully. And now I'm just gonna hit reload, and the first thing I see is Michael Scott there. Alright. So now I've got some sample data to at least work with. We got some organizations.
Great. So let's dive into how we're actually gonna pull this off. If I map this out, basically, what we're gonna take advantage of are extensions in Directus. Extensions, extensions, extensions. Two types specifically.
Right? So for secure communication with our LLMs, we want to have an API proxy endpoint. So we're gonna set up an endpoint, and I'm pretty confident we can get this done. The actual chat UI, I'm not as sure about; that will be what we call a module. So a module is basically just whatever functionality you want.
We give you tools like composables to access data inside the Directus instance, and then we give you free rein. So, like, a good example of a module is our command bar, or command palette. This is an extension that you can install from the marketplace. What it does is it adds a Cmd+K search to every bit of Directus so I can quickly search no matter where I'm at, whether I'm on contacts and I wanna search organizations, blah blah blah. It also comes with, like, a settings page, and it's MIT licensed.
So, you know, if you are looking at building, like, a custom module, this is a good one to copy because it has a ton of functionality, and I had a small hand in it. Kudos to Hannes from our team for just making this freaking wicked awesome. Alright, but back to the task at hand: we're going to create a custom endpoint for this. So locally, I'm not sure if you can see this or not, we've got our databases here and then we've just got this extensions folder. There's the registry, which is where extensions go when you install them from the marketplace.
We're just going to create a new extension for this. So I'm going to open this up inside the terminal somewhere here, get this back to a reasonable level where I can actually develop, and I'm gonna do npx create-directus-extension@latest. Just look at the screen; I'm terrible at narrating. We'll go ahead and install this. It's been a minute since I've done this.
And we offer another type of extension, which is called a bundle. A bundle is basically a wrapper, fitting, since we're building a ChatGPT wrapper anyway: it's a wrapper where we can, like, bundle some extensions together. That's a bundle. Right, we can distribute an endpoint and a module within a bundle; that's what we're gonna do here.
So I'm gonna scroll all the way down, we're gonna do bundle, and we're gonna call this... Directus... let's just call it AI Copilot. That's what we'll call it. Yes, we wanna auto-install dependencies. And now inside our extensions folder, you can see Directus is fleshing this out for us. We've got the Directus Extensions SDK, and then we can see there's nothing else in here.
Right? So the next thing that we wanna do is cd into the copilot folder, and we're going to hit npm run add so that we can add an extension to this bundle. Yeah. If you're not using a bundle, like, it should automatically ask you for the extension type. But in this case, we want to do an endpoint.
We're going to call this chat-endpoint. Let's use TypeScript, and this will create a new Directus endpoint for us. Very simple. You'll notice a request and response here; behind the scenes, it's just an Express app. And next, we're going to talk about what we're actually going to use. Giving myself a seizure here, switching back and forth.
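For reference, the scaffold the SDK generates is basically just this hello-world endpoint (paraphrased from the template, so treat the details as approximate):

```ts
import { defineEndpoint } from '@directus/extensions-sdk';

// the generated scaffold: a plain Express router mounted under the
// extension's folder name, e.g. GET /chat-endpoint/
export default defineEndpoint((router) => {
  router.get('/', (req, res) => res.send('Hello, World!'));
});
```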
So, I've been experimenting in other projects with the AI SDK from Vercel. That's what we're gonna use here and try to get as far as we can and try to make this really nice in a short period of time. So let's fire this up, right? Docs. What do we need to install first?
Right? Do I want to use OpenAI or Anthropic? I think I've got an Anthropic API key locally already. Do I, do I not? Let me check the env. Yes. I do have an API key locally. Great. So let's do that. Right? We are going to look at the overview.
We want to install it: pnpm i ai. We're gonna use Anthropic as a provider, so @ai-sdk/anthropic. And because we're using Express here, we're gonna wanna install Express as well. We're probably also going to want Zod. Right, if we get into, like, the tool calling, Zod is a big one, so we'll add Zod.
That should give us the tools that we need to work on this, and let's just look at our custom endpoint now, right? The next thing I'm going to do is hit pnpm dev inside this. This will set up a watcher for our extension, and I'm just going to stop our container and restart it so it picks up that extension now that we've built it. And what this does, it should now be available at /chat-endpoint. Let's see.
Localhost. So if I refresh, we should see that extension. There's our chat endpoint. Let's hunt down the URL for this. /chat-endpoint.
Hello, world. There it is. Boom. Boom. Boom.
Great. So how can we control that if we want? Let's set up some of this from the documentation here. Alright. So the next thing: we can specify, like, an endpoint ID for this when we define the endpoint.
Right? If I put these up side by side: export default. We're gonna call this chat. Chat. We're gonna have a handler for this.
And in the handler, we'll provide the router and the context. Alright. Now we save. This should reload because I've got that watcher set. And now if we load this again, /chat-endpoint doesn't exist, but /chat will.
Right? Great. So now we can control what we want that endpoint to show up as. Awesome. Alright.
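In code terms, that's the switch from the bare function to the object form, where the id controls the mount path:

```ts
import { defineEndpoint } from '@directus/extensions-sdk';

// the object form: `id` overrides the folder name, so this now mounts at
// /chat instead of /chat-endpoint
export default defineEndpoint({
  id: 'chat',
  handler: (router, context) => {
    router.get('/', (req, res) => res.send('Hello, World!'));
  },
});
```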
What's next? Right? We're going to actually pull this in and set this up. Alright, so a couple of things that we're going to do that you probably noticed here, right? If we want the API key from our environment, we're gonna want to actually pull that out of the context, right?
So let's destructure this. We're also gonna do... do we actually need that? No. Let's just keep it simple for now. Let's look at this AI SDK, and we're gonna import createAnthropic, and we're going to create an Anthropic client.
Pass it the API key. We're gonna leave the base URL the same. Great. I'm not sure why we're duplicating that. Right.
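Sketched out, that client setup looks something like this. The env var name is whatever's set in the Directus environment; I'm assuming ANTHROPIC_API_KEY here, which is exactly the naming detail that bites in a few minutes:

```ts
import { defineEndpoint } from '@directus/extensions-sdk';
import { createAnthropic } from '@ai-sdk/anthropic';

export default defineEndpoint({
  id: 'chat',
  handler: (router, { env }) => {
    // build the provider client from the Directus environment;
    // the variable name is an assumption
    const anthropic = createAnthropic({
      apiKey: env['ANTHROPIC_API_KEY'],
    });
    // routes get registered below...
  },
});
```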
We're gonna add a router. This is gonna be a POST method. So let's now take a look at... did I have an Express option in here? Okay. So here's what a typical Express setup might look like.
So we're gonna import z from zod. We'll go ahead and import Zod. Let's also do streamText from ai. We're gonna use router.post, and let's see what the AI autocomplete comes up with. Message, request.body, stream, some model dated twenty twenty.
Let's see the newer model there. Provider instance, anthropic provider. Does this list out their models? Okay. Yeah.
Here's the latest model for Claude Sonnet. That's what we're gonna use in this case. And we got messages, role, user, content, message. We're just gonna send the stream. What is the... is there a helper inside here for this?
result.stream... result.pipeTextStreamToResponse. Let's call this the result. Alright. And let's see if this is actually going to give us what we want.
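Putting that together, the first pass at the route looked roughly like this (a sketch, assuming AI SDK v4's streamText and an approximate model id):

```ts
import { streamText } from 'ai';

// inside the handler, with `anthropic` already created above
router.post('/', (req, res) => {
  const { message } = req.body;
  const result = streamText({
    model: anthropic('claude-3-5-sonnet-latest'), // model id is approximate
    messages: [{ role: 'user', content: message }],
  });
  // helper that pipes the token stream straight into the Express response
  result.pipeTextStreamToResponse(res);
});
```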
Alright. So we post a message. This should send something to the AI. I'm just gonna go into my hundred-apps collection. We're gonna create a new request.
We'll say http://localhost:8055/chat. Oh, that's the name field and not the actual URL. Chat. Alright. We got a body.
We're gonna create a JSON body. It's gonna have a message: what is two plus two? Send. HTTP request... wrong version number or protocol, blah blah blah. Maybe we should wrap this in a try/catch, see what we got.
Oh, clean that up for me. Oh, well, no. No. No. Try/catch, console.error. Do we have formatting? I do have some formatting enabled in this. Let's just try to send the results.
res.send, there's the result. Extension's reloaded. What is this showing? streamText. createAnthropic, API key.
console.log. Do we have the actual env? We do have the env. Did I name it the correct thing? Oh, god.
What an idiot. HTTPS should be HTTP. Back to the stream, send result. I don't know that Bruno here will, like, handle streaming. What if we just tried, like, generateText? generateText.
Await it. Let's just make sure this is gonna actually work first before we get crazy with it. generateText, res.send(result). Alright.
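So the simpler, non-streaming version, just to prove the proxy works before fighting with streams, is roughly:

```ts
import { generateText } from 'ai';

// non-streaming sanity check: await the full completion, send it back whole
router.post('/', async (req, res) => {
  try {
    const { message } = req.body;
    const result = await generateText({
      model: anthropic('claude-3-5-sonnet-latest'),
      messages: [{ role: 'user', content: message }],
    });
    res.send(result); // includes text, finishReason, usage, ...
  } catch (error) {
    console.error(error);
    res.status(500).send('Something went wrong');
  }
});
```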
Let's pull chat over here. What's two plus two? We hit send. Okay. What is two plus two?
finishReason... "Two plus two equals four." There's the text we get back. Okay. So now we've got, like, some type of AI proxy going on. Right?
Let's jump to, like, tool calling at this point. We have, like, forty minutes left. You know, I read through the documentation for this. Not sure we're gonna get this correct, but alright. This may be where we just lean on AI to build AI and call it good.
What I'm gonna do, let's just copy all of this. Right? And this wouldn't be fun if we didn't, like, make it super meta and have AI create AI. Let's say something like this: "Here's the documentation for tool calling with the Vercel AI SDK in Node.
Create a tool for fetching data from the Directus instance using the ItemsService." Alright. So when it comes to, like, custom endpoints in Directus, one of the things that you do have access to is all the services that we use internally in Directus, and, hopefully, this is gonna do a somewhat decent job with that. We'll kinda see what it comes up with. But if I take a look at, like, the extensions docs and we go to extension services, like, how do I access items?
Right? Whenever I define a custom endpoint, we receive the, you know, router instance that we can plug into, and then we get this context. So the context has the env, has our services, and has this helper function for getting the schema. And then we define a new ItemsService based on the collection that we're gonna call. Right?
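That pattern from the docs looks roughly like this; the demo route is mine, just to show the moving parts:

```ts
import { defineEndpoint } from '@directus/extensions-sdk';

export default defineEndpoint({
  id: 'chat',
  handler: (router, context) => {
    const { services, getSchema } = context;
    const { ItemsService } = services;

    // hypothetical demo route showing the service pattern
    router.get('/demo/:collection', async (req, res) => {
      const schema = await getSchema();
      const service = new ItemsService(req.params.collection, {
        schema,
        accountability: req.accountability, // run as the calling user
      });
      const items = await service.readByQuery({ limit: 25 });
      res.json(items);
    });
  },
});
```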
Let's see what we come up with. So I'm just gonna hit apply. We'll kinda go through this. We got streamText, generateText, ItemsService. This looks okay. We're also gonna need the schema, services, getSchema.
We're gonna need that as well. Alright. So what is this actually doing? It's creating tools, multiple rounds, query from a Directus collection. What's the filter?
const schema equals await getSchema. Request accountability. Can we just leave that part off? Then readByQuery, filter, success, return the items, the count. Await generateText. Something is off here.
We got, like, one too many brackets. Where are we going wrong? There we go. Okay. Alright.
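Reconstructed, the tool the AI generated was shaped something like this (a sketch, assuming AI SDK v4's tool helper and maxSteps; the exact parameters differ from what Cursor produced):

```ts
import { generateText, tool } from 'ai';
import { z } from 'zod';

// inside the handler, where getSchema, ItemsService, and anthropic are in scope
const tools = {
  getItems: tool({
    description: 'Fetch items from a collection in the Directus instance',
    parameters: z.object({
      collection: z.string().describe('The collection to query'),
      limit: z.number().optional().describe('Max number of items'),
    }),
    execute: async ({ collection, limit }) => {
      const schema = await getSchema();
      // no accountability passed here, so it reads as admin
      // (the video leaves that part off)
      const service = new ItemsService(collection, { schema });
      const items = await service.readByQuery({ limit: limit ?? 25 });
      return { success: true, count: items.length, items };
    },
  }),
};

router.post('/', async (req, res) => {
  const { message } = req.body;
  const result = await generateText({
    model: anthropic('claude-3-5-sonnet-latest'),
    messages: [{ role: 'user', content: message }],
    tools,
    maxSteps: 5, // let the model call the tool, then answer with the results
  });
  res.send(result);
});
```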
So, hopefully, if we invoke this tool... there's a description. Now let's try a new message, and let's just ask it... let's pull up Directus. Right? Simple question. Right?
Like, how many organizations do we have? How many organizations are in our Directus instance? There are six organizations in the Directus instance. These are Health Plus, Technovus, Solartec, Tesla. Boom.
So, nailed it, spot on. We can see kind of what's going on here. There's the context that we provided, I guess. I can see, like, the answers there. That's great.
"I'll help you." Okay. So there's the actual organizations. Cool. So now we've got kind of this tool calling functionality built into this endpoint, which is pretty nice.
Do we want to make it aware of the schema of Directus? I'm not sure, like, is this giving it enough information? For tools, query collection. What does our actual schema look like? Right?
console.log, await. const schema equals await getSchema. Oh, it's gonna be an async handler. console.log(schema). We'll just stop logging that.
Directus flows, schema, deals, activities, deal contacts. Yeah. So the other thing to note: this could get expensive pretty quickly. Alright. Okay.
So now we are cooking at least with something. Right? Let's try to go in and focus on, like, this chat interface. Right? Now how are we going to do that?
We're going to go back to our extension. I'm going to disable this watcher for now. We're going to hit npm run add. We want to add a module for this. So this will give us... we'll call this a Copilot module.
Chat module, whatever. There we go. So this is gonna add another folder within our AI Copilot. So now we've got the chat endpoint and the Copilot module.
And within this module, now if we just build this, we should see the module become available in our settings for the project. Custom. Custom module. There we go. So I can actually define this.
Right? So each module, and a lot of the interface extensions, has this index.ts. We're gonna call this our Copilot module. Let's call it Directus AI Copilot. Magic. Is there a magic icon?
Oh, there's not a magic icon. What do we have as far as icons? Designer OCD gets me every time. A recovering designer. What is that one called?
Magic button. magic_button. See if that gets us where we wanna be. Discard. Refresh.
There we go. Alright. So I'm gonna enable this. We're gonna move it up to the very tippy top. There's our module.
Here's our custom content. Boom. Diddy boom. It's just a simple Vue component. Right?
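For reference, the module's index.ts ends up shaped something like this (the id and component name are whatever the scaffold generated, so approximate):

```ts
import { defineModule } from '@directus/extensions-sdk';
import ModuleComponent from './module.vue';

export default defineModule({
  id: 'copilot',                // scaffold-generated id, approximate
  name: 'Directus AI Copilot',
  icon: 'magic_button',         // the Material Symbols icon found above
  routes: [
    {
      path: '',
      component: ModuleComponent, // the simple Vue component
    },
  ],
});
```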
We can get as complex or as detailed as we want. On the client side here, we have access to some composables. Right? So on the back end, like the Node.js side, we've got, like, this ItemsService we can call.
On our other extensions... where's the extension types? There we go. Like interfaces, for example, accessing internal systems: we can access, like, stores through this. We can also access, like, the useApi composable or the SDK. Great.
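That useApi composable is the one we'll lean on; a quick sketch of what pulling it in looks like (the query below is just an illustrative example):

```ts
import { useApi } from '@directus/extensions-sdk';

// inside the module component's setup(): an authenticated client that rides
// the current user's session, so requests respect their permissions
const api = useApi();

// hypothetical example call
async function fetchOrganizations() {
  const { data } = await api.get('/items/organizations', {
    params: { limit: 10 },
  });
  return data.data;
}
```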
We'll just pull in the API. And now let's look at the documentation. So by no means am I an expert here, and this is probably where things are gonna get pretty hairy. Like, if we recap, we've got a proxy endpoint. We've implemented just a tool to fetch Directus data and include that in our Copilot.
Now we need to actually build this, like, this chat UI that we're so used to seeing, right? And Directus is using Vue, which I love. Vue is amazing. Most of these AI, like, pre-built chat UIs are React-based. So if you know of a good one for Vue, for building chatbots really quickly, let me know.
So let's go here to SDK UI. Right? They do have support for, like, Vue.js. I see that here, this useChat function. We're gonna need that @ai-sdk/vue package, so we might as well go ahead and install that as well.
@ai-sdk/vue. Okay. There's the API endpoints, conversational interface. Is there, like, a simple recipe for this? Stream text with a chat prompt.
Alright. So, again, this might be a great thing to lean on AI for. Here's a React example from the Vercel SDK. Or, actually, let's just piggyback off that last chat. Let's see. "Here's a React example for a chat UI from the Vercel AI SDK.
Oh, actually, I have to show it that. "Build a Vue version of this that works with our chat endpoint." There we go. See what AI comes up with. So it's creating another composable.
Why? Let's just use the built-in useChat from the Vue AI SDK. How do I go back to that? Chatbot, AI SDK, chatbots. Where was that? Under the API reference?
useChat, Vue. And I just... yeah. @ai-sdk/vue. Okay. And now we're getting somewhere.
We got some messages. We got v-button, v-textarea. No. Alright. Let's try and look at what's going on here.
handleSubmit, useChat, useApi, AI Copilot, message content. So here's our messages. The v-textarea and v-button are built-in components within Directus. It is using Tailwind, which... we don't use Tailwind in Directus. So, "use vanilla CSS instead," and then we can hit save and see if this is actually gonna work.
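Stripped of the Tailwind noise, the script side of that component boils down to something like this; the template just loops over messages and binds input and handleSubmit to the v-textarea and v-button:

```ts
import { useChat } from '@ai-sdk/vue';

// wire the chat state to our custom endpoint; useChat manages the message
// list, the input ref, and the submit handler for us
const { messages, input, handleSubmit } = useChat({
  api: '/chat', // the proxy endpoint from the bundle
  // note: by default useChat expects the SDK's data-stream protocol,
  // which is about to matter
});
```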
Right? Now, I will tell you, like, I've been using Cursor a lot recording this season of 100 Apps, 100 Hours. I like it a lot, especially for prototyping. I've found myself definitely suspect of the code sometimes, so don't just tap, tap, tap, accept. Yeah.
Be very deliberate. And before you ship anything to production, make sure you test all this and go back and understand what's going on. This is not going to work, obviously. I just know our variable syntax; all of these have --theme-- in there. So hopefully this solves that, but those are the things that you gotta watch, and sometimes where, like, the AI Copilot stuff can get in the way.
Right? How many orgs... organizations are in Directus? There is no icon button here. What would that be? Send.
Google Material Symbols. Is there a send? There is a send. We don't have an icon prop here as well, so we're gonna do something like this: v-icon name equals send. That should give us the icon that we're looking for.
Yeah. There we go. Okay. How many organizations are in Directus? Alright.
So is this actually gonna work? Who knows? We'll see. Chat is not coming back. Internal server error.
There's the messages. I don't think we're gonna need to pass the ID for this. Right? So if it's looking for a unique identifier for the chat, let's not use that. We got /chat.
Where is the localhost /chat? Alright. If we look at our endpoint, right, we need to adjust this. Right? We're gonna do streamText, and we want to get the result for that.
So we're gonna pipe that stream. Messages: request body dot messages. What does the example for this look like? Where's that Node one? Like, so much of development for me is diving through documentation and just bouncing back and forth.
Express. You can set up an Express server: stream, generateText, streamText with a chat prompt. streamObject, streamText. You probably just wanna look at, like, the actual APIs here. streamText.
What are we passing to streamText? We've got messages that we're gonna pass. So those are just gonna be coming from the body. Right? Those are gonna be messages.
We'll just pass those messages. And is that gonna get us what we want? Tools is already there. Do we need, like, a max tokens? No.
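So the adjusted route ends up shaped like this, a sketch: take the whole history that useChat POSTs, hand it to streamText, and pipe the text stream back:

```ts
import { streamText } from 'ai';

// inside the handler; useChat POSTs { messages: [...] } with the full history
router.post('/', async (req, res) => {
  try {
    const { messages } = req.body;
    const result = streamText({
      model: anthropic('claude-3-5-sonnet-latest'),
      messages,
      tools, // the Directus items tool from earlier
      maxSteps: 5,
    });
    result.pipeTextStreamToResponse(res);
  } catch (error) {
    console.error(error);
    res.status(500).send('Internal server error');
  }
});
```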
Alright. Let's just see how far this gets us. How many organizations are in Directus? Okay. There's the so we're seeing the response.
I can see the actual response there, but I'm not seeing it show up in our actual module here. Right? Messages. Alright. useChat.
Append. Let's just test what they're doing. Yeah. So that's kind of what we want there. Append, content, inputs.
Result toDataStreamResponse, system "helpful assistant", messages... I don't know why we have handleInputChange. Set, append. Let's see what's actually going on. This is where things get tricky. Right?
Is there a handleInputChange inside the useChat module? useChat. Initial messages, initial input, on... handleSubmit. What does this return?
handleSubmit. I don't think there is a handleInputChange. Right? So, again, this is where, like, things get dangerous with AI, where you take 10 giant steps forward, and then you take, like, six giant steps backwards just trying to figure out what's actually happening. So, chatbots.
Where's the core streamText? Well, actually, let's look at useChat on this side. Right? initialInput, initialMessages, onToolCall, onResponse. Ah, streamProtocol. streamProtocol should be set to text, I guess.
Let's see how many orgs are in Directus. Oh, there we go. Okay. Now we're getting somewhere. Right?
Okay. So now we're seeing the text being streamed in. If I wanted to select this, I can't, because of what we've got set up in Directus. But, with, like, eighteen minutes left, we've got chat with AI models. We've got tool calling.
We are somewhat through. Like, maybe we strike through half of that "do all this through a chat interface" item. Alright. Now... alright.
Let's dive into, like, persistence. What do you think of those names? Okay. Yeah.
We should probably have some type of, like, markdown formatting here as well. Is that what's coming back from the actual API? Is it just markdown? Yes. Okay.
So do we have a markdown helper inside Directus? directus/directus issues. Search markdown. Directus markdown. Can we access this directive?
The markdown directive. Are we binding that as md? v-md? v-md.
Okay. So can we do this? v-md equals message.content. Obviously, we're gonna lose that. Heyo.
Yeah. There we go. Now we're in some formatting. Looking nice. Looking nice.
Okay. Get specific fields from a collection. "I'm here to help you work with Directus collections." What do we have inside the database? Alright.
Contacts. Who is the coolest contact in our CRM? Each of these contacts has their own interesting qualities. Without knowing your specific criteria for coolness, it is hard to definitively say who is the coolest. Edon kind of sounds like Elon.
This might be a playful reference. Who knows? Who knows? Alright. So this is super interesting.
Right? Let's work on persistence. "Let's work on persistence to our Directus database. What would the best data model be, knowing the structure for messages?" So if we look at streamObject, streamText... and, like, our endpoint here is just streaming back with pipeTextStreamToResponse. Is there, like, a pipeDataStream?
Yeah. There you go. That's how we could get the actual messages. pipeDataStreamToResponse. Send message data object.
Okay. And then we should be able to, like, inside our module, use the stream protocol. Let's just set this and see if this is gonna work as well. Heyo. Okay.
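The swap is basically one line on each side, assuming v4's naming; the data-stream protocol carries tool calls and metadata alongside the text:

```ts
import { streamText } from 'ai';

router.post('/', async (req, res) => {
  const { messages } = req.body;
  const result = streamText({
    model: anthropic('claude-3-5-sonnet-latest'),
    messages,
    tools,
  });
  // data-stream protocol: tool calls and metadata ride along with the text
  result.pipeDataStreamToResponse(res);
});

// and client-side, useChat() goes back to the default protocol:
// const { messages, input, handleSubmit } = useChat({ api: '/chat' });
```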
But now, instead, what we're getting back is actually the message data as well, which is probably what we're gonna want. Alright. What did you come up with, friend? So we've got messages, conversations, messages. So this is actually trying to give us a Directus schema, which kinda scares me a bit.
So we got a conversations collection with a user. We got an ID. We got a title. We got a message schema. Messages: ID, conversation, role, content, tool calls, tool results.
Okay. So this is the type of response we're getting back. I'm just gonna save that. Right? Alright.
That was the payload that we sent. What is the response that we got? Okay. And now we got twelve minutes left just to, like, try to round this out and see if we can get this persistence part of it. Alright.
So we're gonna create conversations. This is gonna be a UUID for the ID, created at for the timestamp, created by, updated at, updated by. I will just go with what AI said here. I really don't wanna think too much about this. We'll do a many-to-one relationship here. We'll call that the user.
The related collection we're gonna use is directus_users. It'll show a link to the item. Great. And then we're gonna go in and create, what, messages. Messages.
These are gonna be what did it say there? That's a UUID. Yeah. UUID. Conversation?
Yes. I'll link those together. Content. Oh, not created at... created at for the timestamp.
Created at. Okay. We're gonna have a role for that. What's role gonna be? That could be a drop-down, so we could set that up as a drop-down if we wanted to.
User, that's gonna be user, or assistant. Too many a's. Assistant. Too many s's.
Alright. And then we have the message content, which I think is gonna be markdown. Content. We could say tool calls, tool results. I don't know if this is actually gonna be it or not. Tool results, ID, conversation.
Then we're just gonna link this together. I'll add a many-to-one relationship here. Conversation. The related collection is gonna be conversations. We're gonna show a link.
And what I'm gonna do, I almost always go into the advanced field relationships, or the advanced mode, when creating fields. I'm gonna add the reverse in here. So I'm gonna add all the messages to this conversation. Great. And okay.
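For clarity, here's the data model we just clicked together, sketched as TypeScript shapes (field names are approximate; I'm trusting the AI's suggestion here):

```ts
// conversations collection
interface Conversation {
  id: string;                 // UUID primary key
  title: string | null;
  user: string;               // m2o -> directus_users
  created_at: string;         // timestamp
  messages?: Message[];       // o2m back-reference added in advanced mode
}

// messages collection
interface Message {
  id: string;                 // UUID primary key
  conversation: string;       // m2o -> conversations
  role: 'user' | 'assistant'; // drop-down
  content: string;            // markdown
  tool_calls?: unknown;
  tool_results?: unknown;
  created_at: string;         // timestamp
}
```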
So now create conversation, load conversation, save a message, use chat persistence. Okay. Yeah. Alright. Let's just roll with it.
This is gonna create a new composable: useChatPersistence. Sometimes it's fun just to shut your brain off and totally just not think about it. That's the fun of this show, honestly. Like, how fast can we turn something around?
So let's go back to our module. We're gonna import this. The actual import is just gonna be from use-chat-persistence. And, no, there is no handleInputChange, right? handleSubmit.
loadConversations. handleSubmit. useChat. streamProtocol. saveMessage.
v-model input... we do have input. We need to import onMounted and watch from Vue. use-chat-persistence. What's that gonna be? .js?
Or is that just, like, a default? Nope. export function useChatPersistence. Why is it not picking that up? Oh, because it's in the same directory.
Duh. Alright. Seven minutes remaining. Let's see what happens now. AI on top of AI, building AI.
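For the record, the composable the AI produced was shaped roughly like this; the names are illustrative, and, as we're about to find out, the real one had a recursion bug in it:

```ts
import { ref } from 'vue';
import { useApi } from '@directus/extensions-sdk';

// a hypothetical reconstruction of the generated use-chat-persistence composable
export function useChatPersistence() {
  const api = useApi();
  const currentConversation = ref<string | null>(null);

  async function createConversation() {
    const { data } = await api.post('/items/conversations', {
      title: 'New conversation',
    });
    currentConversation.value = data.data.id;
  }

  async function saveMessage(role: 'user' | 'assistant', content: string) {
    if (!currentConversation.value) await createConversation();
    await api.post('/items/messages', {
      conversation: currentConversation.value,
      role,
      content,
    });
  }

  return { currentConversation, createConversation, saveMessage };
}
```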
How can you help me? Alright. So we'll just watch the network request here. That's finished. I don't see another network request.
Oh, maybe you gotta refresh first. How can you help me? Oh, boy. I'm not sure if you can see what's going on here, but we are going to brick this instance very quickly if we continue that. So, conversations, right? What do we do here?
We've created some type of infinite loop. Thank you, AI. Love it. We're gonna load conversations on mount. Watch conversations.
Where are we getting the ID? Oh, why are we passing it? Yeah. Why do we want we don't want that. Is that what it is?
The saveMessage... is that where the problem came in? I don't know. It's saving the same ID. Test. Nope. Still there.
Alright. So you messed up, bro. You introduced an infinite loop. Why did you do that? Is it somewhere in our composable? loadMessage?
Handle submit function calls itself recursively. Yeah. Okay. Yeah. Okay.
Okay. Well, what did you change? handleSubmit, await, append... alright. Now, is it possible to nuke all of these? Let's just nuke all these messages.
Right? I'm gonna go straight to the database. Boom. Boom. Boom.
This is fun. I like messing around with this AI stuff. We're just gonna nuke all of these conversations. There's 80,000,000 conversations as well. Maybe now this will actually work.
Oh, some type of error handling there. It just disappears. Hey. Okay. So now we should be getting back to where we need to be.
Messages. There we go. It's not saving the conversations correctly. Conversation. currentConversation.
What is the current conversation? currentConversation, that's a ref. Well, no. There we go. We're creating a conversation.
So currentConversation.value, saveMessage, currentConversation.value. So why isn't it creating a conversation? Let's call createConversation. So we're gonna create a conversation when we load this. And maybe that'll fix the persistence issue.
There's the conversation. Test message. Some type of stream issue going on there. But if we now go to conversations. No.
Gremlins in the AI. This is why you don't have AI do your coding for you. Super helpful, though. Right? No chance I would have gotten here as far as we have in an hour without this tool.
Obviously, lots of bugs and interface issues to squash before then. How many deals do we have outstanding? Let's try a simpler query. Deal stage ID. Are there deals?
I thought there were deals. Maybe it doesn't know enough about the... it just doesn't know enough about it. I know I've introduced some bugs. I kinda wanna end on a high note here, but we're running out of time. Right?
We're gonna call it good. Let's end on a good one. What kind of collections do we have here? Organizations, deals. You could probably feed it, like, the current user as well.
That's gonna be, like, accountability, maybe. I don't know. How many open task activities are there? Structure of the activities collection. Man, I've just ruined this.
How many orgs are there? I can't even get another working example. Sometimes that's how these things go. How many organizations are there? Yeah.
We're now getting, like, crazy mad errors; we broke something. Alright. So there it is. Some type of weird infinite recursion glitch that I introduced by just leveraging AI and tabbing and accepting. So now there's a lot of debugging involved in this, but this is a great example of the power of Directus, the power of AI, and the Vercel AI SDK. Whoever's working on this, if you watch this: amazing work making this sort of stuff really easy and accessible and quick to pick up.
Boom goes the dynamite. That's it for this episode of 100 Apps, 100 Hours. I hope you'll stay tuned for the next episode. Ciao.