Join us for The Changelog, taking you through the month’s Directus updates including product updates, new content and community contribution highlights.
Speaker 0: Hello everyone! I hope you're having a wonderful day. Welcome to September's edition of the changelog. If you're new here, I'm Beth, and I'm gonna be taking you through what we've got in store for you this month. If you are joining us live, do let us know along the way if you've got any questions.
And if you are joining us later, there'll be multiple places that you can ask your questions. If you're on LinkedIn or YouTube, ask there. If you're finding this somewhere else, community.directus.io is the place to go. In the meantime, I'm kicking it off with what we have new with product. Hello, everyone. It's been a little while since we last had our product update here, with us being away from the changelog in August, so let's not waste any time and jump into what's been going on with Directus since we last spoke.
For Directus 11.10, there are some potentially breaking changes. The first is snapshot behavior: we now exclude database-only tables from snapshots, meaning tables not tracked as Directus collections. The second breaking change affects TypeScript extension developers. Services exposed to API extensions are now fully typed instead of any, which means you might see new type errors when building extensions.
Things like the ItemsService constructor now expect strict string types, and methods like readOne and readMany expect specific types for primary keys. WYSIWYG editor improvements: we've also added a code tool to the WYSIWYG text editor, which gives you more flexibility when editing content, plus various accessibility improvements for anchors, iframes, and labels. Flows updates: there's a new error operation in flows, and we've added support for the private key JWT auth method in the OpenID driver.
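To give a rough idea of what that stricter typing looks like in practice, here's a minimal sketch of an endpoint extension using the typed ItemsService. The collection name, fields, and route are made up for illustration, and the exact type signatures may vary slightly between releases:

```typescript
// Sketch of an API endpoint extension using the now-typed services.
// "articles" and its fields are hypothetical; adjust to your own schema.
import { defineEndpoint } from '@directus/extensions-sdk';

export default defineEndpoint((router, context) => {
  const { services, getSchema } = context;
  const { ItemsService } = services;

  router.get('/articles/:id', async (req, res) => {
    const schema = await getSchema();

    // The constructor expects a strict string collection name, and readOne
    // expects a specific primary key type rather than accepting anything.
    const articles = new ItemsService('articles', {
      schema,
      accountability: req.accountability,
    });

    const item = await articles.readOne(req.params.id, {
      fields: ['id', 'title', 'status'],
    });

    res.json(item);
  });
});
```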
Other notable updates: we've upgraded the extensions SDK to the latest versions of Rollup and Vite, though this does raise the minimum Node.js version to 20.19.0, plus the usual batch of bug fixes and optimizations throughout the platform. On to Directus 11.11, which has some potential breaking changes around content versioning. However, since I'm also gonna be talking about Directus 11.12 next, I'll leave content versioning until I get to the most up-to-date version.
But if you are wanting more information about content versioning and how it affects 11.11, check out the breaking change docs on GitHub. Within 11.11, we improved the WYSIWYG editor with proper link styling and fixed some code mode issues. We also upgraded esbuild and Vite, and updated Nodemailer to AWS SES v2. And we added a new field conditions option for clearing hidden fields on save. Short and sweet, moving on to 11.12.
For Directus 11.12, there are some potentially breaking changes related to content versioning. We fixed how the user_created, user_updated, date_created, and date_updated values work in content versioning. These fields now correctly reflect the actual user and timestamp of creation or last update, rather than the user and date of promotion. Also, requesting a non-existent version will now return a forbidden error. Beyond those potentially breaking changes, we've got some pretty significant improvements for content versioning.
We've completely rewritten how content versioning handles relational data and query parameters. This was a complex undertaking, but the result is a much more robust and predictable versioning system. Content versions now support all the same query parameters as regular content: filters, sorting, field selection, and aggregation all work just like you'd expect. We've also got proper relational data merging, so versioned content with relationships now merges correctly, giving you an accurate representation of your content at any point in time.
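For a rough picture of what that enables, here's a hedged sketch using the Directus SDK to read a specific content version with the same kinds of query parameters you'd use for regular items. The URL, collection, version name, and fields are placeholders, and the exact SDK options may differ by version:

```typescript
// Reading a content version with regular query parameters (sketch only).
import { createDirectus, rest, readItem } from '@directus/sdk';

const client = createDirectus('https://example.directus.app').with(rest());

async function readDraftVersion() {
  // Filters, field selection, sorting, and aggregation should behave the same
  // way they do for normal item reads, including relational fields.
  return client.request(
    readItem('articles', 42, {
      version: 'draft-v2',
      fields: ['title', 'status', 'author.name'],
    }),
  );
}
```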
And the new implementation is significantly faster when working with complex relational structures. This is just one step towards improving our content versioning experience. We have lots more planned so stay tuned. Right to left language support improvements. We've also expanded our studio accessibility with some important improvements for localization.
Context menus now properly align and display for RTL languages, and we've got better handling of directional layouts across the Data Studio interface. These improvements make Directus much more accessible for teams working in RTL languages, including Arabic, Hebrew, Farsi, and Urdu, just to name a few. MCP support and technical improvements: on the technical side, we've also got some MCP support rolling out. This is still in beta, so we'll have much more to share on that front soon.
We fixed the OAuth flow to allow 2FA setup for users without passwords; previously, if you did not have a password linked to your account, you couldn't set up 2FA. Now you can. On the bug fixes note, there are also a bunch of smaller improvements and bug fixes. If you'd like to see the full release notes, they're available on GitHub. And that's it for 11.12. As always, let us know if you've got any questions; the best place to do that is community.directus.io.
We also just generally love hearing about how these updates improve your work with Directus. Thank you so much for taking the time, and we'll catch you on the next release. Alright, and next up we have both Bryant and Rijk getting together to talk about the marketplace updates.
Speaker 1: Thanks for the intro, Beth. Rijk and I are here, and we're both super excited to announce the public marketplace. Rijk, what are we talking about when we say public marketplace?
Speaker 2: We're talking about the marketplace on the website this time. So if you recall, the marketplace started life as internal tooling to solve, you know, extension distribution. How do people get their hands on extensions? Both for the ones that we built, but also the ones that other developers like to share with our large user base. So we've had the marketplace in the studio of Directus itself for a little while.
And I'm happy to announce that, starting today, that is no longer in beta. But we're here to announce another great update to the marketplace, which is the public marketplace. The public marketplace is a version of the marketplace, with the same extensions you see in the studio, available on our website, so the rest of the internet can explore it as well.
Bryant, you wanna give a little demo of what that looks like?
Speaker 1: Absolutely. Thank you, sir. Alright. So the public marketplace is all about solving discoverability. Unless you had a Directus instance, you were unable to view the amazing extensions that our community has put together.
That changes today. So there are three components to the marketplace. You can find it at directus.io/extensions, and you can see all the different extensions that are available to install inside your Directus instance. We've got filtering capabilities, so you can sort by or filter by the different types of extensions. Super easy.
We can search for extensions. This is typo-tolerant search as well, so if you're like me, quick with the fingers, it's a great search experience. Some of the new capabilities in the public marketplace are the ability to see trending extensions over the past week or the past thirty days, so you can see which extensions are being installed the most over a certain period of time.
When you click into a given extension, you'll see a full overview of the extension. You can see all the statistics, which versions of Directus it works with, how many total downloads, even report issues for that. And then once you're ready to install that extension, all you have to do is click install extension, enter in your Directus URL, like so, and hit install extension, and that will take you right inside your Directus instance to that extension ready for you to install. Alright. Other components to the marketplace include integrations.
These are basically roll-ups of the different extensions. So if you are looking to see if Directus integrates with HubSpot, for instance, we've got the HubSpot integration page where you can see all the different extensions that interface with HubSpot, as well as general overviews. So we can quickly see our AI extensions or integrations here. And if we're looking at OpenAI, there are a lot of different extensions that use OpenAI, and you can quickly find all of them. Last but not least, we have templates.
So templates are starters for your next Directus project. We have some of our own starters as well as community-contributed starters, like the Adventure Business Toolkit by Mimi Paul. And here you can get started with a fully baked Directus project, ready to customize in no time at all. That's it for the demo of the public marketplace. Rijk, any last words before we kick it back over to Beth?
Speaker 2: Yeah. This is awesome, Bryant. Thank you very much. Another step in sort of our extension story and how to customize Directus. Where'd you say people could find this again?
Speaker 1: It is directus.io/extensions. Awesome.
Speaker 2: Directus.io/extensions. And with that, back to you, Beth.
Speaker 0: Alright. And in very exciting news, this is not the last of Rijk and Bryant that you will see in this show. So for now, I'm gonna send it over to — I say this every single time — my favorite segment that we do: the community showcase.
And for the very first time, we have someone coming back who has previously been on this show. So we have Josh with the note-taking project.
Speaker 3: Today, I would like to show you how to supercharge AI batch processing using Directus MCP with a self-learning note-taking workflow. Batch processing is really effective for situations where you need raw speed, but it kinda breaks down when you need to make creative decisions on individual items and on the batch as a whole. You can't really do that without a human element. AI models, I've found, are great for those sorts of tasks, but they break down when you give them a whole bunch of data. And they also struggle with finding the right information to pull in to make those creative decisions.
So the solution I found is to teach AI to take notes. By implementing a workflow that lets Directus MCP take notes directly on whatever it's working on, we gain the ability to do things like long-running task tracking, because the model can pick up where it left off and start fresh with a new context window. And we also get things like self-improving database access. You take a note on all the different things that you ran into difficulty with, what you did to solve them, and how it worked out. And it speeds up future runs because it can read that note and go, okay.
So don't do this. Do this, and I'll be able to just continue with the task I was working on. And then by letting it take notes, we can consolidate a whole bunch of data that was stored in the database. Say, for example, you wanted to pull out all of the aliases for your articles that are related to rabbits and you also want the ID for each one. Well, you can just dump that in a note and analyze it in future runs.
So in our case, Directus is the knowledge backbone. Every note is just a key value pair. I'll go ahead and show you the notes table here. It's a very simple setup. We have a key, which is just a string, and this is what the AI model uses to kind of categorize what the different notes are about.
And we have the value, and that's just a big markdown field. It's a very simple setup, but it's surprisingly powerful. By using semantic keys and explicit instructions on how to record and review notes, we give the AI models a very quick and effective way to find the context they need.
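To make the shape of that concrete, here's roughly what a single note might look like; the collection and field names are illustrative rather than the exact schema from the demo:

```typescript
// Rough shape of a record in the notes collection described above.
interface AiNote {
  id: string;
  key: string;   // semantic key the model uses to categorize the note,
                 // e.g. "article-concepts-run-3" or "database-access-hints"
  value: string; // a large markdown field holding the note body
}

const exampleNote: AiNote = {
  id: 'generated-by-directus',
  key: 'database-access-hints',
  value: [
    '## Lessons from previous runs',
    '- Article bodies live in a translations relation, not on the article itself.',
    '- Process in batches of 30 and log the last processed ID before stopping.',
  ].join('\n'),
};
```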
So this database has a problem. We have a whole bunch of articles — 124 of them — and the titles don't really make any sense. Oh, and the bodies are in Latin. I can't work with this. So I've created a prompt for Claude to read the article titles and come up with a concept proposal for each one: what could this article actually be about?
So let's look at the system prompt here. You can see that there are two things it's informing the AI model about: that there's an AI notes table, and the rough structure of that table; and that it needs to read and record its database insights, especially how it solved problems it ran into, and update that note on every run. This gives it the self-learning ability we were talking about earlier. It's able to figure out what it ran into last time and fix it on the next run.
Now back to the task. I've told the AI model it's a skilled content analyst and writer. We have a huge collection of articles. All the bodies are full of nonsense, all that. Alright.
So its job is to record the concepts that it comes up with in an article concepts note. If it's not sure what to do with an article based on the title, be creative, find a wacky concept. Then we have some very specific instructions.
We say: always read the template in the 'template article concepts' note to determine how to structure your output. Always create a new article concepts note for each run, and always give it the exact name. Always process 30 articles. Always review the last note you created before starting, to make sure you don't duplicate any work.
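Pulled together, a system prompt along those lines might look roughly like this paraphrased sketch (not the exact wording from the demo):

```typescript
// Paraphrased sketch of the kind of system prompt described above.
const systemPrompt = `
You are a skilled content analyst and writer.

There is a notes collection with a "key" (string) and a "value" (markdown).
Read and record your database insights there, especially how you solved
problems you ran into, and update that note on every run.

Rules:
- Always read the "template article concepts" note to determine how to structure your output.
- Always create a new article concepts note for each run, with the exact name required.
- Always process 30 articles per run.
- Always review the last note you created before starting, so you don't duplicate work.
- If a title doesn't make the topic obvious, be creative and propose a wacky concept.
`;
```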
This template lets us make sure that we get consistent output on every run. Alright. Let's go ahead and run this a few times and see what its output looks like. I turn on auto refresh, and then I'm gonna go into Claude, add a prompt from Directus, and examine articles and propose concepts. If you want to know how to use this feature, go ahead and take a look at the Directus MCP documentation, and then we're just going to send it.
Now, one of the fun things you can do with read and write access to Directus is, if the model makes a mistake, you can ask it to update the prompt to fix its mistake in future runs. Article concepts: it has identified right away that just about every article appears to be lorem ipsum gibberish and occasional test content. Alright. So it came up with a bunch of different concepts for a bunch of different articles.
In a bit, we're gonna use this to actually write all these different article concepts. But for now, I just wanna keep going through the batch process to show you how it's able to pick up where it left off and continue. On this run, if you notice, it actually picked up that it needed to look up the translations and didn't have to figure that out from the schema this time. Alright. It is a new day.
My Claude usage limits have reset, and we are ready to continue with step two, which is generating the articles based on the concepts that we've put together. All of the article concepts have now been saved in these concept notes. They take the article title, try to figure out what on earth the article should be about, and propose a concept. So now we have the next step of the process, which is to generate an actual article, in both English and German, for each of the articles we generated concepts for.
And we're gonna go in batches of 10 and see how well that works. We'll take this prompt, and it should automatically read over all of our article concept notes and start filling in articles for those. Let's go. So we're going to use the 'turn concepts into full-fledged articles' prompt. That's going to read through all of our article concept notes and start filling in those articles, and I'll show you those articles as it writes them.
I have no idea if these articles are gonna make any sense. This is gonna be fun. I'm having fun reading through these articles. I have no idea how helpful any of this is, but it at least sounds very convincing. Now, the key advantage to this approach is that if we were generating concepts and writing articles at the same time, we'd have to use much smaller batches.
But because we split the process into two steps, we've saved a ton of context window, and we're able to work in larger batches. And as a bonus, we can do multiple things with those article summaries, while we're working on generating those articles from those summaries. Now this isn't the most efficient workflow for generating articles. There's way better workflows for that. We're just demonstrating this concept.
But a nice thing about this approach is that you can perform multiple tasks at the same time. For example, while we were working on these articles, we could also be working on something completely different using the notes that we're generating the articles from. I have this prompt here, categorize article concepts, which will allow Claude to suggest an article taxonomy based on the summaries it's already generated. It'll just read all the article concept notes and create a new note suggesting that new taxonomy. I like the way it describes these articles as "a fascinating collection of technical and business-focused article concepts with creative, jargon-filled titles that have been transformed into practical, valuable content ideas."
I'm not quite sure I'd be so positive about it, but it gets the idea across. Alright. Let's take a look at the categories it generated and the opportunities it identified. So here are some content gaps: AI and ethics, sustainable technology operations, human-centered digital transformation, all sorts of other different things. And here are some of the categories it's come up with.
System architecture — it's put a bunch of different articles in that category — e-commerce and digital business, data management and analytics, user experience and interface design, business operations and optimization management, automation and AI systems, project management and collaboration. So it's basically taken all of those articles that we put together and categorized them according to common themes. This could be really useful if you have an existing content collection that you're trying to build a new categorization system for. And what's great is you could just take this and create another prompt to actually apply that taxonomy to the articles, and create categories and all sorts of different things for that in Directus. So by the end of this run, here's what the AI model has built: concept proposals for every article in our database, full body content for many of those articles in English and German, a whole new content categorization system for the articles it wrote, progress logs so that it doesn't lose its place, and database access hints that document how challenges were solved, so that on future runs it can do even better.
Instead of treating AI like a disposable worker, or one whose context we can stuff and stuff and stuff until it gets all confused, we're treating it like a teammate that tracks and follows up on what it's done across multiple days and multiple runs, starting with a clean slate on each task and only pulling in the information that it needs. We turned a human-in-the-loop batch processing task into an AI creative workflow by splitting it into discrete steps and letting it pick up where it left off. We gave the model a way to track progress, we built templates to make the results more consistent and reusable, and we recorded information about the database to help the AI model get smarter over time. I've had a ton of fun nailing down this workflow and testing it on all sorts of tasks.
For my work, we're actually using it to do things similar to this, where we're analyzing a huge amount of content and trying to figure out how to organize and categorize it and do all sorts of things, and it's doing really well. The ability to take notes, record what it's done, and just kind of start over and create its own context has been incredibly helpful with turning a workflow that works really well for one or two articles into something that works well across thousands. So thank you for your time. I really look forward to seeing what everyone does with Directus and Directus MCP going forward, especially as Directus MCP evolves, AI models get more capable, and just the overall core gets stronger. The future is gonna be fun.
Speaker 0: Thank you once again to Josh for sending in that amazing showcase. This actually started as a post on the community forum, so if you have been inspired and you've got any questions for Josh, he is around at community.directus.io. And, speaking of MCP and AI, I gave a bit of a challenge to Bryant and Rijk with an AI update.
I was like, please can we have an AI update for the changelog? And I gave them no more context, and they delivered. So here you have an AI update from Bryant and Rijk.
Speaker 1: Alright. And we're back. So in this segment, Beth has challenged Rijk and me to have a conversation on AI and Directus. Where that conversation is gonna go, let's see. Right?
But Rijk, you know, what's — what is our take on AI at Directus?
Speaker 2: What is our take? That is a very six-hour-long answer for a five-minute segment. So let's start at the beginning. Well, like most everybody else in the industry, we've been using AI a lot internally to try to figure out and get a feel for what works, what doesn't, where the opportunities are, and where the challenges are, to then figure out what we want to do with AI in the context of Directus. Right? So the way we see it right now, there are basically two major touch points for artificial intelligence in the world of Directus. One: what can external tools do with Directus on the user's behalf?
So think about a Claude Desktop or a ChatGPT. And secondly, what can we do using LLM technology within the studio and API to enrich the user experience of Directus itself? So Bryant, you wanna start with the first?
Speaker 1: Yes. Absolutely. So what can external LLMs do with the data inside Directus? Right? We've already previously shipped a local MCP server, and we've gotten a ton of great feedback with that.
Now, what we are currently in development with is an MCP server inside Directus. So you'll be able to connect remotely to your Directus instance and use the tools that are available to add content, update blog posts, and build landing pages. But more than that, you will be able to edit your actual data models: update all of your schema, add new fields, and improve the user experience for your content editors. So that really comes together nicely as a complete picture.
But on the other side of the coin, maybe you wanna talk about the second piece. Right? What we plan to do with AI inside Directus.
Speaker 2: On the internal side, we as a team really strongly believe in shipping features that add actual value. We have seen, you know, some uses before — I won't name names — that are a little gimmicky and not really customer friendly or actually useful day to day. So right now we have some solid plans and some early stuff in flight that add some real value to the Directus Studio: both using generative AI for generating and helping optimize content and other data, but also in a way that helps with some of the more click-ops-heavy configuration pieces of Directus itself. Right?
So think about things like flows or the data model and things like that. On the first part, though — I know we're flipping back and forth a little bit, but they're both tied together — on the MCP front, one thing you mentioned there, Bryant, is that it can do data modeling and it can do, you know, data access. Is that just a free-for-all agent that's gonna go rogue on my database? How do we protect that?
Speaker 1: I'm glad you mentioned that, because that is one of the scariest pieces of AI in my mind right now. There are other MCP servers where you can connect directly to a Postgres database and basically run any raw SQL queries against it. So you and I both probably break out in a sweat when we talk about something like that. The MCP server inside Directus is beautiful because it is scoped by the permissions that we make available through the API. So any user connecting via MCP can only perform the operations that user has access to, which means improved security compared to maybe some other MCP implementations.
And also, we prevent you from taking super destructive actions that could result in a lot of data loss or just general heartache for both content editors and developers.
Speaker 2: Phew. Thank God. Yeah. Because to your point, that does make me break out in a sweat. And the same goes for the work that we're doing with LLMs within the studio at the moment.
Right? I think it's important to mention: in our testing with the MCP server, we find that a lot of the LLMs — as of September 2025, because this stuff changes all the time — are oftentimes a little trial and error. Right?
You see an LLM tries something, it fails, it gets an error, tries again, it fails, it tries again, and it succeeds; sometimes it gets there, and sometimes it doesn't. We've spent quite a lot of time really optimizing the system prompts to get that reliability up and up and up, and it's in a spot where I'm like, this is actually fairly reliable — as in, as reliable as we can get it from an LLM. But by integrating the LLM technology straight into the studio piece of it, we have the opportunity to do an extra layer of validation as well. Which means that we can make sure we know what the LLM is about to do, or what it wants to do, and we can allow the user to validate and verify that before actually running some of these pieces. You know, akin to what you might have seen in a Cursor or Copilot right before it runs terminal commands, things like that. So anyway, we'll leave it at that, because I don't wanna, you know, tease too much before it's ready for prime time.
But we have some very exciting stuff coming. I believe the MCP server will ship sometime this month; we're sort of in the final testing phases. If you're interested, please do check out the PR if this goes out before it's merged. Otherwise, you'll see it on the main branch and in the upcoming release.
For the internal AI piece, like I said, it's very much research and development and early testing. Feel free to ping me anywhere on the internet if you're interested in learning more about that. We're always happy to chat and learn from, you know, what people actually wanna use this for. But expect more on that in the near future. So stay tuned for that.
And with that, back again to Beth.
Speaker 0: Alrighty, thank you once again to Bryant and Rijk, especially since I gave them not much context. I was like, I'm just gonna bring you into a recording studio and we're gonna talk about some things. So that was really helpful, and I hope you're all excited. Again, if you've got any questions, do let us know. We're now gonna head over to Bryant with a section for the community hotline.
Speaker 1: And we're back with another episode of the community hotline. Welcome. Today, we have Wylus, Wylies. Sorry if I get the name wrong. Welcome to the community.
Glad to have you, friend. Alright. So our question today is: is building a front-end CMS within Directus possible? We'll tackle that in a moment. Let's read the rest of this.
Good day, gurus. I'm looking for a powerful tool to support the following: SaaS hosting multiple tenants within one project, creating pages if possible — but how could I render it within Directus? I saw tutorials, but only using Directus as a back end. How can we make this work, potentially?
Alright. So first and foremost, Directus is set up to be a headless CMS and back end, so that paradigm is very important to understand. WordPress has a headless option, but it's really a monolithic kind of solution where you've got your CMS and your front end, and the templating and all of that is mixed together, whereas Directus is headless, and that's the way it's designed. So you can send that data to multiple sources.
Maybe you've got a front end, maybe you've got a mobile application, maybe you've got a kiosk or a display that you need to send data to or fetch data from a back end. Directus is great for that. When it comes to building a front-end CMS within Directus, basically you're talking about the Directus Data Studio. Right? I can not only add my own team members to this, but I could potentially create a multi-tenant CMS out of the Data Studio itself.
So, if you take a look at our CMS starter, this is available through the website. If you go to directus.io, you hit this command line, or this script here; just pop this into your terminal and select the CMS option. Or if you go to our cloud, log in, and spin up a new project inside cloud, just pick the CMS template and you can get access to what I'm gonna show you here today. Right? So this comes out of the box.
This is a CMS. It's exactly what you would expect out of a CMS: we have pages, we have posts, etcetera. Now, if I wanted to make this multi-tenant, how would I go about it? Well, first, I would create a tenant collection.
Maybe that's sites, or properties, or whatever we wanna call it. And then for each collection within this, I add a many-to-one relationship back to that sites collection, back to that tenant collection. And then within our access policies, you've got everything you need to basically scope access down, so you can filter based on the site so that one tenant cannot read the data from another. Now, there's a specific episode of 100 Apps, 100 Hours where we cover this exact thing.
So just look for multi-site; that mission is season three, episode two. You can follow right along with it and see how to set up a multi-tenant CMS using Directus.
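As a rough sketch of how that scoping can be expressed, here's one way to write the tenant filter on a role's read permissions, assuming a many-to-one site field on the collection and a site field on the user record; the field names are illustrative:

```typescript
// Illustrative permission filter for a multi-tenant setup.
// Assumes an M2O "site" field on the collection and on directus_users.
const tenantReadFilter = {
  site: {
    // Dynamic variable resolved per request by Directus,
    // so each tenant only ever sees their own items.
    _eq: '$CURRENT_USER.site',
  },
};
// Apply this as the item permissions filter on read (and write) rules
// for pages, posts, and any other tenant-owned collections.
```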
Now, to your other question: how can I create pages or render within Directus? Directus itself is extremely flexible, and this is where our extensions come into play. A good example of that: if we just go to directus.io and look at our sandbox demo, there's a little extra module in here that is not baked into Directus core. This is just an extension that we built to give you a welcome page that does get rendered inside Directus. Now, this is coming from a separate Directus instance, but again, all of Directus is modular and extensible.
So this is actually a module extension. But if we just go to our docs and look for our guides, then go to the extensions section, you can learn all about extensions. You can customize the way that you interact with data inside forms — those are interfaces. You can control the way that data is displayed in the different layouts throughout the studio.
That is a display. You have layouts themselves, so you can control the listing of these items — think tables or kanban views or maps, whatever. We can add panels to the dashboards. And at kind of the overall level, where you've got total control over what gets rendered on the page, you have a module. So if you want flexibility in how things are rendered inside Directus — maybe you wanna add some extra pages, or you just want a blank canvas where you can totally experiment with whatever you want — custom modules are what you're looking for.
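For a feel of what that looks like in code, here's a minimal sketch of a custom module extension; the id, name, icon, and component are placeholders:

```typescript
// Minimal module extension sketch: registers an extra page in the Data Studio.
import { defineModule } from '@directus/extensions-sdk';
import ModuleHome from './module-home.vue';

export default defineModule({
  id: 'welcome',
  name: 'Welcome',
  icon: 'rocket_launch',
  routes: [
    {
      path: '', // rendered at /admin/welcome
      component: ModuleHome,
    },
  ],
});
```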
So, with that in mind, I hope that is enough to answer your question, Wylus — sorry again if I get the name wrong. That's it for this episode of the community hotline. If you want me to answer one of your questions on the next changelog, or on the next community hotline, make sure you hop into the Directus community at community.directus.io, and I'll see you around.
Speaker 0: We want to take a moment towards the end of the changelog to thank our community contributors who give their time to improve the Directus project. Since the last changelog, there have been 11 contributors. Thank you to Josh for fixing the require selection check for manual trigger flows, Klayvo for adding a message property to the SDK error object, Amos for removing duplicate code in fields read all, Gerard for fixing a bug that was preventing translations from displaying in the calendar layout, Danton for fixing a bug that prevented pop-ups from working in the WYSIWYG interface when opened in a drawer, Hughes for adding a WebSocket authenticate filter hook, Abdullah for adding the code tool to the WYSIWYG text editor and for fixing links in WYSIWYG missing underline and pointer cursor styling, Matt for adding TypeScript support for services within the extension context and for fixing an issue with empty states not being centered in RTL languages, Tim for standardizing batch mode for raw group fields, Gloria for enabling text selection in the studio, and Jens for adding the ability to override the email "from" property. Thank you again to everyone mentioned. You can see their specific pull requests inside the full release notes on GitHub.
Lastly, we also want to take the time to thank our GitHub sponsors of July and August who financially contribute to Directus' development. A huge thank you to Weifan, Jens, Mike, Fergus, Omar, Marcus, Mission Control, Utomic, Steven, James, Nonlinear, Andreas, John, Wayne, Burb, Adam, Jason, Yuya, Vincent, CK, Valentino, and Hadi. The money we are given from our GitHub sponsors goes straight back to community members who build tooling and extensions for the Directus ecosystem. Thank you again for being part of that. Alright.
And that is it for the changelog of September 13. Thank you if you are still with us and you've made it to the end — you're my favorite people. No matter whether you are new to this changelog or you've been returning each time, we hope to see you again. As we've said multiple times within this changelog, if you do have any questions, let us know.
And, yeah, that's it. Have a great rest of your day, your week, and your month, and we hope to see you really soon. Have a good one, everyone. Take care. Bye.