In this recording of our live event on August 15, 2024, Daniel, Jonathan, and Rijk discuss import and export options for flows.
Speaker 0: Welcome to another episode of Request Review. What are we talking about today?
Speaker 1: We're talking about import and export options for flows.
Speaker 0: Oh. Now, if he says it like that, it sounds
Speaker 2: Why don't we just edit it then? You know? It should be a quick fix. Right?
Speaker 0: I think that's a great idea. Well, with that being said, thank you so much for watching. Find this episode on... of course, I'm kidding. These are interesting, as per usual, these episodes; that's why we do them. Some fun edge cases here to consider.
First and foremost, for those, you know, out of the loop: flows are automation workflows. You can set up, you know, triggers. If somebody saves a thing, somebody hits an endpoint, somebody clicks a button in the app, do something. Right? And that something is a set of individual operations, different blocks, so to speak, that you can connect together to make it do things.
Now, if you create multiple similar flows, or you wanna move them to and from different installations, having some sort of way to import them and export them makes sense. Right? Just as a core set of functionality, absolutely. Somebody in the chat says it too: you know, flow and operation import is the number one thing that breaks when we sync our schemas and database across different projects.
So why does it not exist then, or why does it not work properly?
Speaker 2: That's a good question. I think that was before my time. So you're probably better equipped to answer that question.
Speaker 1: There's a variety of things that happen here. Right? So, schema differentials. Right? If the schema is different ahead of the flows being imported, if you're watching and monitoring things that don't exist on the import side, that's problematic. Variables referencing data and information that doesn't exist from one system to another.
For moving flows, I mean, the APIs are there. We do it all the time with our templating stuff, but we do that in a very controlled way where we're importing and handling the relationals. Because, once again, now you've got relational data: operations to the flows themselves and all of the data. What if I've got a marketplace operation that's not installed in the next environment? There's, you know, there's a great many things that have to be validated and checked, especially on import. Export, not so much, but import, import gets
Speaker 0: Absolutely. It has to be. And then another important part of those same edge cases is: what about options that need to be different installation to installation, but, you know, 90% of the rest of it is the same? So for example, a super simple flow that I oftentimes use as an example is, you know, you make a change and you trigger a build on your CD platform, like a Netlify build or something. That production Netlify URL is different project to project because you're building different websites, but the flow configuration, the trigger, you know, the setup for making the request and all that, is the same. Cool.
So looking at this discussion here on GitHub: basic example, motivation. Now, the motivation makes a lot of sense. That's basically, you know, what we're talking about here. I think there's no discussion on the usefulness here. When it comes to actual implementation, there's a question of: do we tie this to part of the schema snapshot and apply logic?
Right? Do we really treat this as part of your project's configuration, and we wanna move that between different instances? Or do we treat this more as a kinda more traditional import/export, like we have for regular collections, where you can just say, well, give me all of the operations, and we'll reimport them back in.
Speaker 2: To me, personally, it sounds more like the traditional way. Like, the correct way would be the traditional way, because flows are not specifically schema. Right? Like, they're entries in a table, and that's not schema; those would be items, and I think we should sync them separately. But yeah.
Speaker 0: And immediately, people start typing in the chat. I love it. So this is a fun discussion that has come up in the past in these request review sessions anyways, but we're yet to reach the silver bullet answer for what is schema versus what is configuration. Right? Flows, to me, is one of those points where I think 80% of it is configuration of your project.
And if you wanna duplicate your project, or if you wanna move that from dev to prod, or you wanna, you know, use it as a template starting point for something else, you can make the argument that it's definitely part of the same thing: one unit of export for the whole configuration of your project. Right? So that's schema and flows. Last time we were talking about it, you know, it was in the context of roles and permissions, which is similar but different; you know, part data, part not. But to Josh's point here in the chat: in the data model, you know, the flows are just data in the database.
But from the perspective of that end user, they're all part of the application that you're, you know, configuring, right, which makes sense. And in that case, you can definitely make the argument, oh, it's just part of, you know, the application model: settings, schema export, and apply. But that still raises the question: is it the blanket everything? How do you subset it? Because for the schema right now, it's everything.
And how do you deal with options that are different? And how do you deal with active versus inactive? Do we deactivate them on import by default, just to be safe? Because you could have a cron thing that could go haywire and talk to, you know, third-party production systems if you just spin it up in a new environment.
Speaker 2: Very valid points. And I do see the argument also from Joshua. It kinda does make sense, right? If you interpret the flows as being part of your infrastructure, then they do kind of feel
Speaker 0: schemey. Scheme.
Speaker 2: I'm gonna use that. Scheme-esque. Yeah. I can see that argument. Okay.
I'm kinda okay. Like, in the beginning I was like, no, no, no, no. It's data, they're just data. Those are rows. But now, thinking about it, yeah.
Speaker 0: It's the rows.
Speaker 2: Damn. Interesting. Yeah. I'm kinda torn. I'm kinda in between.
Like, it does make a hell of a lot of sense, but,
Speaker 0: They've got a lot. It's funny, because we keep finding ourselves coming back to the age-old question of what is and isn't project configuration that should be, sort of, version controlled; that's kinda where that's going. Right? Where you wanna have a file that you version control, and that includes a bunch of stuff, versus it living in the data, so you differentiate between the two. And this is kinda what I meant earlier when I said, oh, if I build a flow, I want the whole flow, but I want the URL that we're talking to to be different.
So should that URL be in the same static file export? Then, by definition, it needs to be in this static file. Right? Because otherwise, you have to manage both. Is that easier to maintain?
Is that worse? Right? But if you're treating it as, I wanna have a template that I use to create new projects, but then manage the rest in the project itself, that would still work. If you're treating it as, the file becomes the source of truth, now you have to make duplicates of the same, you know, config files to make it work for your different projects. Might be a good thing.
Might not be a good thing.
Speaker 2: Did we actually... I think there's another small thing related to this. Not sure if this is, like, a slight tangent that we should go down, but when I played around with flows, I had the problem that I created, like, 15 or 20 different ones, because I was, you know, trying it out, and they got automatically generated. And I had this little gripe with our way of displaying them, because there was no possibility to actually select multiple ones and delete multiple ones. And that kinda sounds to me like, if we touch this, you know, with selecting them, exporting them, maybe, you know, filtering them so you only export a couple of them, it feels like selecting them would also lend itself to being done with this issue. Right? But this is kinda a little different.
Sorry. Just came to mind.
Speaker 0: No. Yeah. You're absolutely right, though. Yeah.
Speaker 2: Okay. So Joshua says, for a user story: we use tons of flows in production and development across two applications, with the following workflow. We run the whole application locally, modify flows, code extensions. Okay. So far so good.
Export the schema flows, collections, fields, etcetera etcetera, using a schema sync extension. Okay. So far so good. Commit the schema to version control and build images. Okay.
In production, import the schema, collections, and flows as defined in the image. Alright. The result is that flows are always consistent between development and production, and the production flows are immutable. Makes sense. But this, I guess, is to the point that I made earlier.
Right? Where, you know, if you want to have, like, development flows and production flows, then you have to duplicate them and maintain both of them.
Speaker 0: Someone in the chat goes a different route as well, saying, you know, what I usually do is try to keep the flows clean: the values I export are the same everywhere. And then for the parts where the data is different, there's a sort of settings collection, I imagine, for those flow operations.
Speaker 2: So you're actually fetching your needed stuff out of your database, and those fields would be different on your dev machine and the production environment. Correct?
Speaker 0: Right. Yeah. It's basically a way to have that sort of 90/10 split I described earlier solved by using a settings collection. Which makes me think: is that something that could be a native thing, where at the top of whatever the file export is for flows, there is a set of variables you can define and reuse within the file? So you have a single-file source of truth, and then the app, once you're importing it, requests: what values do you wanna use for these variables? Good food for thought. Right?
For those new to this session: it's always about divergently thinking through all of the options and all of the edge cases, and then trying to converge back to, okay, what is something we can do today versus the long-term plan? That would be a kinda cool in-between, I think, where, you know, in the static file, we have to make sure it's human readable and easy to manage, of course, but you can just describe at the top of the file: here's a couple of variables that I want to differ installation to installation, and then reuse those in your operation settings. The moment you import those flows from within Directus, we can show a sort of mini form in options that says, okay, what are those variable values?
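A rough sketch of the idea being floated here: a flow export file that declares per-installation variables up top and reuses them in operation options, with the importer prompting for values. None of this is the actual Directus export format; the `variables` field and the `{{$var.*}}` placeholder syntax are assumptions for illustration.

```typescript
// Hypothetical flow export with installation-specific variables at the top.
type FlowExport = {
  name: string;
  trigger: { type: string; options: Record<string, unknown> };
  // Values the importer should ask for, installation to installation.
  variables: Record<string, { description?: string; default?: string }>;
  operations: Array<{ key: string; type: string; options: Record<string, unknown> }>;
};

const exportFile: FlowExport = {
  name: "Trigger Netlify build",
  trigger: { type: "event", options: { scope: "items.update" } },
  variables: {
    NETLIFY_BUILD_HOOK: { description: "Per-project build hook URL" },
  },
  operations: [
    {
      key: "notify_netlify",
      type: "request",
      options: { method: "POST", url: "{{$var.NETLIFY_BUILD_HOOK}}" },
    },
  ],
};

// On import, substitute the values the user typed into the mini form.
function applyVariables(flow: FlowExport, values: Record<string, string>): FlowExport {
  const json = JSON.stringify(flow).replace(
    /\{\{\$var\.([A-Z0-9_]+)\}\}/g,
    (_, name) => values[name] ?? "",
  );
  return JSON.parse(json);
}

const imported = applyVariables(exportFile, {
  NETLIFY_BUILD_HOOK: "https://api.netlify.com/build_hooks/abc123",
});
```

With that shape, the same single file stays the source of truth for every project, and only the variable values differ per installation.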
Okay. Then I think there's one other thing close to this... I mean, I don't know if it's a bug, but it's kind of annoying to deal with: right now, flows and operations are two tables. And it works, right? If you're using the current APIs, you have an export for operations, and you have an export for flows.
Flows are sort of, it's a one-to-many type of thing. Right? You have one or more operations per flow. A very normalized SQL data model for that, which in its, you know, theoretical purity is correct. But it does also definitely mean that sometimes, if you're importing operations where the flow doesn't exist, you get foreign key constraint problems.
Right? Because you're pointing to a flow that doesn't exist. It also means that when you're exporting, you end up with two different exports: one for the flows, one for the operations. So that is a bit of an interesting order-of-insertion thing. It also means that every operation points to the next operation it needs to trigger.
So that also means that operation insertion order matters in that table. And, again, that all makes sense from a SQL database perspective, but it's definitely not, you know, the most user-friendly way to do it. And that is just, you know, the thing you learn over time in terms of the data model. The question actually came up just now, I see it in the chat, from Joshua.
Out of curiosity, you know, why is that a separate table instead of a big JSON field? Concerns about size and speed for script operations? It's partially because JSON fields were just not as well supported, or didn't exist at all. And at that point, it did become a bit of a, you know, performance concern in some databases. Luckily, you know, now in 2024, that is a different picture; even SQLite has support for JSON fields, which is fantastic.
Concerns about speed have been sort of resolved because of that, because now databases can store them, you know, efficiently instead of as a large text blob. Size is still a bit of a concern, and that is a similar thing: before, it was just a text field, and databases were fairly inefficient at storing, you know, unbound blobs of text that happened to be JSON, but that should also now be way more doable across the SQL vendors. So for what it's worth, I actually think that now, especially with SQLite support and Oracle DB also being on the JSON train, this is a new question, right? Should operations just be a nested object on a flow? To which the answer is probably yes, actually, because it's always a tight coupling.
You're not really moving an operation from one flow to another. And storing the actual flow as a nested tree of operations is more efficient, so we don't have to stitch together that tree from the separate rows. That was just a thing we couldn't do yet, you know, when we shipped flows, but nowadays, for sure. Yeah. So that is also an interesting thing, which helps with exporting and importing, and helps with the ease of authoring it as a file. Right?
Because if we're saying that operations are represented as a nested tree, kinda like, for those who are familiar with it, CI/CD pipelines in GitHub or those tools, it's oftentimes just a YAML file that says it runs step, step, step, step. Right? So a flow could be a very similar syntax in that sense, if we store it that way.
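To make the nested-tree idea concrete, here is a sketch of a flow whose operations live as one self-contained tree rather than as rows with "next operation" foreign keys. This is illustrative, not the real Directus data model; the `resolve`/`reject` nesting mirrors the success and failure paths discussed in the episode.

```typescript
// Each operation nests its successors instead of pointing at them by ID.
type Operation = {
  type: string;
  options: Record<string, unknown>;
  resolve?: Operation; // next step on success, nested instead of a foreign key
  reject?: Operation;  // next step on failure
};

type Flow = {
  name: string;
  trigger: string;
  operation: Operation; // the whole tree could live in one JSON column on the flow row
};

const flow: Flow = {
  name: "On article save",
  trigger: "event",
  operation: {
    type: "condition",
    options: { filter: { status: { _eq: "published" } } },
    resolve: { type: "request", options: { method: "POST", url: "https://example.com/hook" } },
    reject: { type: "log", options: { message: "skipped" } },
  },
};

// Export and import become trivial: the flow is already one blob, so there is
// no row stitching and no insertion-order problem.
const exported = JSON.stringify(flow);
const reimported: Flow = JSON.parse(exported);
```

This is essentially the same shape a YAML pipeline file would have, just expressed as a typed object.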
Speaker 2: Sounds reasonable.
Speaker 0: And it's only a tiny breaking change.
Speaker 2: Okay. So we talked a little bit about how it looks, and how it could look. Why would this feature not be as easy as you think?
It's probably our standard go-to question.
Speaker 0: So I think in its purest form, just export everything from one flow and import everything from one flow, either through a duplicate button or a separate endpoint, that is not super tricky. The insertion order when you're doing the import is a bit of technical complexity on the implementer side, but from a user experience perspective, it shouldn't be the end of the world. I think the real questions that we have to prepare ourselves for are around what happens if you have a flow that does something destructive, like a cron.
And the moment you import it, it starts wreaking havoc. Right? Do we import them as disabled by default? Is that what you want? If you're in an environment where your prod is just importing it, and otherwise it's considered immutable, you want them to be active instead of inactive.
Right? So how do we deal with that? And then there's that variableness: you have a flow that's the same, but some of the settings need to be different. Is that something that we have to bake in, or is that something where we say, well, sucks, that just doesn't exist, and you have to duplicate files?
I think it's worth at least thinking through those questions, even if the initial implementation is just going to be, you know, almost exactly what was described here, which is just import flows, export flows. Bang. Right?
Speaker 2: According to Joshua, having a destructive cron is just a skill issue. Fair enough. Problem fixed. One less problem to worry about. Alright.
And, yeah, okay. So the templatability, it really does sound like a mustache template kinda thing. But, like, I mean, we had a similar thing before, right, with the script operation, for example, where it pulls in, like, your environment. So technically, the environment variables could be different, so you can actually have differing logic inside of them.
Speaker 0: Yeah. And we do allow, you know, in every operation, we allow you to use a value out of a previous step. So the suggested settings table is a very good alternative, I think. Honestly, it makes a lot of sense.
But that is something where we have to just say, okay, if that is our recommended approach, then let's document it as such. Because, I mean, of course we haven't done the full surveys and everything else, but I would assume that this is a fairly common use case where, you know, the main user story here is: you have a dev environment, you export it so you have a single source of truth that's version controlled, and then you use that to power your production instance. Right? So you use the UI and everything else to configure it, and then you use the code as your immutable source of truth for prod. That's sort of the first user story.
And I think the second one is really that you have the export as a templated starting point, or templated source of truth, for multiple projects that are all in prod. Right? So either it is a starting point, where if you spin up a new, you know, templated event website or something, and they're all the same, you wanna use the same kickoff point, or you have multiple production instances that use the same config. Right?
I think those are basically the main two use cases for this. Mhmm. And I think in both cases, there's a case to be made for, you know, templatability, or at least the variable settings, which, again, could be a table.
Speaker 2: A table does sound pretty good. Pretty reasonable. And similar to what you said, Rijk, earlier, or Jonathan rather, I think the flows, according to Joshua, you know, if you import something, you could be referencing some tables that no longer exist. So how do we deal with that? Error messages, error pop-ups, aborting, disabling the flow, and just, you know...
Speaker 0: I think there's also a good data modeling question in that, to be honest. I think for operations right now, those settings that hold a field or a collection name are basically just a string. Right? And then, once the operation runs, it uses that string to try to read data from those tables and whatnot. I think the only solution here is that we treat those as a separate data type.
So instead of saying it's a string, we say it has to be a collection reference. And therefore, the moment you do an export or import, Directus as a platform can recognize what those settings are supposed to be and validate whether they are valid paths, yes or no. Right? So I think the longer-term answer to this question would be to treat those as different data types: okay, collection reference, field reference.
So therefore, we can validate it on import or export, or both, and almost do... like, if you've ever seen it in Word or Pages, when you open a document where somebody used a bunch of custom fonts, and it shows, you know, cannot find these fonts, choose what you wanna use instead. That would be kinda the only way I can imagine that working. Yeah? Mhmm.
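A minimal sketch of the "typed collection reference" idea: declare which option keys of each operation type are collection references, then have the importer check them against the target schema and report the broken ones, rather than failing outright. The `collectionRefs` map and the operation type names here are assumptions, not the real Directus operation registry.

```typescript
// Which option keys of each operation type are collection references.
const collectionRefs: Record<string, string[]> = {
  "item-create": ["collection"],
  "item-read": ["collection"],
};

type OperationRow = { type: string; options: Record<string, unknown> };

// Returns the referenced collections missing from the target schema, so the
// UI can show a "choose a replacement" dialog, like the missing-fonts prompt.
function missingReferences(ops: OperationRow[], schema: Set<string>): string[] {
  const missing: string[] = [];
  for (const op of ops) {
    for (const key of collectionRefs[op.type] ?? []) {
      const ref = op.options[key];
      if (typeof ref === "string" && !schema.has(ref)) missing.push(ref);
    }
  }
  return missing;
}

const targetSchema = new Set(["articles", "authors"]);
const importedOps: OperationRow[] = [
  { type: "item-create", options: { collection: "articles" } },
  { type: "item-read", options: { collection: "legacy_posts" } },
];
const problems = missingReferences(importedOps, targetSchema);
```

The same pattern would extend to field references, role IDs, and any other "foreign key hiding in a JSON blob".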
Speaker 2: I mean, yeah, if we treat, you know, the flows as part of your schema, or your infrastructure, your application, and if an imported flow then does not work because of, you know, like, an unreferenced or wrongly referenced thing in an operation, then technically we should reject that whole import, because your entire application becomes inconsistent, or could become inconsistent. So I think, like, if we really consider it part of your application and your whole infrastructure, then we just have to reject everything if something goes wrong.
Speaker 0: Hans makes a good point here in the chat, saying, you know, same with role IDs in the permission settings. Effectively, what we're talking about is foreign key constraints, but in JSON. Right? Where you have a blob of settings that is different and unique to an operation type, because every operation is different, and we don't have individual tables for each of the operation types or something; that'd be insane. But what we're talking about here is effectively: how do we do foreign key constraints within an operation? Right? And exactly: everything comes back to cache invalidation, naming things, and validating unstructured data formats.
That's great. Yeah. But some sort of way, you know, in the operation value, if we can read it statically, that'd be great, where you can look at an operation's settings value and just recognize what are supposed to be foreign keys. It could be some sort of unique value syntax, although that feels a little proprietary. It could be, you know, that the operation defines a sort of JSON-schema type of thing, so that we know what that unstructured data is supposed to be and we can validate it.
But, yeah, at the end of the day, what we're talking about is really foreign key constraints for operation settings and everything else.
Speaker 2: Yeah. Validation. Just because I got curious, getting back to the import order, like, please correct me if I'm wrong. So I think, if I remember correctly, the flow references the first operation, and then each operation references the next operation. So the order of import would have to start with the first operation... no.
Because then you can't reference the second one. Oh, okay. So you have to do it backwards. Yeah.
Speaker 0: Yeah. Yeah. Yeah. Yeah. Exactly.
Speaker 2: Exactly. Exactly.
Speaker 0: I'm still the father
Speaker 2: to change.
Speaker 0: Go on. It's the other way, bro.
Speaker 2: Teacher, that's wrong. No. Right. I can see, you have to step through it once, and then just pop off the stack and go backwards. Okay.
Speaker 0: Which is also why, you know, I mentioned: if we save it as a nested object tree as-is, and just treat it as one big blob, right, instead of a semi-structured thing, that simplifies export and import, like, tremendously, really, because we don't have to do that. We don't have to care about insertion order, because it's all nested on the flow. So, therefore, you don't have those foreign keys pointing back and forth. And a resolve and a reject just becomes a nested object, instead of, what do you call it, a reference to a different thing in the flat list. Right?
So that simplifies it quite a bit, in that sense. It's a bit of a breaking change for tooling that exists against the operations endpoint, but it would make this a lot less annoying. And I think it also solves one other bug. We've had a bug in flows that people have run into where, if you're editing a flow, and I think it's you disconnect an operation, then you try to reconnect it somewhere else
Speaker 1: Right.
Speaker 0: You get that error that is, like, foreign key constraint violated. Right? Because you're, Yep.
Speaker 1: You have to
Speaker 0: disconnect it. You're dealing with foreign keys
Speaker 1: and reconnect it in the order that you want. Yeah. Yeah.
Speaker 0: Exactly. And that is because you're trying to change those foreign keys and how they point to each other in a way that is invalid, database-technically speaking.
Speaker 1: We have to answer that question for clients on
Speaker 0: probably almost... It's theoretically correct. It's also annoying as hell. Yeah. It's one of those
Speaker 2: It's neckbeard-correct. Actually, actually, it is correct.
Speaker 0: It is. Pure. It's database pure. Suck
Speaker 2: it. Yeah. Oh, someone asks: are we talking about creating custom operations from the UI? No. We are actually talking about importing and exporting flows.
So, for example, if you export your current schema and import it into your new environment, should flows be included? How would that look? Etcetera, etcetera.
Speaker 1: So with the template CLI utility that Alex and Bryant and the team put together, we migrate everything, including flows. But we have full control over everything. It's exporting content and schema, and everything else comes back in the appropriate order. So your schema, your, you know, updates and things that need to exist before flows get created, all happen as part of the operation steps through the CLI. So configuration as code, kind of the general thoughts that we're working towards there, will help with this generally. But I was taking notes here.
I saw some of the comments over here. If you're updating an existing flow... well, if you're updating something in production, flow migration is now gonna have to be a maintenance window for anybody doing that. Right? Because if you've got active flows, you've got crons, you've got other things that are going on, and if you suddenly come in and hammer a flow with changes, you're gonna cause issues. You can break things, you can lose data, cause problems.
Transactional processing and other things can be impacted by updating an existing flow. Currently, you know, our practice with the CLI tool is we delete all the operations and recreate them, in order to avoid any kind of, you know, what-changed-or-didn't-change complexity. But those are things that we're gonna have to consider as well when you're migrating a flow: are you updating the existing flow, or are you gonna delete and recreate?
Speaker 0: Yeah. But,
Speaker 1: you know, from an operational standpoint, my alerts go off from a security risk perspective: if I'm using flows for invoice processing, or I'm using flows as part of a payment process, which we have clients doing, and we're suddenly gonna, you know, whack a flow and recreate it, or we're gonna update it in some way, what are the operational impacts and risks of doing that? And they've gone crazy in the chat again.
Speaker 0: To Hamza's point, that's kind of always a problem with any sort of data migration. Right? If you import anything new into a production environment, you gotta make sure that you know what you're importing. That's the name of the game. That is also why, you know, the static file route in between is what we're all talking about here.
So that becomes your source of truth, and that is version controlled, so you can see when it was updated and by whom. You can have some, you know, review processes in place to make sure that it needs to have sign-off, all that kind of stuff. But, yeah, it is technically always just a problem with any sort of data migration. If you have a settings table and import whatever on top of it, then it also breaks. Thinking about the settings table a little bit more: are the settings always tied to a flow, or is it more common to have settings that are global to your whole project and then happen to be reused in the flow?
The reason why I'm asking is: in a flow execution, we have that data object. Right? So that holds all the data that's available to the operations in the flow. One of the fields in there is that dollar-sign trigger. That is just the trigger information of the flow itself.
Like, what caused it to trigger? I figured we could just have an additional flag in there for any sort of custom data that you wanna have globally in that flow. Right? That you can then reuse in those operations with the existing tools. So, therefore, you can make a flow setting, Netlify URL, and that just becomes one of those data things; in your request operation, you can use the curly brackets to reference back to your global flow data. Right?
That could be a sort of alternative to a settings table, but it would be unique to each individual flow. Yeah, exactly what Joshua is saying right now: it's basically flow-specific environment variables. Right?
It would be, you know, environment variables for just that flow. And, yeah, you could theoretically already do that: you could use script operations, or you could use the JSON operation to just return, you know, some static data and then use that elsewhere. That's very true.
But it could be a native thing. Then, when you export and import it, you just have it as a sort of flow-level thing, instead of as a separate operation.
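A small sketch of the flow-level variables idea: alongside the existing `$trigger` key in a flow's data object, expose a `$flow` key holding static, per-flow settings that operations can reference. The `$flow` key name is an assumption; as noted above, today you would emulate this with a script or JSON operation that returns static data.

```typescript
// Hypothetical shape of the data object passed through a flow run.
type FlowRun = {
  $trigger: Record<string, unknown>; // what caused the flow to fire (exists today)
  $flow: Record<string, string>;     // proposed: per-flow variables, editable in the UI
  [step: string]: unknown;           // each operation's output, keyed by name
};

function buildRunData(
  trigger: Record<string, unknown>,
  flowVars: Record<string, string>,
): FlowRun {
  return { $trigger: trigger, $flow: flowVars };
}

const run = buildRunData(
  { event: "items.update", collection: "pages" },
  { NETLIFY_URL: "https://api.netlify.com/build_hooks/xyz" },
);

// A request operation could then reference {{$flow.NETLIFY_URL}} the same way
// it references {{$trigger.collection}} today, and an export would carry the
// variables at the flow level instead of inside a separate operation.
```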
Speaker 2: Oh, I mean, yeah. There are definitely both options. Like, having a global thing is very useful for general stuff, like, even general info between operations: a brand name, a URL that you reuse between different actions. But I can also see, you know, the usefulness of tying something specifically to one flow, just for that one thing. Sounds pretty good.
I guess the actual question is just: okay, how do you manage that in a good way? You know, like, how do we make it pretty? How do we make it good UX? But, yeah, it makes sense from,
Speaker 0: and... One thing I know Brian and Kevin will yell at me about if I don't bring it up: how do you save it encrypted? Like, some of those settings could very well be, you know, an API token or things like that. Right? So how do you make sure that stays secure as well?
Speaker 1: Yep. That's why, I think, traditionally, most people I've seen end up creating a custom table that's an admin-accessible-only table. It's not given any permissions for anybody else. You can then store hashed keys, you can store data and values that are masked, only accessible to the administrators, those kinds of things.
But, you know, I've run into this. I actually had this come up recently: just, like, a default language as a filter right across a flow. I was doing a bunch of flow operations on nested translations, and I wanted to be able to just say, oh, I only want a specific language. And, you know, it's simple enough to do a run-script that just exports the value. So now you've got that variable in memory, right, in the data payloads.
But Right.
Speaker 0: So that's where it gets... oh, here comes another side tangent. That's the fun thing about these episodes, man. I thought this was gonna be a 10-minute topic, and here we are, 45 minutes in, and I just realized something new. When it comes to secret values like that, we have to store them encrypted, not hashed, because we have to be able to decrypt them for use in the actual operation. Right?
So when you make a request and you have to include some sort of token, it needs to be the original token, not the encrypted version. But to the chat's question here, from Joshua again, you've been killing it with these questions, thank you for them: you know, who's allowed to see the config? Right?
Then, well, I'd say for the right UX on this: you can insert them once, and then that's it. Right? We don't show them again. But at the exact same time, you can then use a run-script operation, return them, and there they are. Right?
So it's just a bit of a fake sense of security. They are encrypted in the database to make sure that it stays as secure as we can make it, so it will never be... I mean, that's kinda the crux of it. It will never be exported as-is, but at the same time, you know, does that matter? Even when you're doing that, it's not a closed system.
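A sketch of why these secrets must be encrypted rather than hashed: the flow engine has to recover the original token to put it in the outgoing request, which a one-way hash cannot do. This uses AES-256-GCM via Node's built-in `node:crypto`; how the key itself is managed (an `ENCRYPTION_KEY` config value here) is an assumption and out of scope.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // stand-in for a configured ENCRYPTION_KEY

function encrypt(plaintext: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store iv + auth tag + ciphertext together in one column value.
  return Buffer.concat([iv, cipher.getAuthTag(), data]).toString("base64");
}

function decrypt(stored: string): string {
  const raw = Buffer.from(stored, "base64");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const data = raw.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]).toString("utf8");
}

// At execution time, the engine decrypts and uses the original token; an
// export would carry only the ciphertext, or omit the value entirely.
const stored = encrypt("nf_token_abc123");
const usable = decrypt(stored);
```

Which is exactly the "fake sense of security" trade-off above: the value is protected at rest, but anyone who can run a script operation can still print the decrypted form.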
Speaker 2: Right? Yeah. Yeah. Especially since flows are, at least at least right now, just, an admin exclusive thing. So we kinda you know, like, security schmecurity.
You know? Like, if you're already an admin, you you have different problems. Like, if your attacker is already an admin, your flow keys are the least of your worries, basically.
Speaker 0: Yeah. Yes. No. Yes. No.
Yes. No. Maybe. It's it's tricky in that sense. It's it's weird, though, when you think about it.
And this is the same when you're talking from a platform team perspective. This is the same problem you see with, you know, AWS Secrets Manager or DigitalOcean environment variables or something like that. If you have an encrypted value and you save it, you can then SSH into your container, run printenv, and there they are. Right? They're stored as securely as possible, but then in your running process you can just print the environment and everything is right there. But you gotta figure out what the right move is.
Right? Because can you have read-only access to flows where they're hidden? But if you have update access, then by definition you have a way to expose them. Right? Is there a different security level, where there are the sort of admins that have access to secure stuff and admins that can edit flows, but they're not the same thing?
That's a different question. Yeah. I
Speaker 2: mean, if we go down, you know, the settings-table route, then we already profit from our authentication and authorization system. So, you know, you can already assign oh, exclusively these people are allowed to read them, look at them, and stuff like that. So "You don't need to plan out every case ahead of time." Well, then you're in the wrong stream, my friend. This is what we do.
Speaker 0: And again, this is the divergent part of all this. Right? Luckily Yeah. A lot of these things are not blockers to at least getting to the minimum viable, which is: let us export these damn things.
Speaker 2: But Right.
Speaker 0: These are the things that people will, you know, use it for, and maybe use it wrongly, and then maybe expose themselves to massive headaches and issues down the line. So I do wanna at least be aware of the types of things that we might see in the future, so we can either prevent it in code, make it better down the line, or document it, or just warn people about some of the dangers they might hit. Right? It's funny that you mention it, but at the same time, we've seen people do shit before.
It's like: make a cron job that runs every second and puts something in a table. Table blows up. Whose fault is that? Right? Do we have to do something to prevent that?
Is that user error? Who's responsible? Exactly for example: activity, see revisions. Working on that, by the way.
Unrelated to this call.
Speaker 2: Don't tell. Don't tell. Hooray. Okay.
Speaker 0: I don't wanna get sidetracked too much, but, yeah, we have the retention settings for that coming, shipping fairly soon. Thank god. Yes, Amar. If you want to.
Speaker 2: No. I
Speaker 0: don't wanna get in trouble with HR. I don't think that's a good idea.
Speaker 2: Goosebumps. Oh, no. Okay. So, just keeping the time in mind, we don't have that much time left. So maybe we should start to kinda converge now a little bit, you know, just to get back on track a little bit.
Oh, wow. And then the whole essay drops
Speaker 0: from Yeah. There goes the next 10 minutes.
Speaker 2: Oh, wow. Should we read that aloud? Are there bad words in it? Looks good. Okay.
"In my humble opinion, the whole added complexity of environment migration when using Directus comes from the whole application becoming data itself. Alright. While not using Directus, you can run different versions of the application side by side, and it's easy to switch which version is running, because the structure is defined within the application. With Directus, most of your" My guy, there's not a single dot in that sentence.
Oh my god. That's that's one sentence.
Speaker 0: Don't throw too much shade. And the gist of it is: because all the configuration is in the database instead of in code, you therefore have migration back-and-forth problems. Right? And to Joshua's point, that is definitely a configuration-as-code discussion, which I think we had a couple episodes ago, actually, which is a great segue. directus.io/tv, under Request Review.
Highly recommend it. The 5 second summary of that discussion is really that there's multiple use cases. Right? There's a lot of people that prefer configuring everything from the UX and UI and then making an export. So by definition, it is in the database.
Right? It's a mixed environment where there needs to be a two-way binding into configuration. Again, the choices are different: different use cases, different types of people. I'd rather not end up with a Sanity or Strapi type of environment where you have to pull things locally to code your settings and then redeploy. That is just not quite the vision that we have for, you know, the ease of use and the user experience of this.
Also, I know from some inside chatter that they're trying to get out of that as well. So this is a bit of a two-way binding discussion.
Speaker 2: Did you see? They updated their comment and broke the paragraph up into multiple sentences now.
Speaker 0: Daniel, look at what you've done. I think this is called cyberbullying. We don't condone that here.
Speaker 2: Oh, no. Thank you for the not, what is it? Thank you for your cooperation, Wolfulus. Thanks for being here. Thanks for your message.
We do appreciate it.
Speaker 1: I'm not sure how you say it, but Wolfulus is an awesome contributor. He's been an incredible community member and helps out everywhere across the platform. So
Speaker 0: we are
Speaker 2: It turns out.
Speaker 0: Anyhoo, to your point then, converging it down: okay, what can we do? I think the question that I do have is, do we wanna look into storing this as a nested JSON blob versus a separate table? Right? It would be a bit of a breaking change, but it simplifies both the output files and the way we import them quite a bit.
And it also solves that "cannot resolve foreign key" type of bug at the same time. Right? The alternative is that we don't do that, and we avoid that breaking change in the operations endpoint, with the downside that then, for both export and import, we have to make sure the whole nested tree is included at all times, and during import we do it in the correct insertion order. Right?
Gut feeling wise, I'm kinda leaning towards let's blob it all as a nested blob, because it simplifies things. At the same time, a couple downsides of that: I think we'd lose the ability to choose which fields are returned from operations. I don't know if that matters as much if you're dealing with flows. Theoretically speaking, most databases now should support field selection for JSON, but seeing that we're talking about a theoretically infinitely deep object, that might be tricky.
Right? Or we have to filter it down in in post after we got the data back.
Speaker 2: Quick quick tangent. Can SQLite do that? I'm not sure. I don't think so. Right?
Speaker 0: I think that update came earlier this year: they have some sort of JSON path selection now. Yeah.
Speaker 2: They they've
Speaker 1: got some JSON path.
Speaker 0: Yeah. It's fairly recent. That's also one of the reasons why we couldn't really do this earlier. Right?
Speaker 2: Yeah. Oh, that's nice. Okay. Okay.
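The JSON path selection being discussed can be tried directly against SQLite, assuming a build with the JSON1 functions available (they're compiled in by default as of SQLite 3.38, and in most earlier distributions too). The table shape here is a hypothetical stand-in, not Directus's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical layout: a flow stored as one nested JSON blob per row.
conn.execute("CREATE TABLE flows (id INTEGER PRIMARY KEY, config TEXT)")
conn.execute(
    "INSERT INTO flows (config) VALUES (?)",
    ('{"name": "notify", "options": {"language": "en-US"}}',),
)

# Field selection pushed down to the database via a JSON path expression,
# instead of fetching the whole blob and filtering in application code.
row = conn.execute(
    "SELECT json_extract(config, '$.options.language') FROM flows"
).fetchone()
print(row[0])  # en-US
```

The caveat raised above still applies: with arbitrarily deep trees, the paths themselves have to be known or computed up front, or the filtering moves back into post-processing.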
Speaker 1: Yeah. So SQLite's getting a lot of attention, because you've got companies like Turso and others that are using it for distributed, edge, file-based database and caching kinds of things. So there's a lot of work going on in that space, and SQLite's gotten some pretty good attention.
Speaker 0: But, I
Speaker 1: mean, vendor variation, I think, as we move towards hopefully, as we move towards Duris, you know, whether or not that vendor supports it, then we can decide sourcing wise how we handle that. But for the main
Speaker 2: I'm a big proponent of SQLite. Like, if I do something, I want it to work on SQLite, please. Okay. End of tangent. Sorry, I just I was curious.
Yeah. Okay.
Speaker 0: So I think because, you know, the nested JSON stuff is something we haven't historically done because we couldn't properly support it across database vendors it is something that probably comes with new issues that we don't know about yet. Right? So oh, yeah, somebody actually mentioned it here too.
It was Joshua again. Get a shout out, Josh. "Stick with the current structure, because making breaking changes to the operation config will get to be a huge problem if using nested JSON objects. It's gonna be easier if you have a single row that changes." Fair enough.
"And also, see: support for older databases and vendors that don't have JSON support." Also true.
Speaker 2: Although,
Speaker 0: I I wanna say that the ones that don't have an end of life I guess SQLite, the previous version, not yet, of course. Yeah. So and and, also, obviously, the big breaking change. People have been exporting, importing things through the APIs. The separate tools have been made.
We saw that BCC schema-sync shout out earlier. So we wanna make sure this is not a huge breaking change either, right, where we just completely wreck any of those existing things. So sticking with the current structure probably makes the most sense. And then that raises the question around, you know, data integrity and import order, but that is a technical problem we just have to solve during import. So that does mean it'll probably be a separate path compared to your usual export and import, just because order matters and that integrity matters.
No. That's fine.
Speaker 2: That should be really doable. You know? Walk the tree once and then just pop off from the back.
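The "walk the tree once and pop off from the back" idea can be sketched roughly as below. The operation shape, field names, and IDs are hypothetical stand-ins: the assumption is that each operation's resolve/reject path points at the operation that runs next, so a foreign key on those columns means the referenced row must be inserted before the row that points at it.

```python
# Hypothetical operation tree for a single flow.
flow_root = {
    "id": "op-read",
    "resolve": {
        "id": "op-transform",
        "resolve": {"id": "op-notify", "resolve": None, "reject": None},
        "reject": None,
    },
    "reject": {"id": "op-log", "resolve": None, "reject": None},
}

def insertion_order(root):
    # Walk the tree once (depth-first), then just pop off from the back:
    # reversing the walk guarantees every referenced operation is inserted
    # before any operation whose foreign key points at it.
    stack, walk = [root], []
    while stack:
        node = stack.pop()
        if node is None:
            continue
        walk.append(node["id"])
        stack.append(node["resolve"])
        stack.append(node["reject"])
    return list(reversed(walk))

print(insertion_order(flow_root))
# -> ['op-notify', 'op-transform', 'op-log', 'op-read']
```

In that order, every row's resolve/reject targets already exist by the time the row is inserted, which is exactly the integrity guarantee the import side needs.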
Speaker 0: Yeah. And and the age old question, is this part of the schema export? Yes or no?
Speaker 2: Oh, right. That's gonna
Speaker 0: be a discussion for a different day, which is a discussion we've had before around roles. And the yeah. But Joshua is basically concluding the same: we just need some schema export flags, like, what do we include? What do we not include?
This is exactly the same discussion we had around roles. Do you wanna include roles and permissions? Probably. Do you wanna include users? Probably not.
Do you wanna include permissions? Probably. Do you wanna include all of the permissions or just the ones that are about your production databases? Probably just the prod ones. How do you filter that down?
This is the exact same stuff. Right? Is it all the flows? Is it some of the flows? How do you choose which of the flows are included and which ones aren't?
Do we just export everything and then assume that somebody goes into the file and deletes things manually? Lots more fun questions to be had here. And for that, I wanna say: like and subscribe.
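A minimal sketch of what such schema export flags might look like in practice. The snapshot shape, section names, and filter are entirely hypothetical, purely to illustrate "include roles, skip users, keep only production flows":

```python
# Hypothetical schema snapshot with several exportable sections.
snapshot = {
    "collections": [{"collection": "articles"}],
    "roles": [{"name": "Editor"}],
    "users": [{"email": "admin@example.com"}],
    "flows": [
        {"name": "publish-webhook", "status": "active"},
        {"name": "local-debug", "status": "inactive"},
    ],
}

def export_snapshot(snapshot, include=("collections",), flow_filter=None):
    # Flags choose which sections land in the export file at all.
    out = {key: snapshot[key] for key in include if key in snapshot}
    # An optional per-flow predicate answers "which flows make the cut?"
    if "flows" in out and flow_filter is not None:
        out["flows"] = [flow for flow in out["flows"] if flow_filter(flow)]
    return out

export = export_snapshot(
    snapshot,
    include=("collections", "roles", "flows"),  # users deliberately left out
    flow_filter=lambda flow: flow["status"] == "active",
)
print(sorted(export))                         # ['collections', 'flows', 'roles']
print([flow["name"] for flow in export["flows"]])  # ['publish-webhook']
```

The open question from the discussion remains which predicate to offer: all flows, active flows only, or an explicit per-flow selection.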
Speaker 1: Nope. I think
Speaker 2: Hit the bell. Hit the bell.
Speaker 1: Always been.
Speaker 0: Hit the bell and make sure you don't miss that episode when we go deep on that.
Speaker 1: Now, selective import and export is gonna be needed for most of these things, I think. Ideally, export could export everything, but selective import on the import side the ability to choose what I'm actually merging into the next iteration is key.
Speaker 2: Right.
Speaker 0: Oh, that being said, I've not been muting. I'm muting. I'm muting. I'm muting. And I'm muting.
The doorbell is ringing, so I just wanted to make sure it didn't get too annoying. But that being said, we're at the top of the hour here. There's one more question that just came in. So, another quick thought: what about pulling changes instead of pushing them? Like a sort of federation type of thing, where you link data from multiple Directus instances and pull changes from another instance.
That sounds like a discussion for another day.
Speaker 1: I think we've had that as part of the dual syncing, as part of the configuration-as-code discussion anyway. It is something we're thinking about on that side of the house, and this is part of that as well. Right? So the configuration as code affects this having utilities that make this easy now, versus, you know
Speaker 0: Yeah. I think that two-way bind, to me, always has the file in between. So you push it from dev to file, and you pull it into prod from file. That is kind of the one-two punch. Anyways, with that being said, thank you all for tuning in.
Thank you all for the great questions and ideas in the chat. This episode will be available on Directus TV in the very near future. Shout out to Nat. Dan, did you check whether there are any zingers in here? Do we have the thumbnail?
Speaker 2: Oh, I'm not quite sure. Maybe the "technically correct" one, maybe. I'm not sure.
Speaker 0: There you go. That's the perfect one. That being said, check it out: directus.io/tv. We'll be doing this again in the near future.