FileMaker SaveAsXML and FMUpgradeTool: Building Automated Deployments

This presentation shows how to use FileMaker’s SaveAsXML and FMUpgradeTool features to create automated deployment workflows.

Mislav Kos walks through:

  • Understanding XML file structure and catalogs
  • Creating patch files for surgical code deployments
  • Using the Patch Lab tool for building and testing patches
  • Setting up automated pipelines with Git integration
  • Moving from manual deployments to DevOps practices

You’ll see live demos of adding fields through patch files and learn about the challenges of working with large XML files. The session covers both current capabilities and future vision for continuous integration in FileMaker development.

Key tools discussed include the developer command line tool, XML exploder for managing large files, and mod log generation for tracking changes between environments.

This presentation is ideal for FileMaker developers who want to move beyond the Data Migration Tool and implement more controlled, automated deployment processes.


Transcript

Presented by Mislav Kos, Director of Engineering, Soliant Consulting Claris Practice
August 5, 2025

Introduction and Housekeeping Notes

All right guys. My name is Mislav Kos. I’m the Director of Engineering at Soliant Consulting. I’m going to be talking to you today about the SaveAsXML and FMUpgradeTool features and capabilities that are part of the Claris FileMaker platform.

Just a real quick housekeeping note: best to keep your questions until the end. You’re welcome to put them in the chat, but I’m not going to be monitoring that as I’m talking, so I won’t look at it until the very end.

Okay, so the reason for doing this is that I wanted to lay out a vision for what we could have in FileMaker, and to some extent already have, in terms of getting our deployments to be highly automated. A lot of the pieces are in place already, but I do want to acknowledge that not all of them are.

Purpose: Discussion and Collaboration on FileMaker DevOps Practices

(00:48):

And so, some of this is aspirational, and my purpose here is to share my ideas for how it could work, and in doing so, spark discussion in the community, so that all of us together, collectively through that discussion, can motivate Claris to continue investing in these features and capabilities.

And also through that conversation we can potentially influence the way in which they implement these things. So bottom line is I want to move FileMaker closer to DevOps practices.

So I’m going to do part one of a demo real quick here, just to give you some visuals to anchor to before we move on with the rest of it. So let me switch over to here. This is Patch Lab. It’s a tool that is up on GitHub and was published today. And I’m going to take us through a quick high-level overview of it.

How to Create Patch Files and Apply Them

(01:43):

The purpose of this is to make it easier to work out how to create these patch files and how to apply them. It’s a collection of unit tests, but you can also create composites here that patch multiple different things. What we’re seeing on the screen now is just the unit tests, with each of the different possible catalog objects and targets that you can operate on.

And then also the adds, replaces, and deletes. You can see which ones I have worked out; those are in green. The ones in red aren’t quite working yet, whether because I didn’t do it right or because there’s a bug in the tool.

So let’s just dive into one of these. I’ll just pick one here and take a look at the detail of it. So the idea is that when you want to create a unit test for one of these, you want to kind of work out how to build a patch file for one of these objects.

Workflow Overview and Demo Preview

(02:30):

You would select which targets you want to operate on and what actions you want to do, give the test a name, and then set things up with a developer file and a production file. You make your changes in the developer file, you build a patch file, and then you apply the patch.

Let me see if I can blow this up a little bit. I am seeing some of the comments saying that it’s too small, so hopefully this will be a little bit better for people and I’ll try to speak up too.

Alright, so that’s as far as I’m going to go. Now a little bit later I’ll build an actual patch and apply it so we can see what that looks like too. But for now, I just want you to have a visual of what we’re building up to.

Foundation and Structure of XML Files

(03:29):

And I will move back to the presentation here and start working my way through some of the foundational pieces before we go back to the demo. By the end of the presentation, I would like you to feel like you have a pretty strong familiarity with the structure of the XML files, and feel like you know how to go about building and applying a patch file. You’ll also have access to this Patch Lab tool.

And also at a detailed level, we’ll look at this vision of how we get to a place where we have highly automated deployments.

The Goal: Build Excitement and Participation in Continuous Integration

(04:02):

And again, my primary objective here is just to build excitement and participation within the community so that we can influence Claris to move the platform to a place where we have continuous integration. So I’m going to be talking about two key pieces.

SaveAsXML

(04:18):

One is SaveAsXML, and this was first released back in 2019, so it’s been a while now. It is available from the Tools menu, but importantly, it’s also available as a script step and as a command line program, the developer tool. My sense is that a lot of developers aren’t aware this tool exists, that it’s kind of flown in under the radar. So that tool is available out there, and it can be scripted to generate XML files as well as perform a number of other maintenance actions.

And importantly, when we generate this SaveAsXML, what we’re doing is we’re expressing FileMaker code as text. And that’s really important because it unlocks a lot of capabilities that are available to platforms where the code is written out in text. So that’s SaveAsXML.

FMUpgradeTool

The other piece of it is the upgrade tool, and that was first released in 2020, but it’s been stuck in developer preview for a long time. The most recent release of FileMaker, though, has made substantial upgrades to it.

Upgrade Tool Benefits: Surgical Deployment vs. All-or-Nothing Migration

(05:20):

And so that is very exciting to see. This tool is used to apply an XML patch file to a FileMaker database so that we can deploy code into that database. So we’re talking about deployment here, and you may be asking yourself: can’t we already deploy using the Data Migration Tool (DMT)? And we can. But that mechanism of deployment is all or nothing; the entire database file is deployed. With the upgrade tool, you can control which pieces get deployed, so it’s a much more surgical operation.

And also, the DMT can be slow when you’re moving large sets of data. With the upgrade tool, what you’re moving is code, not data, so it’s going to be a faster operation. Putting these together: SaveAsXML takes us from FileMaker code to text, and the upgrade tool goes in the other direction.

End State Vision: Isolated Development and Strategic Deployment

(06:14):

We now have this roundtrip capability, and again, it’s a big deal that we are moving close to this. So let’s visualize what our end state would look like. Let’s say we have a development environment with a development database, and then a production database. We have some record data in each file, we have some code in each file, and I’m adding a new feature to the dev file.

Meanwhile, my teammates are making their own development changes, and someone has had to make some hot fixes in production. What I’d like to be able to do is isolate my changes, build them into a patch file, and then push that patch out into a UAT environment. And if that looks okay, then deploy it to production. So visually, that’s what we’re trying to get at here.

Automating Repetitive Tasks

(07:03):

And this whole process, I’d like it to be seamless, no friction; I want it to be automated. It should be fast from start to end, and it should have minimal disruption to end users. At Soliant, we pay a lot of attention to how we manage our software projects, and there’s a consulting company called Construx that we’ve paid attention to over the years. It was founded by a guy named Steve McConnell. A couple of years ago, he wrote a book called More Effective Agile, and throughout the book he peppered in what he called key principles.

One of them is: automate repetitive tasks. So if we look at the general flow of work in software development, it starts with requirements, moves on to design, then development, and then moves through testing, version control, and eventually deployment. Now, the tasks on the left side of this flow are the ones that are necessarily manual.

Automation Strategy: From Human Tasks to Scalable Processes

(08:00):

They involve humans. This is where decisions need to be made and creativity comes in, things of that sort. The tasks on the right are potentially automatable, repetitive tasks. They can feel frustrating and boring to the humans doing them, and they’re well suited to being automated so that they can be performed by computers. They’re also error-prone if left to humans. So the closer work gets to the right side of the flow, the better suited it is for automation.

When we’re talking automation, we’re talking about making it repeatable, testable, and scalable. The desired end state that we saw a moment ago visually, I’m going to rewrite here as bullet points. We’re going to make our development changes. We’re going to express that code as text; that’s the SaveAsXML file. We’re going to store that in version control. Then we need to be able to compare commits, the different versions that we have stored, so that we can detect changes between different commits.

(09:03):

Then we need to be able to select individual changes to include in our patch file. Then we build the patch file, test it out, and apply it. So those are the steps, and if we look at each one, keep in mind who does the work. Making development changes: that’s going to be people, although there is increasingly a role for AI to act as a coding assistant; still, humans are going to be in the driver’s seat there. Selecting the changes to include in the patch: that’s a decision a human should make, but it should be easy for them to do, so there should be some automation that helps them make those selections. And similarly for applying the patch: I’m going to be involved with that, but the extent I want to be involved is just clicking a button.

Human vs. Automated Tasks: Defining Roles in the Patch Workflow

(09:49):

That’s it. The rest of it should be automated. It should be easy for me to do. The rest of these tasks should all be automated; I don’t even want to have to think about these things.

Now, I’m a consultant, and consultants talk about scope and managing scope. That first item, make development changes, and having AI help with that: let’s take that out of the scope of this conversation, because we’re talking about deployment here. It is something I think Claris needs to move towards, but for the purposes of this conversation, we’re not going to talk about it.

Instead, we’re going to look at the rest of these deployment tasks and how they can be automated. This bullet point list can also be represented as a little diagram like this, where we’re generating the XML files, storing them in Git, and then comparing them.

Challenge 1: Automated XML Generation Solutions

(10:37):

That’s the diffing part. We build a patch, test the patch, apply the patch. So what stands between where we are now and this desired end state? I’ve listed a number of challenges here. Don’t start reading through them; we’ll go through them one by one, starting with the first.

How do we generate these XML files? As I mentioned earlier, we can generate them from the Tools menu, but importantly, we can also script them using the script step or that developer tool command line. At Soliant, the way we’ve solved this is we create an API endpoint for each database that we want to be able to get an XML from. We install it in the database using an add-on so that it’s easy to do. And once it’s there, it can be called by an external service. That makes it a very turnkey thing, with minimal manual effort required.
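As a rough illustration of what that endpoint call could look like from the outside, here is a minimal Python sketch using the FileMaker Data API to run an in-database script that performs the save-as-XML. The host, credentials, layout name, and script name are hypothetical placeholders; the actual implementation Soliant uses is in the fm-saxml-delivery projects linked at the end of this page.

    import requests

    HOST = "https://fms.example.com"   # hypothetical server
    DB = "MyApp"                       # hypothetical database

    # 1. Authenticate: the Data API issues a session token.
    resp = requests.post(
        f"{HOST}/fmi/data/v1/databases/{DB}/sessions",
        auth=("apiUser", "apiPassword"),  # account with Data API access
        json={},
    )
    token = resp.json()["response"]["token"]

    # 2. Run a script (hypothetical name) that executes the
    #    Save a Copy as XML step server-side.
    requests.get(
        f"{HOST}/fmi/data/v1/databases/{DB}/layouts/Api/script/GenerateSaveAsXML",
        headers={"Authorization": f"Bearer {token}"},
    )

    # 3. The pipeline would then retrieve the generated XML file
    #    and commit it into Git.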

Challenge 2: XML Storage and Version Management with CI Pipelines

(11:32):

It reduces the work to a small subset of what would otherwise be required. Then the next step: once you have the XMLs, what do you do with them? Where do you put them? Sometimes they can be quite large. Where do you store them? How do you keep different versions around?

And for this we use a CI pipeline, and in case you’re not familiar with those, it’s essentially an automated script that runs either when you tell it to, on a schedule, or when certain conditions are met. You can think of it as a FileMaker Server script schedule, but instead of doing operations on record data, it’s doing operations on code. Our pipeline calls our endpoint in each of the target database files to generate the XML and retrieve it, and then it commits it into Git. This project here, available on GitHub, is the one that makes the call to the endpoint to tell it to generate the XML and then retrieve it.

And this here is what the pipeline looks like. It makes it so you don’t have to think about it, right? This thing just runs in the background; in our case, we have it run nightly, so whenever we need it, we have a fresh XML available to look at. A run of the pipeline looks like this: these are simply the steps in that script that get executed.

Large File Challenges: Git and Tool Limitations

(12:53):

And so for us, it’s a nightly schedule: every weeknight this thing runs and creates a new commit if there were changes made in the database; if there are no changes, it doesn’t create a commit. This has worked pretty well for us, but we have run into some challenges, and one of them is that these XML files can get quite big.

Here is one of the solutions that I do some work with. One of the files in there, the XML file, is close to a quarter of a gigabyte. And why is that a problem? Well, most tools aren’t built to handle large files. Git doesn’t like large files; it has a limit to how much you can store in a repository, and the diff tool that Git uses doesn’t handle large files well. If you want to apply XML stylesheets (XSLT) to a file, it requires a fair bit of memory to process large files.

Real-World Impact: Tool Performance Issues with Large XML Files

(13:45):

And if you want to open that XML in VS Code, it’s going to start to complain about the size of the file. Here’s a screenshot of one of our project repositories, and we can see we’re creeping up on that size limit. When I look at this repository in Sourcetree, which is a graphical UI for looking at Git projects, and I compare two commits, I get a spinner that blocks me for a while, and then sometimes this fun thing happens: I run out of application memory. And if I open one of the XML files in VS Code, it complains that the file is too big and that it won’t be able to do all the things it typically can do with an XML file. So how do we work around these large file sizes?

Solution: XML File Splitting with Human-Readable Script Rendering

(14:30):

Well, the solution we’re using is an open source project developed by Malte, a developer out in Germany, and it splits these large files into small XML files. Soliant recently contributed a lossless mode to this project, so that when the large XML is split into smaller ones, no information is lost. The output of running this program is a separate XML file for every layout, every script, every table, and so on. That makes it much more manageable.

In addition to splitting out, or exploding, these XML files, Malte also built in a feature that renders scripts in a human-readable way, and diff tools love that. It becomes very useful to be able to diff a human-readable version of a script like this, where it’s intuitively understandable what the change was.

Challenge 3: Understanding the Undocumented XML Format Structure

(15:27):

In this case, it was a renaming of a script. So we’ve added a call to that exploder program in our pipeline, and what we store in our Git commits is not the large XML files but the small XML pieces. On to the next challenge, and that is that the XML format is not really well documented. So let’s take some time here to look at what the structure of that XML looks like.

At the very top level, we have just one tag. It’s called FMSaveAsXML, and it has some attributes that tell us the version of the XML format, the version of the FileMaker product that generated the XML (that’s the source), and then the name of the file the XML came from. The XML format version in this case is 2.2.3.0.

Here’s a mapping of all the different versions that have existed over time as well as how they map into the FileMaker version that generates them.

XML Structure Deep Dive: Three Core Elements and Their Purposes

 (16:27):

And if you ever want to refer to this on your own, there’s a link in the upper right where you can see this table. If we go back to the XML and drill deeper into it, inside of the FMSaveAsXML tag there are going to be three elements, and only ever three elements: Structure, Metadata, and DDRInfo.

That last one, let’s start with it. It is intended to be used by analysis tools, so unless you’re developing an analysis tool, my advice is to just ignore it for now.

For building patch files, you’re not going to need any of the information that’s in there. The information that you’re going to be interested in is going to be located elsewhere.

So the next one is Metadata, and that tells us the file-level info; think File Options. Here’s the File Options dialog, and we can see that the file is set up to automatically log in with an admin account, and we can see what that looks like in the XML file.

Structure Element: Actions, Catalogs, and the Add-Only Nature of Save as XML

(17:29):

So it just mirrors that functionality. And then everything else for us is going to be inside of the Structure element. The bulk of the file, all of the code that exists there, is described inside of the Structure element, and specifically inside of a child element called an action. For patch files, there are going to be different actions, like adding something, replacing something, or deleting something.

But for the SaveAsXML files, it’s only ever going to be an Add action. And the reason is that these SaveAsXML files are created from the perspective of: what would we need to do to rebuild this file from scratch? And if that’s what you’re doing, then all you’re doing is adding; you’re not modifying or deleting anything, so everything is going to be an Add action.

Inside of the Add action, things are organized by what are called catalogs, and these are just the different things that go into a FileMaker file: tables, layouts, scripts, custom functions, value lists, things like that.

Catalog Organization: Why Order Matters for Circular References

(18:32):

Here’s the list of all the catalogs that exist. It’s not a short list, but it is a finite list; it’s about 20 objects. So it’s not too bad in terms of wrapping your head around it, just by spending a little time looking at it, and I think it’ll be understandable to FileMaker developers. The order of the catalogs does matter.

To illustrate why, I’m going to give an example. Suppose we’re adding a layout that has a button, and then we create a script, and we make the button on the layout call the script. Then, inside of the script, we have a reference to the layout. So what we just created is a circular reference. If we were to deploy these changes manually into another file, which would we create first, the layout or the script? Whichever one is created first would have a broken reference to the other.

Solving Circular Dependencies: Script Stubs and Catalog Structure

(19:22):

So the way that SaveAsXML sidesteps this issue is that it first creates a script stub that has no script steps in it. Then it creates the layout with the button; the button can reference the script stub, so it’s not going to have a broken reference. And then it adds the steps to the script stub, and those steps can reference the layout. In this way, we avoid any issues that have to do with these circular dependencies.
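Here’s a simplified, illustrative sketch of that ordering. The tag names and object names are paraphrased for illustration, not copied from a generated file; the point is the sequence that sidesteps the circular reference.

    <Structure>
      <AddAction>
        <!-- 1. Script catalog: the script is created first as an empty stub -->
        <ScriptCatalog>
          <Script id="12" name="Go To Dashboard"/>   <!-- no steps yet -->
        </ScriptCatalog>

        <!-- 2. Layout catalog: the button can now reference the stub safely -->
        <LayoutCatalog>
          <Layout id="7" name="Dashboard">
            <Button>
              <ScriptReference id="12" name="Go To Dashboard"/>
            </Button>
          </Layout>
        </LayoutCatalog>

        <!-- 3. Steps for scripts: the steps are added to the stub last, so
             they can safely reference the layout that now exists -->
        <StepsForScripts>
          <Script>
            <ScriptReference id="12" name="Go To Dashboard"/>  <!-- the "foreign key" -->
            <Step index="1" name="Go to Layout"/>              <!-- references layout id 7 -->
          </Script>
        </StepsForScripts>
      </AddAction>
    </Structure>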

If we keep diving further in, let’s take a look at one of the catalogs and see what it looks like. Let’s look at the script catalog. We can see that it has a UUID tag, and that UUID, in addition to containing the ID, also contains last-modified information: how many times the catalog has been modified, when the last modification was, and by whom. In addition to that, it lists out all the different scripts that are part of that catalog, and we can refer to these as catalog objects.

Catalog Objects Deep Dive: Script Stubs vs. Script Steps

(20:21):

So the child elements of a catalog are catalog objects, and interestingly, we can see that script folders are treated simply as scripts inside the XML.

Okay, so this is the script catalog. Let’s dive into one of these scripts and see what that looks like. You’ll see that it also has its own UUID element, and then it has script options listed, which tell us: is this script available in the Scripts menu, can it be run with full access, things like that.

But notice there are no script steps here yet, and that’s because we’re just looking at a script stub. If we want to see the script steps, we have to move from the script catalog into the steps-for-scripts catalog, and that’s what we’re seeing here. You’ll see that inside of that script tag...

...we now have a script reference, because we essentially have to tell the tool: hey, these steps belong to that script stub we mentioned earlier.

Script Step Anatomy: Attributes, Hashes, and Content Structure

(21:23):

It’s kind of like a foreign key; it points you to that stub that exists elsewhere in the file. And then underneath it, we have the individual script steps called out in the XML. If we look at one of those script steps in more detail, it has some attributes.

One of them is a hash, and the hash is intended to be useful for comparing different versions of the XML over time, so that you can quickly tell whether something in that element changed. If anything is different inside of that script step, it’s going to have a different hash, and we’ll know that something changed because the hash is different.

We also see the index, which is the script step’s line number, and then the internal ID and the name. These attributes are going to be the same for all script steps. But because there are many different kinds of script steps, what differs is what’s inside the child elements of each script step.

So let’s see what that looks like for the Show Custom Dialog script step. Here we see that we’re specifying a message, and we can see what that looks like in the XML, as well as the calculation expression that specifies what the message is. And we can see that we have one button. So we can see how the XML expresses the code we see in the FileMaker interface, expresses it as text.

Practical Approach: Black Box Thinking for XML Understanding

(22:51):

Now, there are lots of different script steps, and if you’re thinking, oh man, I have to understand the XML for all of them, my advice is to just think of it as a black box. For most use cases, you’re not going to need to know or understand the details at that level. And when you do need to understand it, you can just look at the particular script step you’re interested in, and I think it’ll make sense to you from context.

But if we just think of it as a black box, then we can see the XML at a high level as simply this: there’s the Structure tag, which has an Add tag, which has a list of catalogs, and each catalog has its catalog objects listed in it. This is a simpler model for internalizing what the structure of these XMLs looks like.
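Sketched out, with tag and attribute names paraphrased (check a generated file for the exact spellings) and every catalog object treated as opaque, that model looks roughly like this:

    <FMSaveAsXML FormatVersion="..." Source="..." File="...">  <!-- attribute names approximate -->
      <Metadata> <!-- file-level info, e.g. File Options --> </Metadata>
      <Structure>
        <AddAction>
          <BaseTableCatalog> ...table objects...  </BaseTableCatalog>
          <FieldCatalog>     ...field objects...  </FieldCatalog>
          <ScriptCatalog>    ...script stubs...   </ScriptCatalog>
          <LayoutCatalog>    ...layout objects... </LayoutCatalog>
          <StepsForScripts>  ...script steps...   </StepsForScripts>
          <!-- ...roughly 20 catalogs in all, in dependency order -->
        </AddAction>
      </Structure>
      <DDRInfo> <!-- for analysis tools; ignore for patching --> </DDRInfo>
    </FMSaveAsXML>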

(23:39):

If you want to dive deeper into this, you can of course generate the XML of a database you’re familiar with, but you can also take a look at the sample file I have available here on GitHub. The idea is that this file has one of everything: one of every script step, one of every layout object, one of every kind of permutation. And it already has the XML created for it, so it’s a ready-to-go reference: we can see what any particular thing we’re interested in looks like in the XML, without having to recreate and regenerate it.

This also gives you all the different versions of the XMLs, so we can compare them and see how they changed over time, how they evolved as new features arrived, things like that.

Comparing Your Commits and Understanding How They Change

(24:29):

All right, on to the next challenge. Once we have our commits with the XMLs, we then need to compare them and understand how things are changing in our solution. What I’m describing there is diffing two different versions of a solution.

And what we want to do is go from here, where we have a development file where we’ve made some changes and a production file, and we want to understand what those changes were, to something like this, which simply lists out those changes and groups them by catalog.

So we can see here that we’ve added some fields, changed some other fields, made some script changes, made some layout changes. This is a much more understandable format for a human consumer. So what we need is a diff tool. But the problem is that standard diff tools don’t work well with XML, because of the hierarchical way it’s organized.

Custom Diff Tool Strategy: Listing and Comparing Catalog Objects

(25:29):

And they also don’t work well with large files. So what we need is a custom diff tool. The way we’re going to get there is to start by listing all items, all catalog objects, that exist in a file. Again, catalog objects are the fields, the scripts, the layouts, the custom functions. We’re going to list all the ones that exist in our development copy and make a similar list for our production copy.

Then we’re going to identify changed items by comparing those two lists. Now, this list of items we can get from the XML, because we can look at those UUID tags that have the internal ID, name, and last-modified information. Proof+Geist has a web app out there; maybe you’ve come across it. It’s a great tool: you can drag the XML onto it, and it lists out all the different items along with the last-modified information.

FM Monologue: A New Tool for Automated Mod Log Generation

(26:27):

So we need something like this, but we need it to be scriptable so that we can automate workflows around it. We need it to work on multiple files at once, and we also need it to compute hashes for each catalog object.

I’ll get back to the hashes a little later, but essentially we need to generate these mod logs (I like that name, so I’m going to use it). We need to be able to generate these mod logs for our solution even if it consists of multiple files, and we need to generate one main mod log that describes all of the items in that solution.

So here’s a new project that I’m working on called FM ModLog. It’s a command line tool, written in Rust, which is a lower-level programming language that makes it possible to build with performance in mind. It’s close to initial release, and it generates these mod logs for us.

Tool Output Analysis: Catalog Object Details and Metadata

(27:21):

Here’s a sample of what comes out of that tool. Each row is one catalog object, and in the columns we can see, for each object, what catalog it comes from, what database file it comes from, what version of the XML format was used, and the internal ID and name of that catalog object. For some kinds of objects, we can also see some contextual information; in this case, for accounts, we can see whether they’re internal or external. And we can see the last-modified information: how many times it’s been modified, who the last person to modify it was, and when that happened. Again, that information comes from those UUID tags. And the last column here is the hash column; I’ll come back to that in just a second.
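To give a feel for it, here are a few hypothetical rows. The column names and values are my shorthand for what was described, not the tool’s exact output:

    catalog  file   xml_ver  id   name             context   mods  modified_by  modified_at          hash
    Field    MyApp  2.2.3.0  133  CustomerName               4     mkos         2025-07-30 21:14:05  a91f3c...
    Script   MyApp  2.2.3.0  27   Go To Dashboard            12    jdoe         2025-07-29 16:02:41  07be52...
    Account  MyApp  2.2.3.0  3    api_user         internal  2     mkos         2025-06-11 09:55:10  5d10aa...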

FM Delta: Comparing Mod Logs to Identify Changes

(28:10):

So this generates our mod log output, and the next step is to use the mod logs from our development copy and our production copy to identify changes. To do that, we need to line up the two mod logs and compare them. There’s another project I’m working on, FM Delta. It’s also a command line tool, also Rust-based, and also close to initial release. It will compare those two mod logs and identify which items changed.

Here’s what the output is going to look like. It’s a CSV format, and each row is a change that happened between the two versions. In this case, we’re seeing a field that exists in our development database but not in production, and therefore this will need to be an Add action in our production file if we want to bring the production file up to equivalence with the development file.

Similarly for deletes: if something exists in production but not in development, then it’s a delete. And changes are things that exist in both places; if we can see that the last-modified timestamp is different, then maybe something changed in there, so that should be flagged as a change. If the timestamp is the same, we don’t need to include it in our change detection, because nothing in there has changed.

Matching Algorithm: DMT-Compatible Item Comparison Strategy

(29:39):

But when we line up two mod logs like this, we need to match up the items in each file. How are we going to do that? What if the ID and the name of an item don’t both match? For instance, in this case we have a different ID even though the name is the same. So what we need is a matching algorithm, and the rules we’re going to use are the same ones the Data Migration Tool uses. We’re going to first match by ID and name, then by name only, and then by ID only. And if we can’t find a match with any of those three strategies, then there is no match. So that’s how we’re going to match up the lines; a quick sketch of those rules follows. Then let’s come back to the hash column and talk a little bit about that.
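Here is a small Python sketch of those matching rules. This is my reconstruction of the behavior as described, not FM Delta’s actual code:

    def match_items(dev_items, prod_items):
        """Pair up catalog objects from a dev and a prod mod log.

        Each item is a dict with at least 'id' and 'name' keys.
        Returns (pairs, dev_only, prod_only).
        """
        pairs = []
        unmatched = list(prod_items)

        def take(predicate):
            # Remove and return the first unmatched prod item satisfying predicate.
            for i, candidate in enumerate(unmatched):
                if predicate(candidate):
                    return unmatched.pop(i)
            return None

        dev_only = []
        for dev in dev_items:
            prod = (take(lambda p: p["id"] == dev["id"] and p["name"] == dev["name"])
                    or take(lambda p: p["name"] == dev["name"])
                    or take(lambda p: p["id"] == dev["id"]))
            if prod is None:
                dev_only.append(dev)       # exists only in dev -> Add action
            else:
                pairs.append((dev, prod))  # candidate for Replace detection
        return pairs, dev_only, unmatched  # leftover prod items -> Delete actions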

Hash Validation: Filtering Out Meaningless Timestamp Changes

(30:23):

So I had said that if we see the last-modified timestamp is different between the two mod logs, then we can suspect that something has changed. And if we have that mechanism for detecting a change, what do we need the hash for?

So let’s look here. In this case, the last-modified timestamps are different, so we know it’s potentially a Replace action. But what if the hashes were the same? They’re not in this contrived example, but what if they were? If the hashes are the same, that means this is what we could call a meaningless diff: the last-modified timestamp got updated, but not because anything substantial changed in that catalog object. We wouldn’t want to include that in our diff results; it would just be noise in the overall picture. So really, it’s the hash that we need to identify Replace actions.
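Continuing the sketch from above, the classification step could use the timestamps and hashes like this (again a reconstruction of the described logic, not the actual FM Delta code):

    def classify(pairs, dev_only, prod_only):
        changes = [("add", d["catalog"], d["name"]) for d in dev_only]
        changes += [("delete", p["catalog"], p["name"]) for p in prod_only]
        for dev, prod in pairs:
            if dev["last_modified"] == prod["last_modified"]:
                continue  # nothing was touched; skip entirely
            if dev["hash"] == prod["hash"]:
                continue  # timestamp moved, but content is identical: a meaningless diff
            changes.append(("replace", dev["catalog"], dev["name"]))
        return changes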

Understanding Change Analysis: From What Changed to How It Changed

(31:16):

So we’ve been talking about comparing mod logs, and a diff tool shows us what changed (field A changed, for example), but also how it changed (for example, it has a new auto-enter calculation). So far we’ve only been talking about what changed, but to understand which changes we want to include in our patch file, we’ll also need to understand how those changes happened.

Once we’ve identified what changed, we can then use standard diff tools to see how it changed. The reason this works is that the XMLs being diffed are now much smaller, and standard diff tools can handle that. And so can we, as human consumers; it’s not as visually overwhelming as it was with one large XML file. There’s a tool called Difftastic that works pretty well for this kind of diffing.

Practical Example: Analyzing Field Changes with Diff Tools

(32:08):

It’ll understand when things are broken up onto separate lines, and it’ll understand when a difference is just whitespace and match things up accordingly. So the tool works pretty well.

Let’s look at an example. We’ve identified that the rural language field has changed, and now we want to understand how it changed. We can use Difftastic to see that in one of the versions the auto-enter is set to constant data, and in the other the auto-enter is set to a calculation. So we know that what changed is the auto-enter setting.
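If you have Difftastic installed, its command-line binary is difft, and the invocation is just the two files to compare. The exploded per-object file paths here are illustrative:

    difft dev/fields/rural_language.xml prod/fields/rural_language.xml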

Upgrade Tool Progress: Testing Results and Community Opportunities

(32:41):

So that was a bit about SaveAsXML. Let’s move our attention to the upgrade tool next.

As I mentioned, this tool has been largely non-functional for a long time, but with the recent release, a lot of new catalog objects are now supported, and I think that’s cause for celebration and excitement.

Here’s a screenshot of the unit tests that I’ve run on all the different catalog objects, and we can see that most of them are green, meaning they’re passing. Some are red, either because I haven’t been able to work them out (but they actually do work) or because there are legitimate bugs in the tool; it’s hard to tell which is the case. But maybe as a community we can give this some attention and see if we can get any of these to work.

And even though there are some that I still haven’t gotten to work, consider that a month ago just about all of this would’ve been red.

Documentation Challenges: The Need for Better Guidance

(33:37):

So there’s been a lot of progress, and it’s very encouraging to see. Now, the ones that are red, like I said, maybe they would work fine if I only knew how to build the patch, which brings us to our next challenge: it’s an under-documented tool.

David Vickram has this quote: the future is here, but we don’t have the documentation. So the future is here, we have the patch tool, but we haven’t been given documentation on how to use it. Rather, we have some documentation; there is a guide out there, but I think it needs more detail added to it.

By the way, David is going to be doing some presentations on this topic in September and at EngageU in November, so check those out. So we don’t have, in my opinion, a full set of documentation on this.

Building Patch Files: Live Demo Walkthrough

(34:30):

So how do we build these patch files? I did write a blog post on this, which you can check out if you’d like. And let’s also do part two of our demo to see how we go about building one.

All right, I’m going to switch back over to that demo file, and we’re going to create a new record and add a field to a file. So our target is going to be a field, and we’re going to do an Add action. We’ll give our test a name, and let’s also configure it with some auto-enter options.

Now, this side of the patch test is all empty. I’m going to click the Initialize Patch Test button so that it gives me some initial starting points: a development copy of a file, which is mostly blank at this point, and a production copy of the same file.

Demo Setup: Creating and Modifying Files for Patch Generation

(35:26):

Those two files are exactly identical right now, other than their names. And over here I have a starting point for a template file, and I can see what that starting point looks like here.

Now I’m going to click Download All Files, which puts a copy of these files locally on my file system and opens up a Finder window so I can see them. I’m going to open the dev file, open Manage Database, and add a Demo field. I’ll give it an auto-enter, close the file, and then add it back up here so that we have an updated file.

I’ll click the button to generate the XML file. Now, this you can do by running the command at the command line, and this is the command, but one of the goals of this tool is to simplify the process and remove some of the friction that’s there if you’re trying to orchestrate these things manually.

XML Comparison: Examining the Generated Files Side by Side

(36:27):

So now we have this XML file, and here it is. We’re also going to open up the patch file, and we’re going to compare the two side by side.

I’m just going to move that over here, and I’m going to try to blow this up a little so folks can see it better. I’ll collapse these so that we have a scannable XML file.

So this is our entire XML file. And over on the right we have our patch file and we want to find the field that we added.

Building the Patch: Copying Elements and References

(36:56):

Here it is: Demo. I’m just going to take the entire element, and notice that I don’t need to know what’s inside of it. I could look at it, but I could also just keep it collapsed, think of it as a black box, and carry it over into my patch file.

Then I’m also going to need to tell the tool which table this field belongs to, and I’ll do that by copying this table reference over. We’ll also need a UUID for the catalog, the field catalog. The UUIDs have that last-modified information; I’m pretty sure we don’t need it here, because it’s just going to get overwritten when the tool updates the target database and gives it its own last-modified.
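For reference, the patch we just assembled has roughly this shape. The tag names below are paraphrased for illustration and the root element is omitted; in practice you copy the real elements out of the generated SaveAsXML rather than hand-typing them, and the Claris patch file guide linked at the end has the authoritative format.

    <Structure>
      <AddAction>
        <FieldCatalog>
          <UUID>...</UUID>                        <!-- catalog UUID copied from the dev XML -->
          <TableReference id="1" name="Patch"/>   <!-- tells the tool which table gets the field -->
          <Field id="9" name="Demo">              <!-- entire element carried over as a black box -->
            ...field type, auto-enter options, etc...
          </Field>
        </FieldCatalog>
      </AddAction>
    </Structure>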

So I’ve saved this file. We now need to put it back into our Patch Lab tool, in the test.

Applying the Patch: Results and Key Takeaways

(37:51):

And now we’re going to click a button to apply this patch to the production file, and that’s going to create a new file here for us. It is possible to apply a patch in place on a production file; we’re not doing that here, just so we can keep the files distinct and compare them if we want to.

I’m going to open up the patched file, and we can see that we have our field here. So we just added a field to a file. And if we look at our production file, it did not have that field.

So hopefully this demonstrates how seamlessly we can build these patch files. Really, the hard part now is just figuring out what the format is, and that’s sort of how it should be; that’s the part that’s new to us. All these other things are just mechanics that we shouldn’t need to be bothered with.

Tool Benefits and Multi-Item Patch Consideration

(38:42):

So this tool will hopefully make that easier. And again, I do have templates worked out for most of these actions. There are some reds that aren’t quite working yet, but most of these exist, so it gives us a starting point, and hopefully as a community we can add to this and build out this suite of unit tests.

So, back to the presentation. Oh, that’s right: the other thing I want to show in Patch Lab is what happens once you work out one of these patches. Let’s say you’re trying to figure out how to patch a couple of different items in the same patch file, and maybe you get stuck.

Sharing Patch Bundles: Export and Import for Community Collaboration

(39:30):

One second, let me find the record I just created, which is right here. So this one worked for us; we’re going to mark it as verified. But let’s say it isn’t working for us, it’s failing, and we’re stuck on it, not quite sure what to do about it, and we want to ask the community: hey, I’m stuck here, what am I doing wrong?

There’s a button here that I can click that will export the entire patch bundle as a zip file. Here it is. I’m going to get rid of this folder; the zip file is basically what that folder was. So if I uncompress the zip file and go inside, this is the folder we were seeing before.

And in there is a manifest file, which tells us what files exist in the bundle, and that’s what makes it possible to import it into our database.

So let’s pretend we don’t already have this record in our database; I’m going to delete it. Somebody sends us one of these bundles that they want some help with, and we want to take a look at it. We click Import Patch, and it asks us where the manifest is located; we point it to this manifest.

Live Demo Challenges and the Need for Automation

(40:47):

Something is not working there. Let me try that again. Okay, this is what happens in live demos; I’ve done this a dozen times, and of course... All right, let’s try it one more time before we move on. All right, manifest file... not having luck. Okay, let’s try it this way.

(41:28):

And now it’s importing... and now it’s imported, and it created a new record. If the record had existed in my patch library, it would have updated all the fields. So now we can recreate that patch someone else was working with.

The idea is that we can share the patches we build via the Claris Community forum, so that we can collectively build up knowledge of how to work with these files.

All right, let’s return to the presentation. The next challenge: great, we’re seeing how this all fits together, but to build these patch files, we have to do it manually, and really we need an automated way to do this. So I’m looking to Claris to build this, and I hope they do. I do have some ideas for how it could be done, and maybe it’s something I tackle as a next project, but really I’d like Claris to build it for us.

Patch Validation and Deployment Challenges

(42:26):

So once the patches are built, we then need to validate them, to confirm they’re working as we think they should. And I think we can do this using some of the pieces I already described, in particular the mod log and the delta.

The steps would be: generate a mod log for the patched file and compare it to our dev mod log, filtering the comparison to show only the changes we selected to include in the patch file. If we don’t detect any differences, then I think it’s safe to say the patch worked, because it applied the things in the patch correctly to the target file.
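A sketch of that validation check, assuming mod log rows with catalog, id, and hash columns (hypothetical field names) and a set of selected (catalog, id) keys:

    def patch_applied_cleanly(dev_mod_log, patched_mod_log, selected):
        dev = {(r["catalog"], r["id"]): r["hash"] for r in dev_mod_log}
        patched = {(r["catalog"], r["id"]): r["hash"] for r in patched_mod_log}
        # For every selected item, the patched file must now carry the dev version.
        return all(patched.get(key) == dev.get(key) for key in selected)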

All right, and then the last set of challenges: when we deploy these patch files, there’s a bit of manual work for us to do, and it’s also disruptive to end users.

Streamlining Deployment: From Manual Process to API-Driven Automation

(43:22):

Currently, to apply a patch, we have to remote-connect to a server, upload our patch file to the server, close the database file, run the command to apply the patch, and then open the file.

Now imagine if we had an Admin API endpoint where we could pass a link to the patch file, and FileMaker Server does all the rest. If users are in the system, their sessions are paused so that the patch can be applied. And note, I’m saying paused, not disconnected; that way the user disruption is minimal.

We’d likely need a new kind of pause state that would ensure no scripts are in progress. And if there are some long-running scripts in progress, the patch would simply time out, and it would be the developer’s responsibility to ensure that the database is in a patchable state.
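To make the wish concrete, an invocation of such an endpoint might look something like this. To be clear, this endpoint does not exist in the FileMaker Admin API today; the path and parameters are entirely imagined.

    import requests

    admin_token = "..."  # Admin API bearer token

    # Purely hypothetical endpoint: pass a patch URL, let the server pause
    # sessions, apply the patch, and resume.
    requests.post(
        "https://fms.example.com/fmi/admin/api/v2/databases/MyApp/patch",  # imagined
        headers={"Authorization": f"Bearer {admin_token}"},
        json={
            "patchUrl": "https://ci.example.com/artifacts/patch-1234.xml",
            "timeoutSeconds": 60,  # give up if long-running scripts won't pause
        },
    )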

Vision for the Future: Integrated SaXML Delivery and Patch Patrol System

(44:12):

So, all of these different pieces that I described: here’s a visual that puts them all together. I know there’s a lot there, but dare to dream with me that one day we can have something like this.

Let’s break this down. There are two chunks to it. The one on the left is what I’m calling SaXML delivery: the automated mechanism for making these XML files available to us in a very friction-free, automated way. On the right is what I’m calling Patch Patrol: a graphical interface where we can build these patch files and deploy them.

So let’s look at it step by step. The delivery side starts with this Git project that contains the pipeline code I was describing; that’s what this vertical arrow is. This is the pipeline that runs.

Pipeline Process: From Repository Clone to XML Generation and Enhancement

(45:03):

We clone it, thereby creating our own repository for our specific application and for a particular environment: a development environment, a production environment, whatever you need.

Once that’s there, we run the pipeline, which calls this project, available on GitHub, and this is what makes the call to that endpoint requesting an XML to be created. So that generates the XML.

And then we can imagine layering other capabilities into this flow. This is just at the idea stage right now, but we can imagine some library that removes secrets that are hard-coded into the code right from that XML.

Extended Pipeline Capabilities: XML Processing and Analytics Enhancement

(45:47):

The pipeline then calls that exploder library that Malte built, which splits the XMLs into small XML files. This is currently a work in progress.

Here’s another library that, again, doesn’t have anything to do with deployments, but we can see that once we have a structure like this in place, we can add capabilities to it.

For instance, if you’ve ever done an analysis of the top call stats, you’ll know that they refer to catalog objects by IDs, not names. IDs aren’t very meaningful to humans, so really what’s needed is to enrich the top call stats with the names. If we can map the IDs to names, and we can get that information from the XML, we can have an automated process that enriches those top call stats.

So we can kind of imagine how this could work for us.
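A sketch of that enrichment idea, with invented column names for illustration: build an id-to-name map from the mod log rows and substitute names into the stats rows.

    def enrich_top_call_stats(mod_log_rows, stats_rows):
        names = {(r["catalog"], r["id"]): r["name"] for r in mod_log_rows}
        for row in stats_rows:
            key = (row["catalog"], row["object_id"])
            row["object_name"] = names.get(key, "<unknown>")  # humans read names, not IDs
        return stats_rows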

Complete Workflow: Automated Analysis and Human-Guided Patch Management

(46:34):

And then we would run the mod log tool to create a modification log of all the databases that had XMLs generated for them. And here’s another idea: we could have an AI take a look at that XML and create a change summary for us, or even do a code review. So this is a process that would be entirely automated.

And then on the other side, there would be an interface where we can view the available commits from multiple environments, select two commits, and see what the differences are between them.

And among those differences, we select the ones we want to include in our patch. Then we need the tool to build the patch; we have the tool to apply the patch; and then we need to validate the patch. So this process on the right would be a mix of automation and human in the loop.

Community Engagement: Gauging Interest and Offering Support

(47:25):

Now, while some of the pieces are not there yet, a lot of them are. And if you’re interested in building out this capability, or at least starting to, for your own solutions, let us know if you’d like our help with doing that, in particular if you’re interested in a paid workshop to get it set up.

And I think there’s a way to do a poll here to see if there’s any interest in this. I’m going to just drop that in. Give me a quick second. Okay, let’s see if that works.

So I’m curious: is there interest in things like this out in the community? If you wouldn’t mind, give that a look and indicate your level of interest. Regardless of what form this takes, yes, I said paid workshop; we need to earn our revenue, of course, but we are also committed to moving this forward within the community.

Strong Community Interest: Validation for Moving Forward

(48:21):

So one way or another, we need to make this happen. Back to this: we have a lot of these capabilities, but we want more, right? We want to fill out that whole picture.

Now, when I had the idea to do this presentation, I did wonder how many people would show up. Who’s interested in this stuff? I find it an interesting topic, but what am I going to get, like 10 people to sign up? 20? At last count, there were 267 of you registered for this.

And I know that I am an amazing speaker, I get that, but I’m guessing you’re here because of the topic and not because of me. To me, this is pretty cool: there’s clearly very strong interest in the community on this topic, and that may not have been obvious to people.

Call to Action: Key Features Needed from Claris

(49:18):

It may not have been obvious to Claris prior to this. So this is an opportunity for all of us to speak with one voice and tell Claris: hey, this matters to us.

In particular, the things we’d like Claris to focus on are to make the upgrade tool fully functional and to document it. Right now, when you run the tool and there are errors, it doesn’t report on them; we want meaningful error reporting.

We would like to have that Admin API endpoint so that we can apply patches to hosted files in a way that doesn’t disconnect user sessions.

And when we’re deciding what to include in our patch file, we need a way to identify which changes go together. So we need a method to tag changes, so that it’s easy to say these things belong together, and then we need to be able to build these patch files.

Community Action: Supporting the Initiative and Resources

(50:07):

Building them manually is a non-starter, so we need a tool that builds them for us.

Earlier today, I created a post in the Claris Community where I listed these requests. If you agree that these are useful, helpful requests, I’d like to ask you to go there, upvote the post, and add your comments. If we can generate a lot of conversation, a lot of buzz around it, it’ll get Claris’s attention.

All right, so here’s the QR code. I’m going to put this QR code back up; actually, it’s on the next slide. There it is. So if you’re with me that this is something we need, something important, then please take a moment to do that.

And then this slide here gives links to all the other things that I mentioned along the way.

Resources Shared in the Presentation

Continue the conversation (please upvote and add your comments)
https://community.claris.com/en/s/question/0D5Vy00001GlYIZKA3/saxml-fmupgradetool-thank-you-and-a-request-for-continued-investment

PatchLab
https://github.com/soliantconsulting/patchlab
https://www.youtube.com/watch?v=CFsjWBKFd58

Blog posts
https://www.soliantconsulting.com/tag/saxml

SaXML Delivery
https://github.com/soliantconsulting/fm-saxml-delivery
https://github.com/soliantconsulting/fm-saxml-delivery-addon

SaXML Exploder – split XMLs
https://github.com/bc-m/fm-xml-export-exploder

One-of-Everything (OOE)
https://github.com/mislavkos/ooe-fm

Difftastic – diff XMLs
https://difftastic.wilfred.me.uk

Claris guides and release notes
https://help.claris.com/en/app-upgrade-tool-guide/content/patch-file.html
https://help.claris.com/en/developer-tool-guide/content/index.html
https://help.claris.com/en/server-release-notes/content/index.html (see “FileMaker tools”)
