Bill Heizer and Adam Russ of Soliant Consulting sit down with Matt Navarre (Claris.training) and Matt Petrowsky (ISO Productions, ISO FileMaker Magazine) to discuss the ins and outs of FileMaker hosting with Soliant.cloud, the premier MSP for FileMaker customers across the globe. We cover a range of topics that explain why we’ve invested heavily in our platform as a service, and how it benefits our customers as the perfect entry point to the cloud, giving them capabilities with FileMaker unavailable elsewhere.
Welcome to a multi-podcast crossover. So we’ve got FileMaker Talk, and we’ve got two folks from Soliant here. Adam, why don’t you do your intro? Let’s hear it.
Adam Russ (00:00:11):
Sure thing. My name’s Adam Russ. I’m with Soliant. We do the Cloud Quests podcast. We’re happy to be here with the two Matts, and we’re looking forward to the conversation.
Matt Navarre (00:00:19):
I think it’s so fun when we get to do a cross-release podcast like this. And then we have Bill Heizer as well. Yes, Bill. I think so many of us have known each other for, I don’t know, 20-plus years. But we’re going to talk about hosting, really the hosting world, and there’s a ton more around that. I don’t think that even defines it, does it, Bill? Give me a better definition of what our scope is today.
Bill Heizer (00:00:45):
Yeah. So, you know, when you talk about hosting, this particular clientele that we’re probably targeting is interested in the FileMaker hosting component, right? Soliant has a product offering called Soliant.cloud; that’s just the name. It is our managed services provider, our managed services offering. Inside of that, we do many, many different things. Everything from, you know, things with Salesforce, things with pure cloud-native applications, lift-and-shift, app modernization, IoT, AI, ML, storage, VDI, a number of different things.
But today, you know, I think we’ll primarily focus on the FileMaker hosting, what that means and what’s encompassed in that, and then how it tangentially relates to some of those things. More specifically, around leveraging the cloud to, I don’t know, the right word is probably, you know, shore up some shortcomings of the FileMaker platform, address some of the issues that we, and I know specifically Matt and yourself and, you know, others, have dealt with in that world over the years supporting FileMaker, and some of the things that we’ve architected into the cloud infrastructure to help shore that up and do a little bit better job there. So, I’m excited to talk about that and what we do specifically.
Matt Navarre (00:02:02):
I’ll throw kind of one thing out at the beginning that I don’t know, may or may not be controversial. I think FileMaker, the platform, has one kind of Achilles heel, and that’s FileMaker Server. It’s the one thing that’s kind of a single point of failure, and you are doing a ton of work to basically reduce that risk as much as you can. So I want to talk about some of the technologies you’re doing there, Matt, what do you think about all that?
Matt Petrowsky (00:02:25):
As long as you’ve got the systems in place that keep you up, then you’re good to go. So, I mean, if you’re a small guy and you’re running bare metal and you’re just on, you know, a shared host or a dedicated machine, or if you’re running with a service that’s running cloud containers, or if you’re running multiples with Kubernetes, then it’s all just a matter of: is what you need to access up, and are you monitoring it?
Matt Navarre (00:02:53):
Yeah. So from what I understand, Bill…
Matt Petrowsky (00:02:54):
I mean, I imagine they’ve got, I mean, they’ve got Wim (Decorte) over there, so I mean, they got the God guru of all, well, at least the public guy, he puts the stuff out there, which is good. He doesn’t keep it just for himself. <Laugh>.
Matt Navarre (00:03:06):
Yeah. And Bill, it’s a huge room full of Mac Minis, right?
Bill Heizer (00:03:06):
Yeah, yeah, exactly. Yeah. Mac Minis, and under the cabinet a couple…
Matt Navarre (00:03:16):
…of Mac Pros that are sort of sagging in there. Yeah.
Bill Heizer (00:03:18):
<Laugh>, we really go all in on our hardware. <Laugh>.
Matt Petrowsky (00:03:26):
I would assume you guys are running containers.
Bill Heizer (00:03:30):
No, actually, we’re not. I’d be more than happy to talk about that. Oh, wow. Yes. We can start off with that. So, you know, at the end of the day, it comes down to a couple of factors that I think really matter. And in our lab area, before we vet things out into production, we go through a whole series of things.
Everything from the way the machines themselves, the EC2 instances, are configured. We’ve looked at virtualization of that stuff; you’ve sort of got an abstraction layer that sits there. We’ve looked at the different platforms, obviously Windows. We’ve looked at Unix, we’ve looked at obviously the later stuff in Ubuntu and those sorts of things. And at the end of the day, there’s a much bigger picture that comes into play here. You know, containerization has its pros and its cons: price, performance, scalability, operational management.
Bill Heizer (00:04:21):
There are a number of factors that come into play there. And when we look at it, we have to look at it from a perspective of cost. We have to look at it from a perspective of performance and supportability and, really, interoperability. And I’ll use that generically for a second, but let’s talk about cost. You know, one of the reasons you’d go to containerization is cost. Another reason would be for some support capabilities.
And there are certainly some advantages to that. You know, you’re encapsulating the OS, you’re encapsulating a configuration, you’re encapsulating the database itself. FileMaker itself also has some nuances in the way that it operates. And you’ve got to get down to the details here and peel back the onion, but overall, I think what it comes down to is that containerization has a place, and it’s coming.
Bill Heizer (00:05:12):
The product itself has to have a few changes made to it, and I think they’re even moving towards some things that will make that a much more appealing option. We’ve gone through production releases and our testing releases and looked at how it would be affected in our environment. But from a performance perspective, there are some gains there as well.
You know, when you compare Linux, or Unix, versus Windows, I have been back and forth on this quite a bit: is there a performance advantage? And the reality of it is, we’ve kind of come to realize that there are some things that are better, and some things that are worse. And it mostly comes down to your specific application and the things that you’re doing with it. When you compare that platform against a Windows platform, there are also some things built in. I mean, this operating system has been around a long time; it has a lot of capabilities that are sort of innate to it. There’s also some interoperability that’s built into the OS that we just don’t have on the Unix side yet.
Matt Navarre (00:06:09):
I want to take a quick step back before we get too deep into this, because I’m realizing that maybe some people don’t know what containerization is. Yeah, it’s something that I didn’t know that much about until not that long ago. So, normally FileMaker Server is one computer: the OS and the FileMaker Server installation are on the same box, whether that’s Windows or Linux. One of you, describe what’s different about containerization, in a way that’s beyond what I understand currently.
Bill Heizer (00:06:35):
Matt, why don’t you take it?
Matt Petrowsky (00:06:38):
Sure. So if you have a computer and it’s got an operating system on it, and you install FileMaker Server alongside that operating system, that would be bare metal; you’re basically running on hardware. The next stage is when you’ve got hardware that has high capability and you want to be able to run multiple different operating systems. Then you’re going to create a VM, which is a virtual machine, a separate, isolated OS, and FileMaker could be in each VM. But it’s very resource and processor intensive, because you’re setting up all the different drivers.
Now, a container is a step beyond that. You’re letting the OS still manage all of the OS things, but you’re then just running a subset of all the things that you need. They spin up faster, it’s better resource utilization, and you can have a lot of them running side by side. So it’s sort of like, I don’t know, you just get more bang for the buck without the overhead when you go with a container versus a full VM.
Matt Navarre (00:07:43):
Right. But then Bill, you just said there’s some downsides to that and I’d like to know more about them.
Bill Heizer (00:07:47):
Yeah, I mean, at the end of the day, you have to take into consideration that the OS will have to go through patching. You’ve got different things that have to be updated over a period of time. You have to reach in and do monitoring. You have to reach in and, you know, execute things on there for collecting statistics. You’ve got to upgrade individual components. You may have to extend interoperability. And with containers, you’ve got to reach into those individual containers. You’ve got to save states, and bring those states back. Moving EBS or connected volumes in and out of them can be very difficult. You’ve got to be concerned about extension of the partitions. You’ve got to be concerned about a number of, we’ll just call them OS-level things, as well as the infrastructure around that.
Matt Navarre (00:08:28):
Yeah, yeah. If the container OS itself has an issue, that’s going to affect a whole host of servers, which wouldn’t otherwise be the case. I guess it would be the case if you have a big VM, but big VMs on AWS, those are all redundant. Yep. So the whole benefit of AWS is everything is redundant: servers, everything.
Matt Petrowsky (00:08:47):
That factors in your company’s tech stack too; your knowledge base is going to dictate what direction you’re going to go. I mean, you need to leverage your human resources in order to be able to manage something like that and grow beyond, yeah, you know, the few people running it.
Bill Heizer (00:09:03):
Yeah, absolutely. And you also have to take into consideration who your target client is and what their use cases are, because, you know, one size certainly doesn’t fit all. You have some capabilities when you’re outside of containerization to move that needle, you know, much easier, much more dynamically than you do inside of that container. So, you know, at the end of the day, I’m not opposed to it.
Our testing has shown us that there’s some real promise there. And we’ve had some further conversations with some other organizations about moving to that model in some cases. You know, Claris Server is coming down the road here; we’ve got some builds of that available, the thing is out, and we’ve looked at it. I think that holds more promise for this than the current iterations of FileMaker Server do.
Bill Heizer (00:09:47):
Hmm. So we’ll see where that materializes. We’ve been doing a lot of testing with, you know, some pre-release builds, and kind of figuring out where we’re going to go there. But it is a much bigger picture when you’ve got to think beyond just, “Hey, I’m going to stand up this instance, it’s going to run like this, and I don’t need to do all these other things with it.” Again, it gets down into the weeds, but overall, our take on it is, at this point in time, we’re just not quite ready to pull the trigger on that; all of these other capabilities need to be in place. Again, managing hundreds and hundreds of units is way different than managing, you know, your manual 50 machines, right? Scale comes into play there. Yeah.
Matt Navarre (00:10:23):
Yeah, yeah. For sure. I was thinking about this when I was prepping: the Drake equation comes into play when you have that many. For those who don’t know what that is, it’s a math equation to estimate the likelihood of intelligent life in the universe, which is the number of stars, times the percentage of stars with planets, times the percentage of planets in the habitability zone, times the percentage of planets at the age where they could actually have had life, times the percentage that have oxygen in the atmosphere, blah, blah, blah, blah, blah.
So they’re all, you know, teeny tiny fractions, times a hundred billion. And, you know, the numbers get high. They actually get measurable. If you have 40 servers, you don’t really have to worry about stuff. If you have 2,000, you do, because the things that are super rare are going to be happening every week.
Bill Heizer (00:11:11):
Yeah. The other piece of that too is, how do I say this? When you cross the barrier into deployment models, there are different areas affected that are maybe not so obvious. One example is CloudFormation. So we are a CloudFormation Service Delivery certified AWS provider. And…
Matt Navarre (00:11:34):
Can you define that a little bit?
Bill Heizer (00:11:36):
Yeah. So at the end of the day, and I was going to get into some differentiations, but one of the key differentiators I think we bring to the table, at least from an architectural perspective, and what we do in the MSP, is a couple of key areas. And one of them is CloudFormation delivery. It’s a program that, we can call it certified: you get designated, you go through some validation, you show examples of your work, AWS validates you on that and gives you a delivery program designation. And we’ve gone through that for a number of types of things. So everything from, you know, modern cloud development using React and some other components and gateways and APIs and stuff, we’ve applied that methodology to our deployment for all of our clients.
Matt Navarre (00:12:16):
Yeah. Yeah. So it’s a deployment. It’s…
Bill Heizer (00:12:18):
Infrastructure as code.
Matt Navarre (00:12:19):
Infrastructure as code, perfect. Yes. So how does the CloudFormation product from AWS change how you would deploy a FileMaker server? I guess that’s what I want to know.
Bill Heizer (00:12:29):
It’s a great question, because it is a fundamental differentiator for us. Whenever we put together a client’s implementation, I’m going to use some generic terms: a client comes to us and says, we would like your assistance here. We’re going to leverage a lot of automation. A, because we want to keep cost down; B, because we’ve architected to do such; and C, because it’s very reliable. You know, humans make errors. We just do. Right? And we’re talking about hundreds and hundreds of steps that go into getting something into place. It’s one thing to throw a box out there, give it an IP, install an SSL certificate, and go to town. It’s another to put one out there that has CloudWatch monitoring and system auditing and, you know, automation for snapbacks, or what we call snapshots,
Bill Heizer (00:13:17):
and automation for monitoring. And all of these things go into place: setting up the VPCs, routers, gateways, security groups, and controls. All of those things are implemented through an automation routine. Not only is it implemented and built dynamically, it’s injected with pre-built stuff. We use CloudFormation to stand up the networking, allocate out resources, build gateways, set up security groups and security policies, apply them to those instances, and give instances and the backend infrastructure the rights to communicate with each other. And it’s all done in a way that is consistent, reliable, vetted, and repeatable. So, by way of example, just last, I think it was about two or three weeks ago, we had to make a change globally. So we’re talking about, I don’t know, we’re probably active right now in 17 different regions across the globe.
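Infrastructure-as-code deployments like the one Bill describes are usually expressed as CloudFormation templates. As a rough sketch only — the resource names, CIDR range, and ingress rules here are hypothetical illustrations, not Soliant’s actual configuration — a minimal template standing up a hosting VPC plus a security group that admits HTTPS and FileMaker’s client port (5003) could be assembled like this:

```python
import json

def build_template():
    """Assemble a minimal, illustrative CloudFormation template:
    a VPC plus a security group that only admits HTTPS (443) and
    FileMaker client traffic (5003). Names/CIDRs are placeholders."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Illustrative FileMaker hosting stack (hypothetical)",
        "Resources": {
            "HostingVPC": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": "10.0.0.0/16"},
            },
            "FmsSecurityGroup": {
                "Type": "AWS::EC2::SecurityGroup",
                "Properties": {
                    "GroupDescription": "HTTPS + FileMaker client traffic",
                    "VpcId": {"Ref": "HostingVPC"},
                    "SecurityGroupIngress": [
                        {"IpProtocol": "tcp", "FromPort": 443,
                         "ToPort": 443, "CidrIp": "0.0.0.0/0"},
                        {"IpProtocol": "tcp", "FromPort": 5003,
                         "ToPort": 5003, "CidrIp": "0.0.0.0/0"},
                    ],
                },
            },
        },
    }

# The serialized template is what you'd hand to CloudFormation's
# create_stack call, once per account/region.
template_json = json.dumps(build_template(), indent=2)
```

Because the same vetted template runs identically in every account and region, a one-line change to it can be fanned out everywhere, which is the repeatability Bill is pointing at.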
Matt Navarre (00:14:07):
That was one of my other questions.
Bill Heizer (00:14:09):
Yeah. I mean, we support almost every region; we exclusively leave out the Middle East and Russia, for reasons that you can probably guess for yourself. But, you know, at the end of the day, we deploy across almost every region available in AWS, including Local Zones, which is a whole different topic we can talk about. But at the end of the day, we needed to make a deployment change that would affect every single instance, not only FileMaker Servers, but the services that they interact with. And rather than having to go through every account and build a policy group and change the networking and open up different security ports, we wrote a modification to our CloudFormation. We ran that through a vet, and it runs through a test that verifies whether those things are going to work, and then we deploy it out to the entire infrastructure.
Bill Heizer (00:14:53):
And within a couple of hours, every account in the entire region is updated. And with that, that goes to say: hey, it cost us very little to do that update. It keeps us up to date; it keeps us, how do I say this? more enticed to keep it up to date. And it also enables new capabilities much, much faster. So as we implement new capabilities, like, for example, we’re introducing something called our OptiFlex disaster recovery model.
And we can talk more about that later, but at the end of the day, it required this modification so that the instances themselves would have access, through a specific security group, to interact with another set of services operating not only in one account, but in every account. That would’ve been a monumental task, modifying about 17 different things, without that code automation in place, without that infrastructure it was pre-built on.
Or, to go back to our earlier conversation, if you had done it in containers, this would be a whole different ballgame. And so you’ve got to keep under consideration not only the technical implementations of what you get out of it, but how it affects the bigger picture. So this is a great example of how that architecture really benefits us, thereby benefiting our customers. We don’t have to charge them for the time, and we can bring features to life much, much sooner.
Adam Russ (00:16:15):
Right. It helps, and in some cases, Bill, it’ll also give us the ability to roll back certain things, right? So if we’ve got something that goes haywire, which, you know, God forbid, we’ve kind of designated that we want to invest in something that’ll give us the ability to go back and, you know, move the needle back too. So that’s another aspect. Yeah.
Matt Navarre (00:16:33):
Remember that time when FileMaker Server had this bug that came out and the other time that happened, and <laugh>
Matt Petrowsky (00:16:38):
That’s funny, because when you said that, when you said we can deploy globally, I mean, no matter what the tech is, whether you’re writing a shell script or whether you’re writing a Windows batch script, if you’re the engineer and you hit that button, you’re saying a prayer right before <laugh>, when it goes out to multiple hundreds of machines. You’re like, which one is going to fail? Which one of them? You’re looking for it; there’s always the one outlier that you hit <laugh>.
Bill Heizer (00:17:01):
Well, I think that would be true if you didn’t have the architecture. I feel like we’ve taken this into consideration before we deploy anything. Our lab is virtualized to support this concept of every different configuration. Realize, too, that our instances are pre-built, baked on a number of variants, and those variants don’t get changed. So we know exactly how it’s going to affect every single implementation.
And we can run that through a number of processes. For example, we just did the Server 2023 modification, and we’re going to do an evolution in that case. We’ve already vetted it out on the basic configurations we have. We know exactly what components are going to change, we know how the volumes are going to be affected, and we can automate that into our process, run a complete environment test, and know the results of that before we do it.
Bill Heizer (00:17:54):
So I agree with you to a point in those cases. I think, as an MSP, one of our primary jobs is to eliminate that complexity and eliminate, or at a minimum reduce, that risk exposure for our clients. And yeah, of course, if you do it wrong, you can suffer those consequences. And again, it’s about investing in those technologies. I mean, our lab, we don’t charge our clients for. But we have to do those sorts of things, and it’s, yeah, definitely an expense, but one that became necessary a number of years ago. If we’re going to do this, we need to do this right.
Matt Navarre (00:18:30):
So, all right, I’ve got a couple of pointed questions, because they’re occurring to me as we’re talking about all this awesome stuff. So normally, when you have to upgrade your FileMaker Server from 19.5 to 19.6 to, I guess, 2023 <laugh>, you have a couple of options, right? As a normal user, you either leave the OS there and install the new version of Server, hoping everything’s going to work, or you could deploy a new OS with the newer Server already installed and configured, and then you just map the data drives over to it. And I’m also really curious about, like, how many data drives do you have? Do you have a separate drive for data and backup? Like, what is your process for a regular customer to go from a 19.5 to a 19.6 server?
Matt Petrowsky (00:19:14):
I would assume all data’s detached from what’s operating.
Matt Navarre (00:19:18):
I would too, but I want to ask Bill <laugh>. That’s…
Matt Petrowsky (00:19:20):
…your most flexible.
Matt Navarre (00:19:21):
Yeah, yeah, yeah.
Bill Heizer (00:19:23):
So the answer is yes. Yes, yes. So let’s talk about that a little bit. First of all, there is a complete abstraction between server configuration and data, and there are some nuances with the product: you know, a combination of both hard links and the way that we attach EBS volumes and the dynamics of that, and then the idea of keeping backups in mind. So we want to come back to that here in a little bit, because this architecture dovetails into that. You’ve got a couple of things you need to consider. First of all, we want a complete abstraction away from the server and the data; that includes both FileMaker data as well as container data.
Matt Navarre (00:20:01):
Those are separate volumes, probably.
Bill Heizer (00:20:03):
They’re not, and there’s a reason for that. They could be, it doesn’t really matter.
Bill Heizer (00:20:08):
You know, there was a time and place when I would’ve been the biggest proponent in the world of that, but we’re talking physical hardware back then, when it was like, okay, you have different volumes for this, and they’ll be on a RAID, and it’s a hardware RAID, and we’re going to do all this stuff. That’s irrelevant now. It’s just irrelevant in the cloud; it just doesn’t really matter. Everything that’s connected to that is EBS, which is Elastic Block Store. And you can change those dynamically, and you can do some things with that.
Matt Navarre (00:20:33):
You can also change the size on the fly without having to reboot. Yep. Yeah. All that’s beautiful stuff.
Bill Heizer (00:20:37):
Yep. We can modify configurations on the fly. We can specify what instance type we want, throw more at it. Yeah, you can do IOPS. The funny thing about IOPS is we’d love to be able to get more performance out of it, but honestly, with the Draco engine there’s a limiting factor where you sort of…
Matt Navarre (00:20:51):
Yeah, that’s so true.
Bill Heizer (00:20:53):
…hit a level, and there’s really nothing you can do there. Now, the data migration tool is a different story, but we can talk about that later. But getting back to that: we’ll stand up an instance. So let’s say we wanted to update from one particular version of FileMaker to another. We’re going to take a couple of things. First of all, we’re going to see what the reliability is of that particular installer that we’re supplied <laugh>. We’ve been working with them since…
Matt Navarre (00:21:16):
You mean the installer? Not even just the version of Server?
Bill Heizer (00:21:20):
Yeah, I mean, yeah, the installer we’re supplied. You know, Windows has a whole package definition file that’s very rich for installation. It hits sometimes, and it doesn’t hit other times.
Matt Navarre (00:21:32):
Oh, I’ve seen it fail.
Bill Heizer (00:21:33):
Automated and unautomated, you know, where they put a reboot in the call. Bottom line, we’ve just come to realize that we really can’t rely on that, and that’s okay. But what it comes down to, in some cases, is we will do a complete automatic deployment. So we have a lot of automation that occurs.
So let’s say, for example, it’s Sunday night and we’ve notified people that we’re going to deploy an update to the North Virginia region, say going from 19.4 to 19.6. In that case, we may say we’ve vetted it out and we’re going to run an automatic deployment, and we will push that out. We’ll use an API backend on AWS to call into SQS. We’ll throw that into EventBridge. EventBridge will make a call over to the OS, do the work that it needs to do, log the instance, reboot the box, and you’re up and running. And that will work.
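The flow Bill sketches — API backend into SQS, then EventBridge, then the instance OS — implies a structured job message sitting on the queue. A minimal sketch of what such a message might look like; the field names here are invented for illustration, since the actual message shape is Soliant’s internal detail:

```python
import json
from datetime import datetime, timezone

def build_update_message(region, from_version, to_version, instance_ids):
    """Shape an upgrade-job message of the kind that might be queued on
    SQS before EventBridge fans it out to instances. All field names
    are hypothetical placeholders, not a documented schema."""
    return {
        "job": "fms-upgrade",
        "region": region,
        "fromVersion": from_version,
        "toVersion": to_version,
        "instances": list(instance_ids),
        "requestedAt": datetime.now(timezone.utc).isoformat(),
    }

# Serialize the message: this string would be the body handed to an
# SQS send-message call for the North Virginia rollout Bill describes.
msg = build_update_message("us-east-1", "19.4", "19.6", ["i-0abc123"])
payload = json.dumps(msg)
```

Keeping the job description declarative like this is what lets the same consumer on each instance decide what work to do, log it, and reboot, without per-machine scripting.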
Matt Navarre (00:22:22):
So that means, when we get to the FileMaker Server part, that means log all the users off, close the files safely, stop the server, and shut down the service. And then, are you always replacing the machine? You’re not just doing an upgrade of FileMaker on the OS? It depends. Oh, so sometimes, right?
Bill Heizer (00:22:44):
It really depends. Let’s go to the other end of the spectrum. So that was a very simple example, and that’s going to be what I would consider your dot releases in between. Although FileMaker is a little bit different now. I mean, every release is kind of an update, you know, whether it be 19.4, .5, .6, or, I don’t know, I guess 2023 would be considered a bigger upgrade. We consider it a bigger upgrade <laugh>, really, for a number of reasons. So let’s go to the other end of the spectrum, and I can talk about that. In that particular case, you know, we’ve got a new operating system capable of running Server. Now, it brings with it some capabilities that we’d like to take advantage of; the Windows operating system innately has some capabilities that we can leverage.
Bill Heizer (00:23:24):
Its interaction with AWS and AWS’s CLI, AWS’s APIs, the support of that. You know, once we go to the cloud, we’re limited; we can’t even think about trying to continue to support, say, Windows 2012 or an older version of something, you know, Java and other component pieces coming into place. But in that particular case, we may want to look at what we call an evolution. And this is more to the extreme, in which case we can use our automation to deploy and build out an entire new server that’s sitting alongside the production one. Then once that is in place, there are a number of other automations that can come into play to do the following, due to the architecture and the way that we set up a generic build where there’s an abstraction between the data and the server configuration.
Bill Heizer (00:24:13):
You’ve got a few things you’ve got to get from the old server: server script schedules, you may have some drivers, you may have some script automation going on, you may have some plugins that need to be there. A lot of the plugins, a lot of those configurations and those scripting things, are abstracted off into the data volume, which we can snapshot. We can then literally issue a command to say, bring this one down. Production goes down. We take the volume, regardless of whether it’s one gigabyte or one terabyte in size; we disconnect that volume and dynamically reattach it to the other one. And due to the architecture of the way that the machine talks to that volume, through some of our proprietary ways, it will just recognize that as the new volume, and it will bring those files up based on that.
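The volume-swap “evolution” above amounts to a fixed, ordered sequence of operations. Here is a hedged sketch of that ordering as a pure plan: the step names loosely mirror the EC2 API calls involved (e.g. detach/attach volume), the snapshot-first step is my own precaution rather than a stated part of Soliant’s process, and the device name is a placeholder:

```python
def volume_swap_plan(old_instance, new_instance, data_volume):
    """Order of operations for swapping a data volume from an old
    FileMaker Server to a freshly built one. Each tuple is
    (operation, arguments); in practice each would map to an AWS
    API call or a server-side command."""
    return [
        # Close files and stop services so the volume is quiescent.
        ("stop_filemaker_server", {"instance": old_instance}),
        # Safety net before touching the volume at all.
        ("create_snapshot", {"volume": data_volume}),
        ("detach_volume", {"volume": data_volume, "instance": old_instance}),
        ("attach_volume", {"volume": data_volume,
                           "instance": new_instance,
                           "device": "/dev/sdf"}),  # placeholder device name
        # The new server recognizes the volume and opens the files.
        ("start_filemaker_server", {"instance": new_instance}),
    ]
```

Because the sequence is identical whether the volume is one gigabyte or one terabyte (only the snapshot step scales with size), downtime stays in the minutes range, which is the payoff Bill describes.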
Bill Heizer (00:25:02):
Yeah. So you’ve got a variety of different things, but I think the thing that is most important here is not to understand all of the moving pieces, but to understand this: because we are in the cloud, and because we can leverage the cloud infrastructure, EventBridge, SQS, AWS APIs, and talk to those in a modern cloud methodology through those APIs, with managed keys and secure ways to do that, each client is provisioned into their own blast radius: its own account, its VPC, its security roles. We are not doing any shared accounts. You know, it’s very easy to do all this automation when you’re just operating in a single tenancy, which I know a lot of people do, and it’s unfortunate, because it’s a problem when it’s a problem.
Matt Navarre (00:25:53):
So what do you mean by shared accounts? You mean, like, IAM users in AWS?
Bill Heizer (00:25:56):
No, it’s way more complicated than that, because when you establish an account in AWS, you’re establishing sort of the first barrier, in the way that account is built. A lot of MSPs will, you know, they’ll say, well, listen, we’ll have one account, we’ll put stuff in a bunch of different regions, and we’ll stand up people’s FileMaker Server or this particular instance in this account, and the automation and the rules of security and VPC peering and VPC sharing and account sharing become very simplified.
Matt Navarre (00:26:29):
Oh, you mean a whole AWS account? Okay. Okay. That’s a yes.
Bill Heizer (00:26:34):
And then inside of that, you get into the multi-sharing stuff. We don’t do any of that, you know; we’re not in the business of doing file sharing across servers. We just don’t. Each individual client has their own servers, their own restrictions on those sorts of things. So we’re not getting into that model; that model has been dead since 15. But what we are in the business of is controlling that blast radius. And that blast radius is not only for that client, it’s also about extending that client. Because, say, for example, we have a number of clients that are very advanced, you know, where we may have multiple VPCs pointing to multiple VPNs across multiple different network segments that are piped in for replication, authentication, access to ESS, on-prem or SaaS models.
Bill Heizer (00:27:24):
And by isolating those accounts and controlling that, we can extend that individual client without affecting anyone else’s infrastructure, let alone exposing anyone else to anything. That is a fundamental difference in operating in the cloud. I’m going to be very honest: we went down that path initially. Very quickly, through our relationship with AWS and trying to become an AWS Advanced Tier Partner, we realized we had made some assumptions; it’s one of those things you don’t know until you know, right? Early on, we kind of figured out that we needed to restructure some of that account structure to control that and allow for extensibility. So, going back to the conversation about upgrading, we’re going to take each implementation and look at where the client is in their evolution.
Bill Heizer (00:28:15):
Meaning, are they long in the tooth on the version they’re running? What’s the server build, and what other components are involved? What plugins do they have? Do they have a large amount of container data? Do they have a large amount of scripting? And we’ll vary the approach depending on what’s available.
With 2023, we’ve come to the realization that we’ve got a new OS build. We’d love to take advantage of the new capabilities that shore up what we call our OptiFlex family of products, which we can talk about later. It’s about supporting the sustainability and cost optimization pillars, which are part of the AWS Well-Architected Framework. They’re about savings for our customers and giving them a lot more flexibility. And we’re going to be able to leverage that with Windows Server and FileMaker 2023. Well, Windows Server 2022, anyway.
Point B is, we’re going to do an evolution. We dynamically build the new server infrastructure, stand it up, migrate over the components that are important for that particular implementation, and then swap those volumes into place. That effectively reduces our downtime to minutes, as opposed to running a whole server upgrade.
Matt Navarre (00:29:31):
And being down for hours or half a day. All right, you’ve tantalized me twice with OptiFlex. I think now’s the time; I want to know what it is.
Matt Petrowsky (00:29:37):
I have a question. When it comes to your automation package, considering everything, how much of it uses FileMaker itself as your front end? Are you doing a lot of it through the FileMaker Admin API? Or are you doing a lot of this through Amazon’s provided automation, through their scripts, and…
Bill Heizer (00:30:00):
I would say about…
Matt Petrowsky (00:30:00):
…Lambda, whatever. Absolutely. Or are you communicating through a file? I know the listeners would love to hear: we’ve got this one FileMaker file, and it’s communicating to Lambda, which is communicating to our FileMaker Servers, and we’re communicating directly to our server, and we’re doing it all in this FileMaker database. Okay. <Laugh>. Yeah.
Bill Heizer (00:30:18):
I think at the end of the day, we ran out of room with FileMaker, in terms of supporting that stuff, a long time ago. You know, we’ve…
Matt Petrowsky (00:30:25):
So is it more of a web UI?
Bill Heizer (00:30:27):
…I mean, FileMaker’s a component of it, but let me describe it so you get some idea. At the end of the day, I would say 95% of what we do for automation is pure cloud native: everything from Lambda to SQS to gateways, and a number of other features that talk directly to AWS, both through the CLI and the APIs. Sometimes there are callbacks to the Data API in FileMaker to post the results of things.
But the big thing, for example, is running right now, and it actually just fired one second ago, literally at 11:45: we just kicked off a global Snapback. That global Snapback runs about 13,000 jobs every 15 minutes. It’s an orchestration where we go out and take a snapshot of a volume, we catalog it, we use EventBridge, we use Lambda, and we make some calls to the FileMaker Data API.
And we make a bunch of connections to store all this stuff. That happens on the back end, and then we report it back. Some of it goes to an ESS table that we access with a FileMaker solution, but we also have a whole separate React front end that does a bunch of other things. So at the end of the day…
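For listeners curious what one of those 13,000 jobs might look like, here is a minimal sketch of a per-volume snapshot step using the AWS SDK for Python (boto3). The tag keys, client IDs, and function names are illustrative assumptions, not Soliant’s actual implementation; in a setup like theirs, this logic would run in Lambda, triggered on an EventBridge schedule.

```python
import datetime


def build_snapshot_tags(client_id, volume_id, when):
    """Catalog tags attached to each snapshot so it can be found later
    by client and point in time (tag keys are made up for illustration)."""
    return [
        {"Key": "Client", "Value": client_id},
        {"Key": "SourceVolume", "Value": volume_id},
        {"Key": "Timestamp", "Value": when.strftime("%Y-%m-%dT%H:%M:%SZ")},
    ]


def snapback_one_volume(client_id, volume_id):
    """Take an incremental EBS snapshot of a single volume and tag it.
    EBS snapshots are block-level deltas, which is why this returns in
    moments whether the volume holds 1 MB or 1 TB."""
    import boto3  # AWS SDK; needs credentials when actually run

    ec2 = boto3.client("ec2")
    resp = ec2.create_snapshot(
        VolumeId=volume_id,
        Description="snapback for " + client_id,
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": build_snapshot_tags(
                client_id, volume_id, datetime.datetime.utcnow()),
        }],
    )
    return resp["SnapshotId"]
```

A real job would also record the returned snapshot ID in the catalog, which is where the callbacks to the FileMaker Data API come in.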
Matt Navarre (00:31:43):
That’s a global backup of every single client’s thing. Every 15 minutes, basically?
Bill Heizer (00:31:46):
Every 15 minutes.
Matt Navarre (00:31:49):
I think 12 minutes is probably what they really expect. They’ll come up.
Bill Heizer (00:31:51):
So it’s been an hour for a long time, and then we made a few changes. We have two different models; we call them Premium and Standard. Standard’s an hour. When we go to 2023, we will invoke the full 15 minutes. And the reality of it is, I don’t care if your database is one megabyte or one terabyte: a snapshot through AWS occurs in a matter of milliseconds. It takes me longer to issue the command through the OS than it does to actually do it. And the beauty of it is not only that it’s near-instantaneous, but that we’re leveraging AWS snapshots on the back side. Which gets into a little detail, but you’ll probably find this intriguing: I don’t do any FileMaker backups on local EBS, for a number of reasons.
Bill Heizer (00:32:40):
Number one is cost, because our default is a backup every 15 minutes for the last 24 hours, one for each of the last seven days, one for each of the last four weeks, and one for each of the last three months. Those are all cataloged and online, and they’re available to you to mount as a secure FTP point, which allows a much more robust file transfer. Through our portal, you can also expose the live databases by dynamically opening a port via a self-service request. That opens it up and allows you to use an SFTP tool to move stuff back and forth: download whatever you want, replace a file, put it back into place, modify container data, all without having to have an insecure open port for RDC.
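The retention ladder Bill describes (15-minute points for a day, dailies for a week, weeklies for a month, monthlies for a quarter) can be sketched as a simple age classifier; a cleanup job would then keep one snapshot per bucket in each tier and drop the rest. This is an illustrative reconstruction of the schedule as stated, not Soliant’s actual code.

```python
def retention_tier(age_hours):
    """Map a snapshot's age to the retention tier it belongs to,
    or None once it has aged out entirely (after roughly 3 months).
    Thresholds follow the schedule described in the conversation."""
    if age_hours <= 24:
        return "quarter-hourly"   # every 15 minutes for the last 24 hours
    if age_hours <= 24 * 7:
        return "daily"            # one per day for the last 7 days
    if age_hours <= 24 * 7 * 4:
        return "weekly"           # one per week for the last 4 weeks
    if age_hours <= 24 * 30 * 3:
        return "monthly"          # one per month for the last 3 months
    return None                   # eligible for deletion
```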
Bill Heizer (00:33:31):
Right, right. Because the number one thing we see in our monitoring is brute-force attacks. It’s constant, it’s daily. And I hate watching auth logs. <Laugh>. Yes. And the reality of it is, we just shut it down. We don’t allow it, and we dynamically open ports based on rules for RDC, even for our own administrative access. But coming back to the Snapback process: when that occurs, we’re only capturing the deltas. So if I have a one-terabyte solution, the first time I bring it on, I take a snapshot and capture the whole terabyte. But from that point on, I’m not storing data on EBS, which is roughly twelve times more expensive and isn’t as durable. We immediately store each delta, for the next 36 backups that we do, in S3, which has eleven nines of durability and can be cross-region replicated, again based on your risk exposure.
Bill Heizer (00:34:29):
Right. Everybody’s got a different exposure, and costs differ accordingly. But we then have access to those snapshots to bring online on demand, and we have three ways we can do it, two of which are self-serve.
A: you go to our portal and say, I want to mount this Snapback. It gets exposed as secure FTP, a port is dynamically opened, and you get access through a dynamically generated set of credentials. You do what you need to do, and you close the thing back up, or it dynamically shuts down after a period of time.
The second is self-service through enabling live FTP support to the live database areas, both the secure location and the standard location, in which case you can move things around.
Bill Heizer (00:35:16):
So if you have a set of container data that you need to move, you have access to do that in a very secure way; you can move files back and forth. And the last one is our first level of data recovery. In the event of a problem, it can be very cumbersome to restore a terabyte worth of data. I mean, it’s EBS, so it’s fiber channel on the back end, but it still takes time to copy stuff from EBS to EBS.
Matt Petrowsky (00:36:17):
Wow. Are you running parallel block volumes?
Bill Heizer (00:36:20):
Nope, we don’t need to. The way it fundamentally works is we’re leveraging snapshots. Without getting into exactly how snapshots work: imagine we take a snapshot of the volume as it comes up the first time. From that point on, you’re doing a block-level comparison and only storing the deltas at those block levels. Then when you request a restore, AWS effectively takes the blocks from the whole and merges in each delta that changed, and you create a new volume that represents all of that
Matt Petrowsky (00:36:53):
Based off of the Deltas.
Bill Heizer (00:36:54):
Yep, based off of the deltas. And then because of our architecture, because of the way we fit the images together, if you will, with all the pointers and the way it’s designed, we can just disconnect a volume and substitute the new one at its mount point. The server then comes up exactly as it was at that point in time. FileMaker is unaware that anything has happened behind the scenes, and it buys us a massive amount of capability, as well as enabling the other component of OptiFlex, which is our disaster recovery model.
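The swap Bill describes — building a volume from the merged deltas and substituting it at the same mount point — might look roughly like this in boto3. The instance and snapshot IDs, the device name, and the helper names are assumptions for illustration; a production version would also stop FileMaker Server cleanly before detaching anything.

```python
def volume_on_device(block_device_mappings, device):
    """Find the volume currently attached at a given device name,
    given an instance's BlockDeviceMappings list."""
    for mapping in block_device_mappings:
        if mapping.get("DeviceName") == device:
            return mapping["Ebs"]["VolumeId"]
    return None


def swap_in_restore(instance_id, snapshot_id, az, device="/dev/sdf"):
    """Create a volume from a snapshot (AWS merges the stored deltas),
    detach the old data volume, and attach the new one at the same
    device so the server comes back as of that point in time."""
    import boto3  # AWS SDK; needs credentials when actually run

    ec2 = boto3.client("ec2")
    new_vol = ec2.create_volume(
        SnapshotId=snapshot_id, AvailabilityZone=az)["VolumeId"]
    ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol])

    inst = ec2.describe_instances(InstanceIds=[instance_id])
    mappings = inst["Reservations"][0]["Instances"][0]["BlockDeviceMappings"]
    old_vol = volume_on_device(mappings, device)
    if old_vol:
        ec2.detach_volume(VolumeId=old_vol)
        ec2.get_waiter("volume_available").wait(VolumeIds=[old_vol])

    ec2.attach_volume(VolumeId=new_vol, InstanceId=instance_id, Device=device)
    return new_vol
```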
Matt Petrowsky (00:37:32):
These are the things that people won’t, and usually can’t, spend the time to create for themselves. That’s usually what you’re paying for when you’re paying for hosting: the infrastructure that you yourself won’t take the time to, one, learn, and two, actually implement.
Bill Heizer (00:37:51):
You hit it right on the head. You’ve got to find somebody to leverage this stuff from. It would be silly for Soliant, or quite honestly any FBA, even one with a large number of clients, to think about investing in all of these tools if there wasn’t going to be a larger use for them. It would be very difficult for me to justify to any one particular client, even a large one, that we’re going to implement all of these things just for them. Soliant.cloud was kind of built out of necessity for us. I know everyone here has dealt with situations where, let’s say, the client’s infrastructure team may not necessarily be versed in FileMaker-ese. Is that a word? Can I use that as a word? <laugh>
Matt Petrowsky (00:38:33):
Bill Heizer (00:38:34):
And quite honestly, how does FileMaker work with their infrastructure? And thirdly, and this isn’t meant to be derogatory, we as FileMaker developers sometimes lack the experience. I spent 15 years at FileMaker working with a lot of FBAs to help them overcome the infrastructure piece. Prior to FileMaker, I worked in some large organizations, Dow and Lilly and Boeing, and these are large implementations. My infrastructure experience was very helpful in helping customers solve some of these things. I hate it when somebody has this wonderful solution and the infrastructure or the configuration, right?
Matt Navarre (00:39:08):
Is the weak point.
Bill Heizer (00:39:09):
It’s a barrier for them to get over. So, us taking control is too strong a phrase, but us providing this capability and shoring this up gives our customers, and hopefully our FBAs, a better experience that they can count on, with some faith that things are done the right way. And I don’t want to toot our own horn, but we’ve been very lucky to have a significant amount of success. Over the years we’ve really refined this into something we can count on; we feel like the offering is really vetted at this point.
Matt Petrowsky (00:39:48):
I was snickering while we were talking, just reflecting on years gone by. You’re talking about a one-terabyte backup over a fiber backplane, and I’m thinking, yeah, what would that be like on sneakernet? Hey, gimme the SanDisk <laugh>, let’s copy the file and walk it over. It’s just a different world than before. It’s amazing.
Matt Navarre (00:40:11):
So, a stack of SyQuest 44s, max.
Matt Petrowsky (00:40:14):
Bill Heizer (00:40:14):
When we stepped into this world, and you bring up a good point, it was all about leveraging your FileMaker skillsets and leveraging your, we’ll just call it infrastructure expertise, for lack of a better term. Luckily, I had people like Mike Duncan and Brian Engert, who were extraordinarily helpful in helping us architect. We’ve also worked with AWS to get their engineering input on some of our architectural designs.
But I look back over the years, and those experiences have helped form how this thing gets architected. They’re designed to, like I say, shore up things the product may not do the way we want, or expose new capabilities that I think are fundamental to a better experience with FileMaker and the product line. It might be a good thing to talk a little about what’s behind the scenes, how validated all of it is, and what customers might want to look for in any MSP, regardless of whether it’s cloud or whoever they’re going with.
Matt Navarre (00:41:26):
Yeah, that might tie into one of my other questions, which is: what disasters could have happened that your infrastructure prevented? What are the things other people in this space face? By the way, we’ve been sitting at 97 acronyms for a while, so there’s three more before we hit a hundred, and that’s going to be pretty exciting. I’m not actually counting them, but there have been a lot. <laugh>
Bill Heizer (00:41:44):
I’ll have three more within the next five minutes, I promise, I’ll do my best. You bring up a good point. We recently have seen a couple of issues, and I’m not going to name names because that’s just not proper, but I will describe the situation. Back, I don’t know, maybe six months ago, we were made aware of a ransomware attack on one of our competitors, for lack of a better term.
And I think, at the end of the day, you would be a fool to claim that you are in the clear from any of this type of stuff, because it’s a moving target. But I do think it’s the MSP’s responsibility, including ours, to think about the unthinkable. It’s very, very easy to not have a backup strategy, and the reality is you don’t need it until you do. Right? Yeah.
Matt Navarre (00:42:36):
Bill Heizer (00:42:37):
In this particular case, it was a ransomware attack. For those unfamiliar, at the core of it, someone gains unauthorized access to an instance, encrypts it with a key, and then holds the data and its contents ransom until you pay; then they give you the key, or hopefully they give you the key. You’ve got to think about that stuff. When I talked about the snapshots, one of the things we wanted was a completely separate, firewalled-off section between the different levels of backups and the different storage systems, again with separation between OS and data. Forget about configurations of apps and all that, because all of that I can stand up automatically.
Bill Heizer (00:42:31):
Yeah. So for ransomware, one approach would be to have an automated ability to rebuild the entire infrastructure; just destroy the other one, we don’t care. But you’ve got to get the data back, right? Well, if that data is stored in a firewalled area from a point in time prior to the attack, then even if the live volume got encrypted, the backups would still be in a different data store, written at a block level, and you’d be able to restore from that.
Matt Navarre (00:44:02):
Yeah, it’s still reducing risk. I mean, it’s been said that if the right hacker really, really wants your data, they’re going to get it.
Bill Heizer (00:44:12):
And I think it’s your goal to say: okay, for every area where I store things and everything that I do, I have to put up some firewalls, contain some blast radius. And the risk mitigation strategy, again, is different for every customer. A baseline set of things may be okay for someone, but then you go beyond that and ask: what’s your time to recover? What’s in your budget? Is an hour okay, is a day okay, is 10 minutes okay?
Then you have to supply different levels of that, and each of those has to adhere to models that not only financially appeal to the customer but also protect them in those areas. And we saw another issue just, I guess it’s only been a couple of weeks ago, where a particular vendor had a fire in their building.
Matt Navarre (00:44:56):
Bill Heizer (00:44:57):
And that fire caused no damage to the physical equipment in that data center. However, what it did do was cut off access: you couldn’t get into the building, the building lost power, the building lost internet connectivity. So: three days before you could get into the building, a day to get the electricity back on, another day to get the fiber back in, another day to rebuild all that physical-layer hardware and get it back into place.
So again, we’re back to that question: what’s your risk mitigation factor? Have you thought about those things? In our case, we’ve given consideration to that type of scenario. What you’re really talking about, in our case, is a complete data center failure in a particular region. For that, we have the option, and some defaults, to say: listen, we want to protect against regional failure, in terms of connectivity, data exposure, and configuration.
Bill Heizer (00:46:01):
The last thing you want to do at two o’clock in the morning on a Sunday is pull out your 16-page rebuild-my-environment-and-reconfigure-everything runbook. Our OptiFlex product is designed for that level of risk exposure, where not only is the data abstracted away and stored in a secure model, even in a different region, but so are the configurations of those servers, which can be modified and brought up instantaneously.
So take our cross-region OptiFlex product. You may be sitting in the North Virginia region, and we will mirror your configuration over in the Ohio region: geographically separate, completely different data centers, a completely different set of hardware it’s running on. And we will stop that instance and only run it when we’re ready to engage it, or when we do what’s called packaging.
Bill Heizer (00:46:55):
So we shuttle those snapshots over to that other region. Now we’re in two places, each with eleven nines of durability. We’ve got the ability to stand that up. So instead of going through all of that manual rebuilding, you go to our portal, you select launch from a recovery point, which is mirrored from those last 15-minute snapshots we’ve stored, and you can restore to that point and be up and running with a full DNS switch. And if, because of that account separation, your disaster recovery box needs to reach back into your on-prem or out to a third party, we can establish that connectivity dynamically too. So it’s a full-featured configuration solving: hey, if I really need it, it’s a button press, and everything comes up online.
Matt Navarre (00:47:45):
Earlier, when I said one of the Achilles’ heels of FileMaker Server: what you just said, put that sentence in all uppercase with multiple underscores and 15 exclamation points, because all of that work was needed. If it were MongoDB at the back end, that already solved all that stuff, you know. But you’ve solved it, so it’s there. It’s awesome.
Matt Petrowsky (00:48:05):
Well, even MongoDB, CouchDB, it doesn’t matter what pieces of the puzzle you’re using. It all comes down to the automation, the glue, the scripts that you stick in between that make everything work, orchestrated.
Matt Navarre (00:48:18):
I get that. What I’m talking about is what you would have to do if you’re running your own FileMaker Server versus what you would do if you were just using a cloud appliance.
Matt Petrowsky (00:48:28):
You’d have to know, on Amazon’s infrastructure, how to write scripts, how to do your FileMaker Server maintenance, and all the other stuff you need to know.
Matt Navarre (00:48:35):
Yeah. All that.
Bill Heizer (00:48:37):
Matt Navarre (00:48:37):
I would just trust someone.
Bill Heizer (00:48:39):
The obvious thing is to let more people leverage that stuff. The investment for a single client is never going to be worth it. The investment we made initially was, like I said, out of necessity. We wanted our customers to have a better experience, we wanted to be able to service them in a timely manner and shore up some of these things. When we have a problem, we have a process: we know the steps, we know exactly what we need to do, and we can recover them very quickly. And the other thing, sort of hidden in the waves here, is our ability to service and troubleshoot, which is massively, exponentially faster because we know all of the moving components we pre-build and pre-bake into things.
Bill Heizer (00:49:26):
Wim (Decorte) and I spent a good, oh, I don’t know, months building configurations for Zabbix, plus some configurations for Windows performance monitoring. They get deployed, we can engage them, and we can look at them over a period of time and troubleshoot. And if by chance we find something that might affect other people, we’ve got automation in place to roll that change across everybody. So it’s been one of those where the benefits were obvious at first, but the things we’ve found since then, in my opinion, kind of outweigh them. Yeah. But it’s about putting it together so that other people can leverage it. At the end of the day, the cost to do this would always be too much for one individual.
Matt Navarre (00:50:13):
Oh yeah, for sure. Well, we’ve been going for a while, and I get the feeling we could nerd out; I know there’s a huge amount more depth we could get into. But are there any other key things you want to throw in and talk about before we wrap it up?
Bill Heizer (00:50:26):
I have one area I’d like people to think about when they’re considering MSP selection. There are a couple of things that really stand out as important, because you’re effectively partnering with someone who is now going to be in charge of either you as the client or, let’s say as an FBA, your clients. You need to understand what their investment in that is.
And we’ve done three things that I think are good differentiators, and things you should be asking about. The first falls into the category of track record: Soliant has been an FBA partner, a Platinum Partner, for a long time. And we’re lucky enough to have people like Mike Duncan and Wim (Decorte), and maybe lucky or unlucky to have someone like me, but that’s yet to be determined.
Bill Heizer (00:51:16):
But the point is, we put a lot of thought into the experiences we’ve had building infrastructure, and into taking our cloud expertise and modeling that. That’s fine and dandy, but I do think there’s some advantage there. The second is our investment in AWS. Whether or not you agree AWS is the premier cloud leader, we did a lot of due diligence; we spent a couple of years looking at cloud providers, and we feel there’s a clear advantage there. Well, they’re bigger than all their competitors combined, I think, still. And from a service level and the things we want to do and where we’re heading as a company, we’ve just established our COE, which is our Cloud Center of Excellence.
Bill Heizer (00:51:52):
But we’ve invested in AWS, and we are an Advanced Tier Partner. It is very difficult to become an Advanced Tier Partner. To be a Select or Registered Partner, those are easy hurdles; this requires a level of vetting, a level of auditing that has to be done. We have to maintain a minimum monthly level of revenue, meaning our run rates for our clients have to keep growing. Since 2015, we’ve averaged about 65% growth year over year. Last year we had a record year that eclipsed even that a little bit. That’s about showing that we’re investing in the technology. We’re also in the SPP, the AWS Solution Provider Program, which gives our clients access to economies of scale.
Bill Heizer (00:52:39):
It also lets us bring technologies to market through the AWS stores, and it lets us help customers take the next step. I think of FileMaker hosting as kind of a light first step into the cloud. You can do it, but the next one is: hey, leverage S3, or some sidecar capabilities we have through APIs, leveraging some other back-end type of stuff. And in that particular case, we’ve worked with a number of clients to do things like get proof-of-concept funding for them, help them get funding from AWS.
As an Advanced Tier Partner, we can do that. I talked about the CloudFormation piece and how important that is to this. And beyond that: that’s AWS giving us credentialing, saying we’re doing it right, not just us saying we’re doing it right. That’s key. On top of that, we have been trying to get into some very unique spaces with Soliant.cloud, and among them are higher education, some government work, and some other things.
Bill Heizer (00:53:39):
That has required third-party validation for things like SOX, HIPAA, GDPR, and the Higher Education Community Vendor Assessment. These are things where we have gone to a third party called CyberGRX, the GRX standing for Global Risk Exchange. This is a third party who has audited our environment, our infrastructure, our code, our processes behind the scenes: everything from how we deal with clients’ data to how we secure it and how we secure access to it. And we have been validated by that.
We are now Level Two certified, which basically puts us at the top tier for any small or medium-sized business. Your Level Ones are going to be companies like AWS and Google. That puts us in a position where we can handle a lot of secure data, and you as a customer can then leverage that. I can’t tell you how many times we used to get: are you HIPAA compliant? Are you SOX compliant? Right.
The reality is, it’s sort of an iron triangle. You’ve got the AWS piece, you’ve got Soliant as the vendor or FileMaker hosting company, and you’ve got your solution. Two of those boxes get checked for you, and you can leverage that by hosting in that environment.
Matt Navarre (00:54:50):
AWS will just give you a SOC compliance certificate, because they have theirs, that you can forward to your clients too. But that’s different than you being SOC compliant, you know?
Bill Heizer (00:54:57):
It is indeed. But let’s say you go through all the work; HIPAA is a great example. There’s a lot you’ve got to do in your solution to be HIPAA compliant, and there are also things your vendor, i.e. your FileMaker hosting provider, and whoever that hosting provider runs on, have to do. We do that heavy lifting for you. We just recently engaged with a white-label client, and that’s a new term for us. What it means is we have partnered with a CBA, a Claris Business Alliance partner, and we’ve got a few of them under our wing now. We’re white-labeling for them so that they can provide their entire solution in a SaaS model, where the end customer knows nothing about us, and we don’t care.
Bill Heizer (00:55:38):
We’re just augmenting the staff of that FBA: everything from domain names to SSL certificates, and we package up the deployments and build CloudFormation templates to support the automation of deploying their solution. And that’s cool. In this particular case, we’ve got VDI in place, so they’ve got virtual desktops, and we’ve got a cloud-native app in place for them, so we can bundle all of those things together. Those kinds of things, coupled with the infrastructure, the architecture, the CloudFormation, the monitoring and all of that, I think really differentiate us, and those are things to ask about if you need them. Yeah. And again, I caution people to think about it: it’s not a question of if, but…
Matt Navarre (00:56:15):
It’s when. Yeah.
Bill Heizer (00:56:17):
<Laugh>, yeah, true. And then the other thing on the top of my head was some of the features in the Soliant.cloud portal that may not be obvious to people. I don’t know if we’ve got time, but I wouldn’t mind mentioning a couple of them if you’re interested.
Matt Navarre (00:56:32):
Couple, but we do got to wrap.
Bill Heizer (00:56:34):
Yeah. So let’s take 30 seconds to talk about two small ones. We talked about the OptiFlex dynamic disaster recovery.
Matt Navarre (00:56:43):
For both. Yeah. Yeah. Multi-Zone.
Bill Heizer (00:56:45):
There’s a piece for the cloud, and there’s also a piece for on-prem to the cloud. That’s an interesting model, because we give you some capabilities and some tools to install on your on-prem server that will take whatever your configuration is and mirror it, through packaging, through gateways and APIs. We use Lambda to format that, and then we stash it in a Soliant.cloud-capable format and make it available to you. So we have a couple of clients…
Matt Navarre (00:57:09):
And so if your on-prem goes down, you actually fail over to Soliant.cloud. Interesting. Yep. Everything goes down a bit, but yeah.
Bill Heizer (00:57:15):
Yeah, it can. Or, funny enough, in some cases it can actually go the opposite way, depending. Oh, sure.
Matt Navarre (00:57:20):
Sure, sure, depends upon the thing.
Bill Heizer (00:57:22):
But we have something called dynamic DACA, which is part of that. And again, it’s about sustainability and cost optimization. How it works, simply, is that we have a lot of customers with development environments, and development environments don’t need to run 24/7. One of the things about the cloud is its dynamic nature.
So we give people the ability, through the portal, to simply stop a server whenever they want. They can stop it and reduce their costs, and we still monitor it, we still keep it up to date, we launch it and so on. The other option is to set it up for production situations where you run it for only 12 hours a day: at ten minutes to six it automatically fires up, provisions everything else, and we’re running it for 12 hours a day.
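A 12-hours-a-day setup like the one just described is essentially a scheduled start/stop: an EventBridge rule firing a Lambda on a cron schedule, with the handler starting or stopping instances by tag. A minimal sketch, assuming hypothetical tag names, hours, and a handler shape; this is not the actual portal code.

```python
def should_run(hour, start_hour=6, stop_hour=18):
    """True during the configured operating window, e.g. roughly
    6 AM to 6 PM for a 12-hour production schedule (hours assumed)."""
    return start_hour <= hour < stop_hour


def lambda_handler(event, context):
    """Hypothetical Lambda invoked by an EventBridge schedule with
    {"action": "start"} or {"action": "stop"} for tagged dev servers."""
    import boto3  # AWS SDK; needs credentials when actually run

    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:Schedule", "Values": ["business-hours"]}])
    ids = [i["InstanceId"]
           for r in resp["Reservations"] for i in r["Instances"]]
    if not ids:
        return []
    if event["action"] == "start":
        ec2.start_instances(InstanceIds=ids)
    else:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```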
Matt Navarre (00:58:00):
And along those same lines, with Amazon, every single thing is charged by the minute.
Bill Heizer (00:58:03):
Yep, or whatever it’s charged by, down to the microsecond; you can really reduce costs. One of the key ones that I absolutely love, regardless of your opinion of WebDirect: we have something called autoscaling workers. It’s part of the OptiFlex family, and it works very much like this. You’re using WebDirect, and you’re using WebDirect workers, which are an augmentation service.
Matt Navarre (00:58:24):
Yeah. Separate computers. Yeah, yeah.
Bill Heizer (00:58:25):
Yeah. And they run on there. One of the decisions you have to make is: how many do I need at any one point in time? You always have to scale them to the expected load. Yeah. And what size should they be? We remove that question entirely by simply saying: if you subscribe to OptiFlex autoscaling, you set a threshold and say, past one user, or past five users, I’ll use my primary machine. Anything past that, we dynamically allocate the instance and its configuration, put an SSL certificate on it, attach it, and scale automatically.
So, up and down. Yeah, up and down. If you cross your threshold of, let’s say, 5, 10, 20, or 5,000, whatever you think one worker can handle, we scale that out, and then we dynamically shut those workers down. And not only do we shut them down, we literally delete those configurations, because we don’t care about them.
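The threshold logic just described can be sketched in a few lines. The numbers and the function name are illustrative assumptions, not the actual OptiFlex implementation: below the threshold the primary server handles everyone; past it, extra WebDirect workers are allocated and later torn down.

```python
import math

def workers_needed(active_users: int, threshold: int,
                   users_per_worker: int, max_workers: int) -> int:
    """How many extra WebDirect workers to run beyond the primary server."""
    if active_users <= threshold:
        return 0  # the primary machine handles the load alone
    overflow = active_users - threshold
    # One worker per slice of overflow users, capped at a hard maximum.
    return min(max_workers, math.ceil(overflow / users_per_worker))
```

When the returned count drops, the surplus workers can be shut down and their configurations deleted outright, since no data lives on them.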
Bill Heizer (00:59:20):
Right, right. And no data is stored on them; we just want the processing power, to remove that load from the server. That’s the other capability, and we have a client who was really able to take advantage of it. They service a large telephone provider, giving their technicians access to come in every morning and get their workloads.
At the end of the day, they re-upload the work that they’ve done. Sounds familiar, right? We know about 100 to 120 technicians need to come in and download in the morning. So we’ve built the capability to pre-bake, or prewarm, and we can just say: hey, at 6:00 AM I want you to have three workers ready to go; fire them up and set them to this size. Then starting at 10 or 11, I want you to start watching the load and scale it either up or down.
Bill Heizer (01:00:05):
Right, depending on the load. What happened in their case is that instead of running all five instances 24/7, 365, because they have about 120 users, we found that in the morning, Monday through Friday, we operate about three instances of the WebDirect workers. Occasionally, maybe once a month, they’ll see the fourth and fifth come in, but 90% of the time those three do it, and only for those three hours. Yeah.
They scale down between 11 and 3, scale back up to two or three workers for the hours of four to six, and then they all shut down overnight, reducing their cost by 96% compared to running 24/7. So again, leveraging cloud infrastructure to do this automation for us, building upon what we’ve done with the rest of the architecture, is really what we’re trying to do here. Yeah.
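As a back-of-the-envelope check on that savings claim, with illustrative instance-hour figures (assumed for the sketch, not taken from the client’s actual bill):

```python
def cost_savings_pct(baseline_hours: float, actual_hours: float) -> float:
    """Percent cost reduction versus running everything around the clock."""
    return round(100 * (1 - actual_hours / baseline_hours), 1)

# Five workers running 24/7 for a week:
always_on = 5 * 24 * 7   # 840 instance-hours per week
```

Running roughly 4% of that baseline (a few workers for a few weekday hours) yields the ~96% reduction cited, since per-second billing means you pay only for the hours the workers actually exist.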
Bill Heizer (01:00:59):
Because we want you to have a good experience, but we know it can’t cost that much. <Laugh> And I love some of these things; we’ve been vetting them for a while. We are fully releasing Dynamic Docket, we’re releasing OptiFlex both on-prem and in the cloud, and we’re releasing the autoscaling WebDirect worker with the 2023 release, along with a couple of other features. But bottom line, my point was to get across to y’all that there’s a bit more to it here. We’re trying to put together something that we’re proud of and that people will trust, because at the end of the day, how we do is going to be a direct reflection on them.
Matt Navarre (01:01:37):
Well, I’ve got one final question, which is this: under whose keyboard is the post-it note with the eight-character password for the entire works written?
Bill Heizer (01:01:46):
Actually, the reality of it is, it’s completely dynamic. Everything is spun up dynamically. We use keys that are interchangeable all the time; no particular account is even active. I’ll give you a great example of how seriously we take security.
Matt Petrowsky (01:02:00):
There’s one root password for the root user. Yes. Right. That better be in a couple of vaults, maybe an air-gapped one.
Matt Navarre (01:02:09):
And AWS allows you to configure the root account to not actually have the ability to do anything; you can remove that.
Bill Heizer (01:02:14):
That’s the thing. There are a couple of layers here; I’ll articulate a couple of them. MFA on every account, and no one has access to root except one person, and that’s stored away. Like we said, we don’t utilize root accounts for anything. Right. MFA to get into any of the Amazon stuff, multi-factor to get into anything in our infrastructure.
On top of that, there are dynamic security policies that are generated and torn down per request. So for example, anyone from our team first goes through our policies for, you know, the basic things: when did they get background checked, when were they audited, when did they come in, are they still employed. These are the processes that earned us those certifications. Then, when they want to do some work that requires them to get to a particular instance, they make a request.
Bill Heizer (01:03:04):
Let’s say RDC, for example. Well, no RDC ports are open; they have to make a request. That request talks to an AWS API and puts it into EventBridge. EventBridge issues a request, and we dynamically generate a security group and apply it with a set of rules based on where that user is requesting from. The port is opened on the instance for a particular time period. That person can then access it only from that route, and only that person; their credentials are dynamically generated, dynamically created, and then dynamically torn down after either a period of time or when the request is done.
And it’s all logged using CloudWatch to make that happen. It’s easy to say, well, you can do this, you can do that, when you’re managing a couple of boxes, or when you don’t have these security requirements or this risk mitigation in place. It’s a whole different ballgame when you really need to be thinking about that bigger picture. So that’s just one example of many. Yeah, definitely. We try to be really serious about that stuff, and we wouldn’t pass auditing otherwise; we scored at or above all of the classifications, and there are 15 or 16 different categories there.
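A hedged sketch of that just-in-time access flow: a throwaway security group scoped to the requester’s single address, with a hard expiry. The function names and TTL here are assumptions for illustration; the real pipeline runs through EventBridge and Lambda with full CloudWatch logging, and a separate job deletes the group when the window closes.

```python
import datetime

def rdc_ingress_rule(requester_ip: str, port: int = 3389) -> dict:
    """Build an EC2 ingress rule open only to the single requesting host."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": f"{requester_ip}/32",
                      "Description": "temporary RDC access"}],
    }

def grant_temporary_access(ec2, instance_id: str, requester_ip: str,
                           ttl_minutes: int = 60):
    """Create and attach a short-lived security group (ec2: boto3 EC2 client).

    A scheduled cleanup job is assumed to delete the group after expiry."""
    sg = ec2.create_security_group(
        GroupName=f"tmp-rdc-{instance_id}",
        Description="dynamic, time-limited RDC access",
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[rdc_ingress_rule(requester_ip)],
    )
    expires = datetime.datetime.utcnow() + datetime.timedelta(minutes=ttl_minutes)
    return sg["GroupId"], expires
```

Because the group and the credentials are both ephemeral, there is nothing standing open to attack between requests.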
Matt Navarre (01:04:19):
Hey Adam, we’ve hardly let you get a word in edgewise, and I’d love to know your thoughts on all this stuff. <Laugh>
Adam Russ (01:04:25):
No, honestly, it’s been a pleasure to sit here with you guys, and working with Bill and these other guys has been great. Yeah, I’m just glad to get the word out, and I appreciate you guys taking the time to let us talk about this stuff and chit-chat. So thank you.
Matt Navarre (01:04:42):
It’s always so much fun to nerd out hardcore with other FileMaker people like you. I’m really glad you guys came on.
Bill Heizer (01:04:51):
Yeah, we rarely get the opportunity to talk like this, so I appreciate both Matt and yourself taking the time. We got to dig into the weeds a little bit, and if the opportunity ever comes to dig a little deeper, we can narrow our focus and get further down in the weeds. But this was really helpful, and I appreciate you both.
Matt Navarre (01:05:08):
I think it’d be so awesome to go back in the video and put a little counter over it <laugh> for all the acronyms and terms that you used.
Matt Petrowsky (01:05:16):
Yeah, I was going to say, that was some serious V.R.U.: variable resource utilization.
Matt Navarre (01:05:23):
Nice. And I threw out TLAs and FLAs <laugh>, three-letter acronyms and four-letter acronyms. I mean, having built a bunch of stuff with AWS, I was into the server side like you were talking about, so I became familiar with a lot of these things, and I totally saw the light about doing all this stuff. But I had no resources to be able to pull it off at the shop size we were at the time, and I’m thrilled that it exists, that you’ve put in all this effort and paved the way. So thanks so much.
Matt Navarre (01:06:04):
I don’t want to do it myself, though. I just want to teach FileMaker development, that’s my world, and do podcasts and make videos. <laugh> Yeah. Cool. Thanks, everyone, for your time. This was awesome.
Matt Petrowsky (01:06:16):
Yeah. Yeah. All right. I guess this is GFN: goodbye for now. Yeah. Bye for now.