This is part of an ongoing series of AWS related posts. Here are other posts in this series:
- FileMaker Server on Amazon Web Services Quick Start Guide
- FileMaker Hosting Info and More Fun with Amazon Web Services
Looking for FileMaker hosting services? Learn more about Soliant.cloud, our offering that combines our team’s FileMaker and AWS expertise.
We all know backups are a good thing. Plenty has been written on how and why you should have a good backup strategy in place, especially for FileMaker Server. So now you have done all that: you make backups frequently and go to sleep secure in the knowledge that you are covered.
Now let us take things to the next level with secure offsite, cloud hosted backups that are redundant and replicated to other regions. AWS S3 is a great option for low-cost storage, and is extremely scalable and secure. Having copies of the files in S3 ensures that if something happened to your FileMaker Server, or even the EC2 instance you are running on, you could get those files back up on a new server as part of your disaster recovery plan.
The steps involved include:
- Install the AWS Command Line Interface tools.
- Configure a script and schedule it from FileMaker Server.
- Configure your S3 bucket options.
What is S3?
S3 stands for Simple Storage Service, and is part of Amazon Web Services offerings. If you haven’t seen our quick start guide for hosting FileMaker Server on AWS, you can view it here.
The instructions below are suitable not only for an EC2-hosted virtual server, but also for any FileMaker Server wherever it may be running.
Once you install the command line interface (CLI) for AWS, it is relatively easy to get files from your FileMaker Server machine into a S3 bucket. The CLI is available for both Windows and Mac OS, but our example here will focus on the Windows deployment, in part because AWS offers Windows servers. The steps for Mac OS server hosted on premises would be similar, but instead of a batch script, you would write a shell script.
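Once the CLI is installed, a quick sanity check from a command prompt confirms it is on the PATH (the exact version string will vary with your installation):

```shell
# Confirm the AWS CLI is installed and reachable from the command line
aws --version
```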
Set up an IAM (Identity and Access Management) account, and create an access key that we will use. Log into the AWS Management Console and go to the IAM section to manage access. For more on that, you can view the reference linked here.
Note: Record the access info you will need, including the Access Key ID and Secret Access Key. You will use the access key in the batch file shown in the next step.
Create the batch file
```
set AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
set AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
REM Use your own region, for example us-west-2
set AWS_DEFAULT_REGION=us-west-2
aws s3 sync "C:\Program Files\FileMaker\FileMaker Server\Data\Backups" s3://bucket-name --delete
```
The "--delete" flag is optional and tells the sync command to remove any folders or files from the bucket that do not currently exist in the local file system. You can leave that flag off and instead manage which files are kept, and for how long, from S3, as we will see below.
Note: The path in the example is the default path, if you have installed your FileMaker Server in another location or have set a different backup path you will need to adjust the batch file to reflect your actual backup path.
Schedule to run from FileMaker Server
Once you have edited the above sample with your information, save it to the Scripts folder in your FileMaker Server installation. Only scripts saved here will be available to schedule using the built-in scheduler included with FileMaker Server.
Note: FileMaker Server runs as a daemon under the user fmserver. That is why the access key information needs to be in the script instead of configured from the command line.
Schedule the batch file to run at an interval, after your normal backup routine has finished running. FileMaker backups should be allowed to complete before moving files to S3, so it is a good idea to schedule these accordingly.
Note: Another thing you could do here, to ensure that the backup occurs before the transfer to S3, is to include the backup command in the batch script. FileMaker Server has its own command line interface to do this.
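As a sketch of that approach, the batch file could trigger the backup itself and then sync the result. The fmsadmin path, admin credentials, and bucket name below are placeholders to adjust for your installation:

```shell
REM Trigger a FileMaker Server backup via its command line interface,
REM then sync the backup folder to S3. Credentials and paths are placeholders.
"C:\Program Files\FileMaker\FileMaker Server\Database Server\fmsadmin.exe" backup -u ADMIN_USER -p ADMIN_PASSWORD
aws s3 sync "C:\Program Files\FileMaker\FileMaker Server\Data\Backups" s3://bucket-name --delete
```

You may still want to verify the backup has fully completed before the sync runs, especially with large files.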
S3 Bucket Options
Here is where things get interesting. There are several options available that you can enable with your S3 buckets. Here are several that I think are applicable and you may want to review when considering your backup strategy.
What is a Bucket?
Buckets are the containers for objects. You can have one or more buckets. For each bucket, you can control access to it (who can create, delete, and list objects in the bucket), view access logs for it and its objects, and choose the geographical region where Amazon S3 will store the bucket and its contents.
Set up Versioning
Versioning automatically keeps a copy of each version of a file as it changes, even if the file is later deleted. Enabling versioning also unlocks other options like Cross Region Replication. If you are using the delete option in your sync script with versioning enabled, then even after your backups have been deleted on the server, you still have a redundant copy of them in your S3 bucket.
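Versioning can be turned on from the S3 console or with the AWS CLI; a minimal sketch, assuming a placeholder bucket name:

```shell
# Enable versioning on the backup bucket (bucket name is a placeholder)
aws s3api put-bucket-versioning --bucket bucket-name --versioning-configuration Status=Enabled

# Verify the bucket now reports versioning as Enabled
aws s3api get-bucket-versioning --bucket bucket-name
```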
Set up Lifecycle policies
Setting up a policy on an S3 bucket is easy and straightforward. S3 includes a couple of different tiers for longer-term, lower-cost storage, including Glacier. These are meant for infrequent access and longer-term storage of files that do not need to be accessed on demand.
A sample policy might move files to infrequent access after 30 days, then to Glacier after 60 days, and finally delete them from the bucket after a year.
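As a sketch, that sample policy could be written as a lifecycle configuration and applied with the CLI. The bucket name, rule ID, and file name are placeholders, and this is shown as a POSIX shell example for brevity:

```shell
# Write the lifecycle rules described above to a JSON file
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "backup-lifecycle",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 60, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

# Apply the configuration to the bucket
aws s3api put-bucket-lifecycle-configuration --bucket bucket-name --lifecycle-configuration file://lifecycle.json
```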
Keep in mind that accessing files stored in Glacier is not on demand. To keep costs low, Amazon Glacier is optimized for infrequently accessed data where a retrieval time of several hours is suitable.
Cross Region Replication
For maximum peace of mind, you have the option of replicating your S3 bucket to another region. That way, if the west coast has a major cataclysmic event and all the data centers in that region are destroyed or taken offline, you can still access a copy of your files from an east coast data center.
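Cross Region Replication requires versioning enabled on both the source and destination buckets, plus an IAM role that allows S3 to replicate objects on your behalf. A minimal sketch, where the bucket names and role ARN are placeholders:

```shell
# Replication configuration: the role ARN and destination bucket ARN are placeholders
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "ID": "replicate-backups",
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::backup-bucket-replica" }
    }
  ]
}
EOF

# Apply replication to the source bucket
# (versioning must already be enabled on both buckets)
aws s3api put-bucket-replication --bucket bucket-name --replication-configuration file://replication.json
```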
Costs for S3 storage are low. As with most AWS options, it depends on your implementation, but for reference, standard storage currently starts at around 3 cents per GB per month for the first terabyte you use, and drops to 0.7 cents (that is, POINT SEVEN cents) per GB per month for Glacier storage.
You can see more details on the S3 pricing page.
There are also some bandwidth costs for getting content to and from S3. However, if your FileMaker Server runs on an EC2 instance in the same region as your bucket, that transfer is free since it all stays on the AWS side.
For most deployments, I think the cost would be quite justifiable for the redundancy and peace of mind this offers.
Special Thanks to Wim Decorte for providing feedback on this post as well.