Learn how different Mozart features affect your bill and ways to control costs as you set up your account.
As you start utilizing the different features of Mozart, it’s important to understand how Mozart charges you and how those features contribute to your usage costs.
There are two main components of your plan that affect your usage costs – Monthly Active Rows (MAR) and compute credits.
- MAR is the number of active rows of data loaded by Fivetran connectors into your warehouse within a calendar month. An active row is one that has been added, updated, or deleted, and it is counted only once per calendar month, regardless of how many times it changes.
- Compute credits are used to execute queries and to maintain your data warehouse.
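The MAR counting rule can be sketched in code. This is an illustrative model only, not Mozart's actual billing logic: each distinct row that changes within a calendar month counts once, no matter how many changes it receives that month.

```python
from datetime import date

def monthly_active_rows(changes):
    """Count MAR from a list of (row_id, change_date) events.

    Illustrative model: a row counts once per calendar month in which it
    changed, regardless of how many inserts/updates/deletes it received.
    """
    active = set()
    for row_id, when in changes:
        active.add((row_id, when.year, when.month))
    return len(active)

changes = [
    ("row-1", date(2024, 3, 1)),   # inserted
    ("row-1", date(2024, 3, 15)),  # updated again: still one MAR for March
    ("row-2", date(2024, 3, 20)),  # a second row changed in March
    ("row-1", date(2024, 4, 2)),   # same row, new month: counts again
]
print(monthly_active_rows(changes))  # 3
```

Note that sync frequency never appears in this model – only whether a given row changed in a given month.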
You can see more detailed information about your ongoing usage in the Usage section of Mozart.
For more information on MAR and compute credits, see our Understanding Usage help doc.
How Different Mozart Features Affect Usage
Now that you have a basic understanding of how usage works, let’s review how different features of Mozart impact your usage costs.
Connectors

Connectors contribute to both MAR and compute credits. For more in-depth info on what connectors are and how to use them, check out our Connectors overview doc.
Connector MAR Overview
Initial Sync / First 14 Days of Incremental Syncs:
The first time a connector syncs, it pulls over all historical data. No MAR is incurred for any initial sync of a connector. After the initial sync is completed, no MAR is incurred for the next 14 days of incremental syncs for that connector.
Ongoing Incremental Syncs:
After the 14-day period of free incremental syncs ends, MAR is incurred for all subsequent incremental syncs. If no rows are added, updated, or deleted, no additional MAR is incurred, and increasing your sync frequency does not by itself mean you will incur more MAR.
Historical resyncs do not incur MAR.
Tips for Optimizing Connector MAR
Since the initial sync is free, it’s a good idea to sync all data at first, then decide which tables are most useful for your analyses and reporting during the 14-day period of free incremental syncs. Then deselect the tables you don’t need from future syncs. This way you aren’t incurring MAR for data you don’t need, but you still have some example data from the deselected tables in case you want to revisit them in the future.
Connector Compute Overview
Whenever a connector loads data into your warehouse during a sync, compute is used to accept that data. This is true whether or not the sync happens during the first 14 days. Loading times are typically short, but if a connector syncs very frequently, compute costs can add up quickly.
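To see how frequency compounds, compare sync counts per month at a few common frequencies. This is illustrative arithmetic only – the actual compute consumed per sync varies with data volume:

```python
MINUTES_PER_MONTH = 30 * 24 * 60  # assuming a 30-day month

# Each sync consumes some compute to load data, so more syncs = more compute.
for label, interval_minutes in [("every 15 min", 15), ("hourly", 60), ("daily", 1440)]:
    syncs = MINUTES_PER_MONTH // interval_minutes
    print(f"{label}: {syncs} syncs/month")
# every 15 min: 2880 syncs/month
# hourly: 720 syncs/month
# daily: 30 syncs/month
```

A 15-minute frequency triggers nearly 100x as many loads as a daily one, which is why frequency is the first lever to check when connector compute looks high.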
Tips for Optimizing Connector Compute
Think about how often you need the data from a connector and work backwards into the sync frequency. If you aren't sure, we generally recommend setting your sync frequency to every 24 hours to start in order to minimize compute costs, and then evaluating your need for more frequent syncs as you progress.
Transforms

Transforms contribute only to compute usage. No MAR is incurred when running transforms and materializing them in your warehouse. For more in-depth info on what transforms are and how to use them, check out our Transforms overview doc.
View / Table Materialization: There are two ways a transform can be materialized – as a table or as a view.
- A table is the actual data generated by the transform query.
- A view is the SQL code in the transform. When you query a view, the SQL code associated with that view is executed at query time.
If you are materializing the transform as a table, then compute will be used to create that table when running the transform. If you are materializing the transform as a view, then no compute will be used when running the transform.
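The table-vs-view distinction can be seen with plain SQL. This sketch uses SQLite as a stand-in for your warehouse, with made-up table names: creating a table runs the transform query immediately and stores the results, while creating a view stores only the SQL and defers execution to query time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.0)])

# Table materialization: the query runs now and its results are stored.
conn.execute("CREATE TABLE big_orders AS SELECT * FROM orders WHERE amount > 15")

# View materialization: only the SQL is stored; it runs each time you query it.
conn.execute("CREATE VIEW big_orders_v AS SELECT * FROM orders WHERE amount > 15")

# New rows appear in the view immediately, but not in the already-built table.
conn.execute("INSERT INTO orders VALUES (3, 99.0)")
print(conn.execute("SELECT COUNT(*) FROM big_orders").fetchone()[0])    # 1
print(conn.execute("SELECT COUNT(*) FROM big_orders_v").fetchone()[0])  # 2
```

This is also why the compute shifts: the table pays its cost up front when the transform runs, while the view pays on every downstream query.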
View / Table Materialization Tips:
Defaulting to view materialization for transforms can help limit compute costs, but it depends on how they are used. If the query associated with the transform doesn’t take long to run (e.g., seconds to a minute), then a view materialization can help you save on compute usage. However, if the query associated with the transform takes a long time to run, and you plan to query against that transform often, then a table materialization is better for saving costs.
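The trade-off above can be framed as a rough break-even calculation. This is a hypothetical cost model for intuition only, not how Mozart bills: a table rebuild pays its query cost once per rebuild, while a view pays it on every downstream query.

```python
def cheaper_as_table(build_seconds, view_query_seconds, queries_per_rebuild):
    """Hypothetical break-even sketch, ignoring the (usually small) cost of
    querying the finished table: a table pays build_seconds once per rebuild,
    a view pays view_query_seconds on every downstream query."""
    view_cost = view_query_seconds * queries_per_rebuild
    return build_seconds < view_cost

# Slow query hit 50 times a day: materialize as a table.
print(cheaper_as_table(build_seconds=120, view_query_seconds=60, queries_per_rebuild=50))  # True
# Fast query hit 5 times a day: a view is cheaper.
print(cheaper_as_table(build_seconds=120, view_query_seconds=2, queries_per_rebuild=5))    # False
```

The real decision also depends on how fresh downstream consumers need the data to be, but this captures the core compute trade-off.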
Scheduling / Running Transforms:
Whenever a transform runs, whether manually or on a schedule, compute may be consumed. If the transform is materialized as a table, compute resources are used to run it; if it is materialized as a view, no compute resources are used.
There are two scheduling options for transforms – at-a-specific-time and ancestor-based.
- At-a-specific-time scheduling is used to run the transform on a set schedule. Options include preset frequencies such as every hour, every 6 hours, or daily, as well as a custom cron schedule.
- Ancestor-based scheduling is used to ensure that upstream tables are up-to-date before the transform is run, keeping its data as up-to-date as possible. For ancestor-based scheduling, the transform will run whenever the ancestor tables / views you’ve selected are “fresh”. For ancestor tables in connector schemas, this is every time the connector syncs. For ancestor tables / views from transforms, this is whenever that transform has finished running.
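The ancestor-based trigger described above can be sketched as a toy freshness check. The structure and names here are hypothetical, not Mozart's implementation: the transform runs only once every selected ancestor table/view is fresh, then waits for all of them to refresh again.

```python
def maybe_run_transform(ancestors_fresh, run):
    """Run the transform only when every selected ancestor is fresh.

    Toy sketch of ancestor-based scheduling: ancestors_fresh maps ancestor
    names to whether they have synced/run since the transform last ran.
    """
    if all(ancestors_fresh.values()):
        run()
        # After running, mark ancestors stale until their next sync/run.
        for name in ancestors_fresh:
            ancestors_fresh[name] = False
        return True
    return False

freshness = {"raw.orders": True, "raw.customers": False}
ran = maybe_run_transform(freshness, lambda: print("running transform"))
print(ran)  # False: raw.customers has not synced yet

freshness["raw.customers"] = True
ran = maybe_run_transform(freshness, lambda: print("running transform"))
print(ran)  # True: all ancestors fresh, so the transform runs
```

This is why ancestor-based schedules track connector sync frequency: the transform fires as often as its slowest-refreshing ancestor allows.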
Tips for Optimizing Scheduling / Running Transforms
When setting up scheduling for your transforms, consider the business use case of that transform. Hourly is too often for a daily dashboard, daily may not be often enough for operational work, and ancestor-based doesn’t make sense for reporting that doesn’t need to be as fresh as your upstream data. We recommend daily as a default scheduling frequency, and you can adjust to meet your business needs as you understand them better.
SQL Queries

For any query you run, whether in a transform, scheduled in a BI tool, or ad hoc, the longer it takes to execute, the more compute cost is incurred. Queries that run on larger tables or involve complex joins and/or functions tend to have longer run times.
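One common pattern for shortening run times is to filter and narrow each table before joining, rather than joining full tables first. A sketch with hypothetical table names, using SQLite as a stand-in for your warehouse (note that mature warehouse optimizers often apply this rewrite automatically, but being explicit never hurts and often helps on complex queries):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id INTEGER, event_type TEXT);
CREATE TABLE users (id INTEGER, plan TEXT);
INSERT INTO events VALUES (1, 'click'), (1, 'purchase'), (2, 'click');
INSERT INTO users VALUES (1, 'pro'), (2, 'free');
""")

# Pattern that can be slow on large tables: join everything, then filter.
slow = """
SELECT COUNT(*) FROM events e
JOIN users u ON u.id = e.user_id
WHERE e.event_type = 'purchase' AND u.plan = 'pro'
"""

# Often faster: reduce each side first so less data reaches the join.
fast = """
SELECT COUNT(*) FROM
  (SELECT user_id FROM events WHERE event_type = 'purchase') e
JOIN
  (SELECT id FROM users WHERE plan = 'pro') u
ON u.id = e.user_id
"""

print(conn.execute(slow).fetchone()[0])  # 1
print(conn.execute(fast).fetchone()[0])  # 1 -- same result, less joined data
```

Selecting only the columns you need (instead of `SELECT *`) follows the same principle: less data scanned and moved means shorter run times and less compute.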
Tips for Optimizing SQL
Talk to us! Optimizing queries can be hard, but we have a team of experienced data professionals who can help. Feel free to reach out to firstname.lastname@example.org with any questions!