5 considerations when configuring a cluster in Starburst Galaxy
Bo Myers
Senior Product Manager
Starburst
Guy Mast
Product Manager
Starburst
We’ve introduced a ton of new features in Starburst Galaxy that help you tailor your cluster’s performance to the needs of your workload. In this article, we will walk you through the five things to consider when setting up your cluster for the first time:
- Cluster execution modes
- Cluster sizing
- Autoscaling
- Auto suspend
- Cluster scheduling
Choosing a cluster execution mode
When configuring your cluster, you can choose between three execution modes: Standard, Fault-tolerant, and Accelerated. One of the benefits of Galaxy is the ability to tailor clusters to specific workloads in terms of size, access controls, and execution mode.
The three execution modes in Starburst Galaxy are:
- Warp Speed: Warp Speed is the technology behind our “accelerated” cluster mode. This is the execution mode we recommend for interactive analytics – analytics against user-facing applications or dashboards. Warp Speed utilizes smart indexing and caching to automatically increase performance of these workloads.
- Fault-tolerant: Fault-tolerance is our enhanced version of Trino’s fault-tolerant execution (FTE) mode. At its core, FTE enables the execution of complex, long-running, and memory-intensive queries where reliability is critical. This mode is ideal for write operations and transformation (ELT/ETL) workloads.
- Standard: This is the most basic execution mode and the mode enabled for free clusters. For production use cases, we recommend defaulting to Warp Speed (accelerated clusters) unless you require auto-scaling, FTE, or are operating in regions where Warp Speed is not yet available.
Detailed Warp Speed considerations
Warp Speed is a good fit for interactive workloads that are focused on querying the data lake, especially when querying 500M or more rows per table or for data lake queries taking a few seconds or longer in incumbent architectures.
On average, you can expect a 4x improvement in query latency and a 7x improvement in CPU time, but the level of improvement you see depends on how much data your queries filter. The more data a query filters out, the more Warp Speed will accelerate it. Examples of optimal data sets for Warp Speed include the following:
- Fraud and anomaly detection
- IoT and telemetry data
- Geospatial analytics
- Clickstream data (e.g. Customer 360 analytics)
- Log analysis (e.g. cyber security logs)
Conversely, queries that rely on full table scans with no filters applied will not see the performance benefits of Warp Speed. However, Warp Speed will not negatively impact your query’s performance either.
Note: At the time of this blog, Warp Speed is generally available on AWS, in private preview on GCP, and with Azure regions coming soon.
Detailed fault-tolerant considerations
Fault Tolerant Execution (FTE) mode means a cluster will retry queries or parts of a query in the event of a failure without having to start the whole query from the beginning. Intermediate exchange data is spooled and can be reused by another worker in the same cluster.
This is especially useful when queries require more memory than is currently available in the cluster. With FTE, those queries are still able to succeed. Multiple queries are able to share resources in a fair way, and make steady progress.
In addition to the core fault-tolerant architecture found in open source Trino, Galaxy FTE clusters include additional enhancements that let them scale to support queries of 60TB or more (whereas open source Trino is limited to roughly 3TB).
It is important to note that query processing in fault-tolerant execution mode can be slightly slower than normal operation. To address this condition, FTE will adaptively determine the optimal number of partitions to make sure that only the largest, most complex queries will use the maximum of 1000 partitions. This adaptive partitioning functionality helps to preserve the overall efficiency of the engine. However, it is not recommended to use fault-tolerant execution mode if the majority of your queries are short-running.
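The adaptive partitioning idea above can be sketched roughly as follows. Note that only the 1000-partition cap comes from this article; the per-partition size target and the sizing heuristic are assumptions invented purely for illustration:

```python
import math

# Hypothetical sketch of adaptive partitioning. The 1000-partition cap is
# from the article; the ~1 GiB-per-partition target and the heuristic below
# are invented for illustration only.
MAX_PARTITIONS = 1000
TARGET_PARTITION_BYTES = 1 << 30  # assumed ~1 GiB of input per partition

def choose_partitions(input_bytes: int) -> int:
    """Scale partition count with input size, capped at the engine maximum."""
    return max(1, min(MAX_PARTITIONS, math.ceil(input_bytes / TARGET_PARTITION_BYTES)))
```

Under a scheme like this, small queries stay at a handful of partitions while only the very largest inputs hit the 1000-partition ceiling, which is what preserves efficiency for everyday workloads.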
Selecting a cluster size
The size of a Starburst Galaxy cluster determines the number of server nodes, including one coordinator and many workers, used to process queries. A larger cluster, consisting of more nodes, is capable of processing more complex queries, handling more concurrent users, and providing higher performance by using more resources.
Available cluster sizes include free, x-small, small, medium, large, x-large, and 2x-large. Best practice is to choose cluster size based on initial needs and then resize your cluster as needs change. Memory failures or slow query processing are typical signs that it’s time to size up to a larger cluster.
Autoscaling
As you get comfortable with the needs of your individual workloads, you can choose to create custom cluster sizes with autoscaling by setting the minimum and maximum number of workers to scale between.
When you create custom clusters with different max and min values, the cluster automatically scales up toward the maximum number of workers for the configured cluster size when the combined CPU usage of all workers exceeds 60%. Autoscaling adds one or more workers to bring the combined CPU usage of all workers back below 60%.
The auto scaling process takes approximately four minutes to make the first scale-up adjustment. If the CPU usage continues to climb and exceeds 60%, the process repeats until the maximum number of workers is reached.
Inversely, clusters automatically scale down to the minimum number of workers when the combined CPU usage of all workers drops below 60%; the scale-down process takes approximately 15 minutes to make its first adjustment. Autoscaling removes one or more workers until CPU usage approaches 60%.
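The scaling rule described above can be sketched as a simple resizing calculation. The 60% threshold and the min/max worker bounds come from the article; the exact step-sizing heuristic here is an assumption for illustration:

```python
import math

# Sketch of threshold-based autoscaling: keep the average worker CPU near the
# 60% target described in the article. The resizing heuristic (resize so that
# total demand / workers approaches the target) is an assumption.
TARGET_CPU = 0.60

def desired_workers(workers: int, avg_cpu: float, min_w: int, max_w: int) -> int:
    """Worker count that brings average CPU usage back toward the 60% target."""
    demand = workers * avg_cpu          # total load in "worker-CPUs"
    target = math.ceil(demand / TARGET_CPU)
    return max(min_w, min(target, max_w))
```

For example, four workers at 90% average CPU would grow toward six workers, while six workers at 30% would shrink back to three, bounded by the configured minimum and maximum in both directions.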
Note: At the time of this blog, auto scaling is not supported with accelerated clusters.
Auto suspend (or idle shutdown)
In Starburst Galaxy, you can easily configure your cluster to suspend after it has been idle for a certain amount of time. A running cluster is classified as idle when no queries are submitted and all processing of queries is completed.
A suspended cluster consists of a small configuration set and a mechanism to listen for incoming user requests. It does not include any actively running server nodes, so no costs are incurred. This makes it an ideal feature for optimizing your cluster costs.
Available idle shutdown times include 1 minute, 5 minutes, 15 minutes, 30 minutes, and 1 hour. You can also configure your cluster to never suspend if you don’t want to wait for your cluster to warm up. The average warm up time is ~5 minutes, so it is important to weigh the cost and performance tradeoffs of the different idle shutdown times.
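The idle-shutdown decision described above can be sketched as follows. The definition of "idle" (no queries submitted, all processing complete) and the configurable shutdown timer come from the article; the field names and helper function are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of the idle-shutdown check. Only the idleness definition
# and the "never suspend" option are from the article; everything else is
# invented for illustration.

@dataclass
class ClusterState:
    running_queries: int
    queued_queries: int
    idle_minutes: float

def should_suspend(state: ClusterState, idle_shutdown_minutes: Optional[float]) -> bool:
    """True when the cluster is idle and past its configured idle-shutdown time."""
    if idle_shutdown_minutes is None:  # cluster configured to never suspend
        return False
    is_idle = state.running_queries == 0 and state.queued_queries == 0
    return is_idle and state.idle_minutes >= idle_shutdown_minutes
```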
Cluster scheduling
Finally, Starburst Galaxy makes it possible for an administrator to ensure that an idle cluster does not auto-suspend by creating a cluster schedule. Your cluster must be in running or suspended status to be affected by scheduling – a cluster that is stopped must be started manually, and is therefore not impacted by any defined scheduling.
Cluster scheduling timeframes override idle shutdown times to keep your cluster running. Multiple days of the week and multiple time intervals per day may be configured to keep the cluster always on, ensuring you avoid the warm up time.
Once a cluster schedule has been created, it can be applied to one or more clusters. This is especially useful for customers looking to keep their clusters running during business hours while saving costs overnight.
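A schedule window check like the one described above could look roughly like this. The override behavior (scheduled windows keep the cluster running regardless of idle shutdown) is from the article; the day/interval data structure is an assumption:

```python
from datetime import time

# Hypothetical representation of a cluster schedule: day of week (0 = Monday)
# mapped to always-on (start, end) intervals. The structure is invented for
# illustration; only the override behavior is from the article.
BUSINESS_HOURS = {
    0: [(time(8, 0), time(18, 0))],
    1: [(time(8, 0), time(18, 0))],
    2: [(time(8, 0), time(18, 0))],
    3: [(time(8, 0), time(18, 0))],
    4: [(time(8, 0), time(18, 0))],
}

def schedule_keeps_running(day: int, now: time, schedule=None) -> bool:
    """True when 'now' falls inside a scheduled window, overriding idle shutdown."""
    schedule = BUSINESS_HOURS if schedule is None else schedule
    return any(start <= now < end for start, end in schedule.get(day, []))
```

With a schedule like this, a cluster stays warm through weekday business hours and is free to auto-suspend overnight and on weekends.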