Making effective decisions begins with understanding how your key business metrics are performing now and how they’re expected to perform in the future. Our normal range and forecast analysis tools help you assess past, current, and future trends in your metrics. You’ll be able to quickly identify outliers in your data (values that fall outside the metric’s normal range) and see a prediction of future metric performance.
Note: This premium feature is available to new sign-ups for a 30-day period, after which it's included in all paid plans. On a free plan and want to upgrade? Learn how here.
This article includes:
- Understanding your analysis
- Normal range analysis in PowerMetrics
- How do we calculate a normal range?
- Performing a normal range analysis on your metrics
- Forecast analysis in PowerMetrics
- How do we calculate a forecast?
- Performing a forecast analysis on your metrics
- Forecast FAQs
Understanding your analysis
After applying a normal range or forecast analysis, find information on how your data is being analyzed by clicking the Normal range or Forecast link under the metric visualization’s name. (See below.)
A window opens that describes the data being used to create the analysis. Here’s an example, with numbers indicating each component (see corresponding definitions below):
Here are a few definitions to help you understand each component in the window:
- Explanatory text: The paragraph above the graphic describes the data being analyzed. This description changes dynamically based on the metric visualization’s settings.
- Data line: The line represents the data being used for the analysis. Hover over it to see descriptive tooltips.
- Visualization date range: The range of data that’s currently being displayed in the metric visualization (based on the selected date range).
- Included historical data: The date range for collected data.
- Forecast: The predictive range based on historical data. Note: This only displays when doing a forecast analysis.
Normal range analysis in PowerMetrics
A normal range analysis looks at natural variations in previous data to show you the range within which values are expected to fall for a specified time period. Outliers (values that are outside the normal range) display as red (below normal) and green (above normal), bringing them to your attention so you can dig in and find the cause. Note: The colours associated with above and below normal depend on whether you chose to show ascending or descending values as positive when creating or editing the metric and configuring its favourable trend settings.
Analyzing a metric’s normal range can help you:
- Understand metric trends by looking at past behavior.
- Identify outliers in your historical data and investigate when needed.
- Determine if a sudden change in the metric (for example, all the latest values are outside the normal range) is normal or unexpected behavior.
- See where the metric values in the future are likely to fall if the metric continues to behave as it has been.
How do we calculate a normal range?
For a normal range analysis, we run a query against the metric using a window of time that includes more historical data than the current time window in the visualization. We use the same periodicity, filtering, and context. This creates consistent analytics results regardless of the underlying structure of the data being ingested into the metric.
Normal range analysis looks at the metric query for a given period of time (usually larger than the visualization’s date range). It looks at the trend, variability, and any seasonal patterns it can find in the data to understand metric behavior. We refer to a set of probability bounds that describe where 95% and 99.7% of the post-query values are expected to be. There’s only a 0.3% chance a value will fall outside the outer bounds due to the natural randomness of the data alone. So, when you see a value outside these bounds (an outlier), it’s likely worth investigating.
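Those percentages follow from the familiar properties of roughly normally distributed variation: about 95% of values fall within 2 standard deviations of the mean and about 99.7% within 3. Here's a quick check, illustrative only, using the scipy library:

```python
from scipy.stats import norm

# Share of values expected within 2 and 3 standard deviations of the mean,
# assuming roughly normally distributed variation.
within_2sd = norm.cdf(2) - norm.cdf(-2)   # ~0.954: the ~95% bounds
within_3sd = norm.cdf(3) - norm.cdf(-3)   # ~0.997: the ~99.7% bounds
print(f"Chance of falling outside the outer bounds: {1 - within_3sd:.2%}")  # ~0.27%
```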
The algorithm for normal range analysis stops at the last complete period in the metric’s data. It doesn’t take the last value in the metric into account as it could be incomplete.
Although the probability bounds may be extended into the future, this is not a prediction of future values. It’s a visualization of the past behavior of the metric extended into the future.
To calculate a normal range, we:
- Determine if there’s an overall trend in the data. If there is, normalize the data by removing the trend.
- Use the Fast Fourier Transform (FFT) algorithm to look for seasonality (repeating, cyclical patterns).
- Bucket the data into groups based on the detected seasonality (or seasonalities) and the periodicity of the data. For example, with weekly seasonality and daily periodicity there will be 7 buckets, one for each day of the week.
- For each seasonal bucket, determine the standard deviation of the data, then set thresholds at 2 and 3 standard deviations above and below the mean.
- Reapply the removed trend to the data and the thresholds.
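If you're curious how these steps fit together, here's a minimal Python sketch. It assumes evenly spaced data, a simple linear trend, and an already-detected seasonal cycle length, and it is not the actual PowerMetrics implementation:

```python
import numpy as np

def normal_range(values, season_length):
    """Compute illustrative normal-range bands for evenly spaced metric values.

    `values` is the historical series at a fixed periodicity; `season_length`
    is the number of periods in one seasonal cycle (e.g. 7 for weekly
    seasonality over daily data).
    """
    values = np.asarray(values, dtype=float)
    t = np.arange(len(values))

    # 1. Estimate the overall trend (a simple linear fit here) and remove it.
    slope, intercept = np.polyfit(t, values, 1)
    trend = slope * t + intercept
    detrended = values - trend

    # 2. Seasonality detection is assumed done already; the dominant cycle
    #    length could be found with an FFT (np.fft.rfft) on the detrended data.

    # 3. Bucket the detrended data by position within the seasonal cycle.
    buckets = [detrended[i::season_length] for i in range(season_length)]
    means = np.array([buckets[i % season_length].mean() for i in t])
    stds = np.array([buckets[i % season_length].std() for i in t])

    # 4. Set thresholds at 2 and 3 standard deviations around each bucket's
    #    mean (roughly the 95% and 99.7% bounds), then
    # 5. reapply the removed trend to the thresholds.
    bands = {
        "lower_2sd": means - 2 * stds + trend, "upper_2sd": means + 2 * stds + trend,
        "lower_3sd": means - 3 * stds + trend, "upper_3sd": means + 3 * stds + trend,
    }

    # Values outside the 3-standard-deviation band are the ~0.3%-chance
    # outliers described above.
    outliers = (values < bands["lower_3sd"]) | (values > bands["upper_3sd"])
    return bands, outliers
```

For example, calling normal_range(daily_values, season_length=7) on data with a weekly pattern would return the 2- and 3-standard-deviation bands plus a flag for each value that falls outside the outer band.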
Performing a normal range analysis on your metrics
You can view the normal range for unsegmented, non-cumulative bar and line charts that have time on the x-axis (or on the y-axis for horizontal bar charts). Normal range analysis can be run for metric visualizations on a homepage, on a dashboard, and in Explorer.
The following example describes enabling normal range analysis for a metric visualization on a metric’s homepage. The same principles apply for visualizations on a dashboard and in the Explorer.
To perform a normal range analysis for a visualization on a metric’s homepage:
- In the left navigation sidebar, click Metrics to display the Metric List page.
- Select a metric from the list to open it.
- On the metric page, click the 3-dot menu for the metric view for which you want to see the normal range. Then, select Personalize view.
Note: You must choose a metric view that can be visualized as an unsegmented, non-cumulative bar or line chart with time on the x-axis (or on the y-axis for horizontal bar charts).
- Under Analyses, select Normal range. (See below.)
Note: For a metric on a dashboard (in edit mode) the Analyses section can be accessed by clicking the Display tab.
See below for an example of normal range analysis with above and below outliers:
Forecast analysis in PowerMetrics
A forecast analysis shows where data is likely to fall in the future, within a specified time period. Forecasts predict future performance based on a metric’s previous values and trends.
Analyzing the forecast for a metric can help you make decisions based on estimated metric values for the immediate future.
How do we calculate a forecast?
For a forecast analysis, we run a query against the metric using a window of time that includes more historical data than the current time window in the visualization. We use the same periodicity, filtering, and context. This creates consistent analytics results regardless of the underlying structure of the data being ingested into the metric.
Forecast analysis looks at the metric query for a given period of time (usually larger than the visualization’s date range). Like the normal range analysis, forecasts assess overall variability and historical patterns, but apply this information differently to the data.
Forecasts use available information to guess what the next value will be. This guess is associated with a set of confidence intervals. A confidence interval is the range of values within which you expect the estimate to fall if you redo the test. Based on the guessed value, the algorithm continues to guess what each subsequent value will be (using the same set of confidence intervals). This process continues until it can no longer be confident about the next result. Because each value is based on the guess that came before it, the confidence window expands as it moves further from the real data, creating a cone of confidence. When the actual data becomes available (incomplete periods are now complete), the forecast reruns from that point, changing the initial set of guessed values.
To calculate a forecast:
We use the AAA version of the Exponential Smoothing (ETS) algorithm, where ETS stands for Error, Trend, and Seasonality, and AAA means each of those components is treated as Additive (Additive Error, Additive Trend, and Additive Seasonality). This algorithm predicts a future value based on historical values. The predicted value is a continuation of the historical values for the specified target date (a continuation of the timeline).
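For illustration, here's a minimal Python sketch of an additive (AAA) ETS forecast using the statsmodels library. The data, weekly cycle, and 14-period horizon are made up, and PowerMetrics doesn't necessarily use this library, but the output has the shape described above: a point forecast plus prediction intervals that widen into a cone.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

# Hypothetical daily metric: an upward trend plus a weekly pattern and noise.
rng = np.random.default_rng(0)
days = pd.date_range("2024-01-01", periods=84, freq="D")  # 12 weeks of history
values = (100 + 0.8 * np.arange(84)                       # trend
          + 12 * np.sin(2 * np.pi * np.arange(84) / 7)    # weekly seasonality
          + rng.normal(0, 4, 84))                         # noise
history = pd.Series(values, index=days)

# AAA ETS: Additive error, Additive trend, Additive seasonality.
model = ETSModel(history, error="add", trend="add",
                 seasonal="add", seasonal_periods=7)
fit = model.fit(disp=False)

# Forecast the next 14 periods. The prediction intervals widen the further
# the forecast moves from the observed data (the cone of confidence).
pred = fit.get_prediction(start=len(history), end=len(history) + 13)
print(pred.summary_frame(alpha=0.05))
```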
Performing a forecast analysis on your metrics
You can view a forecast for unsegmented, non-cumulative line charts that have time on the x-axis. Forecast analysis can be run for metric visualizations on your homepage, on a dashboard, and in Explorer.
The following example describes enabling a forecast analysis for a metric visualization on a metric’s homepage. The same principles apply for visualizations on a dashboard and in the Explorer.
To perform a forecast analysis for a visualization on a metric’s homepage:
- In the left navigation sidebar, click Metrics to display the Metric List page.
- Select a metric from the list to open it.
- On the metric page, click the 3-dot menu for the metric view for which you want to see forecasted values. Then, select Personalize view.
Note: You must choose a metric view that can be visualized as an unsegmented, non-cumulative line chart with time on the x-axis.
- Under Analyses, select Forecast. (See below.)
Note: For a metric on a dashboard (in edit mode) the Analyses section can be accessed by clicking the Display tab.
See below for an example of a forecast analysis.
Forecast FAQs
Here are some of the most commonly asked questions (and answers) about forecast analysis in PowerMetrics.
Why are forecasts always estimates?
Referring to forecasted values can help you make better decisions for your business’ future. However, you need to remember that forecasts are always estimates. Their accuracy can be influenced by many factors, in particular, the amount and quality of the metric’s historical data.
Forecasting isn't based only on historical values; it’s also based on the patterns found in them. It’s difficult to detect those patterns when there’s not enough historical data to refer to. Metrics may lack data naturally (they’re newly created and haven't had time to accumulate many days or weeks of data yet) or they may refer to incomplete, messy data sources.
If a metric includes a lot of volatility and unexpected values (which can happen when something out of the ordinary occurs), the accuracy of the forecast is also affected.
Why does my forecast look different for days vs months vs years?
When you run a forecast for the same metric with different time ranges, the results may look quite different. Just as the source data looks different for each time period, so, naturally, will the forecast.
This supposed “mismatch” happens because the data being fed into the algorithm is different. The metric is queried using the current context to get the data that’s used for the forecast. If you’re running a forecast on data that’s bucketed into weekly values, then weekly values will also be used for the forecast.
Data can look quite different when you change time scales. Longer time ranges tend to have a smoothing effect on the data, which influences the forecast. That’s why data that looks smooth at a quarterly level can become variable when you switch to months, and even more variable at a daily level. If the historical data is smooth, the forecast will often be quite linear, with a clear cone extending from it. If the data is more variable, sometimes a periodic pattern is found and used in the forecast, creating a periodic forecast result. If the data is too variable, often no periodic pattern can be found and the forecast falls back to a linear guess with a large cone of confidence.
Why does my monthly forecast predict a drop, but my quarterly forecast doesn't?
Forecasts use the last point in the data as a starting place, so that point (or the last few points) has a big effect on the overall forecast. The last point may be lower at one granularity and higher at another. Let’s use a sales metric as an example. The last week of the year has low values (it’s vacation time and no one is shopping) but the last quarter of the year has high values (from pre-holiday sales). The forecast algorithm would use the weekly data to predict a drop in future values and the quarterly data to predict an increase in future values.
The algorithm can only use the data that’s available to it; it doesn’t have any business context to apply to specific scenarios. Regardless of the time period being visualized, the forecast represents the algorithm's best guess at what the next points will be, given the available data.