Anomaly Detection is a capability that allows ITSI to automatically learn what "normal" behavior looks like for a KPI, and then detect when something unusual (an anomaly) happens.
Instead of relying on fixed thresholds, anomaly detection uses machine learning to spot:
Unexpected spikes
Unusual drops
Irregular patterns
Why is this needed? Because not all problems are obvious or follow fixed patterns.
For example:
Your app may normally have 500 users at 9 AM.
If one day it suddenly jumps to 5,000, that could be a bug—or a cyberattack.
Static thresholds may miss this or generate too many false alerts.
Anomaly Detection helps find subtle or unexpected changes before they become serious.
ITSI uses machine learning models to analyze historical KPI data and identify unusual behavior. Here’s how it works step by step:
The system looks at past KPI values (for example, over 30 days).
It builds a model of what the normal range of values looks like at different times.
As new KPI data comes in, the system compares it to the expected pattern.
If the value falls outside the normal range, it's flagged as an anomaly.
If the anomaly is severe enough, ITSI can:
Change the KPI’s status
Generate a Notable Event
Highlight the issue in a dashboard or Deep Dive
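The learn-then-compare loop above can be sketched in a few lines of Python. This is a toy illustration, not ITSI's actual model; the function names and the 3-standard-deviation "normal range" are assumptions made for the sketch:

```python
import statistics

# Toy version of the learn-then-compare loop (illustrative only).
# The "normal range" of mean +/- 3 standard deviations is an assumption.

def learn_baseline(history):
    """Build a normal range from past KPI values."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return mean - 3 * stdev, mean + 3 * stdev

def is_anomaly(value, baseline):
    """Flag values that fall outside the learned range."""
    low, high = baseline
    return value < low or value > high

# 30 days of a KPI that normally hovers around 500 users
history = [500 + (i % 7) * 5 for i in range(30)]
baseline = learn_baseline(history)

print(is_anomaly(510, baseline))   # inside the learned band
print(is_anomaly(5000, baseline))  # the sudden 5,000-user jump is flagged
```

A fixed threshold set high enough to tolerate daily variation might miss the jump entirely; the learned band adapts to what the data actually does.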
To tune how anomaly detection works, ITSI offers several important settings:
Learning window: This defines how much historical data is used to train the model.
Common values: 14 days, 30 days, 90 days
More data = better model (but slower to learn)
Sensitivity: Controls how aggressively the model flags deviations.
Higher sensitivity = more anomalies detected, even small ones
Lower sensitivity = fewer alerts, focuses on big changes
Best practice: Start with medium sensitivity and adjust based on results.
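One way to picture the sensitivity setting, assuming (purely for illustration) that it scales the width of the learned normal band — the mapping below is invented for the sketch, not ITSI's actual formula:

```python
import statistics

# Hypothetical mapping: higher sensitivity -> narrower band -> more flags.

def flagged(value, history, sensitivity):
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    width = 6 / sensitivity  # assumption: sensitivity shrinks the band
    return abs(value - mean) > width * stdev

history = [100, 102, 98, 101, 99, 103, 97, 100]
print(flagged(106, history, sensitivity=1))  # False: wide band ignores a small bump
print(flagged(106, history, sensitivity=3))  # True: narrow band flags the same value
```

The same data point is or isn't an anomaly depending only on how tight the band is, which is why the best practice above is to start medium and tune from results.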
Retraining frequency: Determines how often ITSI updates the learning model.
Can be daily, weekly, or on demand
Important for systems that change behavior regularly (like seasonal traffic)
Early warning: Before KPIs cross fixed thresholds, a pattern change may occur. Anomaly Detection catches these early warning signs.
Fewer false positives: Static thresholds often generate alerts during normal fluctuations. Anomaly Detection understands "normal noise" and filters it out.
Adapts to dynamic environments: In cloud or containerized environments, behavior changes often, and it's hard to set fixed thresholds. Anomaly detection adapts automatically.
Although powerful, Anomaly Detection has its limits:
If you only have 1 or 2 days of data, the model can’t learn normal behavior accurately.
More data = more accurate anomaly detection.
If your KPI is very noisy (frequent, unpredictable spikes), the model may flag too many false anomalies.
You may need to:
Adjust the sensitivity
Use smoothing techniques
Combine with filters
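A minimal sketch of one of the smoothing techniques above: a simple moving average applied before anomaly checks, so one-off spikes in a noisy KPI are damped instead of flagged. The window size is an assumption:

```python
# Moving-average smoothing applied before anomaly checks (window is an assumption).

def moving_average(values, window=3):
    """Average each point with its recent neighbors to damp spikes."""
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

noisy = [100, 100, 400, 100, 100, 100]  # one-off spike to 400
print(moving_average(noisy))            # the spike is damped to 200 at most
```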
Anomaly detection is best used alongside static or dynamic thresholds, not instead of them.
It adds another layer of intelligence, especially for unpredictable problems.
Anomaly Detection uses machine learning to identify unusual KPI behavior based on past trends.
It helps detect issues earlier, especially in fast-changing or complex environments.
You can configure its learning window, sensitivity, and retraining frequency.
It’s a powerful complement to traditional thresholds—not a complete replacement.
When anomaly detection is enabled for a KPI in ITSI, the platform calculates an "Anomaly Score" for each data point where the behavior deviates from the learned baseline.
The Anomaly Score ranges from 0 to 100
This score is machine-learned, based on historical data patterns
It is not a threshold, but a confidence indicator of abnormality
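A sketch of how a 0–100 score could be derived. ITSI's exact scoring formula is not specified in these notes, so the mapping below (deviation measured in standard deviations, rescaled and capped at 100, with an assumed scale constant) is purely illustrative:

```python
import statistics

# Illustrative scoring: deviation measured in standard deviations,
# rescaled and capped at 100. The "scale" constant is an assumption.

def anomaly_score(value, history, scale=5.0):
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0
    deviations = abs(value - mean) / stdev
    return min(100.0, deviations / scale * 100)  # 0 = normal, 100 = extreme

history = [50, 52, 48, 51, 49, 50, 53, 47]
print(anomaly_score(51, history))   # small deviation -> low score
print(anomaly_score(500, history))  # extreme deviation -> capped at 100
```

The key property this mirrors is that the score is a graded confidence indicator of abnormality, not a pass/fail threshold.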
KPI Status Color: A high score may shift a KPI’s color from green to yellow or red, depending on configuration
Triggering Notable Events: The score can be referenced in alert rules or action conditions
Trend Analysis: Allows operational teams to detect early deviations before thresholds are violated
Recommended exam-ready phrasing:
“ITSI calculates an Anomaly Score for each detected deviation, indicating how far the current value diverges from its learned norm. This score is used to change KPI state or generate alerts.”
When a KPI uses split-by fields (e.g., host, region, or application_id), anomaly detection functions independently for each split.
ITSI builds a separate anomaly model for each unique entity
Each entity receives its own learning baseline and anomaly score
This enables fine-grained anomaly tracking on a per-host or per-service basis
Helps detect localized issues affecting only a subset of the environment
Reduces false positives from system-wide aggregation
Enhances multi-tenant or distributed infrastructure monitoring
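The per-entity behavior can be sketched as one independent baseline per split-by value. The class, host names, and numbers below are made up for illustration:

```python
import statistics
from collections import defaultdict

# One independent baseline per entity (split-by host, in this sketch).
# Host names and values are made up.

class EntityModels:
    def __init__(self):
        self.history = defaultdict(list)

    def train(self, entity, values):
        self.history[entity].extend(values)

    def is_anomaly(self, entity, value, k=3):
        hist = self.history[entity]
        mean = statistics.fmean(hist)
        stdev = statistics.pstdev(hist) or 1.0
        return abs(value - mean) > k * stdev

models = EntityModels()
models.train("host-a", [10, 12, 11, 13, 12, 10, 11])    # lightly loaded host
models.train("host-b", [900, 950, 920, 940, 910, 930])  # busy host

# Each entity is judged only against its own baseline:
print(models.is_anomaly("host-a", 500))  # True: wildly high for host-a
print(models.is_anomaly("host-b", 920))  # False: perfectly normal for host-b
```

A single aggregated model over both hosts would center near the busy host's values and miss the localized problem on host-a, which is exactly the false-positive/false-negative issue per-entity models avoid.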
Recommended note:
When anomaly detection is used with split-by fields, each entity (e.g., host or region) receives a separate learning model, enabling entity-specific anomaly detection.
The anomaly detection model in ITSI is continuously trained over time, but retraining can also be:
Scheduled (e.g., weekly or monthly)
Manually triggered when needed
What retraining does:
Updates the model with more recent behavior
Replaces part of the old learning pattern, improving relevance
Can temporarily reset anomaly sensitivity while the model re-learns
When to retrain:
After major changes in system traffic, application logic, or deployment patterns
If anomaly detection no longer reflects current patterns
During seasonal behavior shifts (e.g., holiday traffic peaks)
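Retraining can be sketched as rebuilding the baseline from only the most recent window of data. The class, the 30-point window, and the k=3 threshold are assumptions for illustration:

```python
import statistics

# Sketch of retraining: the baseline is rebuilt from the most recent
# window only, so stale behavior is replaced by current behavior.
# Window length and threshold are assumptions.

class BaselineModel:
    def __init__(self, window=30):
        self.window = window
        self.data = []

    def observe(self, value):
        self.data.append(value)

    def retrain(self):
        """Rebuild the baseline from the most recent window only."""
        recent = self.data[-self.window:]
        self.mean = statistics.fmean(recent)
        self.stdev = statistics.pstdev(recent) or 1.0

    def is_anomaly(self, value, k=3):
        return abs(value - self.mean) > k * self.stdev

model = BaselineModel()
for v in [500 + (i % 5) for i in range(30)]:   # old normal: ~500
    model.observe(v)
model.retrain()
print(model.is_anomaly(2002))   # True: ~2000 looks anomalous against the old baseline

for v in [2000 + (i % 5) for i in range(30)]:  # traffic shifts to ~2000
    model.observe(v)
model.retrain()                                # realign with the new behavior
print(model.is_anomaly(2002))   # False: ~2000 is the new normal
```

This is the mechanism behind the exam note above: after a major traffic change, the stale baseline flags everything until a retrain realigns it.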
Recommendation for practical exams:
After major infrastructure or traffic changes, it is best practice to manually trigger a model retraining to realign baselines with new behaviors.
Deep Dives in ITSI are visual tools for incident investigation. When anomaly detection is enabled, results appear as:
Shaded bands that reflect the expected range of normal behavior; the current KPI value is compared against this band
Individual dots that mark where KPI values diverged significantly; these points correspond to anomaly scores, visually highlighted for RCA
Helps operators quickly see if a KPI is outside its learned normal band
Supports narrative-based RCA (root cause analysis)
Aids in communicating anomalies to non-technical stakeholders during postmortems
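What a Deep Dive overlay computes can be approximated as a rolling expected band plus the points that escape it (the anomaly dots). The rolling mean ± 3-sigma band below is an assumption for the sketch, not ITSI's actual rendering logic:

```python
import statistics

# Approximation of a Deep Dive overlay: a rolling expected band per
# point, plus the (index, value) points outside it -- the anomaly dots.
# Band construction (rolling mean +/- 3 sigma) is an assumption.

def band_and_dots(values, window=5, k=3):
    dots = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.pstdev(recent) or 1.0
        low, high = mean - k * stdev, mean + k * stdev
        if not (low <= values[i] <= high):
            dots.append((i, values[i]))  # would be drawn as an anomaly dot
    return dots

kpi = [100, 101, 99, 100, 102, 101, 100, 180, 101, 100]
print(band_and_dots(kpi))
```

The single divergent point is the only one returned, mirroring how the shaded band lets an operator see at a glance which samples left the learned normal range.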
Anomaly detection uses machine learning to detect behavioral deviations
Each anomaly receives an Anomaly Score (0–100) based on its severity
With split-by fields, ITSI enables per-entity detection using isolated models
Retraining ensures that models stay aligned with system behavior
Deep Dive visualizations show anomaly insights as bands and dots, making diagnosis clearer and more accessible
What is anomaly detection in ITSI?
A mechanism that identifies abnormal KPI behavior based on historical patterns.
Anomaly detection analyzes historical KPI data to determine expected behavior patterns. When new KPI values deviate significantly from these patterns, the system identifies the deviation as an anomaly. Unlike threshold-based alerts that rely on fixed boundaries, anomaly detection evaluates dynamic patterns and identifies unusual behavior even when thresholds are not crossed. This capability allows administrators to detect subtle performance issues that traditional threshold monitoring might miss.
Demand Score: 78
Exam Relevance Score: 88
What prerequisite is required for anomaly detection to function effectively?
Historical KPI data for baseline analysis.
Anomaly detection algorithms require historical KPI data to learn normal performance patterns. The system analyzes historical trends such as average values, seasonal fluctuations, and typical variation ranges. Without sufficient historical data, the anomaly detection model cannot determine what constitutes normal behavior, making it difficult to detect anomalies accurately. Administrators therefore ensure that KPI data has been collected for a sufficient period before enabling anomaly detection features.
Demand Score: 76
Exam Relevance Score: 89
What occurs when an anomaly is detected in a KPI?
An anomaly event can be generated and used for incident analysis.
When the anomaly detection system identifies KPI behavior that deviates significantly from expected patterns, it can generate an anomaly event. These events can be used in monitoring dashboards or correlated with other alerts to identify potential incidents. Anomaly events help operators investigate unusual system behavior and identify issues that may not yet trigger traditional threshold-based alerts. By highlighting unexpected patterns, anomaly detection provides an additional layer of monitoring intelligence.
Demand Score: 72
Exam Relevance Score: 85
Why might anomaly detection fail to generate events for abnormal KPI behavior?
Because the anomaly detection model has not been properly trained or enabled.
If anomaly detection is enabled without sufficient historical data or proper configuration, the system may not generate anomaly events even when KPI values appear abnormal. This can occur when the anomaly detection model has not been trained or when KPI data is insufficient for establishing baseline behavior. Administrators must ensure that anomaly detection settings are enabled, historical data is available, and model training has been completed before expecting anomaly events to be generated.
Demand Score: 74
Exam Relevance Score: 86