As a ServiceNow solution architect exploring AI automation, I recently completed the Predictive Intelligence Fundamentals and Implementation courses and micro-certification on Now Learning.

This hands-on course provided a solid technical foundation for understanding machine learning on the ServiceNow platform.

This article is a wrap-up of insights that I found interesting and that I want to remember.

Setting expectations

Predictive Intelligence can be a powerful tool, but it is not magic. It uses machine learning only: it does not reason, explain, or autonomously learn anything beyond the patterns in the training data. The value of PI depends directly on the quality and relevance of the data you feed into it.

  • Volume: You typically need 30,000 to 300,000 records for a viable supervised model.
  • Quality: Clean, consistent, well-labeled data is an absolute MUST. Poorly labeled tickets lead to inaccurate predictions. Some customers get discouraged with machine learning because the effort to gather 30,000+ good-quality records outweighs the expected benefits.
  • Testing: Rigorous validation of results (e.g., precision, coverage, similarity scores) is essential before promoting any solution to production.

What PI will not solve:

  • It won’t classify a record correctly if the historical data is inconsistent.
  • It won’t learn to recognize new classes or categories automatically.
  • It doesn’t identify the root cause of problems or provide decision logic.

Where Predictive Intelligence sits in the ServiceNow AI Ecosystem

Predictive Intelligence is just one branch of ServiceNow’s broader AI capabilities. It focuses exclusively on machine learning (ML).

Other types of AI on the Now Platform include:

  • Generative AI: Used in content summarization, draft suggestions, Virtual Agent responses, and app generation.
  • AIOps: For anomaly detection, event correlation, and root cause analysis in ITOM
  • Natural Language Understanding (NLU): for Virtual Agent, chatbots and Natural Language Query (NLQ).
  • Image Recognition: Document Intelligence for document scanning, OCR, and extracting data for use in workflows.

Each of these capabilities delivers very different outcomes and value propositions. Predictive Intelligence focuses strictly on learning from a selection of historical data to make predictions.

What is Predictive Intelligence?

Predictive Intelligence is ServiceNow’s native machine learning capability. It leverages both supervised and unsupervised learning to support use cases like categorization, assignment, similarity search, anomaly detection, and duration forecasting.

In supervised learning, the model is trained on labeled historical data: you provide thousands of examples of correct inputs and their related outputs, and the system learns a formula that maps inputs to outputs.

In unsupervised learning, the model is trained on input data only and discovers patterns, similarities, or groupings on its own.
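To make the supervised idea concrete, here is a deliberately naive sketch (not the actual Predictive Intelligence algorithm): "training" counts which label each word co-occurs with, and "prediction" votes with those counts.

```python
from collections import Counter, defaultdict

# Toy labeled history: (short description, assignment group).
# Illustrative data only, not taken from any real instance.
training_data = [
    ("password reset request", "Service Desk"),
    ("reset my password please", "Service Desk"),
    ("vpn connection drops", "Network"),
    ("vpn tunnel unstable", "Network"),
]

# "Training": learn how often each word appears under each label.
word_labels = defaultdict(Counter)
for text, label in training_data:
    for word in text.lower().split():
        word_labels[word][label] += 1

def predict(text):
    """Vote for the label most associated with the input's words."""
    votes = Counter()
    for word in text.lower().split():
        votes.update(word_labels.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else None

print(predict("cannot reset password"))  # "Service Desk"
print(predict("vpn keeps dropping"))     # "Network"
```

A real classifier learns far richer correlations than word counts, but the principle is the same: the mapping comes entirely from the labeled examples you provide.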

ServiceNow offers four main ML frameworks:

  • Classification – Predicts field values (e.g., category, assignment group)
  • Similarity – Matches prompt text to similar historical data (tickets or KB)
  • Clustering – Groups related records to detect trends or anomalies
  • Regression – Forecasts numeric outcomes such as time to resolution

Core ML Solutions: Technical Deep Dive

1. Classification

The Classification ML solution learns a correlation between input fields (e.g., Short description, Location) and a known output (e.g., Assignment group).

Use Cases:

  • Auto-categorizing and routing incidents
  • Predicting HR case types or CSM issue categories
  • Prioritizing tasks based on content and context

Classification models require at least one input field — typically a text field like Short description — and one categorical output field to predict, such as Assignment group or Category. A word corpus is mandatory if the input includes any textual content. The model uses supervised learning, so historical records with correctly labeled outputs are essential. During training, stop words are automatically removed, and text is normalized for better pattern recognition.
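The text normalization step can be pictured roughly like this. This is a simplified sketch with a made-up stop word list; the platform's actual word corpus handling is more sophisticated.

```python
import re

# Illustrative stop word list; the platform maintains its own per-language lists.
STOP_WORDS = {"the", "a", "an", "is", "to", "my", "on", "in", "and"}

def normalize(text):
    """Lowercase, strip punctuation, and drop stop words."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

print(normalize("The VPN is failing to connect on my laptop."))
# ['vpn', 'failing', 'connect', 'laptop']
```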

Effectiveness Metrics:

  • Class Precision / Class Coverage
  • Overall Precision / Coverage
  • Confusion matrix (via Solution Visualization)
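Precision and coverage are easy to reason about once you see how they are computed. The sketch below uses simplified definitions of the two metrics (correct predictions over predictions made, and predictions made over total records); PI reports them per class and overall, with confidence thresholds you can tune.

```python
def precision_and_coverage(records):
    """
    records: list of (actual, predicted) pairs, where predicted is None
    when the model's confidence fell below the threshold.
    Precision = correct predictions / predictions made.
    Coverage  = predictions made / total records.
    """
    made = [(a, p) for a, p in records if p is not None]
    correct = sum(1 for a, p in made if a == p)
    precision = correct / len(made) if made else 0.0
    coverage = len(made) / len(records) if records else 0.0
    return precision, coverage

sample = [
    ("Network", "Network"),
    ("Network", "Service Desk"),  # wrong prediction
    ("Hardware", "Hardware"),
    ("Hardware", None),           # below threshold: no prediction made
]
print(precision_and_coverage(sample))  # (0.666..., 0.75)
```

Note the trade-off this exposes: raising the confidence threshold tends to improve precision at the cost of coverage.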

2. Similarity

An unsupervised model that computes similarity between an input record and a defined lookup set.

Use Cases:

  • Suggesting KB articles during incident logging
  • Detecting duplicate ideas in the idea portal
  • Boosting Virtual Agent deflection rate by directing users to the right content.

Similarity solutions must have a designated input source or prompt (e.g., a new incident) and a lookup set (e.g. past incidents or KB articles) with at least one common text field, usually Short Description or Chat Prompt. A word corpus is always required to enable similarity comparison across textual fields.

This model doesn’t use training labels but still relies on clean and rich text content. Stop words are filtered out automatically during model preparation.

Effectiveness Metrics:

  • Similarity score (0–100)
  • Qualitative validation through user engagement
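As a rough mental model of a 0–100 similarity score, consider cosine similarity over word-count vectors, scaled to 100. This is only a stand-in: PI compares learned vector representations built from the word corpus, not raw word counts.

```python
import math
from collections import Counter

def similarity_score(text_a, text_b):
    """Cosine similarity of word-count vectors, scaled to 0-100."""
    va = Counter(text_a.lower().split())
    vb = Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return round(100 * dot / norm, 1) if norm else 0.0

print(similarity_score("vpn connection keeps dropping",
                       "vpn connection drops every hour"))  # 44.7
print(similarity_score("printer jam", "printer jam"))       # 100.0
```

Identical texts score 100, unrelated texts score 0, and partial overlap lands in between, which is why a minimum-score threshold is typically set before surfacing suggestions to users.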

3. Clustering

An unsupervised algorithm that groups active records by similarity, using density-based methods to identify clusters.

Use Cases:

  • Discovering incident surges indicating major outages
  • Highlighting content gaps for knowledge managers
  • Grouping similar unresolved issues for collective resolution

Clustering solutions need active records with one or more input fields (ideally text-based, such as Short Description) to detect natural groupings in current data. As this is an unsupervised model, no output field is defined and no historical training is involved. A word corpus is required when using textual inputs, and the system preprocesses the data to remove stop words and irrelevant noise before clustering begins.

Effectiveness Metrics:

  • Cluster count and density
  • Changes in incident patterns across clusters
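To illustrate what "density-based" means, here is a minimal DBSCAN-style sketch on one-dimensional values. PI clusters high-dimensional text vectors, not scalars, but the core idea is the same: points with enough close neighbors form a cluster, and isolated points are flagged as noise.

```python
def dbscan(points, eps, min_pts):
    """Minimal density-based clustering sketch (DBSCAN-like).
    Returns one label per point; -1 means noise (no dense neighborhood)."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(len(points))
                     if abs(points[j] - points[i]) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1  # noise (a cluster may still claim it later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neighbors if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: joined, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = [k for k in range(len(points))
                           if abs(points[k] - points[j]) <= eps]
            if len(j_neighbors) >= min_pts:  # core point: keep expanding
                queue.extend(k for k in j_neighbors if labels[k] is None)
    return labels

# Two dense groups of "similar" records plus one outlier:
print(dbscan([1.0, 1.1, 1.2, 5.0, 5.1, 5.2, 9.9], eps=0.3, min_pts=3))
# [0, 0, 0, 1, 1, 1, -1]
```

The outlier getting label -1 is exactly the behavior that makes clustering useful for spotting one-off records versus genuine surges.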

4. Regression

A supervised model that correlates historical inputs with numeric outputs such as time, count, or percentage.

Use Cases:

  • Estimating incident resolution duration
  • Forecasting SLA compliance risk
  • Predicting project task completion windows

Regression models need at least one input field (structured or text) and one numeric output field such as a duration, time, or cost. If a text field like Short Description is included as an input, a word corpus is required. The system applies standard text processing techniques, including stemming and stop word removal, to optimize training. The output must be a number or date/time field for the model to function properly.

Effectiveness Metrics:

  • SMAPE (Symmetric Mean Absolute Percentage Error)
  • MAE (Mean Absolute Error)
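Both metrics are simple to compute by hand, which helps when sanity-checking a regression solution. The sketch below uses the common SMAPE formulation (mean of absolute differences divided by the average of the absolute values, as a percentage); PI's exact definition may differ slightly.

```python
def mae(actual, predicted):
    """Mean Absolute Error: average absolute difference, in output units."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def smape(actual, predicted):
    """Symmetric Mean Absolute Percentage Error, 0-100%."""
    return 100 / len(actual) * sum(
        abs(p - a) / ((abs(a) + abs(p)) / 2)
        for a, p in zip(actual, predicted)
    )

actual = [10.0, 20.0, 30.0]      # e.g. actual resolution hours
predicted = [12.0, 18.0, 33.0]   # the model's forecasts
print(mae(actual, predicted))    # 2.333... hours off on average
print(smape(actual, predicted))  # ~12.7% relative error
```

MAE stays in the output's own units (hours, euros), while SMAPE gives a scale-free percentage, so the two complement each other when comparing models across use cases.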

Platform Architecture: ServiceNow ML Training and Prediction Workflow

Predictive Intelligence uses a distributed architecture:

  • Customer Instance: Hosts solution definitions and input data. Triggers training and performs real-time predictions via Prediction APIs.
  • Training Server (ServiceNow Data Center): Extracts, cleans, and processes data. Trains the model and returns metadata and performance statistics.
  • Prediction Server: Caches the model for fast execution. Responds to real-time prediction requests from the customer instance.

This architecture ensures minimal impact on the production instance and accelerates prediction response time.

Lifecycle of Building and Releasing a PI Model

Step 1: Preparation

  • Clone production into sub-prod
  • Enable required plugins (PI, NLU Core, etc.)
  • Identify prediction goals and validate data quality

Step 2: Build & Train

  • Create a solution definition
  • Define input/output fields, filter historical data
  • Submit for training via PI Home
  • Validate model output, precision, and coverage
  • Adjust, retrain as needed

Step 3: Test & Promote

  • Use prediction scenarios in sub-prod
  • Export to Update Sets for deployment
  • Set retraining frequency (manual or scheduled)

Step 4: Operational Integration

  • Use predictions in flows, BRs, client scripts
  • Monitor outcomes and performance
  • Adjust filters or models based on real-world performance

Final Thoughts

Predictive Intelligence is a specific form of artificial intelligence — one based strictly on machine learning, not general reasoning or generative models. It brings measurable value to ITSM and traditional case management by enhancing how we classify, assign, and prioritize work. In particular, it excels at automating field population, a task often neglected by agents because it’s not meaningful to them and not always easy or intuitive. Offloading that burden helps enforce data consistency without adding friction to daily operations.

That said, setting up Predictive Intelligence is not plug-and-play. It demands significant effort to clean, prepare, and structure your data, define meaningful use cases, and rigorously test outcomes. It requires thoughtful implementation, not just technical enablement. While the results might feel magical from the end user’s perspective, under the hood it’s a logical, statistical, and mathematical treatment of structured and unstructured inputs. The predictions are grounded in patterns, probabilities, and transformations derived from the data you’ve chosen — no more, no less.

For architects and platform leads, this is where the opportunity lies: to use the precision of ML to augment — not replace — the judgment of support teams. If deployed thoughtfully, PI will help scale best practices, reduce manual friction, and accelerate service delivery with a high degree of consistency.
