15 models.
One risk score.

Seven anomaly detectors feed a gradient-boosted meta-learner. Every score includes SHAP explanations and Bayesian uncertainty bounds for full regulatory transparency.

[Live risk analysis] Composite ensemble score: 0.87 (Critical Risk)
Detector scores: Statistical Deviation 0.91 · Isolation Forest 0.84 · Temporal Pattern 0.78 · Velocity Shift 0.92 · Graph Topology 0.65 · Dormant Reactivation 0.88 · Regime Change 0.71
SHAP explainability: per-decision rationale
Scoring Engine

Seven detectors. One ensemble. Full explainability.

Each transfer receives a calibrated risk probability from a 15-model ensemble. Regulators receive per-decision rationale with every score.

Anomaly detection
Statistical deviation, isolation forest, temporal patterns, velocity shifts, graph topology, dormant reactivation, and regime changes.
7
Independent anomaly detectors
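To make the detector idea concrete, here is a minimal sketch of two of the seven: a statistical-deviation check (robust z-score against account history) and a velocity-shift check (recent transfer cadence vs. baseline). The function names, thresholds, and logistic squashing are illustrative assumptions, not the production implementation.

```python
import numpy as np

def statistical_deviation_score(amounts, new_amount):
    """Robust z-score of a new transfer amount vs. account history,
    squashed into [0, 1] with a logistic curve centered at ~3 MADs."""
    med = np.median(amounts)
    mad = np.median(np.abs(amounts - med)) or 1.0  # guard against zero MAD
    z = abs(new_amount - med) / (1.4826 * mad)     # 1.4826 makes MAD ~ std
    return 1.0 / (1.0 + np.exp(-(z - 3.0)))

def velocity_shift_score(timestamps, window=5):
    """Compare recent inter-transfer gaps to the historical baseline;
    a sudden speed-up in cadence pushes the score toward 1."""
    gaps = np.diff(np.sort(timestamps))
    if len(gaps) <= window:
        return 0.0  # not enough history to compare
    recent = gaps[-window:].mean()
    baseline = gaps[:-window].mean()
    ratio = baseline / max(recent, 1e-9)  # ratio > 1 means transfers sped up
    return float(1.0 - np.exp(-max(ratio - 1.0, 0.0)))

history = np.array([120.0, 95.0, 110.0, 130.0, 105.0, 98.0])
print(statistical_deviation_score(history, 5000.0))  # strong outlier, near 1.0
```

Each detector emits a score in [0, 1], which is what lets the meta-learner stack them on a common scale.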
Meta-learner ensemble
A gradient-boosted classifier stacks detector outputs with 29 behavioral features. Platt calibration produces true probability estimates.
29
Ensemble features
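The stacking-plus-calibration step could be sketched with scikit-learn: a gradient-boosted classifier over concatenated detector scores and behavioral features, wrapped in sigmoid (Platt) calibration. The synthetic data, label rule, and hyperparameters below are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)

# Synthetic stand-ins: 7 detector scores + 29 behavioral features per transfer.
n = 2000
detector_scores = rng.random((n, 7))
behavioral = rng.normal(size=(n, 29))
X = np.hstack([detector_scores, behavioral])  # stacked input, 36 dims here
# Toy ground truth: fraud when the detectors agree, plus a little noise.
y = (detector_scores.mean(axis=1) + 0.05 * rng.normal(size=n) > 0.5).astype(int)

# Gradient-boosted meta-learner with Platt (sigmoid) calibration on top.
meta = CalibratedClassifierCV(
    GradientBoostingClassifier(n_estimators=50, max_depth=3),
    method="sigmoid",  # Platt scaling
    cv=3,
)
meta.fit(X, y)
proba = meta.predict_proba(X)[:, 1]  # calibrated risk probabilities in [0, 1]
```

Calibration is what turns a raw ensemble score into a number a regulator can read as "probability this transfer is fraudulent."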
SHAP explainability
TreeExplainer produces per-feature attribution for every risk decision. Regulators receive quantified reasoning with each score.
100%
Decisions with explanations
Pipeline

From transfer to risk score in six stages.

Select a stage to see how it works.

Feature extraction
39 behavioral features computed from transfer history. Velocity, timing, counterparty diversity, and graph metrics feed the scoring engine.
39
Feature dimensions
01
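A minimal sketch of this stage: computing a handful of the behavioral features (velocity, timing, counterparty diversity) from raw transfer tuples. The feature names and formulas here are illustrative assumptions, not the product's actual feature set.

```python
import numpy as np
from collections import Counter

def extract_features(transfers):
    """Compute a few behavioral features from an account's transfer
    history. Each transfer: (timestamp_s, amount, counterparty_id)."""
    ts = np.array([t[0] for t in transfers], dtype=float)
    amounts = np.array([t[1] for t in transfers], dtype=float)
    parties = [t[2] for t in transfers]

    gaps = np.diff(np.sort(ts)) if len(ts) > 1 else np.array([0.0])
    counts = Counter(parties)
    probs = np.array(list(counts.values())) / len(parties)

    return {
        "velocity_per_day": len(ts) / max((ts.max() - ts.min()) / 86400, 1.0),
        "median_gap_s": float(np.median(gaps)),
        "amount_mean": float(amounts.mean()),
        "amount_cv": float(amounts.std() / max(amounts.mean(), 1e-9)),
        # Shannon entropy of the counterparty distribution: a diversity measure.
        "counterparty_entropy": float(-(probs * np.log2(probs)).sum()),
        "unique_counterparties": len(counts),
    }

sample_history = [(0, 100.0, "a"), (86400, 90.0, "b"), (172800, 110.0, "a")]
print(extract_features(sample_history))
```

The full feature vector, however it is defined, is what every downstream stage consumes.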
Anomaly detection
Seven independent detectors run in parallel. Statistical deviation, isolation forest, temporal patterns, velocity shifts, graph topology, dormant reactivation, and regime changes.
7
Parallel detectors
02
Meta-learner
A gradient-boosted classifier stacks detector outputs with 29 behavioral dimensions. The ensemble weighs each detector by historical accuracy.
29
Ensemble features
03
Calibration
Platt scaling transforms raw scores into calibrated probabilities. Bayesian Neural Network uncertainty bounds quantify prediction confidence.
0.97
Calibration accuracy
04
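Platt scaling itself is a two-parameter logistic fit on raw scores: p = 1 / (1 + exp(A·s + B)), with A and B chosen to minimize log-loss against observed outcomes. A self-contained numpy sketch, using plain gradient descent rather than the usual Newton-style fit, with entirely synthetic data:

```python
import numpy as np

def fit_platt(scores, labels, lr=1.0, steps=5000):
    """Fit Platt scaling p = 1 / (1 + exp(A*s + B)) by gradient descent
    on log-loss. scores: raw ensemble outputs; labels: 0/1 outcomes."""
    A, B = -1.0, 0.0  # negative A so a higher score means higher probability
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        grad = p - labels                     # d(logloss)/d(logit)
        A -= lr * np.mean(-grad * scores)     # chain rule: d(logit)/dA = -s
        B -= lr * np.mean(-grad)              # d(logit)/dB = -1
    return A, B

# Raw scores correlate with risk but are not probabilities yet.
rng = np.random.default_rng(1)
raw = rng.random(500)
labels = (rng.random(500) < raw**2).astype(float)  # true P(fraud) = raw^2
A, B = fit_platt(raw, labels)
calibrated = 1.0 / (1.0 + np.exp(A * raw + B))  # monotone map into [0, 1]
```

A useful property of the fitted intercept: the mean calibrated probability matches the observed fraud rate, which is exactly what a calibration check measures.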
Explainability
SHAP TreeExplainer generates per-feature attribution for every score. Regulators receive deterministic rationale for each decision.
100%
Decisions explained
05
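The attribution TreeExplainer produces is the Shapley value of each feature. For a toy model over three features it can be computed exactly by brute-force coalition enumeration, which is exponential in feature count; TreeExplainer's contribution is computing the same quantity in polynomial time for tree ensembles. Purely illustrative, with an assumed toy model:

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x, background, n_features):
    """Exact Shapley attribution of f(x) relative to a background point,
    by enumerating all feature coalitions. Features outside the coalition
    take their background values."""
    def value(subset):
        z = background.copy()
        z[list(subset)] = x[list(subset)]
        return f(z)

    phi = np.zeros(n_features)
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

# Toy risk model: one linear term plus one interaction term.
f = lambda z: 0.5 * z[0] + 0.3 * z[1] * z[2]
x = np.array([1.0, 1.0, 1.0])
bg = np.zeros(3)
phi = shapley_values(f, x, bg, 3)
print(phi, phi.sum())  # attributions sum to f(x) - f(bg) = 0.8 (efficiency)
```

Note how the interaction term's 0.3 is split evenly between features 1 and 2, while feature 0 keeps its full linear contribution of 0.5: that symmetric, additive split is what makes Shapley attributions defensible as per-decision rationale.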
Delivery
Calibrated scores write to the risk profile materialized view. Downstream agents consume scores within 200ms of transfer confirmation.
<200ms
Score delivery
06

See risk scoring produce a live verdict.

30 minutes. Real transfers scored live. Full SHAP breakdown for each decision.

Request a demo