Interactive visualization of how our ML models process CVE data and make predictions
MLP - 25 features
Uses vendor/product rates and description similarity. Works immediately when a CVE is published.
MLP - 66 features
Uses all available signals including EPSS, sightings, and ATT&CK features. Requires NVD enrichment.
Graph Neural Network
Learns from CVE-CWE-CAPEC-ATT&CK knowledge graph. Provides interpretable reasoning chains.
Different CVEs have different amounts of data available. Our adaptive ensemble selects the best model based on data availability.
Most CVEs at publication: no CWE, no CPE data. Early Premium dominates with 66.7% of the ensemble weight.
Some enrichment available. The GNN earns a 16.9% weight; knowledge-graph reasoning adds value.
All signals available. The Full MLP takes the largest share at 39.9%, with all three models contributing.
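The tier-based routing above can be sketched as follows. Only the three dominant weights (66.7%, 16.9%, 39.9%) come from this page; the remaining weights, field names, and routing conditions are illustrative placeholders, not the real implementation.

```python
# Sketch of adaptive ensemble routing by data availability.
# Field names ("epss", "cpe", "cwe") and non-dominant weights are assumptions.

def select_tier(cve: dict) -> str:
    """Classify a CVE into a data-availability tier."""
    if cve.get("epss") is not None and cve.get("cpe"):
        return "RICH"
    if cve.get("cwe"):
        return "MODERATE"
    return "SPARSE"

# Per-tier model weights; only the dominant weight in each tier is quoted
# from the text, the rest are placeholders that make each row sum to 1.
TIER_WEIGHTS = {
    "SPARSE":   {"early_premium": 0.667, "gnn": 0.200, "full_mlp": 0.133},
    "MODERATE": {"early_premium": 0.450, "gnn": 0.169, "full_mlp": 0.381},
    "RICH":     {"early_premium": 0.300, "gnn": 0.301, "full_mlp": 0.399},
}

def ensemble_predict(cve: dict, preds: dict) -> float:
    """Weighted average of per-model probabilities for this CVE's tier."""
    weights = TIER_WEIGHTS[select_tier(cve)]
    return sum(weights[m] * preds[m] for m in weights)

# Example: a freshly published CVE with no enrichment uses SPARSE weights.
p = ensemble_predict({}, {"early_premium": 0.8, "gnn": 0.6, "full_mlp": 0.7})
```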
| Model | AUC-ROC | Features | Architecture | Best For | Key Advantage |
|---|---|---|---|---|---|
| Early Premium (recommended) | 0.9913 | 25 | 25-256-128-64-1 | SPARSE (71.6%) | Works at disclosure |
| Full MLP | 0.9719 | 66 | 66-256-128-64-1 | RICH (3.2%) | Uses all signals |
| GNN GraphSAGE | 0.9344 | 18 + KG | 18-128-128-1 | MODERATE (25.2%) | Interpretable |
Select a model to simulate data flow through the network
Each node aggregates features from its neighbors and combines them with its own
Each CVE collects feature vectors from its connected CWE nodes. These are averaged (MEAN aggregation) and concatenated with the CVE's own features.
Information flows from 2-hop neighbors (CAPEC patterns linked to CWEs). CVEs with CWEs leading to severe attack patterns inherit higher risk signals.
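One GraphSAGE layer as described above can be sketched in a few lines: average the connected CWE nodes' features, concatenate with the CVE's own vector, and apply a learned transform. Dimensions follow the 18-128-128-1 architecture in the table; the weight values are random stand-ins, not trained parameters.

```python
import numpy as np

def sage_layer(node_feat, neighbor_feats, W):
    """MEAN aggregation: average neighbors, concatenate with the node's
    own features, then apply a linear transform + ReLU."""
    agg = np.mean(neighbor_feats, axis=0)   # MEAN over CWE neighbors
    h = np.concatenate([node_feat, agg])    # self ++ neighborhood
    return np.maximum(0.0, W @ h)           # linear + ReLU

rng = np.random.default_rng(0)
cve = rng.normal(size=18)                   # 18-dim CVE feature vector
cwes = rng.normal(size=(3, 18))             # 3 connected CWE nodes
W = rng.normal(size=(128, 36)) * 0.1        # untrained stand-in weights
out = sage_layer(cve, cwes, W)              # 128-dim hidden representation
```

Stacking a second such layer is what lets 2-hop information (CAPEC patterns linked to CWEs) reach the CVE node.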
Measure prediction confidence by running multiple forward passes
Unlike standard inference, dropout stays enabled during prediction
Each pass applies a different random dropout mask, yielding a slightly different prediction
Low variance = high confidence; high variance = uncertain prediction
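The procedure above is Monte Carlo Dropout. A minimal sketch with a toy one-layer network (shapes and dropout rate are illustrative, not the real model):

```python
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(64, 25)) * 0.1        # toy hidden layer (25 features in)
w2 = rng.normal(size=64) * 0.1              # toy output weights
x = rng.normal(size=25)                     # one CVE feature vector

def mc_forward(x, p_drop=0.2):
    """One stochastic forward pass with dropout kept ON."""
    h = np.maximum(0.0, W1 @ x)
    mask = rng.random(h.shape) >= p_drop    # fresh random mask each pass
    h = h * mask / (1.0 - p_drop)           # inverted dropout scaling
    return 1.0 / (1.0 + np.exp(-(w2 @ h)))  # sigmoid probability

preds = np.array([mc_forward(x) for _ in range(100)])  # 100 MC passes
mean, std = preds.mean(), preds.std()
# std is the uncertainty estimate: low std = confident, high std = uncertain
```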
Low ECE means predicted probabilities align well with actual outcomes
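Expected Calibration Error (ECE) can be computed by binning predictions by confidence and comparing each bin's average predicted probability to its observed outcome rate. A minimal sketch (10 equal-width bins assumed; the page's binning scheme may differ):

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Expected Calibration Error: bin-size-weighted gap between
    mean predicted probability and observed outcome rate per bin."""
    probs, labels = np.asarray(probs, float), np.asarray(labels, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            total += mask.mean() * gap      # weight gap by bin population
    return total

# A model that says 50% and is right half the time is perfectly calibrated
score = ece([0.5, 0.5], [0, 1])
```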
Click "Run MC Dropout" to see uncertainty quantification
Measures how well predicted probabilities match actual exploitation outcomes (0 or 1).
💡 Heavily penalizes confident wrong predictions: if the model predicts 95% but the CVE wasn't exploited, the loss is high.
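The behavior described matches binary log loss (cross-entropy); assuming that is the metric in use, a minimal sketch shows the asymmetry:

```python
import math

def log_loss(p, y, eps=1e-15):
    """Binary cross-entropy for one prediction; eps guards log(0)."""
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

confident_wrong = round(log_loss(0.95, 0), 3)   # 2.996: large penalty
confident_right = round(log_loss(0.95, 1), 3)   # 0.051: small penalty
```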