How weights were learned and where they're used
We have three prediction components: a machine-learning model (ML), a knowledge-graph signal (KG), and a description-similarity signal (Sim).
How do we combine them optimally? We learn the weights from data!
The learned coefficients tell us how much each component contributes to the final prediction.
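A minimal sketch of how such weights could be learned from data, assuming the three component scores and ground-truth labels are available. The data here is simulated for illustration, and least squares with a non-negativity clip and renormalization is one simple choice, not necessarily the method used in the tool:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Simulated component predictions for n CVEs (columns: ML, KG, Sim).
X = rng.uniform(0, 1, size=(n, 3))
# Simulated ground truth: a noisy mix that favors the ML column.
y = np.clip(0.6 * X[:, 0] + 0.25 * X[:, 1] + 0.15 * X[:, 2]
            + rng.normal(0, 0.05, n), 0, 1)

# Fit the three weights by least squares, then clip and renormalize
# so they are non-negative and sum to 1.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
w = np.clip(w, 0, None)
w = w / w.sum()
print(w)  # roughly recovers the simulated 0.6 / 0.25 / 0.15 mix
```

Fitting separate weight vectors per data regime would just repeat this on each regime's subset of CVEs.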
SPARSE: similarity leads! When data is limited, matching a CVE's description to known exploits is the most reliable signal.
PARTIAL: balanced! All three components contribute roughly equally when partial data is available.
FULL: ML leads! With full EPSS scores and sightings, the neural network is highly accurate.
Adjust the weights manually to see how they affect the combined prediction. Compare your settings with the learned weights to understand what the optimization chose.
Based on the CVE's data regime, the appropriate weights are selected automatically. SPARSE CVEs use similarity-heavy weights.
The final prediction is a weighted sum: w_ml × ML + w_kg × KG + w_sim × Sim. A higher weight means a larger contribution to the final score.
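The regime-based selection and the weighted sum can be sketched as follows. The `REGIME_WEIGHTS` table and `combined_score` function are hypothetical, and the weight values are illustrative placeholders rather than the actual learned coefficients:

```python
# Illustrative per-regime weights (w_ml, w_kg, w_sim), each set summing to 1.
REGIME_WEIGHTS = {
    "sparse":  {"ml": 0.20, "kg": 0.20, "sim": 0.60},  # similarity leads
    "partial": {"ml": 0.34, "kg": 0.33, "sim": 0.33},  # balanced
    "full":    {"ml": 0.60, "kg": 0.25, "sim": 0.15},  # ML leads
}

def combined_score(regime: str, ml: float, kg: float, sim: float) -> float:
    """Final prediction: w_ml*ML + w_kg*KG + w_sim*Sim for the CVE's regime."""
    w = REGIME_WEIGHTS[regime]
    return w["ml"] * ml + w["kg"] * kg + w["sim"] * sim

# For a sparse-data CVE, the similarity component dominates the result.
print(combined_score("sparse", 0.4, 0.5, 0.9))  # → 0.72
```

Selecting the weight set by regime keeps the combination rule itself unchanged; only the coefficients vary with how much data the CVE has.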
A higher learned weight indicates that a component is more reliable in that regime. This tells us which signals to trust for a given CVE.