Welcome to UNIQUE’s documentation!
Introduction
UNIQUE provides methods for quantifying and evaluating the uncertainty of Machine Learning (ML) model predictions. The library allows you to:
combine and benchmark multiple uncertainty quantification (UQ) methods simultaneously;
evaluate the goodness of UQ methods against established metrics;
generate intuitive visualizations to qualitatively assess how well the computed UQ methods represent the actual model uncertainty;
enable users to get a comprehensive overview of their ML model’s performance from an uncertainty quantification perspective.
UNIQUE is a model-agnostic tool: it does not depend on any specific ML model-building platform, nor does it provide any ML model training functionality. It only requires the user to input their model’s inputs and predictions.
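As an illustration of this model-agnostic workflow, the sketch below gathers the kind of input table UNIQUE consumes: the model’s input features, its predictions, and the corresponding labels, collected after training a model with any framework. The column names ("features", "predictions", "labels") and the output file name are placeholders chosen for this example, not UNIQUE’s required schema; refer to the rest of the documentation for the exact expected input format.

# Illustrative sketch only: collecting a trained model's inputs and predictions.
# UNIQUE is model-agnostic, so any framework can produce these; the column names
# and output file name below are placeholders, not UNIQUE's required schema.
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train any model with any framework; UNIQUE only needs its inputs and outputs.
X, y = make_regression(n_samples=500, n_features=10, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Collect inputs (features), predictions, and labels into a single table
# that can then be referenced from UNIQUE's configuration.
data = pd.DataFrame(
    {
        "features": list(X_test),              # model inputs, one vector per row
        "predictions": model.predict(X_test),  # model outputs
        "labels": y_test,                      # ground-truth values
    }
)
data.to_pickle("model_outputs.pkl")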
Check out Installation to get started!
Cite Us
If you find UNIQUE helpful for your work and/or research, please consider citing our work:
@misc{lanini2024unique,
    title={UNIQUE: A Framework for Uncertainty Quantification Benchmarking},
    author={Lanini, Jessica and Huynh, Minh Tam Davide and Scebba, Gaetano and Schneider, Nadine and Rodr{\'\i}guez-P{\'e}rez, Raquel},
    year={2024},
    doi={https://doi.org/10.26434/chemrxiv-2024-fmbgk},
}
For more information, check out Contacts & Acknowledgements.