Providers
=========================

A :class:`~ir_measures.providers.Provider` implements the calculation logic for one or more
:class:`~ir_measures.measures.Measure`. This page lists the Providers available in this package.

.. _providers.accuracy:

``accuracy``
-------------------------

Accuracy provider

**Supported Measures:**

- ``Accuracy(rel=ANY)@ANY``

.. _providers.compat:

``compat``
-------------------------

Version of the compatibility measure described in:

.. code-block:: bibtex
    :caption: Citation

    @article{DBLP:journals/tois/ClarkeVS21,
      author    = {Charles L. A. Clarke and Alexandra Vtyurina and Mark D. Smucker},
      title     = {Assessing Top-k Preferences},
      journal   = {{ACM} Trans. Inf. Syst.},
      volume    = {39},
      number    = {3},
      pages     = {33:1--33:21},
      year      = {2021},
      url       = {https://doi.org/10.1145/3451161},
      doi       = {10.1145/3451161}
    }

**Supported Measures:**

- ``Compat(p=ANY,normalize=ANY)``

.. _providers.cwl_eval:

``cwl_eval``
-------------------------

cwl_eval, providing C/W/L ("cool") framework measures. https://github.com/ireval/cwl

.. code-block:: bibtex
    :caption: Citation

    @inproceedings{DBLP:conf/sigir/AzzopardiTM19,
      author    = {Leif Azzopardi and Paul Thomas and Alistair Moffat},
      title     = {cwl{\_}eval: An Evaluation Tool for Information Retrieval},
      booktitle = {Proceedings of the 42nd International {ACM} {SIGIR} Conference on Research and Development in Information Retrieval, {SIGIR} 2019, Paris, France, July 21-25, 2019},
      pages     = {1321--1324},
      publisher = {{ACM}},
      year      = {2019},
      url       = {https://doi.org/10.1145/3331184.3331398},
      doi       = {10.1145/3331184.3331398}
    }

**Installation:**

.. code-block::

    pip install ir-measures[cwl_eval]

**Supported Measures:**

- ``P(rel=ANY,judged_only=False)@ANY``
- ``RR(rel=ANY,judged_only=False)@NOT_PROVIDED``
- ``AP(rel=ANY,judged_only=False)@NOT_PROVIDED``
- ``RBP(rel=REQUIRED,p=ANY)@NOT_PROVIDED``
- ``BPM(T=ANY,min_rel=ANY,max_rel=REQUIRED)@ANY``
- ``SDCG(dcg='log2',min_rel=ANY,max_rel=REQUIRED)@REQUIRED``
- ``NERR8(min_rel=ANY,max_rel=REQUIRED)@REQUIRED``
- ``NERR9(min_rel=ANY,max_rel=REQUIRED)@REQUIRED``
- ``NERR10(p=ANY,min_rel=ANY,max_rel=REQUIRED)``
- ``NERR11(T=ANY,min_rel=ANY,max_rel=REQUIRED)``
- ``INST(T=ANY,min_rel=ANY,max_rel=REQUIRED)``
- ``INSQ(T=ANY,min_rel=ANY,max_rel=REQUIRED)``

.. _providers.gdeval:

``gdeval``
-------------------------

gdeval

**Installation:**

.. code-block::

    Install perl, see

**Supported Measures:**

- ``nDCG(dcg='exp-log2',gains=NOT_PROVIDED,judged_only=False)@REQUIRED``
- ``ERR@REQUIRED``

.. _providers.judged:

``judged``
-------------------------

Python implementation of judgment rate.

Adapted from OpenNIR's implementation: https://github.com/Georgetown-IR-Lab/OpenNIR/blob/master/onir/metrics/judged.py

**Supported Measures:**

- ``Judged@ANY``

.. _providers.msmarco:

``msmarco``
-------------------------

MS MARCO's implementation of RR

**Supported Measures:**

- ``RR(rel=ANY,judged_only=False)@ANY``

.. _providers.pyndeval:

``pyndeval``
-------------------------

pyndeval

**Installation:**

.. code-block::

    pip install ir-measures[pyndeval]

**Supported Measures:**

- ``ERR_IA(rel=ANY,judged_only=ANY)@ANY``
- ``nERR_IA(rel=ANY,judged_only=ANY)@ANY``
- ``alpha_DCG(alpha=ANY,rel=ANY,judged_only=ANY)@ANY``
- ``alpha_nDCG(alpha=ANY,rel=ANY,judged_only=ANY)@ANY``
- ``NRBP(alpha=ANY,beta=ANY,rel=ANY)``
- ``nNRBP(alpha=ANY,beta=ANY,rel=ANY)``
- ``AP_IA(rel=ANY,judged_only=ANY)``
- ``P_IA(rel=ANY,judged_only=ANY)@ANY``
- ``StRecall(rel=ANY)@ANY``

.. _providers.pytrec_eval:

``pytrec_eval``
-------------------------

pytrec_eval https://github.com/cvangysel/pytrec_eval

.. code-block:: bibtex
    :caption: Citation

    @inproceedings{DBLP:conf/sigir/GyselR18,
      author    = {Christophe Van Gysel and Maarten de Rijke},
      title     = {Pytrec{\_}eval: An Extremely Fast Python Interface to trec{\_}eval},
      booktitle = {The 41st International {ACM} {SIGIR} Conference on Research {\&} Development in Information Retrieval, {SIGIR} 2018, Ann Arbor, MI, USA, July 08-12, 2018},
      pages     = {873--876},
      publisher = {{ACM}},
      year      = {2018},
      url       = {https://doi.org/10.1145/3209978.3210065},
      doi       = {10.1145/3209978.3210065}
    }

**Installation:**

.. code-block::

    pip install --upgrade pytrec-eval-terrier

**Supported Measures:**

- ``P(rel=ANY,judged_only=ANY)@ANY``
- ``RR(rel=ANY,judged_only=ANY)@NOT_PROVIDED``
- ``Rprec(rel=ANY,judged_only=ANY)``
- ``AP(rel=ANY,judged_only=ANY)@ANY``
- ``nDCG(dcg='log2',gains=ANY,judged_only=ANY)@ANY``
- ``R(judged_only=ANY)@ANY``
- ``Bpref(rel=ANY)``
- ``NumRet(rel=ANY)``
- ``NumQ``
- ``NumRel(rel=1)``
- ``SetAP(rel=ANY,judged_only=ANY)``
- ``SetF(rel=ANY,beta=ANY,judged_only=ANY)``
- ``SetP(rel=ANY,relative=ANY,judged_only=ANY)``
- ``SetR(rel=ANY)``
- ``Success(rel=ANY,judged_only=ANY)@ANY``
- ``IPrec(judged_only=ANY)@ANY``
- ``infAP(rel=ANY)``

.. _providers.ranx:

``ranx``
-------------------------

ranx https://amenra.github.io/ranx/

.. code-block:: bibtex
    :caption: Citation

    @inproceedings{DBLP:conf/ecir/Bassani22,
      author    = {Elias Bassani},
      title     = {ranx: {A} Blazing-Fast Python Library for Ranking Evaluation and Comparison},
      booktitle = {Advances in Information Retrieval - 44th European Conference on {IR} Research, {ECIR} 2022, Stavanger, Norway, April 10-14, 2022, Proceedings, Part {II}},
      series    = {Lecture Notes in Computer Science},
      volume    = {13186},
      pages     = {259--264},
      publisher = {Springer},
      year      = {2022},
      url       = {https://doi.org/10.1007/978-3-030-99739-7\_30},
      doi       = {10.1007/978-3-030-99739-7\_30}
    }

**Installation:**

.. code-block::

    pip install ir-measures[ranx]

**Supported Measures:**

- ``P(rel=ANY,judged_only=False)@ANY``
- ``SetP(rel=ANY,judged_only=False)``
- ``RR(rel=ANY,judged_only=False)@NOT_PROVIDED``
- ``Rprec(rel=ANY,judged_only=False)``
- ``AP(rel=ANY,judged_only=False)@ANY``
- ``nDCG(dcg=('log2', 'exp-log2'),gains=NOT_PROVIDED,judged_only=False)@ANY``
- ``R(judged_only=False)@ANY``
- ``SetR(rel=ANY)``
- ``NumRet(rel=REQUIRED)``
- ``Success(rel=ANY,judged_only=False)@REQUIRED``

.. _providers.runtime:

``runtime``
-------------------------

Supports measures that are defined at runtime via `ir_measures.define()` and `ir_measures.define_byquery()`.

**Supported Measures:**

.. _providers.trectools:

``trectools``
-------------------------

trectools https://github.com/joaopalotti/trectools

.. code-block:: bibtex
    :caption: Citation

    @inproceedings{DBLP:conf/sigir/PalottiSZ19,
      author    = {Jo{\~{a}}o R. M. Palotti and Harrisen Scells and Guido Zuccon},
      title     = {TrecTools: an Open-source Python Library for Information Retrieval Practitioners Involved in TREC-like Campaigns},
      booktitle = {Proceedings of the 42nd International {ACM} {SIGIR} Conference on Research and Development in Information Retrieval, {SIGIR} 2019, Paris, France, July 21-25, 2019},
      pages     = {1325--1328},
      publisher = {{ACM}},
      year      = {2019},
      url       = {https://doi.org/10.1145/3331184.3331399},
      doi       = {10.1145/3331184.3331399}
    }

**Installation:**

.. code-block::

    pip install ir-measures[trectools]

**Supported Measures:**

- ``P(rel=1,judged_only=False)@ANY``
- ``RR(rel=1,judged_only=False)@NOT_PROVIDED``
- ``Rprec(rel=1,judged_only=False)``
- ``AP(rel=1,judged_only=False)@ANY``
- ``nDCG(dcg=ANY,gains=NOT_PROVIDED,judged_only=False)@ANY``
- ``Bpref(rel=1)``
- ``RBP(p=ANY,rel=ANY)@ANY``
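As a minimal sketch of how the ``runtime`` provider is used: ``ir_measures.define_byquery()`` registers a custom measure whose callback is invoked once per query and averaged across queries. The example below assumes a recent version of ``ir_measures`` (and its pandas dependency) is installed, that the callback receives the per-query qrels and run as DataFrames with ``doc_id``/``relevance``/``score`` columns, and that dict-of-dict qrels/run inputs are accepted; the ``RelRet`` measure itself is purely illustrative, not part of the package.

.. code-block:: python

    # Sketch: define a custom per-query measure at runtime and evaluate it
    # alongside a built-in measure. Assumes `pip install ir-measures`.
    import ir_measures
    from ir_measures import P

    # Hypothetical measure: number of retrieved documents judged relevant
    # (relevance >= 1) for each query; define_byquery averages over queries.
    RelRet = ir_measures.define_byquery(
        lambda qrels, run: len(
            set(run.doc_id) & set(qrels[qrels.relevance >= 1].doc_id)
        ),
        name='RelRet')

    # Qrels and runs may be given as {query_id: {doc_id: relevance/score}}.
    qrels = {
        'q1': {'d1': 1, 'd2': 0, 'd3': 1},
        'q2': {'d1': 0, 'd4': 1},
    }
    run = {
        'q1': {'d1': 1.5, 'd2': 1.1, 'd3': 0.9},
        'q2': {'d4': 2.0, 'd1': 1.0},
    }

    # Runtime-defined measures mix freely with built-in ones.
    results = ir_measures.calc_aggregate([RelRet, P@2], qrels, run)

Here ``results`` maps each measure object back to its aggregated value (e.g. ``results[P@2]``), so runtime-defined and provider-backed measures are read out the same way.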