Providers
A Provider implements the calculation logic for one or more Measures.
This page provides a list of the Providers that are available in this package.
accuracy
Accuracy provider
Supported Measures:
Accuracy(rel=ANY)@ANY
compat
Version of the compatibility measure described in:
Citation
Clarke et al. Assessing Top-k Preferences. ACM Trans. Inf. Syst. 2021. [link]
@article{DBLP:journals/tois/ClarkeVS21,
  author  = {Charles L. A. Clarke and Alexandra Vtyurina and Mark D. Smucker},
  title   = {Assessing Top-k Preferences},
  journal = {{ACM} Trans. Inf. Syst.},
  volume  = {39},
  number  = {3},
  pages   = {33:1--33:21},
  year    = {2021},
  url     = {https://doi.org/10.1145/3451161},
  doi     = {10.1145/3451161}
}
Supported Measures:
Compat(p=ANY,normalize=ANY)
cwl_eval
cwl_eval, providing C/W/L (“cool”) framework measures.
Citation
Azzopardi et al. cwl_eval: An Evaluation Tool for Information Retrieval. SIGIR 2019. [link]
@inproceedings{DBLP:conf/sigir/AzzopardiTM19,
  author    = {Leif Azzopardi and Paul Thomas and Alistair Moffat},
  title     = {cwl{\_}eval: An Evaluation Tool for Information Retrieval},
  booktitle = {Proceedings of the 42nd International {ACM} {SIGIR} Conference on Research and Development in Information Retrieval, {SIGIR} 2019, Paris, France, July 21-25, 2019},
  pages     = {1321--1324},
  publisher = {{ACM}},
  year      = {2019},
  url       = {https://doi.org/10.1145/3331184.3331398},
  doi       = {10.1145/3331184.3331398}
}
Installation:
pip install ir-measures[cwl_eval]
Supported Measures:
P(rel=ANY,judged_only=False)@ANY
RR(rel=ANY,judged_only=False)@NOT_PROVIDED
AP(rel=ANY,judged_only=False)@NOT_PROVIDED
RBP(rel=REQUIRED,p=ANY)@NOT_PROVIDED
BPM(T=ANY,min_rel=ANY,max_rel=REQUIRED)@ANY
SDCG(dcg='log2',min_rel=ANY,max_rel=REQUIRED)@REQUIRED
NERR8(min_rel=ANY,max_rel=REQUIRED)@REQUIRED
NERR9(min_rel=ANY,max_rel=REQUIRED)@REQUIRED
NERR10(p=ANY,min_rel=ANY,max_rel=REQUIRED)
NERR11(T=ANY,min_rel=ANY,max_rel=REQUIRED)
INST(T=ANY,min_rel=ANY,max_rel=REQUIRED)
INSQ(T=ANY,min_rel=ANY,max_rel=REQUIRED)
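The measures in the C/W/L family share the pattern of a weighted sum of relevance over ranks. As one concrete example, rank-biased precision (RBP) weights the relevance at rank i by p**(i-1); a minimal sketch with binary relevance (the provider itself handles graded relevance, the rel cutoff, and the full C/W/L machinery):

```python
def rbp(ranked_rels, p=0.8):
    """Rank-biased precision over a list of 0/1 relevance values, top-down.

    The patience parameter p models the chance the user continues to the
    next rank; (1 - p) * p**i is the weight placed on rank i (0-based).
    """
    return (1 - p) * sum(rel * p ** i for i, rel in enumerate(ranked_rels))
```

For the ranking [relevant, non-relevant, relevant] with p=0.5 this gives 0.5 * (1 + 0 + 0.25) = 0.625.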
gdeval
Wrapper around the gdeval script from the TREC Web Track, which computes nDCG with exponential gain and ERR.
Installation:
Install Perl (see https://www.perl.org)
Supported Measures:
nDCG(dcg='exp-log2',gains=NOT_PROVIDED,judged_only=False)@REQUIRED
ERR@REQUIRED
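Note the dcg='exp-log2' setting: gdeval's nDCG uses exponential gain (2**rel - 1) with a log2 rank discount, unlike trec_eval's default linear gain. A rough, illustrative sketch of that gain/discount combination:

```python
import math

def dcg_exp(rels):
    """DCG with exponential gain (2**rel - 1) and log2(rank + 1) discount."""
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg_exp(run_rels, all_judged_rels, k):
    """nDCG@k: DCG of the run's top-k, normalized by the ideal ordering."""
    ideal = dcg_exp(sorted(all_judged_rels, reverse=True)[:k])
    return dcg_exp(run_rels[:k]) / ideal if ideal else 0.0
```

For a run returning grades [1, 0, 2] against judged grades {2, 1, 0}, the run's DCG is 1 + 0 + 3/2 = 2.5 and the ideal is 3 + 1/log2(3), giving nDCG@3 of roughly 0.689.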
judged
Python implementation of the judgment rate: the proportion of ranked documents that have relevance judgments.
Adapted from OpenNIR’s implementation: https://github.com/Georgetown-IR-Lab/OpenNIR/blob/master/onir/metrics/judged.py
Supported Measures:
Judged@ANY
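Judged@k is simply the fraction of the top-k results that appear in the qrels at any relevance level. A minimal sketch of the calculation:

```python
def judged_at_k(judged_docids, ranked_docids, k):
    """Fraction of the top-k ranked documents that have a judgment."""
    top = ranked_docids[:k]
    if not top:
        return 0.0
    return sum(doc in judged_docids for doc in top) / len(top)
```

A low Judged@k suggests the qrels may not cover the system's results well, which makes recall-style measures unreliable for that run.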
msmarco
MS MARCO’s implementation of RR
Supported Measures:
RR(rel=ANY,judged_only=False)@ANY
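RR@k is the reciprocal of the rank of the first relevant document within the top k, or 0 if none is found there. A sketch of that logic with binary relevance:

```python
def rr_at_k(rel_docids, ranked_docids, k=10):
    """Reciprocal rank of the first relevant document in the top k."""
    for i, doc in enumerate(ranked_docids[:k]):
        if doc in rel_docids:
            return 1 / (i + 1)
    return 0.0
```

With the relevant document at rank 2, the score is 1/2; averaged over queries this yields MRR@k, the headline MS MARCO metric.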
pyndeval
pyndeval, a Python interface to ndeval, the diversity evaluation tool used by the TREC Web Track.
Installation:
pip install ir-measures[pyndeval]
Supported Measures:
ERR_IA(rel=ANY,judged_only=ANY)@ANY
nERR_IA(rel=ANY,judged_only=ANY)@ANY
alpha_DCG(alpha=ANY,rel=ANY,judged_only=ANY)@ANY
alpha_nDCG(alpha=ANY,rel=ANY,judged_only=ANY)@ANY
NRBP(alpha=ANY,beta=ANY,rel=ANY)
nNRBP(alpha=ANY,beta=ANY,rel=ANY)
AP_IA(rel=ANY,judged_only=ANY)
P_IA(rel=ANY,judged_only=ANY)@ANY
StRecall(rel=ANY)@ANY
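These diversity measures operate over subtopic-level judgments. For instance, alpha-DCG discounts a subtopic's gain by (1 - alpha) each time that subtopic has already been covered earlier in the ranking. An illustrative sketch of unnormalized alpha-DCG (alpha-nDCG additionally divides by the score of an ideal reordering, which the provider computes for you):

```python
import math

def alpha_dcg(subtopic_rels, alpha=0.5, k=10):
    """subtopic_rels: for each ranked doc, the set of subtopics it covers."""
    seen = {}   # subtopic -> times covered so far
    score = 0.0
    for i, subtopics in enumerate(subtopic_rels[:k]):
        # Each repeat coverage of a subtopic is worth (1 - alpha) times less.
        gain = sum((1 - alpha) ** seen.get(t, 0) for t in subtopics)
        score += gain / math.log2(i + 2)
        for t in subtopics:
            seen[t] = seen.get(t, 0) + 1
    return score
```

With alpha=0.5, a second document covering an already-seen subtopic plus a new one earns 0.5 + 1 = 1.5 gain before the rank discount, rewarding novelty over redundancy.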
pytrec_eval
pytrec_eval, a Python interface to trec_eval.
https://github.com/cvangysel/pytrec_eval
Citation
Gysel and Rijke. Pytrec_eval: An Extremely Fast Python Interface to trec_eval. SIGIR 2018. [link]
@inproceedings{DBLP:conf/sigir/GyselR18,
  author    = {Christophe Van Gysel and Maarten de Rijke},
  title     = {Pytrec{\_}eval: An Extremely Fast Python Interface to trec{\_}eval},
  booktitle = {The 41st International {ACM} {SIGIR} Conference on Research {\&} Development in Information Retrieval, {SIGIR} 2018, Ann Arbor, MI, USA, July 08-12, 2018},
  pages     = {873--876},
  publisher = {{ACM}},
  year      = {2018},
  url       = {https://doi.org/10.1145/3209978.3210065},
  doi       = {10.1145/3209978.3210065}
}
Installation:
pip install --upgrade pytrec-eval-terrier
Supported Measures:
P(rel=ANY,judged_only=ANY)@ANY
RR(rel=ANY,judged_only=ANY)@NOT_PROVIDED
Rprec(rel=ANY,judged_only=ANY)
AP(rel=ANY,judged_only=ANY)@ANY
nDCG(dcg='log2',gains=ANY,judged_only=ANY)@ANY
R(judged_only=ANY)@ANY
Bpref(rel=ANY)
NumRet(rel=ANY)
NumQ
NumRel(rel=1)
SetAP(rel=ANY,judged_only=ANY)
SetF(rel=ANY,beta=ANY,judged_only=ANY)
SetP(rel=ANY,relative=ANY,judged_only=ANY)
SetR(rel=ANY)
Success(rel=ANY,judged_only=ANY)@ANY
IPrec(judged_only=ANY)@ANY
infAP(rel=ANY)
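As an illustration of one of these measures, AP averages the precision observed at the rank of each relevant document. A sketch with binary relevance (pytrec_eval itself additionally supports rel cutoffs, judged_only, and the rest of the parameters listed above):

```python
def average_precision(rel_docids, ranked_docids):
    """Mean of precision@rank taken at each relevant document's rank."""
    hits = 0
    total = 0.0
    for i, doc in enumerate(ranked_docids):
        if doc in rel_docids:
            hits += 1
            total += hits / (i + 1)   # precision at this rank
    return total / len(rel_docids) if rel_docids else 0.0
```

For relevant documents {a, c} and the ranking [a, b, c], the precisions at the relevant ranks are 1/1 and 2/3, so AP = 5/6; unretrieved relevant documents contribute 0, which is why the denominator is the total number of relevant documents.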
ranx
ranx, a Python library for ranking evaluation and comparison.
https://amenra.github.io/ranx/
Citation
Bassani. ranx: A Blazing-Fast Python Library for Ranking Evaluation and Comparison. ECIR (2) 2022. [link]
@inproceedings{DBLP:conf/ecir/Bassani22,
  author    = {Elias Bassani},
  title     = {ranx: {A} Blazing-Fast Python Library for Ranking Evaluation and Comparison},
  booktitle = {Advances in Information Retrieval - 44th European Conference on {IR} Research, {ECIR} 2022, Stavanger, Norway, April 10-14, 2022, Proceedings, Part {II}},
  series    = {Lecture Notes in Computer Science},
  volume    = {13186},
  pages     = {259--264},
  publisher = {Springer},
  year      = {2022},
  url       = {https://doi.org/10.1007/978-3-030-99739-7\_30},
  doi       = {10.1007/978-3-030-99739-7\_30}
}
Installation:
pip install ir-measures[ranx]
Supported Measures:
P(rel=ANY,judged_only=False)@ANY
SetP(rel=ANY,judged_only=False)
RR(rel=ANY,judged_only=False)@NOT_PROVIDED
Rprec(rel=ANY,judged_only=False)
AP(rel=ANY,judged_only=False)@ANY
nDCG(dcg=('log2', 'exp-log2'),gains=NOT_PROVIDED,judged_only=False)@ANY
R(judged_only=False)@ANY
SetR(rel=ANY)
NumRet(rel=REQUIRED)
Success(rel=ANY,judged_only=False)@REQUIRED
runtime
Supports measures that are defined at runtime via ir_measures.define() and ir_measures.define_byquery().
Supported Measures: (defined dynamically at runtime)
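A runtime-defined measure is a function that scores a run against qrels. The sketch below shows a hypothetical per-query metric of the general shape one might pass to define_byquery(); the exact argument types the registered function receives are documented in the package, and plain dicts are assumed here purely for illustration:

```python
# Hypothetical per-query metric: fraction of retrieved documents that are
# judged relevant (relevance > 0). The name and the dict-based signature
# are illustrative, not the package's actual API contract.
def judged_positive_rate(qrels, run):
    """qrels: {doc_id: relevance}; run: {doc_id: score} -- one query each."""
    if not run:
        return 0.0
    return sum(qrels.get(doc, 0) > 0 for doc in run) / len(run)
```

Defining a measure this way lets it participate in the same aggregation machinery as the built-in providers' measures.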
trectools
trectools, an open-source Python library for IR practitioners involved in TREC-like campaigns.
https://github.com/joaopalotti/trectools
Citation
Palotti et al. TrecTools: an Open-source Python Library for Information Retrieval Practitioners Involved in TREC-like Campaigns. SIGIR 2019. [link]
@inproceedings{DBLP:conf/sigir/PalottiSZ19,
  author    = {Jo{\~{a}}o R. M. Palotti and Harrisen Scells and Guido Zuccon},
  title     = {TrecTools: an Open-source Python Library for Information Retrieval Practitioners Involved in TREC-like Campaigns},
  booktitle = {Proceedings of the 42nd International {ACM} {SIGIR} Conference on Research and Development in Information Retrieval, {SIGIR} 2019, Paris, France, July 21-25, 2019},
  pages     = {1325--1328},
  publisher = {{ACM}},
  year      = {2019},
  url       = {https://doi.org/10.1145/3331184.3331399},
  doi       = {10.1145/3331184.3331399}
}
Installation:
pip install ir-measures[trectools]
Supported Measures:
P(rel=1,judged_only=False)@ANY
RR(rel=1,judged_only=False)@NOT_PROVIDED
Rprec(rel=1,judged_only=False)
AP(rel=1,judged_only=False)@ANY
nDCG(dcg=ANY,gains=NOT_PROVIDED,judged_only=False)@ANY
Bpref(rel=1)
RBP(p=ANY,rel=ANY)@ANY