| nDCG@20, Test Set B | Hausa | Somali | Swahili | Yoruba | Avg |
|:--------------------|------:|-------:|--------:|-------:|----:|
| BM25 Human QT | 0.2121 | 0.1725 | 0.1727 | 0.3459 | 0.2258 |
| BM25 Machine DT | 0.2124 | 0.2186 | 0.2582 | 0.3700 | 0.2648 |
| mDPR (tied encoders), pre-FT w/ MS MARCO | 0.0397 | 0.0635 | 0.1227 | 0.1458 | 0.0929 |
| AfriBERTa-DPR, pre-FT w/ MS MARCO, FT w/ Latin Mr. TyDi | 0.2028 | 0.1682 | 0.2166 | 0.1157 | 0.1758 |
| RRF Fusion of BM25 Machine DT and AfriBERTa-DPR | 0.2935 | 0.2878 | 0.3187 | 0.3435 | 0.3109 |

The commands to generate and evaluate each run follow, grouped by condition; within each condition the commands cover Hausa, Somali, Swahili, and Yoruba, in that order.

BM25 Human QT:

Command to generate run (Hausa):
   
python -m pyserini.search.lucene \
  --language ha \
  --topics ciral-v1.0-ha-test-b-native \
  --index ciral-v1.0-ha \
  --output run.ciral.bm25-qt.ha.test-b.txt \
  --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-ha-test-b \
  run.ciral.bm25-qt.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.search.lucene \
  --language so \
  --topics ciral-v1.0-so-test-b-native \
  --index ciral-v1.0-so \
  --output run.ciral.bm25-qt.so.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-so-test-b \
  run.ciral.bm25-qt.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.search.lucene \
  --language sw \
  --topics ciral-v1.0-sw-test-b-native \
  --index ciral-v1.0-sw \
  --output run.ciral.bm25-qt.sw.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-sw-test-b \
  run.ciral.bm25-qt.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.search.lucene \
  --language yo \
  --topics ciral-v1.0-yo-test-b-native \
  --index ciral-v1.0-yo \
  --output run.ciral.bm25-qt.yo.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-yo-test-b \
  run.ciral.bm25-qt.yo.test-b.txt
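
For anyone who prefers to issue these BM25 queries from Python rather than through the CLI, here is a minimal sketch of an equivalent loop. It assumes that the prebuilt index name and the native topic set resolve through the Python API the same way they do for the CLI above, and that each topic exposes its text under the 'title' key; treat it as illustrative rather than as the reference implementation.

```python
from pyserini.search.lucene import LuceneSearcher
from pyserini.search import get_topics

# Prebuilt monolingual Hausa index, as in the CLI run above.
searcher = LuceneSearcher.from_prebuilt_index('ciral-v1.0-ha')
searcher.set_language('ha')  # mirrors --language ha

# Native (human-translated) Hausa queries; assumes this topic name is registered.
topics = get_topics('ciral-v1.0-ha-test-b-native')

with open('run.ciral.bm25-qt.ha.test-b.txt', 'w') as out:
    for qid, topic in topics.items():
        hits = searcher.search(topic['title'], k=1000)
        for rank, hit in enumerate(hits, start=1):
            out.write(f'{qid} Q0 {hit.docid} {rank} {hit.score:.6f} bm25-qt\n')
```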

BM25 Machine DT:

Command to generate run (Hausa):
   
python -m pyserini.search.lucene \
  --topics ciral-v1.0-ha-test-b \
  --index ciral-v1.0-ha-en \
  --output run.ciral.bm25-dt.ha.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-ha-test-b \
  run.ciral.bm25-dt.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.search.lucene \
  --topics ciral-v1.0-so-test-b \
  --index ciral-v1.0-so-en \
  --output run.ciral.bm25-dt.so.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-so-test-b \
  run.ciral.bm25-dt.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.search.lucene \
  --topics ciral-v1.0-sw-test-b \
  --index ciral-v1.0-sw-en \
  --output run.ciral.bm25-dt.sw.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-sw-test-b \
  run.ciral.bm25-dt.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.search.lucene \
  --topics ciral-v1.0-yo-test-b \
  --index ciral-v1.0-yo-en \
  --output run.ciral.bm25-dt.yo.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-yo-test-b \
  run.ciral.bm25-dt.yo.test-b.txt

mDPR (tied encoders), pre-FT w/ MS MARCO:

Command to generate run (Hausa):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/mdpr-tied-pft-msmarco \
  --topics ciral-v1.0-ha-test-b \
  --index ciral-v1.0-ha-mdpr-tied-pft-msmarco \
  --output run.ciral.mdpr-tied-pft-msmarco.ha.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-ha-test-b \
  run.ciral.mdpr-tied-pft-msmarco.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/mdpr-tied-pft-msmarco \
  --topics ciral-v1.0-so-test-b \
  --index ciral-v1.0-so-mdpr-tied-pft-msmarco \
  --output run.ciral.mdpr-tied-pft-msmarco.so.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-so-test-b \
  run.ciral.mdpr-tied-pft-msmarco.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/mdpr-tied-pft-msmarco \
  --topics ciral-v1.0-sw-test-b \
  --index ciral-v1.0-sw-mdpr-tied-pft-msmarco \
  --output run.ciral.mdpr-tied-pft-msmarco.sw.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-sw-test-b \
  run.ciral.mdpr-tied-pft-msmarco.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/mdpr-tied-pft-msmarco \
  --topics ciral-v1.0-yo-test-b \
  --index ciral-v1.0-yo-mdpr-tied-pft-msmarco \
  --output run.ciral.mdpr-tied-pft-msmarco.yo.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-yo-test-b \
  run.ciral.mdpr-tied-pft-msmarco.yo.test-b.txt
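
The dense runs can be scripted the same way. The sketch below is a rough Python analogue of the pyserini.search.faiss invocation for Hausa; it assumes that AutoQueryEncoder with its defaults matches what --encoder-class auto selects for this model, and that the topic name resolves as it does on the CLI, so verify against the commands above if the two disagree.

```python
from pyserini.search.faiss import FaissSearcher, AutoQueryEncoder
from pyserini.search import get_topics

# Query encoder for castorini/mdpr-tied-pft-msmarco (default pooling assumed).
encoder = AutoQueryEncoder('castorini/mdpr-tied-pft-msmarco')

# Prebuilt FAISS index of mDPR passage embeddings for Hausa.
searcher = FaissSearcher.from_prebuilt_index(
    'ciral-v1.0-ha-mdpr-tied-pft-msmarco', encoder)

topics = get_topics('ciral-v1.0-ha-test-b')  # assumes this topic name is registered

with open('run.ciral.mdpr-tied-pft-msmarco.ha.test-b.txt', 'w') as out:
    for qid, topic in topics.items():
        for rank, hit in enumerate(searcher.search(topic['title'], k=1000), start=1):
            out.write(f'{qid} Q0 {hit.docid} {rank} {hit.score:.6f} mdpr\n')
```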

AfriBERTa-DPR, pre-FT w/ MS MARCO, FT w/ Latin Mr. TyDi:

Command to generate run (Hausa):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/afriberta-dpr-pft-msmarco-ft-latin-mrtydi \
  --topics ciral-v1.0-ha-test-b \
  --index ciral-v1.0-ha-afriberta-dpr-ptf-msmarco-ft-latin-mrtydi \
  --output run.ciral.afriberta-pft-msmarco-ft-mrtydi.ha.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-ha-test-b \
  run.ciral.afriberta-pft-msmarco-ft-mrtydi.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/afriberta-dpr-pft-msmarco-ft-latin-mrtydi \
  --topics ciral-v1.0-so-test-b \
  --index ciral-v1.0-so-afriberta-dpr-ptf-msmarco-ft-latin-mrtydi \
  --output run.ciral.afriberta-pft-msmarco-ft-mrtydi.so.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-so-test-b \
  run.ciral.afriberta-pft-msmarco-ft-mrtydi.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/afriberta-dpr-pft-msmarco-ft-latin-mrtydi \
  --topics ciral-v1.0-sw-test-b \
  --index ciral-v1.0-sw-afriberta-dpr-ptf-msmarco-ft-latin-mrtydi \
  --output run.ciral.afriberta-pft-msmarco-ft-mrtydi.sw.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-sw-test-b \
  run.ciral.afriberta-pft-msmarco-ft-mrtydi.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/afriberta-dpr-pft-msmarco-ft-latin-mrtydi \
  --topics ciral-v1.0-yo-test-b \
  --index ciral-v1.0-yo-afriberta-dpr-ptf-msmarco-ft-latin-mrtydi \
  --output run.ciral.afriberta-pft-msmarco-ft-mrtydi.yo.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-yo-test-b \
  run.ciral.afriberta-pft-msmarco-ft-mrtydi.yo.test-b.txt
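
The trec_eval wrapper above is the canonical way to score these runs, but the same nDCG@20 figure can also be computed in Python with pytrec_eval. A sketch follows, assuming the qrels are registered under the same symbolic name the wrapper accepts; note that trec_eval's -c flag averages over all judged queries, so the mean below can differ slightly if some query returns no hits.

```python
import pytrec_eval
from pyserini.search import get_qrels

# Qrels for Hausa Test Set B; assumes this name is registered in Pyserini.
qrels = {str(q): {d: int(rel) for d, rel in docs.items()}
         for q, docs in get_qrels('ciral-v1.0-ha-test-b').items()}

# Load a TREC run file into {qid: {docid: score}}.
run = {}
with open('run.ciral.afriberta-pft-msmarco-ft-mrtydi.ha.test-b.txt') as f:
    for line in f:
        qid, _, docid, _, score, _ = line.split()
        run.setdefault(qid, {})[docid] = float(score)

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {'ndcg_cut'})
per_query = evaluator.evaluate(run)
print(sum(q['ndcg_cut_20'] for q in per_query.values()) / len(per_query))
```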

RRF Fusion of BM25 Machine DT and AfriBERTa-DPR:

Command to generate run (Hausa):
   
python -m pyserini.fusion \
  --runs  run.ciral.bm25-dt.ha.test-b.txt run.ciral.afriberta-pft-msmarco-ft-mrtydi.ha.test-b.txt \
  --runtag  rrf-afridpr-bmdt --method rrf --rrf.k 60 \
  --output run.ciral.bm25-dt-afriberta-dpr-fusion.ha.test-b.txt
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-ha-test-b \
  run.ciral.bm25-dt-afriberta-dpr-fusion.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.fusion \
  --runs  run.ciral.bm25-dt.so.test-b.txt run.ciral.afriberta-pft-msmarco-ft-mrtydi.so.test-b.txt \
  --runtag  rrf-afridpr-bmdt --method rrf --rrf.k 60 \
  --output run.ciral.bm25-dt-afriberta-dpr-fusion.so.test-b.txt
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-so-test-b \
  run.ciral.bm25-dt-afriberta-dpr-fusion.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.fusion \
  --runs  run.ciral.bm25-dt.sw.test-b.txt run.ciral.afriberta-pft-msmarco-ft-mrtydi.sw.test-b.txt \
  --runtag  rrf-afridpr-bmdt --method rrf --rrf.k 60 \
  --output run.ciral.bm25-dt-afriberta-dpr-fusion.sw.test-b.txt
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-sw-test-b \
  run.ciral.bm25-dt-afriberta-dpr-fusion.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.fusion \
  --runs  run.ciral.bm25-dt.yo.test-b.txt run.ciral.afriberta-pft-msmarco-ft-mrtydi.yo.test-b.txt \
  --runtag  rrf-afridpr-bmdt --method rrf --rrf.k 60 \
  --output run.ciral.bm25-dt-afriberta-dpr-fusion.yo.test-b.txt
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m ndcg_cut.20 ciral-v1.0-yo-test-b \
  run.ciral.bm25-dt-afriberta-dpr-fusion.yo.test-b.txt
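
For context, reciprocal rank fusion ignores the runs' scores and combines them purely by rank: each document receives the sum over runs of 1 / (k + rank), with k = 60 here (--rrf.k 60). The sketch below shows that computation for the Hausa pair of runs; it is a conceptual illustration, not a drop-in replacement for pyserini.fusion, whose output formatting and tie-breaking may differ.

```python
from collections import defaultdict

def load_run(path):
    """Read a TREC run file into {qid: [docid, ...]}, assuming hits appear in rank order."""
    run = defaultdict(list)
    with open(path) as f:
        for line in f:
            qid, _, docid, *_ = line.split()
            run[qid].append(docid)
    return run

def rrf(runs, k=60):
    """Reciprocal rank fusion: score(q, d) = sum over runs of 1 / (k + rank of d)."""
    fused = defaultdict(lambda: defaultdict(float))
    for run in runs:
        for qid, docids in run.items():
            for rank, docid in enumerate(docids, start=1):
                fused[qid][docid] += 1.0 / (k + rank)
    # Sort each query's documents by fused score, descending.
    return {qid: sorted(scores.items(), key=lambda kv: -kv[1])
            for qid, scores in fused.items()}

fused = rrf([load_run('run.ciral.bm25-dt.ha.test-b.txt'),
             load_run('run.ciral.afriberta-pft-msmarco-ft-mrtydi.ha.test-b.txt')])
```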

| Recall@100, Test Set B | Hausa | Somali | Swahili | Yoruba | Avg |
|:-----------------------|------:|-------:|--------:|-------:|----:|
| BM25 Human QT | 0.3800 | 0.3479 | 0.4166 | 0.6434 | 0.4470 |
| BM25 Machine DT | 0.4394 | 0.4637 | 0.4918 | 0.7348 | 0.5324 |
| mDPR (tied encoders), pre-FT w/ MS MARCO | 0.1027 | 0.1345 | 0.3019 | 0.3249 | 0.2160 |
| AfriBERTa-DPR, pre-FT w/ MS MARCO, FT w/ Latin Mr. TyDi | 0.3900 | 0.3558 | 0.4608 | 0.2907 | 0.3743 |
| RRF Fusion of BM25 Machine DT and AfriBERTa-DPR | 0.6007 | 0.5618 | 0.7007 | 0.7525 | 0.6539 |

The retrieval commands are the same as for nDCG@20 above; only the trec_eval invocation changes, using -m recall.100. As before, each condition's commands cover Hausa, Somali, Swahili, and Yoruba, in that order.

BM25 Human QT:

Command to generate run (Hausa):
   
python -m pyserini.search.lucene \
  --language ha \
  --topics ciral-v1.0-ha-test-b-native \
  --index ciral-v1.0-ha \
  --output run.ciral.bm25-qt.ha.test-b.txt \
  --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-ha-test-b \
  run.ciral.bm25-qt.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.search.lucene \
  --language so \
  --topics ciral-v1.0-so-test-b-native \
  --index ciral-v1.0-so \
  --output run.ciral.bm25-qt.so.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-so-test-b \
  run.ciral.bm25-qt.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.search.lucene \
  --language sw \
  --topics ciral-v1.0-sw-test-b-native \
  --index ciral-v1.0-sw \
  --output run.ciral.bm25-qt.sw.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-sw-test-b \
  run.ciral.bm25-qt.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.search.lucene \
  --language yo \
  --topics ciral-v1.0-yo-test-b-native \
  --index ciral-v1.0-yo \
  --output run.ciral.bm25-qt.yo.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-yo-test-b \
  run.ciral.bm25-qt.yo.test-b.txt

BM25 Machine DT:

Command to generate run (Hausa):
   
python -m pyserini.search.lucene \
  --topics ciral-v1.0-ha-test-b \
  --index ciral-v1.0-ha-en \
  --output run.ciral.bm25-dt.ha.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-ha-test-b \
  run.ciral.bm25-dt.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.search.lucene \
  --topics ciral-v1.0-so-test-b \
  --index ciral-v1.0-so-en \
  --output run.ciral.bm25-dt.so.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-so-test-b \
  run.ciral.bm25-dt.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.search.lucene \
  --topics ciral-v1.0-sw-test-b \
  --index ciral-v1.0-sw-en \
  --output run.ciral.bm25-dt.sw.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-sw-test-b \
  run.ciral.bm25-dt.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.search.lucene \
  --topics ciral-v1.0-yo-test-b \
  --index ciral-v1.0-yo-en \
  --output run.ciral.bm25-dt.yo.test-b.txt \
  --batch 128 --threads 16 --bm25 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-yo-test-b \
  run.ciral.bm25-dt.yo.test-b.txt

mDPR (tied encoders), pre-FT w/ MS MARCO:

Command to generate run (Hausa):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/mdpr-tied-pft-msmarco \
  --topics ciral-v1.0-ha-test-b \
  --index ciral-v1.0-ha-mdpr-tied-pft-msmarco \
  --output run.ciral.mdpr-tied-pft-msmarco.ha.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-ha-test-b \
  run.ciral.mdpr-tied-pft-msmarco.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/mdpr-tied-pft-msmarco \
  --topics ciral-v1.0-so-test-b \
  --index ciral-v1.0-so-mdpr-tied-pft-msmarco \
  --output run.ciral.mdpr-tied-pft-msmarco.so.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-so-test-b \
  run.ciral.mdpr-tied-pft-msmarco.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/mdpr-tied-pft-msmarco \
  --topics ciral-v1.0-sw-test-b \
  --index ciral-v1.0-sw-mdpr-tied-pft-msmarco \
  --output run.ciral.mdpr-tied-pft-msmarco.sw.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-sw-test-b \
  run.ciral.mdpr-tied-pft-msmarco.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/mdpr-tied-pft-msmarco \
  --topics ciral-v1.0-yo-test-b \
  --index ciral-v1.0-yo-mdpr-tied-pft-msmarco \
  --output run.ciral.mdpr-tied-pft-msmarco.yo.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-yo-test-b \
  run.ciral.mdpr-tied-pft-msmarco.yo.test-b.txt

AfriBERTa-DPR, pre-FT w/ MS MARCO, FT w/ Latin Mr. TyDi:

Command to generate run (Hausa):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/afriberta-dpr-pft-msmarco-ft-latin-mrtydi \
  --topics ciral-v1.0-ha-test-b \
  --index ciral-v1.0-ha-afriberta-dpr-ptf-msmarco-ft-latin-mrtydi \
  --output run.ciral.afriberta-pft-msmarco-ft-mrtydi.ha.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-ha-test-b \
  run.ciral.afriberta-pft-msmarco-ft-mrtydi.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/afriberta-dpr-pft-msmarco-ft-latin-mrtydi \
  --topics ciral-v1.0-so-test-b \
  --index ciral-v1.0-so-afriberta-dpr-ptf-msmarco-ft-latin-mrtydi \
  --output run.ciral.afriberta-pft-msmarco-ft-mrtydi.so.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-so-test-b \
  run.ciral.afriberta-pft-msmarco-ft-mrtydi.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/afriberta-dpr-pft-msmarco-ft-latin-mrtydi \
  --topics ciral-v1.0-sw-test-b \
  --index ciral-v1.0-sw-afriberta-dpr-ptf-msmarco-ft-latin-mrtydi \
  --output run.ciral.afriberta-pft-msmarco-ft-mrtydi.sw.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-sw-test-b \
  run.ciral.afriberta-pft-msmarco-ft-mrtydi.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.search.faiss \
  --encoder-class auto \
  --encoder castorini/afriberta-dpr-pft-msmarco-ft-latin-mrtydi \
  --topics ciral-v1.0-yo-test-b \
  --index ciral-v1.0-yo-afriberta-dpr-ptf-msmarco-ft-latin-mrtydi \
  --output run.ciral.afriberta-pft-msmarco-ft-mrtydi.yo.test-b.txt \
  --batch 128 --threads 16 --hits 1000
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-yo-test-b \
  run.ciral.afriberta-pft-msmarco-ft-mrtydi.yo.test-b.txt

RRF Fusion of BM25 Machine DT and AfriBERTa-DPR:

Command to generate run (Hausa):
   
python -m pyserini.fusion \
  --runs  run.ciral.bm25-dt.ha.test-b.txt run.ciral.afriberta-pft-msmarco-ft-mrtydi.ha.test-b.txt \
  --runtag  rrf-afridpr-bmdt --method rrf --rrf.k 60 \
  --output run.ciral.bm25-dt-afriberta-dpr-fusion.ha.test-b.txt
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-ha-test-b \
  run.ciral.bm25-dt-afriberta-dpr-fusion.ha.test-b.txt

Command to generate run (Somali):
   
python -m pyserini.fusion \
  --runs  run.ciral.bm25-dt.so.test-b.txt run.ciral.afriberta-pft-msmarco-ft-mrtydi.so.test-b.txt \
  --runtag  rrf-afridpr-bmdt --method rrf --rrf.k 60 \
  --output run.ciral.bm25-dt-afriberta-dpr-fusion.so.test-b.txt
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-so-test-b \
  run.ciral.bm25-dt-afriberta-dpr-fusion.so.test-b.txt

Command to generate run (Swahili):
   
python -m pyserini.fusion \
  --runs  run.ciral.bm25-dt.sw.test-b.txt run.ciral.afriberta-pft-msmarco-ft-mrtydi.sw.test-b.txt \
  --runtag  rrf-afridpr-bmdt --method rrf --rrf.k 60 \
  --output run.ciral.bm25-dt-afriberta-dpr-fusion.sw.test-b.txt
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-sw-test-b \
  run.ciral.bm25-dt-afriberta-dpr-fusion.sw.test-b.txt

Command to generate run (Yoruba):
   
python -m pyserini.fusion \
  --runs  run.ciral.bm25-dt.yo.test-b.txt run.ciral.afriberta-pft-msmarco-ft-mrtydi.yo.test-b.txt \
  --runtag  rrf-afridpr-bmdt --method rrf --rrf.k 60 \
  --output run.ciral.bm25-dt-afriberta-dpr-fusion.yo.test-b.txt
 
Evaluation commands:
   
python -m pyserini.eval.trec_eval \
  -c -m recall.100 ciral-v1.0-yo-test-b \
  run.ciral.bm25-dt-afriberta-dpr-fusion.yo.test-b.txt
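
The Avg column in both tables is the unweighted mean of the four per-language scores (for example, the fused recall numbers above average to 0.6539). If you want to recompute it after regenerating the runs, a small wrapper along these lines works; it assumes the aggregate line in the trec_eval output keeps the usual metric / query / value layout.

```python
import subprocess

def metric_value(lang, metric='recall.100',
                 run_template='run.ciral.bm25-dt-afriberta-dpr-fusion.{}.test-b.txt'):
    """Invoke Pyserini's trec_eval wrapper and parse the aggregate ('all') value."""
    out = subprocess.run(
        ['python', '-m', 'pyserini.eval.trec_eval', '-c', '-m', metric,
         f'ciral-v1.0-{lang}-test-b', run_template.format(lang)],
        capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) == 3 and fields[1] == 'all':  # e.g. "recall_100  all  0.6007"
            return float(fields[2])
    raise RuntimeError(f'could not parse trec_eval output for {lang}')

scores = [metric_value(lang) for lang in ['ha', 'so', 'sw', 'yo']]
print('Avg:', round(sum(scores) / len(scores), 4))
```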
Programmatic Execution
All experimental runs shown in the tables above can be reproduced programmatically using the instructions below.
To list all the experimental conditions:
python -m pyserini.2cr.ciral --list-conditions
Run all languages for a specific condition and show commands:
python -m pyserini.2cr.ciral --condition bm25-qt --display-commands
Run a particular language for a specific condition and show commands:
python -m pyserini.2cr.ciral --condition bm25-qt --language somali --display-commands
Run all languages for all conditions and show commands:
python -m pyserini.2cr.ciral --all --display-commands
With the above commands, run files will be placed in the current directory. Use the option --directory runs to place the runs in a sub-directory.
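
If it helps to script these invocations, a loop such as the one below runs one condition per language and collects the run files under runs/. Only the condition name bm25-qt and the language name somali appear verbatim above; the other language spellings, and whether --directory combines with --condition and --language, are assumptions to verify against --list-conditions and the module's help output.

```python
import subprocess

# Language names as the 2CR driver expects them; 'somali' appears above,
# the other spellings are assumed to follow the same pattern.
languages = ['hausa', 'somali', 'swahili', 'yoruba']

for lang in languages:
    subprocess.run(
        ['python', '-m', 'pyserini.2cr.ciral',
         '--condition', 'bm25-qt', '--language', lang,
         '--display-commands', '--directory', 'runs'],
        check=True)
```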
For a specific condition, just show the commands without running them:
python -m pyserini.2cr.ciral --condition bm25-qt --display-commands --dry-run
This will generate exactly the commands for a specific condition above (corresponding to a row in the table).
For a specific condition and language, just show the commands without running them:
python -m pyserini.2cr.ciral --condition bm25-qt --language somali --display-commands --dry-run
For all conditions, just show the commands without running them, and skip evaluation:
python -m pyserini.2cr.ciral --all --display-commands --dry-run --skip-eval
Finally, to generate this page:
python -m pyserini.2cr.ciral --generate-report --output docs/2cr/ciral.html --display-split test-b
The output file ciral.html should be identical to this page.