RAG Benchmarks
The benchmark harness measures dense, sparse, GraphRAG, hybrid, and rerank-overlay adapters for both quality and latency. It is built for repeatable regression runs against Gold-derived QA/qrels, produces consistent run artifacts, and emits the latency traces needed for SLO tracking.
Scope and prerequisites
Python environment activated with PYTHONPATH=$PWD/src.
Gold buckets (fixed_size or semantic) containing generated QA/qrels under bench_out/<dataset>/<date>/qa/.
Retriever assets: dense Chroma index, BM25 index, and optionally a GraphRAG ingest tag for community expansion.
Reranker providers (optional): Cohere API key (CO_API_KEY/COHERE_API_KEY/COHERE_KEY) or local HuggingFace/FlashRank weights.
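As a sanity check before a sweep, a small script like the sketch below can confirm that the QA assets and reranker key variables listed above are in place. The dataset/date values are examples, and the layout follows the bench_out structure described on this page.

```python
# Quick readiness check before a benchmark sweep (sketch; dataset/date are example values).
import os
from pathlib import Path

dataset, date = "fixed_size", "2025-09-14"
qa_dir = Path("bench_out") / dataset / date / "qa"

print("PYTHONPATH:", os.environ.get("PYTHONPATH", "<not set -- activate with PYTHONPATH=$PWD/src>"))
print("qa.parquet present:", (qa_dir / "qa.parquet").exists())
print("qrels.parquet present:", (qa_dir / "qrels.parquet").exists())

# Only needed for Cohere rerank overlays; HuggingFace/FlashRank rerankers use local weights.
cohere_key = next((k for k in ("CO_API_KEY", "COHERE_API_KEY", "COHERE_KEY") if os.environ.get(k)), None)
print("Cohere key variable:", cohere_key or "<not set>")
```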
Artifacts and logging
Inputs: QA/qrels parquet (or JSON/JSONL) plus the indexes noted above.
Outputs:
QA artifacts: bench_out/<dataset>/<date>/qa/{qa.parquet,qrels.parquet,qa.jsonl,qrels.json}.
Run artifacts: bench_out/<dataset>/<date>/evals/{<adapter>_run.json,latency_*.jsonl,evals_rerank/*} and leaderboard.csv.
Console metrics: map, ndcg_cut_10, recip_rank, P_5, P_10, recall_100 (chunk-level) and optional doc-level metrics.
Latency logs:
BENCH_LATENCY_LOG — end-to-end for adapters.
HYB_LATENCY_LOG — hybrid/graph stage timings (seed, graph, rerank, fuse).
RERANK_LATENCY_LOG — reranker-only JSONL emitted by run_rerank_overlay.
Overlay helpers: --overlay-log for end-to-end overlay timing and --stage-log for hybrid stage timing during rerank overlays.
If you store benchmark outputs outside bench_out/, pass --bench-root <path> to rag.retrieval.triple_retriever so tuned weights and latency logs resolve correctly.
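For SLO tracking from these JSONL logs, a small helper like the sketch below can compute a per-stage p95. The record keys `stage` and `elapsed_ms` are assumptions for illustration, not the harness's guaranteed schema, so map them to whatever fields your latency records actually carry.

```python
# Compute p95 latency per stage from a JSONL latency log (sketch).
# Assumption: each line is a JSON object with a stage name and an elapsed-time field;
# "stage" and "elapsed_ms" are illustrative key names, adjust to the real schema.
import json
from collections import defaultdict
from pathlib import Path
from statistics import quantiles

def p95_by_stage(log_path):
    samples = defaultdict(list)
    for line in Path(log_path).read_text().splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        samples[rec.get("stage", "e2e")].append(float(rec["elapsed_ms"]))
    # quantiles(..., n=20)[18] is the 95th percentile cut point.
    return {stage: (quantiles(vals, n=20)[18] if len(vals) > 1 else vals[0])
            for stage, vals in samples.items()}

print(p95_by_stage("bench_out/fixed_size/2025-09-14/evals/latency_stages.jsonl"))
```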
Workflow overview
Generate QA/qrels from Gold once per dataset/date.
Establish baselines (BM25, Dense).
Tune Hybrid fusion and run the tuned hybrid variant.
Add graph expansion when applicable.
Overlay rerankers (HuggingFace, Cohere, FlashRank) with and without graph expansion.
Evaluate all runs, persist latency logs, and refresh the leaderboard.
Optionally build the IR + latency summary table for reporting.
Reference run script (PowerShell)
The script below mirrors the full sweep, including rerank overlays across providers.
```powershell
# 0) Activate env & set paths
. .\.venv\Scripts\Activate.ps1
$env:PYTHONPATH = "$PWD\src"
python -m rag.bench --help

# Dataset / date / top-k
$env:DATASET = "fixed_size"
$TOPK = 100
$DATE = $(python -c "import os; from rag.utils.paths import latest_gold_chunk_date; print(latest_gold_chunk_date(os.environ['DATASET']))")

# Paths (QA/qrels live under the qa/ subfolder, as noted above)
$QA = "bench_out\$env:DATASET\$DATE\qa\qa.parquet"
$QRELS = "bench_out\$env:DATASET\$DATE\qa\qrels.parquet"
$EVALS = "bench_out\$env:DATASET\$DATE\evals"
mkdir $EVALS -Force | Out-Null

# Clean old latency logs for a fresh view (optional)
Remove-Item "$EVALS\latency_e2e.jsonl" -ErrorAction SilentlyContinue
Remove-Item "$EVALS\latency_stages.jsonl" -ErrorAction SilentlyContinue

# 1) Generate QA (once per dataset/date)
python -m rag.bench generate `
  --dataset $env:DATASET `
  --date $DATE `
  --n 100 `
  --use-text `
  --provider auto

# 2) Baselines — BM25 & Dense
python -m rag.bench run `
  --qa-file $QA `
  --adapter bm25 `
  --dataset $env:DATASET --date $DATE `
  --top-k $TOPK --bm25-use-text

python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "bm25_run.json") `
  --run-id "01_BM25"

python -m rag.bench run `
  --qa-file $QA `
  --adapter dense `
  --dataset $env:DATASET --date $DATE `
  --top-k $TOPK

python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "dense_run.json") `
  --run-id "02_Dense"

# 3) Tune Hybrid fusion
python -m rag.bench tune-hybrid `
  --qa-file $QA `
  --dataset $env:DATASET --date $DATE `
  --top-k $TOPK `
  --metric ndcg_cut_10
$best = Get-Content -Raw "$EVALS\hybrid_tuned.json" | ConvertFrom-Json

# 4) Hybrid (grid best) & Hybrid + Graph
python -m rag.bench run `
  --qa-file $QA `
  --adapter hybrid `
  --dataset $env:DATASET --date $DATE `
  --top-k $TOPK `
  --hyb-rrf-k $best.rrf_k `
  --hyb-w-bm25 $best.w_bm25 `
  --hyb-w-dense $best.w_dense `
  --hyb-seed-k 120

python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "hybrid_run.json") `
  --run-id "03_Hybrid_gridbest"

$env:INGEST_TAG = "comm_fixed_C1_g1_2"
python -m rag.bench run `
  --qa-file $QA `
  --adapter hybrid `
  --dataset $env:DATASET --date $DATE `
  --graph-expand `
  --ingest-tag $env:INGEST_TAG --level C1 `
  --hyb-expand-ratio 2.0 --hyb-expand-limit 800 `
  --rerank-mode dense --hyb-w-graph 1.0 `
  --hyb-seed-k 120 `
  --top-k $TOPK `
  --hyb-rrf-k $best.rrf_k `
  --hyb-w-bm25 $best.w_bm25 `
  --hyb-w-dense $best.w_dense `
  --out (Join-Path $EVALS "hybrid_graph_run.json")

python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "hybrid_graph_run.json") `
  --run-id "05_Hybrid_graph"

# 5) Rerank overlay WITHOUT graph (HF)
python -m rag.bench.run_rerank_overlay `
  $env:DATASET `
  $QA `
  --date $DATE `
  --top-k $TOPK `
  --rerank-spec "hf:cross-encoder/ms-marco-MiniLM-L-12-v2" `
  --prefer summary `
  --alpha 0.7 `
  --sort-by final `
  --no-graph-expand `
  --out (Join-Path $EVALS "hybrid_rerank_hf_msmarco_no_graph_run.json") `
  --overlay-log (Join-Path $EVALS "latency_rerank_overlay_hf_msmarco_no_graph.jsonl")

$RERANK_STAGE_NO_GRAPH = Join-Path $EVALS "evals_rerank\latency_rerank_no_graph.jsonl"
python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "hybrid_rerank_hf_msmarco_no_graph_run.json") `
  --run-id "06_Hybrid_rerank(hf-msmarco)_no_graph" `
  --e2e-log (Join-Path $EVALS "latency_rerank_overlay_hf_msmarco_no_graph.jsonl") `
  --stage-log (Join-Path $EVALS "latency_stages_rerank_no_graph.jsonl") `
  --rerank-log $RERANK_STAGE_NO_GRAPH

# 6) Rerank overlay WITH graph (HF)
python -m rag.bench.run_rerank_overlay `
  $env:DATASET `
  $QA `
  --date $DATE `
  --top-k $TOPK `
  --rerank-spec "hf:cross-encoder/ms-marco-MiniLM-L-12-v2" `
  --prefer summary `
  --alpha 0.7 `
  --sort-by final `
  --graph-expand `
  --ingest-tag "comm_fixed_C1_g1_2" `
  --out (Join-Path $EVALS "hybrid_rerank_hf_msmarco_graph_run.json") `
  --overlay-log (Join-Path $EVALS "latency_rerank_overlay_hf_msmarco_graph.jsonl")

$RERANK_STAGE_GRAPH = Join-Path $EVALS "evals_rerank\latency_rerank_graph.jsonl"
python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "hybrid_rerank_hf_msmarco_graph_run.json") `
  --run-id "07_Hybrid_rerank(hf-msmarco)_graph" `
  --e2e-log (Join-Path $EVALS "latency_rerank_overlay_hf_msmarco_graph.jsonl") `
  --stage-log (Join-Path $EVALS "latency_stages_rerank_graph.jsonl") `
  --rerank-log $RERANK_STAGE_GRAPH

# 6b) Rerank overlay WITHOUT graph (Cohere)
python -m rag.bench.run_rerank_overlay `
  $env:DATASET `
  $QA `
  --date $DATE `
  --top-k $TOPK `
  --rerank-spec "cohere:rerank-v3.5" `
  --prefer summary `
  --alpha 0.7 `
  --sort-by final `
  --no-graph-expand `
  --out (Join-Path $EVALS "hybrid_rerank_cohere_no_graph_run.json") `
  --overlay-log (Join-Path $EVALS "latency_rerank_overlay_cohere_no_graph.jsonl")

$RERANK_STAGE_NO_GRAPH = Join-Path $EVALS "evals_rerank\latency_rerank_no_graph.jsonl"
python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "hybrid_rerank_cohere_no_graph_run.json") `
  --run-id "08_Hybrid_rerank(cohere)_no_graph" `
  --e2e-log (Join-Path $EVALS "latency_rerank_overlay_cohere_no_graph.jsonl") `
  --stage-log (Join-Path $EVALS "latency_stages_rerank_no_graph.jsonl") `
  --rerank-log $RERANK_STAGE_NO_GRAPH

# 6c) Rerank overlay WITH graph (Cohere)
python -m rag.bench.run_rerank_overlay `
  $env:DATASET `
  $QA `
  --date $DATE `
  --top-k $TOPK `
  --rerank-spec "cohere:rerank-v3.5" `
  --prefer summary `
  --alpha 0.7 `
  --sort-by final `
  --graph-expand `
  --ingest-tag "comm_fixed_C1_g1_2" `
  --out (Join-Path $EVALS "hybrid_rerank_cohere_graph_run.json") `
  --overlay-log (Join-Path $EVALS "latency_rerank_overlay_cohere_graph.jsonl")

$RERANK_STAGE_GRAPH = Join-Path $EVALS "evals_rerank\latency_rerank_graph.jsonl"
python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "hybrid_rerank_cohere_graph_run.json") `
  --run-id "09_Hybrid_rerank(cohere)_graph" `
  --e2e-log (Join-Path $EVALS "latency_rerank_overlay_cohere_graph.jsonl") `
  --stage-log (Join-Path $EVALS "latency_stages_rerank_graph.jsonl") `
  --rerank-log $RERANK_STAGE_GRAPH

# 6d) Rerank overlay WITHOUT graph (FlashRank)
python -m rag.bench.run_rerank_overlay `
  $env:DATASET `
  $QA `
  --date $DATE `
  --top-k $TOPK `
  --rerank-spec "flashrank:ms-marco-MiniLM-L-12-v2" `
  --prefer summary `
  --alpha 0.7 `
  --sort-by final `
  --no-graph-expand `
  --out (Join-Path $EVALS "hybrid_rerank_flashrank_no_graph_run.json") `
  --overlay-log (Join-Path $EVALS "latency_rerank_overlay_flashrank_no_graph.jsonl")

$RERANK_STAGE_NO_GRAPH = Join-Path $EVALS "evals_rerank\latency_rerank_no_graph.jsonl"
python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "hybrid_rerank_flashrank_no_graph_run.json") `
  --run-id "10_Hybrid_rerank(flashrank)_no_graph" `
  --e2e-log (Join-Path $EVALS "latency_rerank_overlay_flashrank_no_graph.jsonl") `
  --stage-log (Join-Path $EVALS "latency_stages_rerank_no_graph.jsonl") `
  --rerank-log $RERANK_STAGE_NO_GRAPH

# 6e) Rerank overlay WITH graph (FlashRank)
python -m rag.bench.run_rerank_overlay `
  $env:DATASET `
  $QA `
  --date $DATE `
  --top-k $TOPK `
  --rerank-spec "flashrank:ms-marco-MiniLM-L-12-v2" `
  --prefer summary `
  --alpha 0.7 `
  --sort-by final `
  --graph-expand `
  --ingest-tag "comm_fixed_C1_g1_2" `
  --out (Join-Path $EVALS "hybrid_rerank_flashrank_graph_run.json") `
  --overlay-log (Join-Path $EVALS "latency_rerank_overlay_flashrank_graph.jsonl")

$RERANK_STAGE_GRAPH = Join-Path $EVALS "evals_rerank\latency_rerank_graph.jsonl"
python -m rag.bench evaluate `
  --qrels $QRELS `
  --run (Join-Path $EVALS "hybrid_rerank_flashrank_graph_run.json") `
  --run-id "11_Hybrid_rerank(flashrank)_graph" `
  --e2e-log (Join-Path $EVALS "latency_rerank_overlay_flashrank_graph.jsonl") `
  --stage-log (Join-Path $EVALS "latency_stages_rerank_graph.jsonl") `
  --rerank-log $RERANK_STAGE_GRAPH

# 7) IR + Latency table
python -m rag.bench ir-table `
  --dataset $env:DATASET `
  --date $DATE `
  --root .
```
Parameters (quick reference)
generate: --dataset, --date, --n (QA items), --use-text/--use-summary, --provider auto|openai|gemini|none, --out-dir.
run: --adapter {bm25|dense|graphrag|hybrid}, --dataset, --date, --qa-file, --top-k (default 100), --out plus adapter-specific flags:
BM25: --bm25-use-text/--bm25-use-summary, --bm25-route.
Dense: no extra flags (requires dense index).
GraphRAG: --ingest-tag, --level, --k-comms, --rerank-mode, --dense-date, --dense-persist.
Hybrid: --hyb-rrf-k, --hyb-w-bm25, --hyb-w-dense, --hyb-w-graph, --hyb-seed-k, --hyb-graph-expand/--no-graph-expand, --hyb-expand-ratio, --hyb-expand-limit.
Rerank overlay: python -m rag.bench.run_rerank_overlay mirrors the hybrid stack with --rerank-spec, --alpha, --sort-by, --fetch-top-n, --prefer {summary|text}, and writes reranker stage logs (RERANK_LATENCY_LOG) plus overlay e2e logs.
evaluate: --qrels, --run, --run-id, --leaderboard, --e2e-log, --stage-log, --rerank-log.
Latency env vars: BENCH_LATENCY_LOG (end-to-end), HYB_LATENCY_LOG (stage), BENCH_QID (annotate rows), RERANK_LATENCY_LOG (rerank-only).
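If you prefer to wire the latency env vars into a run programmatically rather than exporting them in the shell, a minimal Python wrapper might look like the sketch below. The CLI flags and env var names are the ones documented above; the concrete paths are examples that follow the bench_out layout.

```python
# Run the hybrid adapter with latency logs enabled (sketch; paths are example values).
import os
import subprocess

evals = "bench_out/fixed_size/2025-09-14/evals"
env = {
    **os.environ,
    "PYTHONPATH": os.path.join(os.getcwd(), "src"),
    "BENCH_LATENCY_LOG": f"{evals}/latency_e2e.jsonl",   # end-to-end adapter timing
    "HYB_LATENCY_LOG": f"{evals}/latency_stages.jsonl",  # hybrid stage timing (seed/graph/rerank/fuse)
}
subprocess.run(
    [
        "python", "-m", "rag.bench", "run",
        "--adapter", "hybrid",
        "--dataset", "fixed_size", "--date", "2025-09-14",
        "--qa-file", "bench_out/fixed_size/2025-09-14/qa/qa.parquet",
        "--top-k", "100",
    ],
    env=env,
    check=True,
)
```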
Validation and quality gates
QA generation: ensure qa.parquet row count equals --n and qrels.parquet provides at least one relevant chunk per question (rel >= 1); a programmatic check is sketched after this list.
Quality thresholds (guidance): map >= 0.45, ndcg_cut_10 >= 0.55, recip_rank >= 0.60 for hybrid; adjust weights if below target.
Latency: keep hybrid end-to-end p95 < 1200 ms and the graph_expand stage p95 < 400 ms; reduce --k-comms or --hyb-expand-ratio if these are exceeded.
Consistency: leaderboard.csv should append rows chronologically; persist BENCH_LATENCY_LOG / HYB_LATENCY_LOG / RERANK_LATENCY_LOG alongside runs for repeatable analysis.
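The QA-generation gate and the quality thresholds above can be checked programmatically, as in the sketch below. It assumes qrels.parquet exposes a query-id column (called `qid` here) and a `rel` column, and reads the metric columns shown in the reporting section from leaderboard.csv; rename fields to match the actual schema.

```python
# Validation sketch for the gates above; the qid/rel column names are assumptions.
import pandas as pd

base = "bench_out/fixed_size/2025-09-14"
expected_n = 100  # must match --n used at QA generation

qa = pd.read_parquet(f"{base}/qa/qa.parquet")
qrels = pd.read_parquet(f"{base}/qa/qrels.parquet")

assert len(qa) == expected_n, f"qa.parquet has {len(qa)} rows, expected {expected_n}"
covered = qrels[qrels["rel"] >= 1]["qid"].nunique()
assert covered == expected_n, f"only {covered}/{expected_n} questions have a relevant chunk"

# Guidance thresholds applied to the latest hybrid row on the leaderboard.
lb = pd.read_csv(f"{base}/evals/leaderboard.csv")
hybrid = lb[lb["run_id"].str.contains("Hybrid", case=False)].iloc[-1]
for metric, floor in {"map": 0.45, "ndcg_cut_10": 0.55, "recip_rank": 0.60}.items():
    status = "OK" if hybrid[metric] >= floor else "below target -> retune fusion weights"
    print(f"{metric}: {hybrid[metric]:.3f} ({status})")
```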
Reporting and notebooks
Render leaderboard slices:
```python
import pandas as pd

leaderboard = pd.read_csv("bench_out/fixed_size/2025-09-14/evals/leaderboard.csv")
cols = [c for c in leaderboard.columns
        if c in {"run_id", "map", "ndcg_cut_10", "recip_rank", "P_5", "P_10", "recall_100", "e2e_p95_ms"}]
display(leaderboard[cols].tail())
```
Doc-level metrics:
```python
import json
from pathlib import Path

from rag.bench.eval_scores import load_qrels, evaluate_run_doclevel, build_chunk_to_doc_map

qrels = load_qrels("bench_out/fixed_size/2025-09-14/qa/qrels.parquet")
run = json.loads(Path("bench_out/fixed_size/2025-09-14/evals/hybrid_run.json").read_text())
c2d = build_chunk_to_doc_map("fixed_size", "2025-09-14")
doc_metrics, _ = evaluate_run_doclevel(qrels, run, c2d)
```
Related notebook: 06_graphrag_gamma_selection — informs community settings used during hybrid benchmarking.
See also: Gold, RAG overview, Retrievers, GraphRAG, App, Rerankers.