
Indexing Benchmarks Design

Goal

Establish benchmarks that measure cold and warm index time, symbol count, and cache hit rate on the SampleSolution. The benchmarks serve two purposes:

  1. CI regression detection — catch metric regressions (symbol counts, cache behavior) on every PR
  2. Experiment comparison — baseline for TransparentCompiler vs BackgroundCompiler and other Tier 3 FCS experiments
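As a sketch of how the CI regression check could work, the benchmark's metrics JSON can be diffed against a committed baseline with per-metric tolerances. The metric names and tolerance values below are assumptions for illustration, not the benchmark's actual output schema:

```python
# Hypothetical metric names and tolerances -- the real benchmark's JSON
# schema may differ. Timings get a relative tolerance; counts must match.
TOLERANCES = {
    "cold_index_ms": 0.15,   # allow 15% drift on wall-clock timings
    "warm_index_ms": 0.15,
    "symbol_count": 0.0,     # symbol counts must match exactly
    "cache_hit_rate": 0.05,
}

def find_regressions(baseline: dict, current: dict) -> list[str]:
    """Return the metrics that drifted beyond their tolerance."""
    failures = []
    for name, tol in TOLERANCES.items():
        base, cur = baseline[name], current[name]
        if abs(cur - base) > abs(base) * tol:
            failures.append(f"{name}: baseline={base}, current={cur}")
    return failures

baseline = {"cold_index_ms": 1200, "warm_index_ms": 90,
            "symbol_count": 4521, "cache_hit_rate": 0.97}
current = {"cold_index_ms": 1250, "warm_index_ms": 95,
           "symbol_count": 4521, "cache_hit_rate": 0.96}
print(find_regressions(baseline, current))  # within tolerance -> []
```

A CI step would exit nonzero when the returned list is non-empty.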

Benchmark Project

benchmarks/TestPrune.Benchmarks/ — a console app referencing TestPrune.Core.

Experiment Flags

The benchmark exposes CLI flags that toggle FCS experiment settings.

Each flag passes through to FCS options, enabling side-by-side flame graph comparison between configurations.
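A minimal sketch of the pass-through idea, using a hypothetical flag name (the benchmark itself is F#, and its real CLI defines the actual flags and the FCS options they map to):

```python
import argparse

# Hypothetical flag -- illustrative only. The real flag set lives in the
# benchmark's own CLI parser.
parser = argparse.ArgumentParser(prog="TestPrune.Benchmarks")
parser.add_argument("--transparent-compiler", action="store_true",
                    help="use TransparentCompiler instead of BackgroundCompiler")

def fcs_options(argv: list[str]) -> dict:
    """Translate CLI flags into the checker settings they toggle."""
    args = parser.parse_args(argv)
    return {"useTransparentCompiler": args.transparent_compiler}

print(fcs_options(["--transparent-compiler"]))
# {'useTransparentCompiler': True}
```

Keeping the mapping one-to-one means a flame graph captured under each flag combination can be labeled with the exact FCS configuration it measured.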

Profiling

Profiling uses dotnet-trace with speedscope-format output, producing flame graphs without manual instrumentation of the benchmark code.

Mise Tasks

*mise run bench* — run under dotnet-trace:

dotnet-trace collect --format speedscope \
  --output benchmarks/results/trace \
  -- dotnet run --project benchmarks/TestPrune.Benchmarks

Produces benchmarks/results/trace.speedscope.json and JSON metrics on stdout.
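For a quick sanity check of a captured trace without opening the speedscope UI, the .speedscope.json file can be summarized directly. This sketch assumes speedscope's sampled profile form; dotnet-trace may emit the evented form instead, so treat it as illustrative:

```python
import json

def top_frames(path: str, n: int = 5) -> list[tuple[str, float]]:
    """Sum sample weights per leaf frame in a speedscope file and
    return the n heaviest frames. Assumes 'sampled' profiles."""
    with open(path) as f:
        doc = json.load(f)
    names = [frame["name"] for frame in doc["shared"]["frames"]]
    totals: dict[str, float] = {}
    for profile in doc["profiles"]:
        if profile.get("type") != "sampled":
            continue
        for stack, weight in zip(profile["samples"], profile["weights"]):
            leaf = names[stack[-1]]  # last frame index is the leaf
            totals[leaf] = totals.get(leaf, 0) + weight
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Usage would be e.g. `top_frames("benchmarks/results/trace.speedscope.json")` after a *mise run bench* run.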

*mise run bench-raw* — run without tracing (CI / quick checks):

dotnet run --project benchmarks/TestPrune.Benchmarks

benchmarks/results/ is gitignored.

CI Integration
