Conversation
run benchmark in_list

🤖: Benchmark completed
run benchmarks

run benchmark tpch tpchds
🤖: Hi @Dandandan, thanks for the request (#19390 (comment)). Please choose one or more of these with ...
🤖: Benchmark completed
run benchmark tpch tpcds |
🤖: Benchmark completed

🤖: Benchmark completed
@Dandandan: I think once this optimization is done, there could be a lot to reuse for broadcast joins... For plain (non-dynamic) filters, I think based on a threshold (<= 3) it either gets planned as a chain of OR expressions or using ...
run benchmark in_list
🤖: Benchmark completed
run benchmark in_list
🤖: Benchmark completed
run benchmark in_list
🤖: Benchmark completed
@Dandandan and @adriangb: I've rebased my PR and tried to clean it up so it is reviewable commit by commit. I could have done 10 stacked PRs, but it seemed simpler and clearer to me to have a single combined diff! What I'm mostly interested in right now is which optimizations you think are useful, and which push things too far into micro-optimization territory (hint hint: I'm looking at commit 6's custom hash table. It brings nice gains, but the maintenance cost is high). Once we've settled that we can go a bit deeper into the actual code itself...
run benchmarks in_list_strategy in_list
🤖: Criterion benchmark completed (GKE)
🤖: Criterion benchmark completed (GKE) | New benchmark — branch-only results (no baseline comparison)
Add a new in_list_strategy benchmark file with targeted coverage of each optimization strategy, without replacing the existing in_list benchmarks which are kept intact for historical comparison.
Introduces the StaticFilter trait to decouple membership testing from InListExpr. Migrates existing HashSet optimizations into primitive_filter.rs to maintain performance parity while enabling future specialized implementations. Triggers for all constant IN lists.
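As a rough illustration of the shape such a trait might take (the trait name comes from the PR, but the signature and bounds below are assumptions for illustration, not the actual API):

```rust
use arrow::array::{ArrayRef, BooleanArray};
use datafusion_common::Result;

/// Hypothetical sketch of the decoupling described above: a filter is built
/// once from the constant IN list, then answers per batch which rows are
/// members. The PR's real `StaticFilter` trait may differ in details.
pub trait StaticFilter: Send + Sync {
    /// Evaluate membership of every row of `values` against the constant list.
    fn contains(&self, values: &ArrayRef, negated: bool) -> Result<BooleanArray>;
}
```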
Replaces HashSet<u8> with a 32-byte stack-allocated bitmap. Provides O(1) membership testing via bit-shifting, significantly reducing memory overhead and improving cache locality. Triggers for UInt8 arrays.
Implements an 8 KB heap-allocated bitmap for UInt16. Maintains O(1) performance while handling the larger value space. Triggers for UInt16 arrays.
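A minimal sketch of both bitmap variants described above (struct and field names are illustrative, not the PR's): the UInt8 domain fits in 256 bits = 32 bytes on the stack, and the UInt16 domain needs 65,536 bits = 8 KB on the heap. Each lookup is a shift, a mask, and a single load, independent of list size.

```rust
/// Stack-allocated bitmap covering every possible u8 value (32 bytes).
struct U8BitmapFilter {
    bits: [u64; 4],
}

impl U8BitmapFilter {
    fn new(values: &[u8]) -> Self {
        let mut bits = [0u64; 4];
        for &v in values {
            // word index = v / 64, bit index = v % 64
            bits[(v >> 6) as usize] |= 1u64 << (v & 63);
        }
        Self { bits }
    }

    #[inline]
    fn contains(&self, v: u8) -> bool {
        (self.bits[(v >> 6) as usize] >> (v & 63)) & 1 == 1
    }
}

/// Heap-allocated bitmap covering every possible u16 value (1024 * 8 = 8 KB).
struct U16BitmapFilter {
    bits: Box<[u64; 1024]>,
}

impl U16BitmapFilter {
    fn new(values: &[u16]) -> Self {
        let mut bits = Box::new([0u64; 1024]);
        for &v in values {
            bits[(v >> 6) as usize] |= 1u64 << (v & 63);
        }
        Self { bits }
    }

    #[inline]
    fn contains(&self, v: u16) -> bool {
        (self.bits[(v >> 6) as usize] >> (v & 63)) & 1 == 1
    }
}
```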
Introduces zero-copy buffer reinterpretation to allow signed integers and other 1 or 2-byte primitive types (e.g. Float16) to use the high-performance bitmap filters. Triggers for all types with 1-byte or 2-byte width.
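The reinterpretation itself can be as small as a pointer cast. A minimal sketch for Int16 (not the PR's helper, which presumably operates on Arrow buffers; the same idea applies to Int8 and to Float16's bit representation):

```rust
/// View a slice of i16 values as u16 without copying, so the UInt16 bitmap
/// filter can be reused for signed 16-bit data.
fn reinterpret_i16_as_u16(values: &[i16]) -> &[u16] {
    // SAFETY: i16 and u16 have identical size and alignment, and every bit
    // pattern is valid for both, so this view is sound.
    unsafe { std::slice::from_raw_parts(values.as_ptr() as *const u16, values.len()) }
}
```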
Adds a const-generic unrolled comparison chain that avoids CPU branching. Outperforms hash lookups for very small lists. Triggers for primitives when list size <= 32 (4-byte), 16 (8-byte), or 4 (16-byte).
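A sketch of the idea (the struct name and const-generic layout are illustrative). Using `|` instead of `||` means there is no short-circuit, so the loop body contains no data-dependent branch and can be fully unrolled for tiny N:

```rust
/// Branchless membership test over a small, fixed-size constant list.
struct BranchlessFilter<T, const N: usize> {
    values: [T; N],
}

impl<T: Copy + PartialEq, const N: usize> BranchlessFilter<T, N> {
    #[inline]
    fn contains(&self, needle: T) -> bool {
        // OR-accumulate all comparisons; no early exit, no branch misprediction.
        self.values.iter().fold(false, |acc, &v| acc | (v == needle))
    }
}
```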
Implements a fast hash table using open addressing with linear probing and a 25% load factor. Replaces the legacy HashSet for primitives, reducing indirection. Triggers for primitives when list size exceeds branchless thresholds.
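A simplified sketch of an open-addressing table with linear probing and a roughly 25% maximum load factor. The PR's table presumably uses a cheaper primitive-specific hash; `DefaultHasher` is used here only to keep the sketch self-contained.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct DirectProbeFilter<T> {
    slots: Vec<Option<T>>,
    mask: usize,
}

impl<T: Copy + Eq + Hash> DirectProbeFilter<T> {
    fn new(values: &[T]) -> Self {
        // Capacity = next power of two >= 4 * len, i.e. at most ~25% load.
        let cap = (values.len() * 4).next_power_of_two().max(8);
        let mask = cap - 1;
        let mut slots = vec![None; cap];
        for &v in values {
            let mut idx = Self::hash(&v) & mask;
            // Linear probing: walk forward until the value or an empty slot.
            while let Some(existing) = slots[idx] {
                if existing == v {
                    break; // duplicate entry in the IN list
                }
                idx = (idx + 1) & mask;
            }
            if slots[idx].is_none() {
                slots[idx] = Some(v);
            }
        }
        Self { slots, mask }
    }

    fn hash(v: &T) -> usize {
        let mut h = DefaultHasher::new();
        v.hash(&mut h);
        h.finish() as usize
    }

    fn contains(&self, needle: T) -> bool {
        let mut idx = Self::hash(&needle) & self.mask;
        loop {
            match self.slots[idx] {
                Some(v) if v == needle => return true,
                Some(_) => idx = (idx + 1) & self.mask, // keep probing
                None => return false,                   // empty slot ends the probe
            }
        }
    }
}
```

The low load factor keeps probe sequences short, so a lookup is typically one hash plus one or two slot reads, with no per-lookup pointer chasing.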
Introduces a two-stage filter for ByteView types. Stage 1 uses a fast DirectProbeFilter on masked views (len + prefix) for quick rejection; Stage 2 performs full verification only for potential long-string matches. Triggers for Utf8View and BinaryView.
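The following is a simplified illustration of the two-stage idea using std collections rather than the PR's DirectProbeFilter (names and layout here are assumptions). For ByteView arrays the first 8 bytes of each view already hold the length and a 4-byte prefix, so stage 1 can compare those masked bytes directly; the sketch recomputes the key from raw bytes to stay self-contained.

```rust
use std::collections::{HashMap, HashSet};

/// Two-stage membership filter: stage 1 tests a cheap (len, 4-byte prefix)
/// key for quick rejection, stage 2 compares full bytes only on key matches.
struct TwoStageFilter {
    keys: HashSet<u64>,
    full: HashMap<u64, Vec<Vec<u8>>>,
}

/// Pack length (low 32 bits) and the first up-to-4 bytes (high 32 bits).
fn masked_key(s: &[u8]) -> u64 {
    let mut prefix = [0u8; 4];
    let n = s.len().min(4);
    prefix[..n].copy_from_slice(&s[..n]);
    (s.len() as u32 as u64) | ((u32::from_le_bytes(prefix) as u64) << 32)
}

impl TwoStageFilter {
    fn new(values: &[&[u8]]) -> Self {
        let mut keys = HashSet::new();
        let mut full: HashMap<u64, Vec<Vec<u8>>> = HashMap::new();
        for v in values {
            let k = masked_key(v);
            keys.insert(k);
            full.entry(k).or_default().push(v.to_vec());
        }
        Self { keys, full }
    }

    fn contains(&self, needle: &[u8]) -> bool {
        let k = masked_key(needle);
        if !self.keys.contains(&k) {
            return false; // stage 1: most non-matching rows stop here
        }
        // stage 2: full comparison only for rows whose (len, prefix) matched
        self.full
            .get(&k)
            .map_or(false, |vs| vs.iter().any(|v| v.as_slice() == needle))
    }
}
```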
Port of the two-stage View optimization to standard Utf8 and LargeUtf8 types. Encodes strings as i128 (len + prefix) for fast O(1) pre-filtering before falling back to full string comparison. Triggers for Utf8 and LargeUtf8.
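A compact sketch of the key packing this relies on (the function name is hypothetical, and `u128` is used here where the PR stores the key as `i128`):

```rust
/// Pack a string's length and its first 12 bytes into one 128-bit integer so
/// stage 1 can compare fixed-width keys instead of variable-length strings.
fn utf8_prefilter_key(s: &[u8]) -> u128 {
    let mut data = [0u8; 12];
    let n = s.len().min(12);
    data[..n].copy_from_slice(&s[..n]);
    // Illustrative layout: low 32 bits = length, remaining bits = prefix bytes.
    let mut key = s.len() as u128;
    for (i, b) in data.iter().enumerate() {
        key |= (*b as u128) << (32 + 8 * i);
    }
    key
}
```

For strings of at most 12 bytes the key captures the entire value, so a stage-1 key match is already an exact match; longer strings still require the stage-2 full comparison.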
FixedSizeBinary(N) arrays share the same contiguous buffer layout as primitive arrays, so for power-of-2 widths (1, 2, 4, 8, 16) we can zero-copy reinterpret them and use the optimized primitive filters (bitmap, branchless, hash) instead of falling through to the NestedTypeFilter fallback.
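A rough sketch of the zero-copy check for one width (not the PR's code; the helper name is hypothetical and an 8-byte width is assumed):

```rust
/// Try to view the contiguous value buffer of a FixedSizeBinary(8) array as
/// u64 words so the primitive filters apply; return None if it is misaligned.
fn fsb8_as_u64(values: &[u8]) -> Option<&[u64]> {
    if values.len() % 8 != 0 {
        return None;
    }
    // align_to splits off any misaligned prefix/suffix; the view is zero-copy
    // only when both are empty. Transmuting u8 -> u64 is valid for any bytes.
    let (prefix, words, suffix) = unsafe { values.align_to::<u64>() };
    if prefix.is_empty() && suffix.is_empty() {
        Some(words)
    } else {
        None
    }
}
```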
Which issue does this PR close?

Rationale for this change

The current `InList` expression implementation uses a generic `ArrayStaticFilter` that relies on `make_comparator` for all types, which adds significant overhead for primitive types. This PR introduces type-specialized filters that exploit the properties of different data types to achieve substantial performance improvements.

What changes are included in this PR?

This PR refactors the `InList` expression to use specialized filter strategies based on data type and list size. The implementation is split into 10 incremental commits:

Commit 1: Strategy-Focused InList Benchmarks
Adds `benches/in_list_strategy.rs` to establish focused microbenchmarks for the filter strategies and threshold boundaries explored in the follow-up commits.

Commit 2: Modular StaticFilter Architecture
Refactors `InListExpr` from a single monolithic file into a module (`in_list/`) with submodules: `static_filter.rs` (trait), `primitive_filter.rs` (primitive optimizations), `nested_filter.rs` (complex type fallback), `result.rs` (result construction), `strategy.rs` (filter selection), and `transform.rs` (type transformations). Introduces the `StaticFilter` trait to decouple membership testing from `InListExpr`, enabling pluggable filter implementations without changing the public API.

Commit 3: Bitmap Filter for UInt8 (stack-based)
`bitmap/u8_list=4_match=50%` is 8.1× faster than baseline.

Commit 4: Bitmap Filter for UInt16 (heap-based)

Commit 5: Zero-Copy Reinterpretation for Int8/Int16/Float16
`in_list_Int16_list=28_nulls=0%` is 13.6× faster than baseline.

Commit 6: Branchless Filter for Small Primitive Lists
Membership is computed as `values.iter().fold(false, |acc, &v| acc | (v == needle))`. `primitive/i32_branchless_list=4_match=50%` is 13.4× faster; `reinterpret/timestamp_ns_branchless_list=4_match=50%` is 22.5× faster.

Commit 7: Direct Probe Hash Filter for Large Primitive Lists
Replaces `std::HashSet` for primitives when the list size exceeds the branchless thresholds.

Commit 8: ByteView Two-Stage Filter (Utf8View/BinaryView)
`in_list_Utf8View_list=28_nulls=0%_str=100` is 11.1× faster than baseline; `Utf8View` benches are 3.70× geomean faster overall.

Commit 9: Utf8/LargeUtf8 Two-Stage Filter
Encodes strings as `[len:u32][data:12 bytes]` for quick rejection. `Utf8` benches are 1.20× geomean faster overall, with a few small-list regressions still present.

Commit 10: FixedSizeBinary Zero-Copy Reinterpretation
`FixedSizeBinary(N)` for N ∈ {1, 2, 4, 8, 16} now uses the primitive fast paths instead of the generic nested fallback. `fixed_size_binary/fsb16_list=10000_match=50%` is 7.6× faster than baseline.

Performance Summary

Benchmarks were compared by scanning `target/criterion`, pairing `before/sample.json` with the latest `new/sample.json`, and using the best observed sample (min(time / iters)) on each side to reduce system noise.

Overall on the current benchmark corpus:

Representative wins from the latest run:

- `reinterpret/timestamp_ns_branchless_list=4_match=50%`: 22.5× faster
- `in_list_TimestampNs_list=3_nulls=20%`: 17.9× faster
- `in_list_Int16_list=28_nulls=0%`: 13.6× faster
- `primitive/i32_branchless_list=4_match=50%`: 13.4× faster
- `in_list_Utf8View_list=28_nulls=0%_str=100`: 11.1× faster
- `fixed_size_binary/fsb16_list=10000_match=50%`: 7.6× faster

Known regressions from the latest run are limited to a few legacy `Utf8` cases.

Filter Selection Strategy

Are these changes tested?

Yes, the optimizations are covered by the existing `in_list` test suite. Benchmark coverage lives in both:

- `benches/in_list.rs` for broad end-to-end legacy coverage
- `benches/in_list_strategy.rs` for strategy-focused microbenchmarks at the threshold boundaries

Are there any user-facing changes?

No user-facing API changes. This is a pure performance optimization that maintains identical behavior.