
Enhance code review instructions with additional checks#866

Merged
harsha-simhadri merged 2 commits into main from
users/harshasi/update-review-agent-instructions
Mar 27, 2026
Conversation

@harsha-simhadri
Contributor

Added additional code review checks regarding unit tests, dependencies, and build times.


Copilot AI left a comment


Pull request overview

Updates the repository’s code review guidance to include additional review checks beyond license headers, focusing on test coverage, dependency hygiene, and build-time impact.

Changes:

  • Added checklist items to flag removed unit tests without strong justification.
  • Added guidance to scrutinize newly introduced dependencies.
  • Added a reminder to consider potential build time regressions, while retaining the license header check.


@codecov-commenter

codecov-commenter commented Mar 27, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 89.30%. Comparing base (b616aa3) to head (a7f6a67).

Additional details and impacted files


@@            Coverage Diff             @@
##             main     #866      +/-   ##
==========================================
- Coverage   90.45%   89.30%   -1.16%     
==========================================
  Files         442      442              
  Lines       83248    83248              
==========================================
- Hits        75301    74341     -960     
- Misses       7947     8907     +960     
| Flag | Coverage | Δ |
|------|----------|---|
| miri | 89.30% <ø> | -1.16% ⬇️ |
| unittests | 89.14% <ø> | -1.28% ⬇️ |

Flags with carried forward coverage won't be shown.
see 38 files with indirect coverage changes


Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@harsha-simhadri harsha-simhadri merged commit 1ad2cae into main Mar 27, 2026
23 checks passed
@harsha-simhadri harsha-simhadri deleted the users/harshasi/update-review-agent-instructions branch March 27, 2026 18:59
@hildebrandmw hildebrandmw mentioned this pull request Mar 30, 2026
hildebrandmw added a commit that referenced this pull request Mar 31, 2026
# Version 0.50.0

Release notes generated with the help of Copilot.

## API Breaking Changes

This release contains several large API changes that should be
mechanical to apply to existing code but that will enable forward
evolution of our integrations.

### WorkingSet/Fillset/Strategy API Changes (#859)

#### Removals
The following items have been removed:

| Removed | Replacement |
|---------|-------------|
| `Fill` | `diskann::graph::workingset::Fill<WorkingSet>` and `diskann::graph::workingset::Map` |
| `AsElement<T>` | `diskann::graph::glue::MultiInsertStrategy::finish` and `diskann::graph::workingset::AsWorkingSet` |
| `Accessor::Extended` | Just gone! |
| `VectorIdBoxSlice` | `diskann::graph::glue::Batch` (and `Matrix<T>`) |
| `reborrow::Lower`/`AsyncLower` | No longer needed |

#### Strategy Type Parameters (no longer `?Sized`)
The `T` parameter on `SearchStrategy`, `InsertStrategy`, and
`SetElement` is no longer `?Sized`.

```rust
// Before
impl SearchStrategy<DP, [f32]> for MyStrategy { ... }
impl SetElement<[f32]> for MyProvider { ... }

// After
impl SearchStrategy<DP, &[f32]> for MyStrategy { ... }
impl SetElement<&[f32]> for MyProvider { ... }
```
HRTB syntax is now generally needed, but it makes constraints far more
uniform.

Similarly, `InplaceDeleteStrategy::DeleteElement<'a>` has been tightened
from `Send + Sync + ?Sized`
to `Copy + Send + Sync` (sized). If your delete element was an unsized
type like `[f32]`, change
it to a reference like `&'a [f32]`.

#### `Guard` moved from `SetElement` to `DataProvider`

```rust
// Before
trait SetElement<T>: DataProvider {
    type Guard: Guard<Id = Self::InternalId>;
    ...
}

// After
trait DataProvider {
    type Guard: Guard<Id = Self::InternalId> + 'static;
    ...
}

trait SetElement<T>: DataProvider {
    // Returns Self::Guard from DataProvider
    fn set_element(...) -> Result<Self::Guard, Self::SetError>;
}
```

Move your `Guard` associated type from your `SetElement` impl to your
`DataProvider` impl.

#### `PruneStrategy` gains `WorkingSet`

```rust
// Before
trait PruneStrategy<Provider> {
    type PruneAccessor<'a>: Accessor + ... + FillSet;
    ...
}

// After
trait PruneStrategy<Provider> {
    type WorkingSet: Send + Sync;
    type PruneAccessor<'a>: Accessor + ... + workingset::Fill<Self::WorkingSet>;
    fn create_working_set(&self, capacity: usize) -> Self::WorkingSet;
    ...
}
```

**What to do:**
1. Add a `WorkingSet` associated type. Use `workingset::Map<Id, Box<[T]>>` as the default.
   Users of multi-insert will want to use `workingset::Map<Id, Box<[T]>, workingset::map::Ref<[T]>>`.
2. Add `create_working_set` returning a `Map` via
   `Builder::new(Capacity::Default).build(capacity)`.
3. Replace the `FillSet` bound on `PruneAccessor` with
   `workingset::Fill<Self::WorkingSet>`.

#### `FillSet` to `Fill<WorkingSet>`
```rust
// Before
impl FillSet for MyAccessor {
    fn fill_set(&mut self, ids: &[Id], map: &mut HashMap<Id, Extended>) -> Result<()> { ... }
}

// After
impl Fill<MyWorkingSet> for MyAccessor {
    type Error = ...;
    type View<'a> = ... where Self: 'a, MyWorkingSet: 'a;

    fn fill<'a, Itr>(
        &'a mut self,
        working_set: &'a mut MyWorkingSet,
        itr: Itr,
    ) -> impl SendFuture<Result<Self::View<'a>, Self::Error>>
    where
        Itr: ExactSizeIterator<Item = Self::Id> + Clone + Send + Sync,
    { ... }
}
```

**Shortcut:** If your `WorkingSet` is `workingset::Map<K, V, P>` and your
accessor's `ElementRef` converts `Into<V>`, the blanket `Fill` impl applies
automatically, so you don't need to implement `Fill` at all.

#### New `MultiInsertStrategy` (replaces the multi-insert portion of `InsertStrategy`)

If you use `multi_insert`, you now need a `MultiInsertStrategy`
implementation:

```rust
trait MultiInsertStrategy<Provider, B: Batch>: Send + Sync {
    type WorkingSet: Send + Sync + 'static;
    type Seed: AsWorkingSet<Self::WorkingSet> + Send + Sync + 'static;
    type FinishError: Into<ANNError> + std::fmt::Debug + Send + Sync;
    type InsertStrategy: for<'a> InsertStrategy<
        Provider,
        B::Element<'a>,
        PruneStrategy: PruneStrategy<Provider, WorkingSet = Self::WorkingSet>,
    >;

    fn insert_strategy(&self) -> Self::InsertStrategy;
    fn finish<Itr>(&self, provider: &Provider, context: &Provider::Context,
        batch: &Arc<B>, ids: Itr,
    ) -> impl Future<Output = Result<Self::Seed, Self::FinishError>> + Send
    where Itr: ExactSizeIterator<Item = Provider::InternalId> + Send;
}
```

**What to do:**
- `finish` is called once after the batch is inserted. Use it to
pre-populate working set
state (e.g., fetch quantized representations). Return a `Seed` that
implements
  `AsWorkingSet`.
- For the common case, use `workingset::map::Builder` as your `Seed`
type.
- `multi_insert` now takes `Arc<B>` where `B: Batch` instead of
`VectorIdBoxSlice`.
  `Matrix<T>` implements `Batch` out of the box.

Using `diskann::graph::workingset::Map` and its associated
infrastructure should largely work.

#### `multi_insert` call site

```rust
// Before
index.multi_insert(strategy, context, vectors, ids).await?;
// where vectors: VectorIdBoxSlice

// After
index.multi_insert::<S, B>(strategy, context, vectors, ids).await?;
// where vectors: Arc<B>, B: Batch
// DP: for<'a> SetElement<B::Element<'a>> is required
```

The turbofish `<S, B>` is often needed because HRTB bounds make
inference conservative.

### Search Post Processing (#828)

The `Search` trait now requires `S: SearchStrategy<DP, T>` at the trait
level, and introduces two new generic parameters on the `search` method:

- `O` — the output type written to the buffer (used to be a parameter of
`SearchStrategy`)
- `PP` — the post-processor, bounded by `for<'a>
SearchPostProcess<S::SearchAccessor<'a>, T, O> + Send + Sync`
- `OB` — the output buffer, bounded by `SearchOutputBuffer<O> + Send +
?Sized`

Previously, `OB` was a trait-level generic and the post-processor was
obtained internally from the strategy. Now both are method-level
generics, giving callers control over each independently.

The output "id" type `O` has been removed from `SearchStrategy`
entirely, which greatly simplifies things.

At the `DiskANNIndex` level, two entry points are provided:

- `search()` — requires `S: DefaultPostProcessor`, creates the
strategy's default processor automatically, then delegates to
`search_with()`. This is the common path.
- `search_with()` — requires only `S: SearchStrategy`, accepts any
compatible `PP`. This is the extension point for custom post-processing.

#### DefaultPostProcessor

Strategies opt in to default post-processing by implementing
`DefaultPostProcessor`:

```rust
pub trait DefaultPostProcessor<Provider, T, O>: SearchStrategy<Provider, T> {
    type Processor: for<'a> SearchPostProcess<Self::SearchAccessor<'a>, T, O> + Send + Sync;
    fn default_post_processor(&self) -> Self::Processor;
}
```

The `default_post_processor!` macro covers the common case where the
processor is `Default`-constructible. `DefaultSearchStrategy` is a
convenience trait (with a blanket impl…) for `SearchStrategy +
DefaultPostProcessor`.

#### Start-point filtering

The in-mem code used to rely on a special `Internal` type to customize
post-processing for the inplace delete pipeline.
Now that post-processing is customizable, this workaround is no longer
needed.

Start-point filtering is composed at the type level using the existing
`Pipeline` mechanism:

- `HasDefaultProcessor::Processor = Pipeline<FilterStartPoints,
RemoveDeletedIdsAndCopy>` — filters start points during regular search
- `InplaceDeleteStrategy::SearchPostProcessor = RemoveDeletedIdsAndCopy`
— no start-point filtering during delete re-pruning

#### Range search

Because the output buffer is now a method-level generic, range
search now writes results directly through a `DistanceFiltered` output
buffer wrapper that applies radius and inner-radius filtering inline
during post-processing. This replaces the previous approach of
allocating intermediate `Vec`s and copying results through an
`IdDistance` buffer.

`SearchOutputBuffer` is now implemented for `Vec<Neighbor<I>>`,
providing an unbounded growable buffer suitable for range search and
other cases where the result count is not known in advance.

The `RangeSearchResults` type has been removed now that a
`SearchOutputBuffer` is properly used.

#### Migration guide

**If you implement `SearchStrategy`** — minimal changes required. Remove
the `O` parameter (if needed). Implement `DefaultPostProcessor` with
your current post-processor if you want your strategy to work with
`index.search()`. Use the `default_post_processor!` macro for simple
cases:

```rust
impl DefaultPostProcessor<MyProvider, &[f32], u32> for MyStrategy { // O = u32 shown for illustration
    default_post_processor!(Pipeline<FilterStartPoints, RemoveDeletedIdsAndCopy>);
}
```

**If you call `index.search()`** — no API change for strategies that
implement `DefaultPostProcessor` (all built-in strategies do).

**If you need custom post-processing** — use `index.search_with()` and
pass your post-processor directly:

```rust
let processor = MyCustomPostProcessor::new(/* ... */);
let stats = index.search_with(params, &strategy, processor, &context, &query, &mut output).await?;
```

**If you implement `Search`** — the method signature has changed. `PP`
and `OB` are now method-level generics, and `processor` is an explicit
parameter:

```rust
fn search_with<O, PP, OB>(
    self,
    index: &DiskANNIndex<DP>,
    strategy: &S,
    processor: PP,       // new
    context: &DP::Context,
    query: T,
    output: &mut OB,
) -> impl SendFuture<ANNResult<Self::Output>>
where
    P: Search<DP, S, T>,
    S: glue::SearchStrategy<DP, T>,
    PP: for<'a> glue::SearchPostProcess<S::SearchAccessor<'a>, T, O> + Send + Sync,
    O: Send,
    OB: search_output_buffer::SearchOutputBuffer<O> + Send + ?Sized,
```

**If you use range search results** — `RangeSearchOutput` no longer
carries `ids` and `distances` fields. Results are written to the output
buffer passed to `search()`. Use `Vec<Neighbor<u32>>` (or any
`SearchOutputBuffer` impl) to collect them.

**If you implement `InplaceDeleteStrategy`**: Add the
`DeleteSearchAccessor` associated type, ensuring it is the same as the
accessor selected by the `SearchStrategy`. This is redundant
information, but needed to help constrain downstream trait bounds.

### V4 Backend Raised to Ice Lake / Zen4+ (#861)

The `V4` SIMD backend in `diskann-wide` now requires additional AVX-512
extensions beyond the
baseline `x86-64-v4` feature set:

| New requirement | Description |
|-----------------|-------------|
| `avx512vnni` | Vector Neural Network Instructions |
| `avx512bitalg` | Bit algorithm instructions (VPOPCNT on bytes/words) |
| `avx512vpopcntdq` | POPCNT on dword/qword elements |
| `avx512vbmi` | Vector Byte Manipulation Instructions (cross-lane byte permutes) |

This effectively raises the minimum CPU for the `V4` tier from Cascade
Lake to **Ice Lake Server
(Intel)** or **Zen4+ (AMD)**.

**Impact:** On CPUs with baseline AVX-512 but without these extensions
(e.g., Skylake-X,
Cascade Lake), runtime dispatch will select the `V3` backend instead. No
code
changes are required.

## What's Changed
* Remove dead code from diskann-tools by @Copilot in
#829
* CI maintenance by @hildebrandmw in
#838
* Add baseline tests for insert/multi-insert. by @hildebrandmw in
#837
* Remove unused file by @ChenSunriseJiaBao in
#850
* Add compatibility tests for spherical flatbuffer serialization by
@hildebrandmw in #839
* Overhaul Post-Processing by @hildebrandmw in
#828
* Bump the `V4` backend to IceLake/Zen4+ by @hildebrandmw in
#861
* Remove `ConcurrentQueue` from `diskann-providers` by @hildebrandmw in
#862
* Enhance code review instructions with additional checks by
@harsha-simhadri in #866
* Add synchronous `load_with` constructors and `run` escape hatch to
`wrapped_async` by @suri-kumkaran in
#864
* Overhaul WorkingSet/FillSet and its interaction with Multi-Insert by
@hildebrandmw in #859

## New Contributors
* @ChenSunriseJiaBao made their first contribution in
#850

**Full Changelog**:
v0.49.1...v0.50.0