forked from apache/cassandra
c15485 oct release #2091
Closed
Conversation
pkolaczk commented Oct 29, 2025
- CNDB-14861: Fix usage of PrimaryKeyWithSource in SAI
- CNDB-15570: Fix handling mixed key types in SAI iterators
- CNDB-15448: Bump jvector to 4.0.0-rc.3 (#2012)
- CNDB-15619: Allow customizing the SAI format to write and to consider as current() depending on the keyspace (#2081)
- CNDB-15701: Forbid creating vector indexes if version is earlier than CA (#2084)
- CNDB-15623: Only use write path for CDC tables in CassandraStreamReceiver if CDC is enabled on the node (backport #2043) (#2086)
- CNDB-15554: Bump jvector to 4.0.0-rc.5
- CNDB-15485: Fix ResultRetriever key comparison to prevent dupes in result set (#2024)
The PrimaryKeyWithSource class has been present for two years in the code base as an optimization for hybrid vector workloads, which have to materialize many primary keys in the search-then-sort query path. However, the logic is invalid for version aa (where compacted sstables assign a row id per row rather than per partition), and it is also invalid for static columns. This commit avoids creating PrimaryKeyWithSource in those cases.
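The condition described above can be sketched as a simple guard. The method and argument names below are illustrative assumptions, not the actual patch:

```java
public class SourceGuardSketch {
    // Hypothetical guard mirroring the fix described above (names assumed):
    // the PrimaryKeyWithSource optimization must be skipped for on-disk
    // version "aa" (compacted sstables assign row ids per row, not per
    // partition) and for static rows.
    static boolean canUseSourceOptimization(String onDiskVersion, boolean isStatic) {
        boolean rowAware = onDiskVersion.compareTo("aa") > 0; // "aa" is the partition-aware format
        return rowAware && !isStatic;
    }

    public static void main(String[] args) {
        System.out.println(canUseSourceOptimization("aa", false)); // false: version aa, never
        System.out.println(canUseSourceOptimization("dc", false)); // true: row-aware, regular row
        System.out.println(canUseSourceOptimization("dc", true));  // false: static row, never
    }
}
```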
This commit fixes multiple issues with KeyRangeIterator implementations occasionally skipping or emitting duplicate keys when working on a mix of primary keys with empty and non-empty clusterings. This situation is possible while scanning tables with static columns, or when some indexes are partition-aware (e.g. version AA) and others have been updated to a row-aware version (e.g. DC or EC). Due to those bugs, users could get incorrect results from SAI queries, e.g. results containing duplicated rows, duplicated partitions, or even missing rows. The commit introduces extensive randomized property-based tests for KeyRangeUnionIterator and KeyRangeIntersectionIterator. Previously, the tests did not cover keys with mixed empty/non-empty clusterings.

Changes in KeyRangeUnionIterator:

KeyRangeUnionIterator merges streams of primary keys in such a way that duplicates are removed. Unfortunately, it did not properly account for the fact that if a key with an empty clustering meets a key with a non-empty clustering and the same partition key, we must always return the key with the empty clustering. A key with an empty clustering will always fetch the rows matched by any specific row key in the same partition, but the reverse is not true. The iterator implementation has been modified to always pick the key that matches more rows: a key with an empty clustering wins over a key with a non-empty clustering. Additionally, once a key with an empty clustering is emitted, no more keys in that partition are emitted.

Changes in KeyRangeIntersectionIterator:

Due to a problem very similar to the one in KeyRangeUnionIterator, KeyRangeIntersectionIterator could return either too few or too many keys when keys with empty clusterings and keys with non-empty clusterings were present in the input key streams. In particular, consider two input streams A and B with the following keys:

A: 0: (1, Clustering.EMPTY)
B: 0: (1, 1)
   1: (1, 2)

Key A.0 matches the whole partition 1.
Therefore, the correct result of the intersection is both keys of stream B. Unfortunately, the algorithm before this patch would advance both the A and B iterators when emitting the first matching key. At the beginning of the second step, iterator A would already be exhausted and no more keys would be produced, so key B.1 would be missing from the results.

This patch fixes that by introducing two changes to the intersection algorithm:
1. A key with a non-empty clustering wins over a key with an empty clustering and the same partition.
2. The selected highest key is not consumed while searching for the highest matching key; that happens only after the search loop finds a match, when we have more information about which iterators should be advanced to the next item. Iterators positioned at a key with an empty clustering can be advanced only after we run out of keys with non-empty clusterings in the same partition, or if there are no other keys with non-empty clusterings.

This patch also fixes another issue where we could return a less specific key matching a full partition instead of a key matching one row:

A: 0: (1, Clustering.EMPTY)
B: 0: (1, 1)

In that case the iterator returned the key with the empty clustering, which would result in fetching and post-filtering many unnecessary rows.
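The corrected intersection semantics can be modeled with a minimal sketch. The types below are illustrative, not the actual KeyRangeIntersectionIterator, and the model assumes each input stream has either one empty-clustering key or only row-level keys per partition:

```java
import java.util.*;

public class IntersectionSketch {
    // clustering == null stands for Clustering.EMPTY (matches the whole partition)
    record Key(int partition, Integer clustering) {}

    // Per the fix: a key with non-empty clustering wins over a key with empty
    // clustering in the same partition, so intersecting a whole-partition key
    // with row-level keys yields all of the row-level keys.
    static List<Key> intersect(List<Key> a, List<Key> b) {
        Map<Integer, Set<Integer>> ma = byPartition(a), mb = byPartition(b);
        List<Key> out = new ArrayList<>();
        for (int p : new TreeSet<>(ma.keySet())) {
            if (!mb.containsKey(p)) continue;            // partition must match on both sides
            Set<Integer> ca = ma.get(p), cb = mb.get(p); // null = empty clustering
            if (ca == null && cb == null) out.add(new Key(p, null));
            else if (ca == null) cb.forEach(c -> out.add(new Key(p, c))); // row-level side wins
            else if (cb == null) ca.forEach(c -> out.add(new Key(p, c)));
            else {
                Set<Integer> both = new TreeSet<>(ca);
                both.retainAll(cb);
                both.forEach(c -> out.add(new Key(p, c)));
            }
        }
        return out;
    }

    static Map<Integer, Set<Integer>> byPartition(List<Key> keys) {
        Map<Integer, Set<Integer>> m = new TreeMap<>();
        for (Key k : keys)
            if (k.clustering() == null) m.put(k.partition(), null);
            else m.computeIfAbsent(k.partition(), p -> new TreeSet<>()).add(k.clustering());
        return m;
    }

    public static void main(String[] args) {
        // The example from the description: A matches the whole partition 1,
        // B has two rows in that partition; the result must contain both B keys.
        List<Key> a = List.of(new Key(1, null));
        List<Key> b = List.of(new Key(1, 1), new Key(1, 2));
        System.out.println(intersect(a, b)); // [Key[partition=1, clustering=1], Key[partition=1, clustering=2]]
    }
}
```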
### What is the issue
Fixes: riptano/cndb#15448

### What does this PR fix and why was it fixed
Bumps the jvector version. Commits: datastax/jvector@4.0.0-rc.2...4.0.0-rc.3

diff:
```
jvector % git log 4.0.0-rc.2...4.0.0-rc.3 --oneline
17169513 (tag: 4.0.0-rc.3) chore: update changelog for 4.0.0-rc.3 (#528)
67b2f88d Regression enhancements (#526)
baf87e80 chore: update changelog for 4.0.0-rc.3 (#527)
f3d235cc Release 4.0.0-rc.3
cfb3004f streamline PR checklist (#525)
df4a0688 add checklist template and initial CONTRIBUTIONS.md guide (#523)
63db005a GraphIndexBuilder::addGraphNode must iterate all graph levels to estimate used bytes (#521)
817a25c4 GitHub actions regression test (#499)
8364012f Remove unused construction batch member from OnHeapGraphIndex (#510)
1823b9be Switch from syncronized to concurrent map for pq codebook (#518)
6d590ad7 Enable specifying the benchmarks in the yaml file (#515)
1c298218 Create partial sums for PQ codebook for use during diversity checks (#511)
a916a07c PQ ranging bugfix and refactoring (#508)
66399923 Reducing the number of allocations in GraphSearcher (#501)
51d4f0bb SimdOps and NativeSimd ops refactored, VectorUtilSupport simplified (#498)
c5c3ff97 Add specific BuildScoreProvider for diversity to avoid extra encoding… (#503)
631515df Start development on 4.0.0-rc.3-SNAPSHOT
```
… current() depending on a keyspace (#2081)

There is a new cassandra.sai.version.selector.class system property that allows providing an implementation of the o.a.c.index.sai.disk.format.Version.Selector interface to specify which version of the SAI on-disk index format should be used for each keyspace.

Co-authored-by: Enrico Olivelli <[email protected]>
Co-authored-by: Andrés de la Peña <[email protected]>
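The real extension point is o.a.c.index.sai.disk.format.Version.Selector, configured via the cassandra.sai.version.selector.class system property. Its exact method signature is not quoted in this PR description, so the interface below is a stand-in used only to illustrate the idea of per-keyspace version selection:

```java
public class SelectorSketch {
    // Assumed shape of the selector contract; the real interface is
    // o.a.c.index.sai.disk.format.Version.Selector and may differ.
    interface VersionSelector {
        String writeVersionFor(String keyspace);
    }

    // Example policy: pin one keyspace to an older format, default the rest.
    static class PerKeyspaceSelector implements VersionSelector {
        @Override
        public String writeVersionFor(String keyspace) {
            return keyspace.equals("legacy_ks") ? "dc" : "ec";
        }
    }

    public static void main(String[] args) {
        VersionSelector s = new PerKeyspaceSelector();
        System.out.println(s.writeVersionFor("legacy_ks")); // dc
        System.out.println(s.writeVersionFor("app_ks"));    // ec
    }
}
```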
… CA (#2084)

Creating vector indexes when the version is earlier than CA would usually fail in the asynchronous build. This patch makes them fail synchronously at CREATE INDEX, depending on the local index version. If the local node has the right version but any of the remotes doesn't, the failure remains asynchronous. This cherry-picks 945b7a1 from main into the October release branch.
(cherry picked from commit b5e76e0)
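The synchronous check described above can be sketched as a fail-fast validation. The method name and message are hypothetical, not the actual patch:

```java
public class VectorIndexGuardSketch {
    // Hypothetical fail-fast check mirroring the behaviour described above:
    // if the local node's SAI on-disk format predates "ca", CREATE INDEX on a
    // vector column fails synchronously instead of in the async build.
    static void validateVectorIndexCreation(String localVersion) {
        if (localVersion.compareTo("ca") < 0)
            throw new IllegalStateException(
                "Vector indexes require SAI on-disk format ca or later, found: " + localVersion);
    }

    public static void main(String[] args) {
        validateVectorIndexCreation("ca"); // ok, no exception
        try {
            validateVectorIndexCreation("aa"); // predates ca: rejected synchronously
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```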
…sult set (#2024)

(cherry picked from commit ada025c)

Copy of #2023, but targeting `main`

riptano/cndb#15485

This PR fixes a bug introduced to this branch via #1884. The bug only impacts SAI file format `aa` when the index file was produced via compaction, which is why the modified test simply adds coverage to compact the table and hit the bug.

The bug happens when an iterator produces the same partition across two different batch fetches from storage. These keys were not collapsed by the `key.equals(lastKey)` logic because compacted indexes use a row id per row instead of per partition, and the logic in `PrimaryKeyWithSource` considers rows with different row ids to be distinct. However, when we went to materialize a batch from storage, we hit this code:

```java
ClusteringIndexFilter clusteringIndexFilter = command.clusteringIndexFilter(firstKey.partitionKey());
if (cfs.metadata().comparator.size() == 0 || firstKey.hasEmptyClustering())
{
    return clusteringIndexFilter;
}
else
{
    nextClusterings.clear();
    for (PrimaryKey key : keys)
        nextClusterings.add(key.clustering());
    return new ClusteringIndexNamesFilter(nextClusterings, clusteringIndexFilter.isReversed());
}
```

which returned `clusteringIndexFilter` for `aa` because those indexes do not have the clustering information. Therefore, each batch fetched the whole partition (which was subsequently filtered down to the proper results), producing a multiplier effect where we saw `batch`-many duplicates.

This fix works by comparing partition keys and clustering keys directly, which is a return to the old comparison logic from before #1884. There was actually a discussion about this in the PR to `main`, but unfortunately we missed this case: #1883 (comment). A more proper long-term fix might be to remove the logic of creating a `PrimaryKeyWithSource` for AA indexes. However, I preferred this approach because it is essentially a revert rather than a fix-forward solution.
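The reverted comparison can be sketched as follows. The types and field names are illustrative, not the real SAI classes:

```java
import java.util.Objects;

public class KeyComparisonSketch {
    // Illustrative stand-in for a primary key carrying a source row id.
    record PrimaryKey(int partitionKey, Integer clustering, long rowId) {}

    // The fix compares the partition key and clustering directly, so two
    // occurrences of the same row coming from different batches (and hence
    // carrying different source row ids under format "aa") still collapse
    // into one result.
    static boolean samePrimaryKey(PrimaryKey a, PrimaryKey b) {
        return a.partitionKey() == b.partitionKey()
            && Objects.equals(a.clustering(), b.clustering()); // row id intentionally ignored
    }

    public static void main(String[] args) {
        PrimaryKey first  = new PrimaryKey(7, null, 10);
        PrimaryKey second = new PrimaryKey(7, null, 42); // same partition, different source row id
        System.out.println(samePrimaryKey(first, second)); // true: treated as a duplicate
    }
}
```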
❌ Build ds-cassandra-pr-gate/PR-2091 rejected by Butler: 1 regression found (1 new test failure; no known test failures found).