
Conversation

@jenn-le
Contributor

@jenn-le jenn-le commented Dec 19, 2025

TODO

* @param count - Number of IDs to burn (default: 1000)
* @returns The first local ID of the burned range
*/
burnIds(count?: number): number;
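
For context, a hedged sketch of how a caller might consume the proposed API. The ProposedCompressor interface and collectBurnedIds helper are hypothetical names for this sketch, and it assumes the burned local IDs are negative and numerically adjacent starting at the returned value, an assumption the review below calls out as not currently promised.

// Hypothetical sketch only: `ProposedCompressor` and `collectBurnedIds` are
// illustrative names; the adjacency of burned local IDs (counting downward
// from the returned value) is an assumption, not a documented guarantee.
interface ProposedCompressor {
	burnIds(count?: number): number;
}

function collectBurnedIds(compressor: ProposedCompressor, count = 1000): number[] {
	const first = compressor.burnIds(count); // e.g. -1
	// If local IDs are adjacent and count downward, the burned range would be
	// first, first - 1, ..., first - (count - 1).
	return Array.from({ length: count }, (_, i) => first - i);
}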
Contributor

This is an interesting addition and brings up some questions and observations.

  1. We can already burn IDs without this API. Simply call generateCompressedId 1000 times.
  2. We aren't truly burning them. They are in fact going to be used (potentially), though in a strange way (having been fed through another shim compressor in a sandbox). But nonetheless, they may end up in the ops and be normalized just like any other compressed ID.
  3. ID compressor APIs are split into two interfaces: IIdCompressorCore is for the runtime (the owner of the compressor), and IIdCompressor is for users of the compressor. This API belongs in the second camp, and it would be the first API there that exposes the fact that compressed IDs are numerically adjacent. Currently we don't promise anything about the numbers (whether they are positive, negative, sequential, etc.), so this new API implicitly exposes and locks in that implementation detail. Not necessarily bad, but it's not something we should do without good reason.
  4. The previous point (3) interacts poorly with our plans for sharding. If I have sharded this compressor into thirds and I burn 2 IDs, which IDs are they? They're not -1 and -2. They're maybe -1 and -4? How do we express that in the API contract? (See the sketch after this list.)
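
Purely to illustrate the concern in point 4: under a hypothetical round-robin split of the local ID space across shards (not a committed design; shardedLocalIds, shardIndex, and shardCount are made up for this sketch), the IDs handed out by one shard are not adjacent, so a single "first ID of the range" cannot describe them.

// Hypothetical round-robin sharding of the local ID space; illustration only.
function shardedLocalIds(shardIndex: number, shardCount: number, count: number): number[] {
	const ids: number[] = [];
	for (let i = 0; i < count; i++) {
		// Local IDs are negative and count downward; shard k takes every shardCount-th one.
		ids.push(-(shardIndex + 1) - i * shardCount);
	}
	return ids;
}

console.log(shardedLocalIds(0, 3, 2)); // [-1, -4]: not expressible as a contiguous range starting at -1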

My opinion: if we are committed to doing sharding in the near- or medium-term future, then we should implement this approach in the inefficient but lowest-friction way, given all the points above. That means we don't add any new APIs to IIdCompressor. The code that sets up the sandbox simply calls generateCompressedId 1000 times and adds each returned ID to an array, which we then pass to the sandbox. This is wasteful when we know the IDs are sequential, but as pointed out above, they might not be.

If we want to reduce the data sent to/from the sandbox, we can run the list through a sort followed by a compaction step that identifies ranges and compresses the list. But even that is probably overkill. This is a temporary solution for a scenario that takes at least 10 seconds to run regardless of what we do here, so even if we add an enormous, say, 10 ms of overhead, I don't think it matters.
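
A minimal sketch of that lowest-friction approach, assuming the IIdCompressor.generateCompressedId() API and the types exported by @fluidframework/id-compressor; the reserveIdsForSandbox and compactIntoRanges helpers, and the default count of 1000, are illustrative only, not a proposed API.

import type { IIdCompressor, SessionSpaceCompressedId } from "@fluidframework/id-compressor";

// Reserve IDs for the sandbox without adding any new compressor API:
// call generateCompressedId repeatedly and collect the results.
function reserveIdsForSandbox(
	compressor: IIdCompressor,
	count = 1000,
): SessionSpaceCompressedId[] {
	const ids: SessionSpaceCompressedId[] = [];
	for (let i = 0; i < count; i++) {
		ids.push(compressor.generateCompressedId());
	}
	return ids;
}

// Optional: shrink the payload sent to the sandbox by sorting the IDs and
// collapsing adjacent values into [start, length] runs. Probably overkill here.
function compactIntoRanges(ids: readonly number[]): [start: number, length: number][] {
	const sorted = [...ids].sort((a, b) => a - b);
	const ranges: [number, number][] = [];
	for (const id of sorted) {
		const last = ranges[ranges.length - 1];
		if (last !== undefined && id === last[0] + last[1]) {
			last[1] += 1; // the current run continues
		} else {
			ranges.push([id, 1]); // start a new run
		}
	}
	return ranges;
}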

// All other methods throw errors

public get localSessionId(): SessionId {
	throw new UsageError(

@jenn-le jenn-le closed this Jan 6, 2026
@jenn-le
Contributor Author

jenn-le commented Jan 6, 2026

Closed in favor of implementing the temporary solution in boards directly

