fix: wait for index idle before returning data (#389)
* WIP initial work
* rename Rpc to LocalPeers
* Handle deviceInfo internally, id -> deviceId
* Tests for stream error handling
* remove unnecessary constructor
* return replication stream
* Attach protomux instance to peer info
* rename and re-organize
* revert changes outside scope of PR
* WIP initial work
* Tie everything together
* rename getProjectInstance
* feat: client.listLocalPeers() & `local-peers` evt
* feat: add $sync API methods
For now this simplifies the API (because we are only supporting local
sync, not remote sync over the internet) to:
- `project.$sync.getState()`
- `project.$sync.start()`
- `project.$sync.stop()`
- Events
- `sync-state`
It's currently not possible to stop local discovery, nor is it possible
to stop sync of the metadata namespaces (auth, config, blobIndex). The
start and stop methods stop the sync of the data and blob namespaces.
Fixes #134. Stacked on #360, #358 and #356.
* feat: Add project.$waitForInitialSync() method
Fixes #233 (Add project method to download auth + config cores)
Rather than call this inside the `client.addProject()` method, I think it
is better for the API consumer to call `project.$waitForInitialSync()`
after adding a project, since this allows the implementer to give user
feedback about what is happening.
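The consumer-side pattern described above can be sketched as follows. The `joinProject` helper, the `onProgress` callback, and the mock client are all hypothetical, used only to illustrate the call order; the actual client API may differ.

```javascript
// Sketch: call $waitForInitialSync() after addProject() so the app can
// show progress to the user. `joinProject` and `onProgress` are
// illustrative names, not part of the real API.
async function joinProject (client, projectId, onProgress = () => {}) {
  const project = await client.addProject(projectId)
  onProgress('syncing') // consumer-controlled user feedback
  await project.$waitForInitialSync() // resolves once auth + config cores arrive
  onProgress('ready')
  return project
}

// Minimal mock used only to exercise the flow:
const mockClient = {
  async addProject (id) {
    return {
      id,
      async $waitForInitialSync () {
        await new Promise((resolve) => setTimeout(resolve, 10))
      }
    }
  }
}
```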
* Wait for initial sync within addProject()
* fix: don't add core bitfield until core is ready
* feat: expose deviceId on coreManager
* fix: wait for project.ready() in waitForInitialSync
* fix: skip waitForSync in tests
* don't enable/disable namespace if not needed
* start core download when created via sparse: false
* Add debug logging
This was a big lift, but necessary to be able to debug sync issues since
temporarily adding console.log statements was too much work, and
debugging requires knowing the deviceId associated with each message.
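A minimal sketch of deviceId-aware logging, assuming each debug line should carry a short prefix of the peer's deviceId. The helper name `createLogger` and the injectable `sink` parameter are illustrative, not the actual implementation.

```javascript
// Sketch of deviceId-aware debug logging: every line is prefixed with a
// namespace and a truncated deviceId so interleaved messages from
// multiple peers can be told apart. `createLogger` is a hypothetical name.
function createLogger (deviceId, ns, sink = console.error) {
  const prefix = `[${ns}:${deviceId.slice(0, 7)}]`
  return (...args) => sink(prefix, ...args)
}

const log = createLogger('a1b2c3d4e5f6', 'sync')
log('requesting core key from peer')
```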
* fix timeout
* fix: Add new cores to the indexer (!!!)
This bug, introduced months back, cost a day of debugging work.
* remove unnecessary log stmt
* get capabilities.getMany() to include creator
* fix invite test
* keep blob cores sparse
* optional param for LocalPeers
* re-org sync and replication
Removes old replication code attached to CoreManager
Still needs tests to be updated
* update package-lock
* chore: Add debug logging
* Add new logger to discovery + dnssd
* Get invite test working
* fix manager logger
* cleanup invite test (and make it fail :(
* fix: handle duplicate connections to LocalPeers
* fix stream close before channel open
* send invite to non-existent peer
* fixed fake timers implementation for tests
* new tests for duplicate connections
* cleanup and small fix
* Better state debug logging
* chain of invites test
* fix max listeners and add skipped test
* fix: only request a core key from one peer
Reduces the number of duplicate requests for the same keys.
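The deduplication described above can be sketched by tracking which core keys have already been requested, so each key is asked of only one peer. The class and method names here are illustrative, not the actual implementation.

```javascript
// Sketch: only request each core key from one peer. Tracks outstanding
// requests so a second peer offering the same key is not asked again.
// `KeyRequestTracker` is a hypothetical name.
class KeyRequestTracker {
  #requested = new Map() // hex-encoded key -> peerId it was requested from

  /** Returns true if this peer should be asked for the key. */
  shouldRequest (coreKey, peerId) {
    const hex = coreKey.toString('hex')
    if (this.#requested.has(hex)) return false // already asked another peer
    this.#requested.set(hex, peerId)
    return true
  }

  /** Forget a request, e.g. if the peer disconnected before answering. */
  release (coreKey) {
    this.#requested.delete(coreKey.toString('hex'))
  }
}
```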
* cleanup members tests with new helpers
* wait for project ready when adding
* only create 4 clients for chain of invites test
* add e2e sync tests
* add published @mapeo/mock-data
* fix: don't open cores in sparse mode
Turns out this changes how core.length etc. work, which confuses things
* fix: option to skip auto download for tests
* e2e test for stop-start sync
* fix coreManager unit tests
* fix blob store tests
* fix discovery-key event
* add coreCount to sync state
* test sync with blocked peer & fix bugs
* fix datatype unit tests
* fix blobs server unit tests
* remove peer-sync-controller unit test
This is now tested in e2e tests
* fix type issues caused by bad lockfile
* ignore debug type errors
* fixes for review comments
* move utils-new into utils
* Add debug info to test that sometimes fails
* Update package-lock.json version
* remove project.ready() (breaks things)
* wait for coreOwnership write before returning
* use file storage in tests (breaks things)
* Catch race condition in CRUD tests
* fix race condition with parallel writes
* fix tests for new createManagers syntax
* fix flaky test
This test relied on `peer.connectedAt` changing in order to distinguish
connections, but sometimes `connectedAt` was the same for both peers.
This adds a 1ms delay before making the second connection, to attempt to
stop the flakiness.
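The fix described above can be sketched as follows. When two connections are created within the same millisecond, a `Date.now()`-based `connectedAt` timestamp no longer distinguishes them, so a short delay is inserted between them. The `connectTwice` helper and its `connect` callback are illustrative, not the actual test code.

```javascript
// Sketch of the flakiness fix: insert a 1ms delay between two connections
// so their `connectedAt` timestamps (millisecond resolution) differ.
// `connectTwice` and `connect` are hypothetical names.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function connectTwice (connect) {
  const first = connect()
  await delay(1) // ensure differing connectedAt timestamps
  const second = connect()
  return [first, second]
}
```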
* fix: wait for index idle before returning data
* temp fixes to run CI
* small fix for failing test
* update to published multi-core-indexer
---------
Co-authored-by: Andrew Chou <[email protected]>