24 changes: 20 additions & 4 deletions src/examples/IDBBatchAtomicVFS.js
@@ -21,9 +21,8 @@ class File {
/** @type {number} */ flags;

/** @type {Metadata} */ metadata;
/** @type {number} */ fileSize = 0;

/** @type {boolean} */ needsMetadataSync = false;

/** @type {Metadata} */ rollback = null;
/** @type {Set<number>} */ changedPages = new Set();

@@ -259,13 +258,30 @@ export class IDBBatchAtomicVFS extends WebLocksMixin(FacadeVFS) {
if (!isOverwrite ||
file.flags & VFS.SQLITE_OPEN_MAIN_DB ||
file.flags & VFS.SQLITE_OPEN_TEMP_DB) {
const snapshotFileSize = file.metadata.fileSize;
const block = {
path: file.path,
offset: -iOffset,
version: version,
data: pData.slice()
data


medium

Consider using data.slice() here to avoid potential modification of the original pData buffer. While the current implementation works, explicitly creating a copy ensures that any subsequent changes to pData do not affect the data stored in the block.

Suggested change
data
data: data.slice()
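The aliasing concern in this comment can be shown in a few lines. This is an illustrative standalone sketch, not code from the PR; the `pData` name mirrors the diff but the values are made up:

```javascript
// Storing a typed array by reference aliases its buffer, while slice()
// allocates a new buffer and copies the bytes.
const pData = new Uint8Array([1, 2, 3, 4]);

const sharedBlock = { data: pData };         // aliases pData's buffer
const copiedBlock = { data: pData.slice() }; // independent copy

// Simulate SQLite reusing its I/O buffer before a queued write runs.
pData[0] = 99;

console.log(sharedBlock.data[0]); // 99 (mutation is visible)
console.log(copiedBlock.data[0]); // 1 (copy is unaffected)
```

Whether the copy is needed here depends on whether the queued lambda can run after SQLite reuses the buffer, which is exactly the doubt the comment raises.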

};
this.#idb.q(({ blocks }) => {
if (file.flags & (VFS.SQLITE_OPEN_MAIN_DB | VFS.SQLITE_OPEN_TEMP_DB)) {
// If this write is past the end of the file, insert blocks
// for the skipped space. Note that we can't test against
// file.metadata.fileSize because that will have changed
// by the time this lambda is executed.
for (let skipOffset = snapshotFileSize;
skipOffset < iOffset; skipOffset += data.byteLength) {
blocks.put({
path: file.path,
offset: -skipOffset,
version: version,
data: new Uint8Array(data.byteLength)
});
file.changedPages.add(skipOffset);
}
Comment on lines +274 to +283


medium

When writing far past the end of the file, this loop inserting zeroed blocks could become a performance bottleneck. Consider alternative strategies for large skips, such as creating a single large zeroed block or using a sparse representation.
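The linear cost the comment describes is easy to quantify. A minimal standalone sketch (the loop bounds mirror the diff; `countFillerBlocks` is a hypothetical helper, not part of the VFS):

```javascript
// The filler loop performs one IndexedDB put per page-sized step in the
// gap, so the cost of a write far past EOF grows linearly with the gap.
function countFillerBlocks(snapshotFileSize, iOffset, pageSize) {
  let count = 0;
  for (let skipOffset = snapshotFileSize;
       skipOffset < iOffset;
       skipOffset += pageSize) {
    count++;
  }
  return count;
}

// A 1 MiB gap at a 4 KiB page size takes 256 puts; a 1 GiB gap
// would take 262,144.
console.log(countFillerBlocks(0, 1 << 20, 4096)); // 256
```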

}
blocks.put(block);
file.changedPages.add(iOffset);
Comment on lines 285 to 286


medium

The file.changedPages.add(iOffset) call appears both inside the past-EOF conditional and unconditionally after it. Since the unconditional call already records the page, the copy inside the conditional is redundant and can be removed.

Suggested change
blocks.put(block);
file.changedPages.add(iOffset);
blocks.put(block);
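For what it's worth, the duplication is harmless because Set.add is idempotent; the suggestion is purely about avoiding repetition. A trivial standalone sketch:

```javascript
// Set.prototype.add is idempotent: adding the same offset twice
// leaves a single entry, so the duplicate call changes nothing.
const changedPages = new Set();
changedPages.add(8192); // call inside the past-EOF branch
changedPages.add(8192); // unconditional call after the branch
console.log(changedPages.size); // 1
```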
}, 'rw', file.txOptions);
file.changedPages.add(iOffset);

}, 'rw', file.txOptions);
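The snapshotFileSize capture that the diff's comment explains can be demonstrated in isolation. This is an illustrative sketch, not code from the PR; the `file` and `queued` objects are stand-ins for the VFS state and its deferred transaction queue:

```javascript
// Why the write path captures snapshotFileSize instead of reading
// file.metadata.fileSize inside the queued lambda: the lambda runs
// later, after the metadata has already been updated.
const file = { metadata: { fileSize: 4096 } };
const queued = [];

const snapshotFileSize = file.metadata.fileSize; // captured now
queued.push(() => snapshotFileSize);             // stable snapshot
queued.push(() => file.metadata.fileSize);       // live read at run time

file.metadata.fileSize = 8192; // metadata changes before the queue drains

console.log(queued[0]()); // 4096
console.log(queued[1]()); // 8192
```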
@@ -284,7 +300,6 @@ export class IDBBatchAtomicVFS extends WebLocksMixin(FacadeVFS) {
// Write back.
blocks.put(block);
}, 'rw', file.txOptions);

}

if (file.metadata.fileSize < iOffset + pData.length) {
@@ -454,6 +469,7 @@ export class IDBBatchAtomicVFS extends WebLocksMixin(FacadeVFS) {
if (file.flags & VFS.SQLITE_OPEN_MAIN_DB) {
// Don't allow changing the page size.
if (value && file.metadata.fileSize) {
console.error('IDBBatchAtomicVFS page size cannot be changed.');
return VFS.SQLITE_ERROR;
}
}