
Commit 8932119

cmd-sign: make staging location stream-specific
We want to store all the signatures in the same location rather than in stream-specific ones. But ideally, we still want the staging location for signing to be stream-specific so that we can safely garbage collect stale files there without worrying about stepping on concurrent runs. Just pick up the stream from the metadata and use that to build the staging location. See also coreos/fedora-coreos-pipeline#1218.
1 parent 7b0a22b commit 8932119
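
To illustrate the approach described above, here is a minimal standalone sketch of the stream derivation and staging-prefix construction; the metadata and prefix values are hypothetical, and staging_prefix_for is an illustrative helper, not a function in cmd-sign:

import os

def staging_prefix_for(build, sigstore_prefix):
    # Prefer the ostree ref to derive the stream name; fall back to the
    # OCI import labels (as the diff below does).
    if 'ref' in build:
        _, stream = build['ref'].rsplit('/', 1)
    else:  # let it fail if this is somehow missing
        stream = build["coreos-assembler.oci-imported-labels"]["fedora-coreos.stream"]
    # Per-stream staging area: release jobs for different streams can
    # garbage collect their own staging dirs without stepping on each other.
    return os.path.join(sigstore_prefix, 'staging', stream)

# Hypothetical build metadata; FCOS refs end in the stream name.
build = {'ref': 'fedora/x86_64/coreos/stable'}
print(staging_prefix_for(build, 'prod/sigstore'))  # -> prod/sigstore/staging/stable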

File tree

1 file changed: +12 −7 lines changed


src/cmd-sign

Lines changed: 12 additions & 7 deletions
@@ -382,20 +382,25 @@ def robosign_oci(args, s3, build, gpgkey):
         files_to_upload.append({'path': path, 'filename': filename,
                                 'identity': identity, 'digest': digest})
 
-    # Upload them to S3. We upload to `staging/` first, and then will move
-    # them to their final location once they're verified.
+    # work with older releases; we may want to sign some of them
+    if 'ref' in build:
+        _, stream = build['ref'].rsplit('/', 1)
+    else:  # let fail if this is somehow missing
+        stream = build["coreos-assembler.oci-imported-labels"]["fedora-coreos.stream"]
+
+    # Upload them to S3. We upload to `staging/$stream` first, and then will
+    # move them to their final location once they're verified.
     sigstore_bucket, sigstore_prefix = get_bucket_and_prefix(args.s3_sigstore)
-    sigstore_staging = os.path.join(sigstore_prefix, 'staging')
+    sigstore_staging = os.path.join(sigstore_prefix, 'staging', stream)
 
     # First, empty out staging/ so we don't accumulate cruft over time
     # https://stackoverflow.com/a/59026702
-    # Note this assumes we don't run in parallel on the same sigstore
-    # target, which is the case for us since only one release job can run at
-    # a time per-stream and the S3 target location is stream-based.
+    # Note the staging directory is per-stream so that we can handle
+    # running in parallel across different streams.
     staging_objects = s3.list_objects_v2(Bucket=sigstore_bucket, Prefix=sigstore_staging)
     objects_to_delete = [{'Key': obj['Key']} for obj in staging_objects.get('Contents', [])]
     if len(objects_to_delete) > 0:
-        print(f'Deleting {len(objects_to_delete)} stale files')
+        print(f'Deleting {len(objects_to_delete)} stale files in staging')
         s3.delete_objects(Bucket=sigstore_bucket, Delete={'Objects': objects_to_delete})
 
     # now, upload the ones we want
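
As a usage note, the staging cleanup in the hunk above follows the standard boto3 list-then-delete pattern. Here is a self-contained sketch with hypothetical bucket and prefix names; note that list_objects_v2 returns at most 1000 keys per call, so this, like the code above, assumes the staging area stays small:

import boto3

s3 = boto3.client('s3')

# Hypothetical values; in cmd-sign these come from
# get_bucket_and_prefix(args.s3_sigstore) and the derived stream.
sigstore_bucket = 'example-sigstore-bucket'
sigstore_staging = 'sigstore/staging/stable'

# Empty out the per-stream staging area so stale files from a previous
# run on this stream don't accumulate.
staging_objects = s3.list_objects_v2(Bucket=sigstore_bucket, Prefix=sigstore_staging)
objects_to_delete = [{'Key': obj['Key']} for obj in staging_objects.get('Contents', [])]
if objects_to_delete:
    print(f'Deleting {len(objects_to_delete)} stale files in staging')
    s3.delete_objects(Bucket=sigstore_bucket, Delete={'Objects': objects_to_delete})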
