This is a fork of the Concourse resource s3-resource-simple.
```yaml
resource_types:
- name: s3-resource-simple
  type: registry-image
  source:
    # No `tag:` specified: will pick up `latest`.
    repository: MYREGISTRY/s3-resource-simple
```

The pipeline file at `ci/s3-resource-simple-pipeline.yml` enables a workflow with per-feature-branch pipelines plus a master pipeline.
The master branch pipeline, `s3-resource-simple-master`, pushes the Docker image for the Concourse resource to the repository `s3-resource-simple` with tag `latest`, so that any client pipeline picks it up immediately.
Each time a feature branch is merged into the master branch, this pipeline runs and publishes a new Docker image.
Given a feature branch `feat-x`, the associated feature branch pipeline, `s3-resource-simple-feat-x`, pushes the Docker image for the Concourse resource to the repository `s3-resource-simple` with tag `feat-x`, so that additional integration tests can be run against the scratch repository with tag `feat-x` without impacting users of the published resource.
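For example, a client test pipeline could pin the scratch image by its branch tag (`MYREGISTRY` stands in for your actual registry, as above):

```yaml
resource_types:
- name: s3-resource-simple
  type: registry-image
  source:
    repository: MYREGISTRY/s3-resource-simple
    tag: feat-x   # pin the feature branch image instead of `latest`
```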
- Add the needed secrets to your Concourse credentials manager.
- Optionally edit the files `ci/settings/master-branch.yml` and `ci/settings/feature-branch.yml`.
- Run `fly_helper set-pipeline --allow-setting-master-branch` for the master branch pipeline, or `fly_helper set-pipeline` for a feature branch pipeline.

With reference to the pipeline, there are two jobs, `work-img` and `resource-img`.
- Job `work-img` is used to build images for the workings of the pipeline itself, that is, to run the test tasks.
- Job `resource-img` builds the final product of this pipeline: the Concourse resource `s3-resource-simple`.
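A minimal sketch of how these two jobs might be laid out (the job bodies and resource names here are illustrative assumptions, not the actual contents of `ci/s3-resource-simple-pipeline.yml`):

```yaml
jobs:
- name: work-img
  plan:
  - get: repo            # the source of this fork (name assumed)
  - put: work-image      # image used by the pipeline's own test tasks
- name: resource-img
  plan:
  - get: repo
  - put: resource-image  # the s3-resource-simple image published for clients
```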
Resource to upload files to S3. Unlike the official S3 Resource, this resource can upload or download multiple files.
Include the following in your Pipeline YAML file, replacing the values in the angle brackets (< >):
```yaml
resource_types:
- name: <resource type name>
  type: registry-image
  source:
    repository: 18fgsa/s3-resource-simple

resources:
- name: <resource name>
  type: <resource type name>
  source:
    access_key_id: {{aws-access-key}}
    secret_access_key: {{aws-secret-key}}
    bucket: {{aws-bucket}}
    path: <optional, use to sync to a specific path of the bucket instead of root of bucket>
    options: [<optional, see note below>]
    region: <optional, see below>
    sync: <optional, see below>

jobs:
- name: <job name>
  plan:
  - <some Resource or Task that outputs files>
  - put: <resource name>
    params:
      dir: assets
```

The `access_key_id` and `secret_access_key` are optional; if they are not provided, the EC2 metadata service will be queried for role-based credentials.
The `options` parameter corresponds to the options that the `aws` CLI accepts for `s3 sync`. Please see S3 Sync Options and pay special attention to the Use of Exclude and Include Filters.
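Conceptually, the resource hands these options straight to `aws s3 sync`. The sketch below shows how such an invocation might be assembled; the variable names and bucket are illustrative, not the resource's internals:

```shell
# Illustrative only: compose the `aws s3 sync` command the resource
# would effectively run for a given bucket and path (names assumed).
bucket="my-bucket"
prefix="path"
options="--exclude '*' --include 'results/*'"
cmd="aws s3 sync . s3://${bucket}/${prefix} ${options}"
echo "$cmd"
```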
Given the following directory `test`:

```
test
├── results
│   ├── 1.json
│   └── 2.json
└── scripts
    └── bad.sh
```
we can upload only the results subdirectory by using the following options in our task configuration:
```yaml
options:
- "--exclude '*'"
- "--include 'results/*'"
```

Interacting with some AWS regions (like London) requires AWS Signature Version 4. The `region` parameter allows you to explicitly specify the region where your bucket is located (if this is set, the `AWS_DEFAULT_REGION` environment variable will be set accordingly):
```yaml
region: eu-west-2
```

By default, files will not be synced down from S3; enabling `sync` will download all files:

```yaml
sync: true
```

The `dir` put parameter will change the working directory to `subdirectory` before invoking the sync:

```yaml
dir: subdirectory
```
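Putting the optional parameters together, a complete resource definition might look like the following; the bucket name and resource name are illustrative:

```yaml
resources:
- name: test-results
  type: s3-resource-simple
  source:
    access_key_id: {{aws-access-key}}
    secret_access_key: {{aws-secret-key}}
    bucket: my-test-bucket
    region: eu-west-2          # London requires AWS Signature Version 4
    sync: true                 # download existing files from the bucket
    options:
    - "--exclude '*'"
    - "--include 'results/*'"
```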