diff --git a/.github/workflows/main.yaml b/.github/workflows/main.yaml new file mode 100644 index 0000000..b713210 --- /dev/null +++ b/.github/workflows/main.yaml @@ -0,0 +1,43 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +name: main +on: + push: + pull_request: + schedule: + - cron: "0 0 * * 0" +jobs: + test: + strategy: + fail-fast: false + matrix: + skupper-version: [2.0.0-preview-2] + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-python@v5 + with: + python-version: "3.x" + - uses: medyagh/setup-minikube@latest + - run: curl https://skupper.io/install.sh | bash -s -- --version ${{matrix.skupper-version}} + - run: echo "$HOME/.local/bin" >> "$GITHUB_PATH" + - run: ./plano test + env: + PLANO_COLOR: 1 diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..500983c --- /dev/null +++ b/.gitignore @@ -0,0 +1,2 @@ +/README.html +__pycache__/ diff --git a/.plano.py b/.plano.py new file mode 100644 index 0000000..4609d49 --- /dev/null +++ b/.plano.py @@ -0,0 +1,20 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +from skewer.planocommands import * diff --git a/README.md b/README.md index b29655c..d0c904a 100644 --- a/README.md +++ b/README.md @@ -1,203 +1,346 @@ -# Multi-cluster Cloud-Native grpc (microservices) application demo + -This tutorial demonstrates how to deploy the [Online Boutique](https://github.com/GoogleCloudPlatform/microservices-demo/) microservices demo application across multiple Kubernetes clusters that are located in different public and private cloud providers. This project contains a 10-tier microservices application developed by Google to demonstrate the use of technologies like Kubernetes. +# Skupper Online Boutique -In this tutorial, you will create a Virtual Application Network that enables communications across the public and private clusters. You will then deploy a subset of the application's grpc based microservices to each cluster. 
You will then access the `Online Boutique` web interface to browse items, add them to the cart and purchase them. +[![main](https://github.com/c-kruse/skupper-example-grpc/actions/workflows/main.yaml/badge.svg)](https://github.com/c-kruse/skupper-example-grpc/actions/workflows/main.yaml) -Top complete this tutorial, do the following: +#### A Cloud-Native gRPC microservice-based application deployed across multiple Kubernetes clusters using Skupper +This example is part of a [suite of examples][examples] showing the +different ways you can use [Skupper][website] to connect services +across cloud providers, data centers, and edge sites. + +[website]: https://skupper.io/ +[examples]: https://skupper.io/examples/index.html + +#### Contents + +* [Overview](#overview) * [Prerequisites](#prerequisites) -* [Step 1: Set up the demo](#step-1-set-up-the-demo) -* [Step 2: Deploy the Virtual Application Network](#step-2-deploy-the-virtual-application-network) -* [Step 3: Deploy the application microservices](#step-3-deploy-the-application-microservices) -* [Step 4: Expose the microservices to the Virtual Application Network](#step-4-expose-the-microservices-to-the-virtual-application-network) -* [Step 5: Access the Online Boutique Application](#step-5-access-the-boutique-shop-application) +* [Step 1: Access your Kubernetes clusters](#step-1-access-your-kubernetes-clusters) +* [Step 2: Install Skupper on your Kubernetes clusters](#step-2-install-skupper-on-your-kubernetes-clusters) +* [Step 3: Apply Kubernetes Resources](#step-3-apply-kubernetes-resources) +* [Step 4: Wait for Sites Ready](#step-4-wait-for-sites-ready) +* [Step 5: Install the Skupper command-line tool](#step-5-install-the-skupper-command-line-tool) +* [Step 6: Link your sites](#step-6-link-your-sites) * [Cleaning up](#cleaning-up) +* [Summary](#summary) * [Next steps](#next-steps) +* [About this example](#about-this-example) + +## Overview + +This tutorial demonstrates how to deploy the [Online +Boutique](https://github.com/GoogleCloudPlatform/microservices-demo/) +microservices demo application across multiple Kubernetes clusters that are +located in different public and private cloud providers. This project +contains a 10-tier microservices application developed by Google to +demonstrate the use of technologies like Kubernetes. + +In this tutorial, you will create a Virtual Application Network that enables +communications across the public and private clusters. You will then deploy a +subset of the application's grpc based microservices to each cluster. You +will then access the `Online Boutique` web interface to browse items, add +them to the cart and purchase them. ## Prerequisites -* The `kubectl` command-line tool, version 1.15 or later ([installation guide](https://kubernetes.io/docs/tasks/tools/install-kubectl/)) -* The `skupper` command-line tool, the latest version ([installation guide](https://skupper.io/start/index.html#step-1-install-the-skupper-command-line-tool-in-your-environment)) +* Access to at least one Kubernetes cluster, from [any provider you + choose][kube-providers]. + +* The `kubectl` command-line tool, version 1.15 or later + ([installation guide][install-kubectl]). + +* The `skupper` command-line tool, version 2.0 or later. 
On Linux + or Mac, you can use the install script (inspect it + [here][cli-install-script]) to download and extract the command: + + ~~~ shell + curl https://skupper.io/install.sh | sh -s -- --version 2.0.0-preview-2 + ~~~ + + See [Installing the Skupper CLI][cli-install-docs] for more + information. + +[kube-providers]: https://skupper.io/start/kubernetes.html +[install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ +[cli-install-script]: https://github.com/skupperproject/skupper-website/blob/main/input/install.sh +[cli-install-docs]: https://skupper.io/install/ + +## Step 1: Access your Kubernetes clusters + +Skupper is designed for use with multiple Kubernetes clusters. +The `skupper` and `kubectl` commands use your +[kubeconfig][kubeconfig] and current context to select the cluster +and namespace where they operate. + +[kubeconfig]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/ + +This example uses multiple cluster contexts at once. The +`KUBECONFIG` environment variable tells `skupper` and `kubectl` +which kubeconfig to use. + +For each cluster, open a new terminal window. In each terminal, +set the `KUBECONFIG` environment variable to a different path and +log in to your cluster. + +_**gRPC A:**_ + +~~~ shell +export KUBECONFIG=~/.kube/config-grpc-a + +~~~ + +_**gRPC B:**_ + +~~~ shell +export KUBECONFIG=~/.kube/config-grpc-b + +~~~ + +_**gRPC C:**_ + +~~~ shell +export KUBECONFIG=~/.kube/config-grpc-c + +~~~ + +**Note:** The login procedure varies by provider. + +## Step 2: Install Skupper on your Kubernetes clusters + +Using Skupper on Kubernetes requires the installation of the +Skupper custom resource definitions (CRDs) and the Skupper +controller. + +For each cluster, use `kubectl apply` with the Skupper +installation YAML to install the CRDs and controller. + +_**gRPC A:**_ + +~~~ shell +kubectl apply -f https://skupper.io/v2/install.yaml +~~~ + +_**gRPC B:**_ + +~~~ shell +kubectl apply -f https://skupper.io/v2/install.yaml +~~~ + +_**gRPC C:**_ + +~~~ shell +kubectl apply -f https://skupper.io/v2/install.yaml +~~~ + +## Step 3: Apply Kubernetes Resources + +Apply the application deployment resources alongside the skupper +resources describing the application network. + +_**gRPC A:**_ + +~~~ shell +kubectl create namespace grpc-a +kubectl apply -f resources-a +~~~ + +_**gRPC B:**_ + +~~~ shell +kubectl create namespace grpc-b +kubectl apply -f resources-b +~~~ + +_**gRPC C:**_ + +~~~ shell +kubectl create namespace grpc-c +kubectl apply -f resources-c +~~~ -The basis for this demonstration is to depict the deployment of member microservices for an application across both private and public clusters and for the ability of these microsservices to communicate across a Virtual Application Network. As an example, the cluster deployment might be comprised of: +## Step 4: Wait for Sites Ready -* A private cloud cluster running on your local machine -* Two public cloud clusters running in public cloud providers +Before linking sites to form the network, wait for the Sites to be ready. -While the detailed steps are not included here, this demonstration can alternatively be performed with three separate namespaces on a single cluster. +_**gRPC A:**_ -## Step 1: Set up the demo +~~~ shell +kubectl wait --for condition=Ready site/grpc-a --timeout 240s +~~~ -1. 
On your local machine, make a directory for this tutorial and clone the example repo: +_**gRPC B:**_ - ```bash - mkdir boutique-demo - cd boutique-demo - git clone https://github.com/skupperproject/skupper-example-grpc.git - ``` +~~~ shell +kubectl wait --for condition=Ready site/grpc-b --timeout 120s +~~~ -3. Prepare the target clusters. +_**gRPC C:**_ - 1. On your local machine, log in to each cluster in a separate terminal session. - 2. In each cluster, create a namespace to use for the demo. - 3. In each cluster, set the kubectl config context to use the demo namespace [(see kubectl cheat sheet for more information)](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) - ```bash - kubectl config set-context --current --namespace - ``` +~~~ shell +kubectl wait --for condition=Ready site/grpc-c --timeout 120s +~~~ -## Step 2: Deploy the Virtual Application Network +## Step 5: Install the Skupper command-line tool -On each cluster, using the `skupper` tool, define the Virtual Application Network and the connectivity for the peer clusters. +This example uses the Skupper command-line tool to create Skupper +resources. You need to install the `skupper` command only once +for each development environment. -1. In the terminal for the first public cluster, deploy the **public1** application router. Create a connection token for connections from the **public2** cluster and the **private1** cluster: +On Linux or Mac, you can use the install script (inspect it +[here][install-script]) to download and extract the command: - ```bash - skupper init --site-name public1 - skupper token create public1-token.yaml --uses 2 - ``` -2. In the terminal for the second public cluster, deploy the **public2** application router, create a connection token for connections from the **private1** cluser and connect to the **public1** cluster: +~~~ shell +curl https://skupper.io/install.sh | sh -s -- --version 2.0.0-preview-2 +~~~ - ```bash - skupper init --site-name public2 - skupper token create public2-token.yaml - skupper link create public1-token.yaml - ``` +The script installs the command under your home directory. It +prompts you to add the command to your path if necessary. -3. In the terminal for the private cluster, deploy the **private1** application router and define its connections to the **public1** and **public2** cluster +For Windows and other installation options, see [Installing +Skupper][install-docs]. - ```bash - skupper init --site-name private1 - skupper link create public1-token.yaml - skupper link create public2-token.yaml - ``` +[install-script]: https://github.com/skupperproject/skupper-website/blob/main/input/install.sh +[install-docs]: https://skupper.io/install/ -4. In each of the cluster terminals, verify connectivity has been established +## Step 6: Link your sites - ```bash - skupper link status - ``` +A Skupper _link_ is a channel for communication between two sites. +Links serve as a transport for application connections and +requests. -## Step 3: Deploy the application microservices +Creating a link requires the use of two Skupper commands in +conjunction: `skupper token issue` and `skupper token redeem`. +The `skupper token issue` command generates a secret token that +can be transferred to a remote site and redeemed for a link to the +issuing site. The `skupper token redeem` command uses the token +to create the link. -After creating the Virtual Application Network, deploy the grpc based microservices for the `Online Boutique` application. 
There are three `deployment .yaml` files -labelled *a, b, and c*. These files (arbitrarily) define a subset of the application microservices to deploy to a cluster. +**Note:** The link token is truly a *secret*. Anyone who has the +token can link to your site. Make sure that only those you trust +have access to it. -| Deployment | Microservices -| -------------------- | ---------------------------------------- | -| deployment-ms-a.yaml | frontend, productcatalog, recommendation | -| deployment-ms-b.yaml | ad, cart, checkout, currency, redis-cart | -| deployment-ms-c.yaml | email, payment, shipping | +First, use `skupper token issue` in gRPC A to generate the token. +Then, use `skupper token redeem` in gRPC B to link the sites. +_**gRPC A:**_ -1. In the terminal for the **private1** cluster, deploy the following: +~~~ shell +skupper token issue ~/grpc-a.token --redemptions-allowed=2 +~~~ - ```bash - kubectl apply -f skupper-example-grpc/deployment-ms-a.yaml - ``` +_Sample output:_ -2. In the terminal for the **public1** cluster, deploy the following: +~~~ console +$ skupper token issue ~/grpc-a.token --redemptions-allowed=2 +Waiting for token status ... - ```bash - kubectl apply -f skupper-example-grpc/deployment-ms-b.yaml - ``` +Grant "grpc-a-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" is ready +Token file grpc-a.token created -3. In the terminal for the **public2** cluster, deploy the following: +Transfer this file to a remote site. At the remote site, +create a link to this site using the "skupper token redeem" command: - ```bash - kubectl apply -f skupper-example-grpc/deployment-ms-c.yaml - ``` + skupper token redeem -## Step 4: Expose the microservices to the Virtual Application Network +The token expires after 1 use(s) or after 15m0s. +~~~ -There are three script files labelled *-a, -b, and -c*. These files expose the services created above to join the Virtual Application Network. Note that the frontend service is not assigned to the Virtual Application Network as it is setup for external web access. +_**gRPC B:**_ +~~~ shell +skupper token issue ~/grpc-b.token +skupper token redeem ~/grpc-a.token +~~~ -| File | Deployments -| ----------------------- | ---------------------------------------- | -| expose-deployments-a.sh | productcatalog, recommendation | -| expose-deployments-b.sh | ad, cart, checkout, currency, redis-cart | -| expose-deployments-c.sh | email, payment, shipping | +_Sample output:_ +~~~ console +$ skupper token redeem ~/grpc-a.token +Waiting for token status ... +Token "grpc-a-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" has been redeemed +You can now safely delete /run/user/1000/skewer/secret.token +~~~ -1. In the terminal for the **private1** cluster, execute the following annotation script: +_**gRPC C:**_ - ```bash - skupper-example-grpc/expose-deployments-a.sh - ``` +~~~ shell +skupper token redeem ~/grpc-a.token +skupper token redeem ~/grpc-b.token +~~~ -2. In the terminal for the **public1** cluster, execute the following annotation script: +_Sample output:_ - ```bash - skupper-example-grpc/expose-deployments-b.sh - ``` +~~~ console +$ skupper token redeem ~/grpc-a.token +Waiting for token status ... +Token "grpc-a-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" has been redeemed +You can now safely delete /run/user/1000/skewer/secret.token -3. In the terminal for the **public2** cluster, execute the following annotation script: +$ skupper token redeem ~/grpc-b.token +Waiting for token status ... 
+Token "grpc-b-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" has been redeemed +You can now safely delete /run/user/1000/skewer/secret.token +~~~ - ```bash - skupper-example-grpc/expose-deployments-c.sh - ``` +If your terminal sessions are on different machines, you may need +to use `scp` or a similar tool to transfer the token securely. By +default, tokens expire after a single use or 15 minutes after +being issued. -## Step 5: Access The Boutique Shop Application +## Cleaning up -The web frontend for the `Online Boutique` application can be accessed via the *frontend-external* service. In the -terminal for the **private1** cluster, start a firefox browser and access the shop UI. +To remove Skupper and the other resources from this exercise, use +the following commands. - ```bash - /usr/bin/firefox --new-window "http://$(kubectl get service frontend-external -o=jsonpath='{.spec.clusterIP}')/" - ``` +_**gRPC A:**_ -Open a browser and use the url provided above to access the `Online Boutique`. +~~~ shell +kubectl delete -f resources-a +~~~ -## Step 6: Run the load generator +_**gRPC B:**_ -The `Online Boutique` application has a load generator that creates realistic usage patterns on the website. +~~~ shell +kubectl delete -f resources-b +~~~ -1. In the terminal for the **private1** cluster, deploy the load generator: +_**gRPC C:**_ - ```bash - kubectl apply -f skupper-example-grpc/deployment-loadgenerator.yaml - ``` -2. In the terminal for the **private1** cluster, observe the output from the load generator: +~~~ shell +kubectl delete -f resources-c +~~~ - ```bash - kubectl logs -f deploy/loadgenerator - ``` -3. In the terminal for the **private1** cluster, stop the load generator: +## Summary - ```bash - kubectl delete -f skupper-example-grpc/deployment-loadgenerator.yaml - ``` - -## Cleaning Up +This example locates the many services that make up a microservice +application across three different namespaces on different clusters with no +modifications to the application. Without Skupper, it would normally take +careful network planning to avoid exposing these services over the public +internet. -Restore your cluster environment by returning the resources created in the demonstration. On each cluster, delete the demo resources and the skupper network: +Introducing Skupper into each namespace allows us to create a virtual +application network that can connect services in different clusters. Any +service exposed on the application network is represented as a local service in +all of the linked namespaces. -1. In the terminal for the **private1** cluster, delete the resources: + - ```bash - skupper-example-grpc/unexpose-deployments-a.sh - kubectl delete -f skupper-example-grpc/deployment-ms-a.yaml - skupper delete - ``` +## Next steps -2. In the terminal for the **public1** cluster, delete the resources: +Check out the other [examples][examples] on the Skupper website. - ```bash - skupper-example-grpc/unexpose-deployments-b.sh - kubectl delete -f skupper-example-grpc/deployment-ms-b.yaml - skupper delete - ``` +## About this example -3. In the terminal for the **public2** cluster, delete the resources: +This example was produced using [Skewer][skewer], a library for +documenting and testing Skupper examples. 
- ```bash - skupper-example-grpc/unexpose-deployments-c.sh - kubectl delete -f skupper-example-grpc/deployment-ms-c.yaml - skupper delete - ``` +[skewer]: https://github.com/skupperproject/skewer -## Next Steps +Skewer provides utility functions for generating the README and +running the example steps. Use the `./plano` command in the project +root to see what is available. - - [Try the example for multi-cluster distributed web services](https://github.com/skupperproject/skupper-example-bookinfo) - - [Find more examples](https://skupper.io/examples/) +To quickly stand up the example using Minikube, try the `./plano demo` +command. diff --git a/expose-deployments-a.sh b/expose-deployments-a.sh deleted file mode 100755 index e149c35..0000000 --- a/expose-deployments-a.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/bash -skupper expose deployment productcatalogservice --address productcatalogservice --port 3550 --protocol http2 --target-port 3550 -skupper expose deployment recommendationservice --address recommendationservice --port 8080 --protocol http2 --target-port 8080 diff --git a/expose-deployments-b.sh b/expose-deployments-b.sh deleted file mode 100755 index d7267ea..0000000 --- a/expose-deployments-b.sh +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -skupper expose deployment checkoutservice --address checkoutservice --port 5050 --protocol http2 --target-port 5050 -skupper expose deployment cartservice --address cartservice --port 7070 --protocol http2 --target-port 7070 -skupper expose deployment currencyservice --address currencyservice --port 7000 --protocol http2 --target-port 7000 -skupper expose deployment adservice --address adservice --port 9555 --protocol http2 --target-port 9555 -skupper expose deployment redis-cart --address redis-cart --port 6379 --protocol tcp --target-port 6379 - diff --git a/expose-deployments-c.sh b/expose-deployments-c.sh deleted file mode 100755 index 0b8671b..0000000 --- a/expose-deployments-c.sh +++ /dev/null @@ -1,4 +0,0 @@ -#!/bin/bash -skupper expose deployment emailservice --address emailservice --port 5000 --protocol http2 --target-port 8080 -skupper expose deployment paymentservice --address paymentservice --port 50051 --protocol http2 --target-port 50051 -skupper expose deployment shippingservice --address shippingservice --port 50051 --protocol http2 --target-port 50051 diff --git a/external/skewer/.github/workflows/main.yaml b/external/skewer/.github/workflows/main.yaml new file mode 100644 index 0000000..ced0c1f --- /dev/null +++ b/external/skewer/.github/workflows/main.yaml @@ -0,0 +1,24 @@ +name: main +on: + push: + pull_request: + schedule: + - cron: "0 0 * * 0" +jobs: + test: + strategy: + fail-fast: false + matrix: + skupper-version: [2.0.0-preview-2] + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-python@v5 + with: + python-version: "3.x" + - uses: medyagh/setup-minikube@latest + - run: curl https://skupper.io/install.sh | bash -s -- --version ${{matrix.skupper-version}} + - run: echo "$HOME/.local/bin" >> $GITHUB_PATH + - run: ./plano test + env: + PLANO_COLOR: 1 diff --git a/external/skewer/.gitignore b/external/skewer/.gitignore new file mode 100644 index 0000000..f651c26 --- /dev/null +++ b/external/skewer/.gitignore @@ -0,0 +1,4 @@ +__pycache__/ +/README.html +/htmlcov +/.coverage diff --git a/external/skewer/.plano.py b/external/skewer/.plano.py new file mode 100644 index 0000000..40c7b64 --- /dev/null +++ b/external/skewer/.plano.py @@ -0,0 +1,74 @@ +# +# Licensed to the Apache Software 
Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +import skewer.tests + +from plano import * +from plano.github import * +from skewer import * + +@command(passthrough=True) +def test(passthrough_args=[]): + PlanoTestCommand(skewer.tests).main(args=passthrough_args) + +@command +def coverage(verbose=False, quiet=False): + check_program("coverage") + + with working_env(PYTHONPATH="python"): + run("coverage run --source skewer -m skewer.tests") + + run("coverage report") + run("coverage html") + + if not quiet: + print(f"file:{get_current_dir()}/htmlcov/index.html") + +@command +def render(verbose=False, quiet=False): + """ + Render README.html from README.md + """ + markdown = read("README.md") + html = convert_github_markdown(markdown) + + write("README.html", html) + + if not quiet: + print(f"file:{get_real_path('README.html')}") + +@command +def list_standard_steps(): + data = read_yaml("python/skewer/standardsteps.yaml") + for key in data: + print(key) + +@command +def clean(): + remove(find(".", "__pycache__")) + remove("README.html") + remove("htmlcov") + remove(".coverage") + +@command +def update_plano(): + """ + Update the embedded Plano repo + """ + update_external_from_github("external/plano", "ssorj", "plano") diff --git a/external/skewer/LICENSE.txt b/external/skewer/LICENSE.txt new file mode 100644 index 0000000..e06d208 --- /dev/null +++ b/external/skewer/LICENSE.txt @@ -0,0 +1,202 @@ +Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + diff --git a/external/skewer/README.md b/external/skewer/README.md new file mode 100644 index 0000000..534b9ae --- /dev/null +++ b/external/skewer/README.md @@ -0,0 +1,381 @@ +# Skewer + +[![main](https://github.com/skupperproject/skewer/actions/workflows/main.yaml/badge.svg)](https://github.com/skupperproject/skewer/actions/workflows/main.yaml) + +A library for documenting and testing Skupper examples + +A `skewer.yaml` file describes the steps and commands to achieve an +objective using Skupper. Skewer takes the `skewer.yaml` file as input +and produces two outputs: a `README.md` file and a test routine. 
+ +#### Contents + +* [An example example](#an-example-example) +* [Setting up Skewer for your own example](#setting-up-skewer-for-your-own-example) +* [Skewer YAML](#skewer-yaml) +* [Standard steps](#standard-steps) +* [Demo mode](#demo-mode) +* [Troubleshooting](#troubleshooting) + +## An example example + +[Example `skewer.yaml` file](example/skewer.yaml) + +[Example `README.md` output](example/README.md) + +## Setting up Skewer for your own example + +**Note:** This is how you set things up from scratch. You can also +use the [Skupper example template][template] as a starting point. + +[template]: https://github.com/skupperproject/skupper-example-template + +Change directory to the root of your example project: + + cd / + +Add the Skewer code as a subdirectory: + + mkdir -p external + curl -sfL https://github.com/skupperproject/skewer/archive/main.tar.gz | tar -C external -xz + mv external/skewer-main external/skewer + +Symlink the Skewer and Plano libraries into your `python` directory: + + mkdir -p python + ln -s ../external/skewer/python/skewer python/skewer + ln -s ../external/skewer/python/plano python/plano + +Copy the `plano` command into the root of your project: + + cp external/skewer/plano plano + +Copy the standard config files: + + cp external/skewer/config/.plano.py .plano.py + cp external/skewer/config/.gitignore .gitignore + +Copy the standard workflow file: + + mkdir -p .github/workflows + cp external/skewer/config/.github/workflows/main.yaml .github/workflows/main.yaml + +Use your editor to create a `skewer.yaml` file in the root of your +project: + + emacs skewer.yaml + +To use the `./plano` command, you must have the Python `pyyaml` +package installed. Use `pip` (or `pip3` on some systems) to install +it: + + pip install pyyaml + +Run the `./plano` command to see the available commands: + +~~~ console +$ ./plano +usage: plano [-h] [-f FILE] [-m MODULE] {command} ... + +Run commands defined as Python functions + +options: + -h, --help Show this help message and exit + -f FILE, --file FILE Load commands from FILE (default '.plano.py') + -m MODULE, --module MODULE + Load commands from MODULE + +commands: + {command} + generate Generate README.md from the data in skewer.yaml + render Render README.html from README.md + clean Clean up the source tree + run Run the example steps + demo Run the example steps and pause for a demo before cleaning up + test Test README generation and run the steps on Minikube + update-skewer Update the embedded Skewer repo and GitHub workflow +~~~ + +## Skewer YAML + +The top level of the `skewer.yaml` file: + +~~~ yaml +title: # Your example's title (required) +subtitle: # Your chosen subtitle (optional) +workflow: # The filename of your GitHub workflow (optional, default 'main.yaml') +overview: # Text introducing your example (optional) +prerequisites: # Text describing prerequisites (optional, has default text) +sites: # A map of named sites (see below) +steps: # A list of steps (see below) +summary: # Text to summarize what the user did (optional) +next_steps: # Text linking to more examples (optional, has default text) +~~~ + +For fields with default text such as `prerequisites` and `next_steps`, +you can include the default text inside your custom text by using the +`@default@` placeholder: + +~~~ yaml +next_steps: + @default@ + + This Way to the Egress. +~~~ + +To disable the GitHub workflow and CI badge, set `workflow` to `null`. 
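Putting the pieces together, a minimal top level might look like the sketch below. It is abridged from the example `skewer.yaml` included later in this patch, with `workflow: null` added only to illustrate disabling the workflow and badge; the `sites` and `steps` entries are documented in the sections that follow.

~~~ yaml
# Minimal sketch of a skewer.yaml top level (abridged from the example
# skewer.yaml later in this patch; workflow: null added for illustration)
title: Skupper Hello World
subtitle: A minimal HTTP application deployed across Kubernetes clusters using Skupper
workflow: null            # omit the GitHub workflow and CI badge
overview: |
  An overview
sites:
  west:                   # site fields are described in the next section
    title: West
    platform: kubernetes
    namespace: west
    env:
      KUBECONFIG: ~/.kube/config-west
steps:                    # steps and standard steps are described below
  - standard: platform/access_your_kubernetes_clusters
summary: |
  A summary
~~~

Running `./plano generate` against a file like this produces the README, and `./plano test` runs the same steps as a test, per the command list shown above.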
+ +A **site**: + +~~~ yaml +: + title: # The site title (optional) + platform: # "kubernetes" or "podman" (required) + namespace: # The Kubernetes namespace (required for Kubernetes sites) + env: # A map of named environment variables +~~~ + +Kubernetes sites must have a `KUBECONFIG` environment variable with a +path to a kubeconfig file. A tilde (~) in the kubeconfig file path is +replaced with a temporary working directory during testing. + +Podman sites must have a `SKUPPER_PLATFORM` variable with the value +`podman`. + +Example sites: + +~~~ yaml +sites: + east: + title: East + platform: kubernetes + namespace: east + env: + KUBECONFIG: ~/.kube/config-east + west: + title: West + platform: podman + env: + SKUPPER_PLATFORM: podman +~~~ + +A **step**: + +~~~ yaml +- title: # The step title (required) + preamble: # Text before the commands (optional) + commands: # Named groups of commands. See below. + postamble: # Text after the commands (optional) +~~~ + +An example step: + +~~~ yaml +steps: + - title: Expose the frontend service + preamble: | + We have established connectivity between the two namespaces and + made the backend in `east` available to the frontend in `west`. + Before we can test the application, we need external access to + the frontend. + + Use `kubectl expose` with `--type LoadBalancer` to open network + access to the frontend service. Use `kubectl get services` to + check for the service and its external IP address. + commands: + east: + west: +~~~ + +The step commands are separated into named groups corresponding to the +sites. Each named group contains a list of command entries. Each +command entry has a `run` field containing a shell command and other +fields for awaiting completion or providing sample output. + +You can also use a named step from the library of [standard +steps](#standard-steps): + +~~~ yaml +- standard: kubernetes/access_your_kubernetes_clusters +~~~ + +A **command**: + +~~~ yaml +- run: # A shell command (required) + apply: # Use this command only for "readme" or "test" (default is both) + output: # Sample output to include in the README (optional) + expect_failure: # If true, check that the command fails and keep going (default false) +~~~ + +Only the `run` and `output` fields are used in the README content. +The `output` field is used as sample output only, not for any kind of +testing. + +The `apply` field is useful when you want the readme instructions to +be different from the test procedure, or you simply want to omit +something. + +There are also some special "await" commands that you can use to pause +for a condition you require before going to the next step. They are +used only for testing and do not impact the README. 
+ +~~~ yaml +- await_resource: # A resource for which to await readiness (optional) + # Example: await_resource: deployment/frontend +- await_ingress: # A service for which to await an external hostname or IP (optional) + # Example: await_ingress: service/frontend +- await_http_ok: # A service and URL template for which to await an HTTP OK response (optional) + # Example: await_http_ok: [service/frontend, "http://{}:8080/api/hello"] +~~~ + +Example commands: + +~~~ yaml +commands: + east: + - run: skupper expose deployment/backend --port 8080 + output: | + deployment backend exposed as backend + west: + - await_resource: service/backend + - run: kubectl get service/backend + output: | + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + backend ClusterIP 10.102.112.121 8080/TCP 30s +~~~ + +## Standard steps + +Skewer includes a library of standard steps with descriptive text and +commands that we use a lot for our examples. + +The standard steps are defined in +[python/skewer/standardsteps.yaml](python/skewer/standardsteps.yaml). +They fall in three groups. + +Steps for setting up platforms: + +~~~ +platform/access_your_kubernetes_clusters +platform/access_your_kubernetes_cluster +platform/set_up_your_podman_environments +platform/set_up_your_podman_environment +platform/install_skupper_on_your_kubernetes_clusters +platform/install_skupper_on_your_kubernetes_cluster +platform/install_skupper_in_your_podman_environments +platform/install_skupper_in_your_podman_environment +~~~ + +Steps for primary Skupper operations: + +~~~ +skupper/create_your_sites/kubernetes_cli +skupper/create_your_sites/podman_cli +skupper/link_your_sites/kubernetes_cli +skupper/link_your_sites/podman_cli +skupper/cleaning_up/kubernetes_cli +skupper/cleaning_up/podman_cli +~~~ + + + + + + + + +Steps specific to the Hello World application: + +~~~ +hello_world/deploy_the_frontend_and_backend/kubernetes_cli +hello_world/expose_the_backend_service/kubernetes_cli +hello_world/access_the_frontend_service/kubernetes_cli +hello_world/cleaning_up/kubernetes_cli +~~~ + + + + + + +Some of the steps have a suffix indicating their target platform and +interface: `kubernetes_cli`, `kubernetes_yaml`, `podman_cli`, and +`podman_yaml`. + +**Note:** The `link_your_sites` and `cleaning_up` steps are less +generic than some of the other steps. For example, `cleaning_up` +doesn't delete any application workoads. Check that the text and +commands these steps produce are doing what you need for your example. +If not, you need to provide a custom step. + +You can create custom steps based on the standard steps by overriding +the `title`, `preamble`, `commands`, or `postamble` fields. + +~~~ yaml +- standard: skupper/cleaning_up/kubernetes_cli + commands: + east: + - run: skupper delete + - run: kubectl delete deployment/database + west: + - run: skupper delete +~~~ + +For string fields such as `preamble` and `postamble`, you can include +the standard text inside your custom text by using the `@default@` +placeholder: + +~~~ yaml +- standard: skupper/cleaning_up/kubernetes_cli + preamble: | + @default@ + + Note: You may also want to flirp your krupke. 
+~~~ + +A typical mix of standard and custom steps for a Kubernetes-based +example might look like this: + +~~~ yaml +steps: + - standard: platform/set_up_your_kubernetes_clusters + - + - standard: platform/install_the_skupper_command_line_tool + - standard: platform/install_skupper_on_your_kubernetes_clusters + - standard: skupper/create_your_sites/kubernetes_cli + - standard: skupper/link_your_sites/kubernetes_cli + - + - + - standard: skupper/cleaning_up/kubernetes_cli +~~~ + +## Demo mode + +Skewer has a mode where it executes all the steps, but before cleaning +up and exiting, it pauses so you can inspect things. + +It is enabled by setting the environment variable `SKEWER_DEMO` to any +value when you call `./plano run` or one of its variants. You can +also use `./plano demo`, which sets the variable for you. + +## Troubleshooting + +### Subnet is already used + +Error: + +~~~ console +plano: notice: Starting Minikube +plano: notice: Running command 'minikube start -p skewer --auto-update-drivers false' +* Creating podman container (CPUs=2, Memory=16000MB) ...- E0229 05:44:29.821273 12224 network_create.go:113] error while trying to create podman network skewer 192.168.49.0/24: create podman network skewer 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: sudo -n podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skewer skewer: exit status 125 + +Error: subnet 192.168.49.0/24 is already used on the host or by another config +~~~ + +Remove the existing Podman network. Note that it might belong to +another user on the host. + +~~~ shell +sudo podman network rm minikube +~~~ diff --git a/external/skewer/config/.github/workflows/main.yaml b/external/skewer/config/.github/workflows/main.yaml new file mode 100644 index 0000000..b713210 --- /dev/null +++ b/external/skewer/config/.github/workflows/main.yaml @@ -0,0 +1,43 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +name: main +on: + push: + pull_request: + schedule: + - cron: "0 0 * * 0" +jobs: + test: + strategy: + fail-fast: false + matrix: + skupper-version: [2.0.0-preview-2] + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-python@v5 + with: + python-version: "3.x" + - uses: medyagh/setup-minikube@latest + - run: curl https://skupper.io/install.sh | bash -s -- --version ${{matrix.skupper-version}} + - run: echo "$HOME/.local/bin" >> "$GITHUB_PATH" + - run: ./plano test + env: + PLANO_COLOR: 1 diff --git a/external/skewer/config/.gitignore b/external/skewer/config/.gitignore new file mode 100644 index 0000000..500983c --- /dev/null +++ b/external/skewer/config/.gitignore @@ -0,0 +1,2 @@ +/README.html +__pycache__/ diff --git a/external/skewer/config/.plano.py b/external/skewer/config/.plano.py new file mode 100644 index 0000000..4609d49 --- /dev/null +++ b/external/skewer/config/.plano.py @@ -0,0 +1,20 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +from skewer.planocommands import * diff --git a/external/skewer/example/.gitignore b/external/skewer/example/.gitignore new file mode 100644 index 0000000..500983c --- /dev/null +++ b/external/skewer/example/.gitignore @@ -0,0 +1,2 @@ +/README.html +__pycache__/ diff --git a/external/skewer/example/.plano.py b/external/skewer/example/.plano.py new file mode 100644 index 0000000..4609d49 --- /dev/null +++ b/external/skewer/example/.plano.py @@ -0,0 +1,20 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +from skewer.planocommands import * diff --git a/external/skewer/example/README.md b/external/skewer/example/README.md new file mode 100644 index 0000000..b3cabcd --- /dev/null +++ b/external/skewer/example/README.md @@ -0,0 +1,390 @@ + + +# Skupper Hello World + +[![main](https://github.com/skupperproject/skewer/actions/workflows/main.yaml/badge.svg)](https://github.com/skupperproject/skewer/actions/workflows/main.yaml) + +#### A minimal HTTP application deployed across Kubernetes clusters using Skupper + +This example is part of a [suite of examples][examples] showing the +different ways you can use [Skupper][website] to connect services +across cloud providers, data centers, and edge sites. + +[website]: https://skupper.io/ +[examples]: https://skupper.io/examples/index.html + +#### Contents + +* [Overview](#overview) +* [Prerequisites](#prerequisites) +* [Step 1: Access your Kubernetes clusters](#step-1-access-your-kubernetes-clusters) +* [Step 2: Install Skupper on your Kubernetes clusters](#step-2-install-skupper-on-your-kubernetes-clusters) +* [Step 3: Deploy the frontend and backend](#step-3-deploy-the-frontend-and-backend) +* [Step 4: Create your sites](#step-4-create-your-sites) +* [Step 5: Link your sites](#step-5-link-your-sites) +* [Step 6: Fail on demand](#step-6-fail-on-demand) +* [Step 7: Fail as expected](#step-7-fail-as-expected) +* [Step 8: Expose the backend service](#step-8-expose-the-backend-service) +* [Step 9: Access the frontend service](#step-9-access-the-frontend-service) +* [Cleaning up](#cleaning-up) +* [Summary](#summary) +* [Next steps](#next-steps) +* [About this example](#about-this-example) + +## Overview + +An overview + +## Prerequisites + +* Access to at least one Kubernetes cluster, from [any provider you + choose][kube-providers]. + +* The `kubectl` command-line tool, version 1.15 or later + ([installation guide][install-kubectl]). + +* The `skupper` command-line tool, version 2.0 or later. On Linux + or Mac, you can use the install script (inspect it + [here][cli-install-script]) to download and extract the command: + + ~~~ shell + curl https://skupper.io/install.sh | sh -s -- --version 2.0.0-preview-2 + ~~~ + + See [Installing the Skupper CLI][cli-install-docs] for more + information. + +[kube-providers]: https://skupper.io/start/kubernetes.html +[install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ +[cli-install-script]: https://github.com/skupperproject/skupper-website/blob/main/input/install.sh +[cli-install-docs]: https://skupper.io/install/ + +## Step 1: Access your Kubernetes clusters + +Skupper is designed for use with multiple Kubernetes clusters. +The `skupper` and `kubectl` commands use your +[kubeconfig][kubeconfig] and current context to select the cluster +and namespace where they operate. + +[kubeconfig]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/ + +This example uses multiple cluster contexts at once. The +`KUBECONFIG` environment variable tells `skupper` and `kubectl` +which kubeconfig to use. + +For each cluster, open a new terminal window. In each terminal, +set the `KUBECONFIG` environment variable to a different path and +log in to your cluster. + +_**West:**_ + +~~~ shell +export KUBECONFIG=~/.kube/config-west + +~~~ + +_**East:**_ + +~~~ shell +export KUBECONFIG=~/.kube/config-east + +~~~ + +**Note:** The login procedure varies by provider. 
See the +documentation for yours: + +* [Minikube](https://skupper.io/start/minikube.html#cluster-access) +* [Amazon Elastic Kubernetes Service (EKS)](https://skupper.io/start/eks.html#cluster-access) +* [Azure Kubernetes Service (AKS)](https://skupper.io/start/aks.html#cluster-access) +* [Google Kubernetes Engine (GKE)](https://skupper.io/start/gke.html#cluster-access) +* [IBM Kubernetes Service](https://skupper.io/start/ibmks.html#cluster-access) +* [OpenShift](https://skupper.io/start/openshift.html#cluster-access) + +## Step 2: Install Skupper on your Kubernetes clusters + +Using Skupper on Kubernetes requires the installation of the +Skupper custom resource definitions (CRDs) and the Skupper +controller. + +For each cluster, use `kubectl apply` with the Skupper +installation YAML to install the CRDs and controller. + +_**West:**_ + +~~~ shell +kubectl apply -f https://skupper.io/v2/install.yaml +~~~ + +_**East:**_ + +~~~ shell +kubectl apply -f https://skupper.io/v2/install.yaml +~~~ + +## Step 3: Deploy the frontend and backend + +This example runs the frontend and the backend in separate +Kubernetes namespaces, on different clusters. + +For each cluster, use `kubectl create namespace` and `kubectl +config set-context` to create the namespace you wish to use and +set the namespace on your current context. + +Then, use `kubectl create deployment` to deploy the frontend in +West and the backend in East. + +_**West:**_ + +~~~ shell +kubectl create namespace west +kubectl config set-context --current --namespace west +kubectl create deployment frontend --image quay.io/skupper/hello-world-frontend +~~~ + +_**East:**_ + +~~~ shell +kubectl create namespace east +kubectl config set-context --current --namespace east +kubectl create deployment backend --image quay.io/skupper/hello-world-backend --replicas 3 +~~~ + +## Step 4: Create your sites + +A Skupper _site_ is a location where your application workloads +are running. Sites are linked together to form a network for your +application. + +For each namespace, use `skupper site create` with a site name of +your choice. This creates the site resource and deploys the +Skupper router to the namespace. + +**Note:** If you are using Minikube, you need to [start minikube +tunnel][minikube-tunnel] before you run `skupper site create`. + + + +[minikube-tunnel]: https://skupper.io/start/minikube.html#running-minikube-tunnel + +_**West:**_ + +~~~ shell +skupper site create west --enable-link-access --timeout 2m +~~~ + +_Sample output:_ + +~~~ console +$ skupper site create west --enable-link-access --timeout 2m +Waiting for status... +Site "west" is configured. Check the status to see when it is ready +~~~ + +_**East:**_ + +~~~ shell +skupper site create east --timeout 2m +~~~ + +_Sample output:_ + +~~~ console +$ skupper site create east --timeout 2m +Waiting for status... +Site "east" is configured. Check the status to see when it is ready +~~~ + +You can use `skupper site status` at any time to check the status +of your site. + +## Step 5: Link your sites + +A Skupper _link_ is a channel for communication between two sites. +Links serve as a transport for application connections and +requests. + +Creating a link requires the use of two Skupper commands in +conjunction: `skupper token issue` and `skupper token redeem`. +The `skupper token issue` command generates a secret token that +can be transferred to a remote site and redeemed for a link to the +issuing site. The `skupper token redeem` command uses the token +to create the link. 
+ +**Note:** The link token is truly a *secret*. Anyone who has the +token can link to your site. Make sure that only those you trust +have access to it. + +First, use `skupper token issue` in West to generate the token. +Then, use `skupper token redeem` in East to link the sites. + +_**West:**_ + +~~~ shell +skupper token issue ~/secret.token +~~~ + +_Sample output:_ + +~~~ console +$ skupper token issue ~/secret.token +Waiting for token status ... + +Grant "west-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" is ready +Token file /run/user/1000/skewer/secret.token created + +Transfer this file to a remote site. At the remote site, +create a link to this site using the "skupper token redeem" command: + + skupper token redeem + +The token expires after 1 use(s) or after 15m0s. +~~~ + +_**East:**_ + +~~~ shell +skupper token redeem ~/secret.token +~~~ + +_Sample output:_ + +~~~ console +$ skupper token redeem ~/secret.token +Waiting for token status ... +Token "west-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" has been redeemed +You can now safely delete /run/user/1000/skewer/secret.token +~~~ + +If your terminal sessions are on different machines, you may need +to use `scp` or a similar tool to transfer the token securely. By +default, tokens expire after a single use or 15 minutes after +being issued. + +## Step 6: Fail on demand + +_**West:**_ + +~~~ shell +if [ -n "${SKEWER_FAIL}" ]; then expr 1 / 0; fi +~~~ + +## Step 7: Fail as expected + +_**West:**_ + +~~~ shell +expr 1 / 0 +~~~ + +## Step 8: Expose the backend service + +We now have our sites linked to form a Skupper network, but no +services are exposed on it. + +Skupper uses _listeners_ and _connectors_ to expose services +across sites inside a Skupper network. A listener is a local +endpoint for client connections, configured with a routing key. A +connector exists in a remote site and binds a routing key to a +particular set of servers. Skupper routers forward client +connections from local listeners to remote connectors with +matching routing keys. + +In West, use the `skupper listener create` command to create a +listener for the backend. In East, use the `skupper connector +create` command to create a matching connector. + +_**West:**_ + +~~~ shell +skupper listener create backend 8080 +~~~ + +_Sample output:_ + +~~~ console +$ skupper listener create backend 8080 +Waiting for create to complete... +Listener "backend" is ready +~~~ + +_**East:**_ + +~~~ shell +skupper connector create backend 8080 +~~~ + +_Sample output:_ + +~~~ console +$ skupper connector create backend 8080 +Waiting for create to complete... +Connector "backend" is ready +~~~ + +The commands shown above use the name argument, `backend`, to also +set the default routing key and pod selector. You can use the +`--routing-key` and `--selector` options to set specific values. + + + +## Step 9: Access the frontend service + +In order to use and test the application, we need external access +to the frontend. + +Use `kubectl port-forward` to make the frontend available at +`localhost:8080`. + +_**West:**_ + +~~~ shell +kubectl port-forward deployment/frontend 8080:8080 +~~~ + +You can now access the web interface by navigating to +[http://localhost:8080](http://localhost:8080) in your browser. + +## Cleaning up + +To remove Skupper and the other resources from this exercise, use +the following commands: + +And more! 
+ +_**West:**_ + +~~~ shell +skupper site delete --all +kubectl delete deployment/frontend +~~~ + +_**East:**_ + +~~~ shell +skupper site delete --all +kubectl delete deployment/backend +~~~ + +## Summary + +More summary + +## Next steps + +Check out the other [examples][examples] on the Skupper website. + +More steps + +## About this example + +This example was produced using [Skewer][skewer], a library for +documenting and testing Skupper examples. + +[skewer]: https://github.com/skupperproject/skewer + +Skewer provides utility functions for generating the README and +running the example steps. Use the `./plano` command in the project +root to see what is available. + +To quickly stand up the example using Minikube, try the `./plano demo` +command. diff --git a/external/skewer/example/plano b/external/skewer/example/plano new file mode 100755 index 0000000..476427d --- /dev/null +++ b/external/skewer/example/plano @@ -0,0 +1,28 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +import sys + +sys.path.insert(0, "python") + +from plano import PlanoCommand + +if __name__ == "__main__": + PlanoCommand().main() diff --git a/external/skewer/example/python/plano b/external/skewer/example/python/plano new file mode 120000 index 0000000..2366248 --- /dev/null +++ b/external/skewer/example/python/plano @@ -0,0 +1 @@ +../../python/plano \ No newline at end of file diff --git a/external/skewer/example/python/skewer b/external/skewer/example/python/skewer new file mode 120000 index 0000000..d33ad4b --- /dev/null +++ b/external/skewer/example/python/skewer @@ -0,0 +1 @@ +../../python/skewer \ No newline at end of file diff --git a/external/skewer/example/skewer.yaml b/external/skewer/example/skewer.yaml new file mode 100644 index 0000000..6cc396a --- /dev/null +++ b/external/skewer/example/skewer.yaml @@ -0,0 +1,47 @@ +title: Skupper Hello World +subtitle: A minimal HTTP application deployed across Kubernetes clusters using Skupper +overview: | + An overview +sites: + west: + title: West + platform: kubernetes + namespace: west + env: + KUBECONFIG: ~/.kube/config-west + east: + title: East + platform: kubernetes + namespace: east + env: + KUBECONFIG: ~/.kube/config-east +steps: + - standard: platform/access_your_kubernetes_clusters + - standard: platform/install_skupper_on_your_kubernetes_clusters + - standard: hello_world/deploy_the_frontend_and_backend/kubernetes_cli + - standard: skupper/create_your_sites/kubernetes_cli + - standard: skupper/link_your_sites/kubernetes_cli + - title: Fail on demand + commands: + west: + - run: "if [ -n \"${SKEWER_FAIL}\" ]; then expr 1 / 0; fi" + - title: Fail as expected + commands: + west: + - run: "expr 1 / 0" + expect_failure: true + - standard: hello_world/expose_the_backend_service/kubernetes_cli + 
- standard: hello_world/access_the_frontend_service/kubernetes_cli + - standard: hello_world/cleaning_up/kubernetes_cli + preamble: | + @default@ + + And more! +summary: | + @default@ + + More summary +next_steps: | + @default@ + + More steps diff --git a/external/skewer/external/plano/.github/workflows/main.yaml b/external/skewer/external/plano/.github/workflows/main.yaml new file mode 100644 index 0000000..a416852 --- /dev/null +++ b/external/skewer/external/plano/.github/workflows/main.yaml @@ -0,0 +1,48 @@ +name: main +on: + push: + pull_request: + schedule: + - cron: "0 0 * * 0" +jobs: + main: + strategy: + fail-fast: false + matrix: + os: [macos-latest, ubuntu-latest, windows-latest] + version: [3.9, 3.x] + runs-on: ${{matrix.os}} + steps: + - uses: actions/checkout@v3 + - uses: actions/setup-python@v4 + with: + python-version: ${{matrix.version}} + - run: pip install build wheel + - run: python -m build + - run: pip install dist/ssorj_plano-1.0.0-py3-none-any.whl + - run: plano-self-test + cygwin: + runs-on: windows-latest + steps: + - run: git config --global core.autocrlf input + - uses: actions/checkout@v3 + - uses: cygwin/cygwin-install-action@master + with: + packages: python3 + - run: pip install build wheel + shell: C:\cygwin\bin\bash.exe -o igncr '{0}' + - run: make install + shell: C:\cygwin\bin\bash.exe -o igncr '{0}' + - run: echo "C:\Users\runneradmin\AppData\Roaming\Python\Python39\Scripts" >> "$GITHUB_PATH" + shell: C:\cygwin\bin\bash.exe -o igncr '{0}' + - run: plano-self-test + shell: C:\cygwin\bin\bash.exe -o igncr '{0}' + fedora: + runs-on: ubuntu-latest + container: fedora:latest + steps: + - uses: actions/checkout@v3 + - run: dnf -y install make pip python python-build python-wheel + - run: make install + - run: echo "$HOME/.local/bin" >> "$GITHUB_PATH" + - run: plano-self-test diff --git a/external/skewer/external/plano/.gitignore b/external/skewer/external/plano/.gitignore new file mode 100644 index 0000000..3af00c3 --- /dev/null +++ b/external/skewer/external/plano/.gitignore @@ -0,0 +1,6 @@ +__pycache__/ +*.egg-info/ +/build +/dist +/.coverage +/htmlcov diff --git a/external/skewer/external/plano/LICENSE.txt b/external/skewer/external/plano/LICENSE.txt new file mode 100644 index 0000000..e06d208 --- /dev/null +++ b/external/skewer/external/plano/LICENSE.txt @@ -0,0 +1,202 @@ +Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + diff --git a/external/skewer/external/plano/MANIFEST.in b/external/skewer/external/plano/MANIFEST.in new file mode 100644 index 0000000..778ca32 --- /dev/null +++ b/external/skewer/external/plano/MANIFEST.in @@ -0,0 +1 @@ +include src/plano/_testproject/* diff --git a/external/skewer/external/plano/Makefile b/external/skewer/external/plano/Makefile new file mode 100644 index 0000000..0ffef0c --- /dev/null +++ b/external/skewer/external/plano/Makefile @@ -0,0 +1,70 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +.NOTPARALLEL: + +# A workaround for an install-with-prefix problem in Fedora 36 +# +# https://docs.fedoraproject.org/en-US/fedora/latest/release-notes/developers/Development_Python/#_pipsetup_py_installation_with_prefix +# https://bugzilla.redhat.com/show_bug.cgi?id=2026979 + +export RPM_BUILD_ROOT := fake + +.PHONY: build +build: + python -m build + +.PHONY: test +test: clean build + python -m venv build/venv + . build/venv/bin/activate && pip install --force-reinstall dist/ssorj_plano-*-py3-none-any.whl + . build/venv/bin/activate && plano-self-test + +.PHONY: qtest +qtest: + PYTHONPATH=src python -m plano._tests + +.PHONY: install +install: build + pip install --user --force-reinstall dist/ssorj_plano-*-py3-none-any.whl + +.PHONY: clean +clean: + rm -rf build dist htmlcov .coverage src/plano/__pycache__ src/plano.egg-info + +.PHONY: docs +docs: + mkdir -p build + sphinx-build -M html docs build/docs + +# XXX Watch out: The 3.11 in this is environment dependent +.PHONY: coverage +coverage: build + python -m venv build/venv + . build/venv/bin/activate && pip install --force-reinstall dist/ssorj_plano-*-py3-none-any.whl + . build/venv/bin/activate && PYTHONPATH=build/venv/lib/python3.12/site-packages coverage run \ + --include build/venv/lib/python\*/site-packages/plano/\*,build/venv/bin/\* \ + build/venv/bin/plano-self-test + coverage report + coverage html + @echo "OUTPUT: file:${CURDIR}/htmlcov/index.html" + +.PHONY: upload +upload: build + twine upload --repository testpypi dist/* diff --git a/external/skewer/external/plano/README.md b/external/skewer/external/plano/README.md new file mode 100644 index 0000000..e44317d --- /dev/null +++ b/external/skewer/external/plano/README.md @@ -0,0 +1,155 @@ +# Plano + +[![main](https://github.com/ssorj/plano/workflows/main/badge.svg)](https://github.com/ssorj/plano/actions?query=workflow%3Amain) + +Python functions for writing shell-style system scripts. + +## Installation + +Install the dependencies if you need to: + +~~~ +sudo dnf -y install python-build python-pip python-pyyaml +~~~ + +Install plano globally for the current user: + +~~~ +make install +~~~ + +## A self-contained command with subcommands + +`~/.local/bin/widget`: +~~~ python +#!/usr/bin/python + +import sys +from plano import * + +@command +def greeting(message="Howdy"): + print(message) + +if __name__ == "__main__": + PlanoCommand(sys.modules[__name__]).main() +~~~ + +~~~ shell +$ widget greeting --message Hello +--> greeting +Hello +<-- greeting +OK (0s) +~~~ + +## A self-contained test command + +`~/.local/bin/widget-test`: +~~~ python +import sys +from plano import * + +@test +def check(): + run("widget greeting --message Yo") + +if __name__ == "__main__": + PlanoTestCommand(sys.modules[__name__]).main() +~~~ + +~~~ shell +$ widget-test +=== Configuration === +Modules: __main__ +Test timeout: 5m +Fail fast: False + +=== Module '__main__' === +check ........................................................... 
PASSED 0.0s + +=== Summary === +Total: 1 +Skipped: 0 +Failed: 0 + +=== RESULT === +All tests passed +~~~ + +## Programmatic test definition + +~~~ python +from plano import * + +def test_widget(message): + run(f"widget greeting --message {message}") + +for message in "hi", "lo", "in between": + add_test(f"message-{message}", test_widget, message) +~~~ + +## Things to know + +* The plano command accepts command sequences in the form "this,that" + (no spaces). The command arguments are applied to the last command + only. + +## Dependencies + +PyYAML: + +~~~ +pip install pyyaml +~~~ + +## Setting up Plano as an embedded dependency + +Change directory to the root of your project: + +~~~ console +cd / +~~~ + +Add the Plano code as a subdirectory: + +~~~ shell +mkdir -p external +curl -sfL https://github.com/ssorj/plano/archive/main.tar.gz | tar -C external -xz +mv external/plano-main external/plano +~~~ + +Symlink the Plano library into your `python` directory: + +~~~ shell +mkdir -p python +ln -s ../external/plano/src/plano python/plano +~~~ + +Copy the `plano` command into the root of your project: + +~~~ shell +cp external/plano/bin/plano plano +~~~ + +Optionally, add a command to `.plano.py` to update the embedded Plano: + +~~~ python +from plano.github import * + +@command +def update_plano(): + """ + Update the embedded Plano repo + """ + update_external_from_github("external/plano", "ssorj", "plano") +~~~ + +## Extending an existing command + +~~~ python +@command(parent=blammo) +def blammo(*args, **kwargs): + parent(*args, **kwargs) + # Do child stuff +~~~ diff --git a/external/skewer/external/plano/bin/plano b/external/skewer/external/plano/bin/plano new file mode 100755 index 0000000..476427d --- /dev/null +++ b/external/skewer/external/plano/bin/plano @@ -0,0 +1,28 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +import sys + +sys.path.insert(0, "python") + +from plano import PlanoCommand + +if __name__ == "__main__": + PlanoCommand().main() diff --git a/external/skewer/external/plano/bin/plano-test b/external/skewer/external/plano/bin/plano-test new file mode 100755 index 0000000..f92ad34 --- /dev/null +++ b/external/skewer/external/plano/bin/plano-test @@ -0,0 +1,28 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +import sys + +sys.path.insert(0, "python") + +from plano import PlanoTestCommand + +if __name__ == "__main__": + PlanoTestCommand().main() diff --git a/external/skewer/external/plano/docs/conf.py b/external/skewer/external/plano/docs/conf.py new file mode 100644 index 0000000..3277b1e --- /dev/null +++ b/external/skewer/external/plano/docs/conf.py @@ -0,0 +1,34 @@ +# import os +# import sys + +# sys.path.insert(0, os.path.abspath("../python")) + +extensions = [ + "sphinx.ext.autodoc", +] + +# autodoc_member_order = "bysource" +# autodoc_default_flags = ["members", "undoc-members", "inherited-members"] + +autodoc_default_options = { + "members": True, + "member-order": "bysource", + "undoc-members": True, + "imported-members": True, + "exclude-members": "PlanoProcess", +} + +master_doc = "index" +project = u"Plano" +copyright = u"1975" +author = u"Justin Ross" + +version = u"0.1.0" +release = u"" + +pygments_style = "sphinx" +html_theme = "nature" + +html_theme_options = { + "nosidebar": True, +} diff --git a/external/skewer/external/plano/docs/index.rst b/external/skewer/external/plano/docs/index.rst new file mode 100644 index 0000000..7441b03 --- /dev/null +++ b/external/skewer/external/plano/docs/index.rst @@ -0,0 +1,4 @@ +Plano +===== + +.. automodule:: plano diff --git a/external/skewer/external/plano/pyproject.toml b/external/skewer/external/plano/pyproject.toml new file mode 100644 index 0000000..a682141 --- /dev/null +++ b/external/skewer/external/plano/pyproject.toml @@ -0,0 +1,23 @@ +[build-system] +requires = [ "setuptools", "setuptools-scm" ] +build-backend = "setuptools.build_meta" + +[project] +name = "ssorj-plano" +version = "1.0.0" +authors = [ { name = "Justin Ross", email = "jross@apache.org" } ] +description = "Python functions for writing shell-style system scripts" +license = { file = "LICENSE.txt" } +readme = "README.md" +classifiers = [ "License :: OSI Approved :: Apache Software License" ] +requires-python = ">=3.7" +dependencies = [ "PyYAML" ] + +[project.scripts] +plano = "plano.command:_main" +plano-test = "plano.test:_main" +plano-self-test = "plano._tests:main" + +[project.urls] +"Homepage" = "https://github.com/ssorj/plano" +"Bug Tracker" = "https://github.com/ssorj/plano/issues" diff --git a/external/skewer/external/plano/src/plano/__init__.py b/external/skewer/external/plano/src/plano/__init__.py new file mode 100644 index 0000000..3218323 --- /dev/null +++ b/external/skewer/external/plano/src/plano/__init__.py @@ -0,0 +1,24 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +from .main import * +from .main import _default_sigterm_handler + +from .command import * +from .test import * diff --git a/external/skewer/external/plano/src/plano/_testproject/.plano.py b/external/skewer/external/plano/src/plano/_testproject/.plano.py new file mode 100644 index 0000000..8cda2e7 --- /dev/null +++ b/external/skewer/external/plano/src/plano/_testproject/.plano.py @@ -0,0 +1,112 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +from plano import * + +@command +def base_command(alpha, beta, omega="x"): + """ + Base command help + """ + + print("base", alpha, beta, omega) + +@command(name="extended-command", parent=base_command) +def extended_command(alpha, beta, omega="y"): + print("extended", alpha, omega) + parent(alpha, beta, omega) + +@command(parameters=[CommandParameter("message_", help="The message to print", display_name="message"), + CommandParameter("count", help="Print the message COUNT times"), + CommandParameter("extra", default=1, short_option="e")]) +def echo(message_, count=1, extra=None, trouble=False, verbose=False): + """ + Print a message to the console + """ + + print("Echoing (message={}, count={})".format(message_, count)) + + if trouble: + raise Exception("Trouble") + + for i in range(count): + print(message_) + +@command +def echoecho(message): + echo(message) + +@command +def haberdash(first, *middle, last="bowler"): + """ + Habberdash command help + """ + + data = [first, *middle, last] + write_json("haberdash.json", data) + +@command(parameters=[CommandParameter("optional", positional=True)]) +def balderdash(required, optional="malarkey", other="rubbish", **extra_kwargs): + """ + Balderdash command help + """ + + data = [required, optional, other] + write_json("balderdash.json", data) + +@command +def splasher(): + write_json("splasher.json", [1]) + +@command +def dasher(alpha, beta=123): + pass + +@command(passthrough=True) +def dancer(gamma, omega="abc", passthrough_args=[]): + write_json("dancer.json", passthrough_args) + +# Vixen's parent calls prancer. We are testing to ensure the extended +# prancer (below) is executed. 
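+# (Within a command defined with @command(parent=...), calling parent()
+# invokes the command being extended, as extended_command does above.)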
+ +from plano._tests import prancer, vixen + +@command(parent=prancer) +def prancer(): + parent() + + notice("Extended prancer") + + write_json("prancer.json", True) + +@command(parent=vixen) +def vixen(): + parent() + +@command +def no_parent(): + parent() + +@command(parameters=[CommandParameter("spinach")]) +def feta(*args, **kwargs): + write_json("feta.json", kwargs["spinach"]) + +@command(hidden=True) +def invisible(something="nothing"): + write_json("invisible.json", something) diff --git a/external/skewer/external/plano/src/plano/_testproject/src/chucker/__init__.py b/external/skewer/external/plano/src/plano/_testproject/src/chucker/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/external/skewer/external/plano/src/plano/_testproject/src/chucker/moretests.py b/external/skewer/external/plano/src/plano/_testproject/src/chucker/moretests.py new file mode 100644 index 0000000..2607880 --- /dev/null +++ b/external/skewer/external/plano/src/plano/_testproject/src/chucker/moretests.py @@ -0,0 +1,24 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +from plano import * + +@test +def hello_again(): + print("Hello again") diff --git a/external/skewer/external/plano/src/plano/_testproject/src/chucker/tests.py b/external/skewer/external/plano/src/plano/_testproject/src/chucker/tests.py new file mode 100644 index 0000000..4e0cec1 --- /dev/null +++ b/external/skewer/external/plano/src/plano/_testproject/src/chucker/tests.py @@ -0,0 +1,70 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +from plano import * + +@test +def hello(): + print("Hello") + +@test +async def hello_async(): + print("Hello") + +@test +def goodbye(): + print("Goodbye") + +@test(disabled=True) +def badbye(): + print("Badbye") + assert False + +@test(disabled=True) +def skipped(): + skip_test("Skipped") + assert False + +@test(disabled=True) +def keyboard_interrupt(): + raise KeyboardInterrupt() + +@test(disabled=True, timeout=0.05) +def timeout(): + sleep(10, quiet=True) + assert False + +@test(disabled=True) +def process_error(): + run("expr 1 / 0") + +@test(disabled=True) +def system_exit_(): + exit(1) + +def test_widget(message): + print(message) + +for message in "hi", "lo", "in between": + add_test(f"message-{message}", test_widget, message) + +@test(disabled=True) +def badbye2(): + print("Badbye 2") + assert False diff --git a/external/skewer/external/plano/src/plano/_tests.py b/external/skewer/external/plano/src/plano/_tests.py new file mode 100644 index 0000000..ae71ad9 --- /dev/null +++ b/external/skewer/external/plano/src/plano/_tests.py @@ -0,0 +1,1370 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +import datetime as _datetime +import getpass as _getpass +import os as _os +import signal as _signal +import socket as _socket +import sys as _sys +import threading as _threading + +from .github import * + +try: + import http.server as _http +except ImportError: # pragma: nocover + import BaseHTTPServer as _http + +from .test import * + +test_project_dir = join(get_parent_dir(__file__), "_testproject") + +class test_project(working_dir): + def __enter__(self): + dir = super(test_project, self).__enter__() + copy(test_project_dir, ".", inside=False) + return dir + +TINY_INTERVAL = 0.05 + +@test +def archive_operations(): + with working_dir(): + make_dir("some-dir") + touch("some-dir/some-file") + + make_archive("some-dir") + assert is_file("some-dir.tar.gz"), list_dir() + + extract_archive("some-dir.tar.gz", output_dir="some-subdir") + assert is_dir("some-subdir/some-dir"), list_dir("some-subdir") + assert is_file("some-subdir/some-dir/some-file"), list_dir("some-subdir/some-dir") + + rename_archive("some-dir.tar.gz", "something-else") + assert is_file("something-else.tar.gz"), list_dir() + + extract_archive("something-else.tar.gz") + assert is_dir("something-else"), list_dir() + assert is_file("something-else/some-file"), list_dir("something-else") + +@test +def command_operations(): + class SomeCommand(BaseCommand): + def __init__(self): + super().__init__() + + self.parser = BaseArgumentParser() + self.parser.add_argument("--interrupt", action="store_true") + self.parser.add_argument("--explode", action="store_true") + self.parser.add_argument("--verbose", action="store_true") + self.parser.add_argument("--quiet", action="store_true") + + def parse_args(self, args): + return self.parser.parse_args(args) + + def init(self, args): + self.interrupt = args.interrupt + self.explode = args.explode + self.verbose = args.verbose + self.quiet = args.quiet + + def run(self): + if self.interrupt: + raise KeyboardInterrupt() + + if self.explode: + raise PlanoError("Exploded") + + if self.verbose: + print("Hello") + + SomeCommand().main([]) + SomeCommand().main(["--verbose"]) + SomeCommand().main(["--interrupt"]) + + with expect_system_exit(): + SomeCommand().main(["--verbose", "--explode"]) + +@test +def console_operations(): + eprint("Here's a story") + eprint("About a", "man named Brady") + + pprint(list_dir()) + pprint(PlanoProcess, 1, "abc", end="\n\n") + + flush() + + with console_color("red"): + print("ALERT") + + print(cformat("AMBER ALERT", color="yellow")) + print(cformat("NO ALERT")) + + cprint("CRITICAL ALERT", color="red", bright=True) + +@test +def dir_operations(): + with working_dir(): + test_dir = make_dir("some-dir") + test_file_1 = touch(join(test_dir, "some-file-1")) + test_file_2 = touch(join(test_dir, "some-file-2")) + + result = list_dir(test_dir) + assert join(test_dir, result[0]) == test_file_1, (join(test_dir, result[0]), test_file_1) + + result = list_dir(test_dir, "*-file-1") + assert result == ["some-file-1"], (result, ["some-file-1"]) + + result = list_dir(test_dir, exclude="*-file-1") + assert result == ["some-file-2"], (result, ["some-file-2"]) + + result = list_dir("some-dir", "*.not-there") + assert result == [], result + + with working_dir(): + result = list_dir() + assert result == [], result + + print_dir() + print_dir(test_dir) + print_dir(test_dir, "*.not-there") + + result = find(test_dir) + assert result == [test_file_1, test_file_2], (result, [test_file_1, test_file_2]) + + result = find(test_dir, include="*-file-1") + assert result == [test_file_1], 
(result, [test_file_1]) + + result = find(test_dir, exclude="*-file-1") + assert result == [test_file_2], (result, [test_file_2]) + + with working_dir(): + result = find() + assert result == [], result + + make_dir("subdir") + + result = find("./subdir") + assert result == [], result + + with working_dir(): + with working_dir("a-dir", quiet=True): + touch("a-file") + + curr_dir = get_current_dir() + prev_dir = change_dir("a-dir") + new_curr_dir = get_current_dir() + new_prev_dir = change_dir(curr_dir) + + assert curr_dir == prev_dir, (curr_dir, prev_dir) + assert new_curr_dir == new_prev_dir, (new_curr_dir, new_prev_dir) + +@test +def env_operations(): + result = join_path_var("a", "b", "c", "a") + assert result == _os.pathsep.join(("a", "b", "c")), result + + curr_dir = get_current_dir() + + with working_dir("."): + assert get_current_dir() == curr_dir, (get_current_dir(), curr_dir) + + result = get_home_dir() + assert result == _os.path.expanduser("~"), (result, _os.path.expanduser("~")) + + result = get_home_dir("alice") + assert result.endswith("alice"), result + + user = _getpass.getuser() + result = get_user() + assert result == user, (result, user) + + result = get_hostname() + assert result, result + + result = get_program_name() + assert result, result + + result = get_program_name("alpha beta") + assert result == "alpha", result + + result = get_program_name("X=Y alpha beta") + assert result == "alpha", result + + result = which("echo") + assert result, result + + with working_env(YES_I_AM_SET=1): + check_env("YES_I_AM_SET") + + with expect_error(): + check_env("NO_I_AM_NOT") + + with working_env(I_AM_SET_NOW=1, amend=False): + check_env("I_AM_SET_NOW") + assert "YES_I_AM_SET" not in ENV, ENV + + with working_env(SOME_VAR=1): + assert ENV["SOME_VAR"] == "1", ENV.get("SOME_VAR") + + with working_env(SOME_VAR=2): + assert ENV["SOME_VAR"] == "2", ENV.get("SOME_VAR") + + with expect_error(): + check_program("not-there") + + with expect_error(): + check_module("not_there") + + with expect_output(contains="ARGS:") as out: + with open(out, "w") as f: + print_env(file=f) + + print_stack() + +@test +def file_operations(): + with working_dir(): + alpha_dir = make_dir("alpha-dir") + alpha_file = touch(join(alpha_dir, "alpha-file")) + alpha_link = make_link(join(alpha_dir, "alpha-file-link"), "alpha-file") + alpha_broken_link = make_link(join(alpha_dir, "broken-link"), "no-such-file") + + beta_dir = make_dir("beta-dir") + beta_file = touch(join(beta_dir, "beta-file")) + beta_link = make_link(join(beta_dir, "beta-file-link"), "beta-file") + beta_broken_link = make_link(join(beta_dir, "broken-link"), join("..", alpha_dir, "no-such-file")) + beta_another_link = make_link(join(beta_dir, "broken-link"), join("..", alpha_dir, "alpha-file-link")) + + assert exists(beta_link) + assert exists(beta_file) + + with working_dir("beta-dir"): + assert is_file(read_link("beta-file-link")) + + copied_file = copy(alpha_file, beta_dir) + assert copied_file == join(beta_dir, "alpha-file"), copied_file + assert is_file(copied_file), list_dir(beta_dir) + + copied_link = copy(beta_link, join(beta_dir, "beta-file-link-copy")) + assert copied_link == join(beta_dir, "beta-file-link-copy"), copied_link + assert is_link(copied_link), list_dir(beta_dir) + + copied_dir = copy(alpha_dir, beta_dir) + assert copied_dir == join(beta_dir, "alpha-dir"), copied_dir + assert is_link(join(copied_dir, "alpha-file-link")) + + moved_file = move(beta_file, alpha_dir) + assert moved_file == join(alpha_dir, "beta-file"), moved_file + 
assert is_file(moved_file), list_dir(alpha_dir) + assert not exists(beta_file), list_dir(beta_dir) + + moved_dir = move(beta_dir, alpha_dir) + assert moved_dir == join(alpha_dir, "beta-dir"), moved_dir + assert is_dir(moved_dir), list_dir(alpha_dir) + assert not exists(beta_dir) + + gamma_dir = make_dir("gamma-dir") + gamma_file = touch(join(gamma_dir, "gamma-file")) + + delta_dir = make_dir("delta-dir") + delta_file = touch(join(delta_dir, "delta-file")) + + copy(gamma_dir, delta_dir, inside=False) + assert is_file(join("delta-dir", "gamma-file")) + + move(gamma_dir, delta_dir, inside=False) + assert is_file(join("delta-dir", "gamma-file")) + assert not exists(gamma_dir) + + epsilon_dir = make_dir("epsilon-dir") + epsilon_file_1 = touch(join(epsilon_dir, "epsilon-file-1")) + epsilon_file_2 = touch(join(epsilon_dir, "epsilon-file-2")) + epsilon_file_3 = touch(join(epsilon_dir, "epsilon-file-3")) + epsilon_file_4 = touch(join(epsilon_dir, "epsilon-file-4")) + + remove("not-there") + + remove(epsilon_file_2) + assert not exists(epsilon_file_2) + + remove(epsilon_dir) + assert not exists(epsilon_file_1) + assert not exists(epsilon_dir) + + remove([epsilon_file_3, epsilon_file_4]) + assert not exists(epsilon_file_3) + assert not exists(epsilon_file_4) + + file = write("xes", "x" * 10) + result = get_file_size(file) + assert result == 10, result + + zeta_dir = make_dir("zeta-dir") + zeta_file = touch(join(zeta_dir, "zeta-file")) + + eta_dir = make_dir("eta-dir") + eta_file = touch(join(eta_dir, "eta-file")) + + replace(zeta_dir, eta_dir) + assert not exists(zeta_file) + assert exists(zeta_dir) + assert is_file(join(zeta_dir, "eta-file")) + + with expect_exception(): + replace(zeta_dir, "not-there") + + assert exists(zeta_dir) + assert is_file(join(zeta_dir, "eta-file")) + + theta_file = write("theta-file", "theta") + iota_file = write("iota-file", "iota") + + replace(theta_file, iota_file) + assert not exists(iota_file) + assert read(theta_file) == "iota" + +@test +def github_operations(): + result = convert_github_markdown("# Hello, Fritz") + assert "Hello, Fritz" in result, result + + with working_dir(): + update_external_from_github("temp", "ssorj", "plano") + assert is_file("temp/Makefile"), list_dir("temp") + +@test +def http_operations(): + class Handler(_http.BaseHTTPRequestHandler): + def do_GET(self): + if not self.path.startswith("/api"): + self.send_response(404) + self.end_headers() + return + + self.send_response(200) + self.end_headers() + self.wfile.write(b"[1]") + + def do_POST(self): + length = int(self.headers["content-length"]) + content = self.rfile.read(length) + + self.send_response(200) + self.end_headers() + self.wfile.write(content) + + def do_PUT(self): + length = int(self.headers["content-length"]) + content = self.rfile.read(length) + + self.send_response(200) + self.end_headers() + + class ServerThread(_threading.Thread): + def __init__(self, server): + _threading.Thread.__init__(self) + self.server = server + + def run(self): + self.server.serve_forever() + + host, port = "localhost", get_random_port() + url = "http://{}:{}/api".format(host, port) + missing_url = "http://{}:{}/nono".format(host, port) + + try: + server = _http.HTTPServer((host, port), Handler) + except (OSError, PermissionError): # pragma: nocover + # Try one more time + port = get_random_port() + server = _http.HTTPServer((host, port), Handler) + + server_thread = ServerThread(server) + server_thread.start() + + try: + with working_dir(): + result = http_get(url) + assert result == "[1]", result + 
+ with expect_error(): + http_get(missing_url) + + result = http_get(url, insecure=True) + assert result == "[1]", result + + result = http_get(url, user="fritz", password="secret") + assert result == "[1]", result + + result = http_get(url, output_file="a") + output = read("a") + assert result is None, result + assert output == "[1]", output + + result = http_get_json(url) + assert result == [1], result + + file_b = write("b", "[2]") + + result = http_post(url, read(file_b), insecure=True) + assert result == "[2]", result + + result = http_post(url, read(file_b), output_file="x") + output = read("x") + assert result is None, result + assert output == "[2]", output + + result = http_post_file(url, file_b) + assert result == "[2]", result + + result = http_post_json(url, parse_json(read(file_b))) + assert result == [2], result + + file_c = write("c", "[3]") + + result = http_put(url, read(file_c), insecure=True) + assert result is None, result + + result = http_put_file(url, file_c) + assert result is None, result + + result = http_put_json(url, parse_json(read(file_c))) + assert result is None, result + finally: + server.shutdown() + server.server_close() + server_thread.join() + +@test +def io_operations(): + with working_dir(): + input_ = "some-text\n" + file_a = write("a", input_) + output = read(file_a) + + assert input_ == output, (input_, output) + + pre_input = "pre-some-text\n" + post_input = "post-some-text\n" + + prepend(file_a, pre_input) + append(file_a, post_input) + + output = tail(file_a, 100) + tailed = tail(file_a, 1) + + assert output.startswith(pre_input), (output, pre_input) + assert output.endswith(post_input), (output, post_input) + assert tailed == post_input, (tailed, post_input) + + input_lines = [ + "alpha\n", + "beta\n", + "gamma\n", + "chi\n", + "psi\n", + "omega\n", + ] + + file_b = write_lines("b", input_lines) + output_lines = read_lines(file_b) + + assert input_lines == output_lines, (input_lines, output_lines) + + pre_lines = ["pre-alpha\n"] + post_lines = ["post-omega\n"] + + prepend_lines(file_b, pre_lines) + append_lines(file_b, post_lines) + + output_lines = tail_lines(file_b, 100) + tailed_lines = tail_lines(file_b, 1) + + assert output_lines[0] == pre_lines[0], (output_lines[0], pre_lines[0]) + assert output_lines[-1] == post_lines[0], (output_lines[-1], post_lines[0]) + assert tailed_lines[0] == post_lines[0], (tailed_lines[0], post_lines[0]) + + file_c = touch("c") + assert is_file(file_c), file_c + + file_d = write("d", "front@middle@@middle@back") + path = string_replace_in_file(file_d, "@middle@", "M", count=1) + result = read(path) + assert result == "frontM@middle@back", result + + file_e = write("e", "123") + file_f = write("f", "456") + path = concatenate("g", (file_e, "not-there", file_f)) + result = read(path) + assert result == "123456", result + +@test +def iterable_operations(): + result = unique([1, 1, 1, 2, 2, 3]) + assert result == [1, 2, 3], result + + result = skip([1, "", 2, None, 3]) + assert result == [1, 2, 3], result + + result = skip([1, "", 2, None, 3], 2) + assert result == [1, "", None, 3], result + +@test +def json_operations(): + with working_dir(): + input_data = { + "alpha": [1, 2, 3], + } + + file_a = write_json("a", input_data) + output_data = read_json(file_a) + + assert input_data == output_data, (input_data, output_data) + + json = read(file_a) + parsed_data = parse_json(json) + emitted_json = emit_json(input_data) + + assert input_data == parsed_data, (input_data, parsed_data) + assert json == emitted_json, (json, 
emitted_json) + + with expect_output(equals=emitted_json) as out: + with open(out, "w") as f: + print_json(input_data, file=f, end="") + +@test +def link_operations(): + with working_dir(): + make_dir("some-dir") + path = get_absolute_path(touch("some-dir/some-file")) + + with working_dir("another-dir"): + link = make_link("a-link", path) + linked_path = read_link(link) + assert linked_path.endswith(path), (linked_path, path) + +@test +def logging_operations(): + error("Error!") + warning("Warning!") + notice("Take a look!") + notice(123) + debug("By the way") + debug("abc{}{}{}", 1, 2, 3) + + with expect_exception(RuntimeError): + fail(RuntimeError("Error!")) + + with expect_error(): + fail("Error!") + + with expect_error(): + fail("Error! {}", "Let me elaborate") + + for level in ("debug", "notice", "warning", "error"): + with expect_output(contains="Hello") as out: + with logging_disabled(): + with logging_enabled(level=level, output=out): + log(level, "hello") + + with expect_output(equals="") as out: + with logging_enabled(output=out): + with logging_disabled(): + error("Yikes") + + with expect_output(contains="flipper") as out: + with logging_enabled(output=out): + with logging_context("flipper"): + notice("Whhat") + + with logging_context("bip"): + with logging_context("boop"): + error("It's alarming!") + +@test +def path_operations(): + abspath = _os.path.abspath + normpath = _os.path.normpath + + with working_dir("/"): + result = get_current_dir() + expect = abspath(_os.sep) + assert result == expect, (result, expect) + + path = "a/b/c" + result = get_absolute_path(path) + expect = join(get_current_dir(), path) + assert result == expect, (result, expect) + + path = "/x/y/z" + result = get_absolute_path(path) + expect = abspath(path) + assert result == expect, (result, expect) + + path = "/x/y/z" + assert is_absolute(path) + + path = "x/y/z" + assert not is_absolute(path) + + path = "a//b/../c/" + result = normalize_path(path) + expect = normpath("a/c") + assert result == expect, (result, expect) + + path = "/a/../c" + result = get_real_path(path) + expect = abspath("/c") + assert result == expect, (result, expect) + + path = abspath("/a/b") + result = get_relative_path(path, "/a/c") + expect = normpath("../b") + assert result == expect, (result, expect) + + path = abspath("/a/b") + result = get_file_url(path) + expect = "file:{}".format(path) + assert result == expect, (result, expect) + + with working_dir(): + result = get_file_url("afile") + expect = join(get_file_url(get_current_dir()), "afile") + assert result == expect, (result, expect) + + path = "/alpha/beta.ext" + path_split = "/alpha", "beta.ext" + path_split_extension = "/alpha/beta", ".ext" + name_split_extension = "beta", ".ext" + + result = join(*path_split) + expect = normpath(path) + assert result == expect, (result, expect) + + result = split(path) + expect = normpath(path_split[0]), normpath(path_split[1]) + assert result == expect, (result, expect) + + result = split_extension(path) + expect = normpath(path_split_extension[0]), normpath(path_split_extension[1]) + assert result == expect, (result, expect) + + result = get_parent_dir(path) + expect = normpath(path_split[0]) + assert result == expect, (result, expect) + + result = get_base_name(path) + expect = normpath(path_split[1]) + assert result == expect, (result, expect) + + result = get_name_stem(path) + expect = normpath(name_split_extension[0]) + assert result == expect, (result, expect) + + result = get_name_stem("alpha.tar.gz") + expect = "alpha" + 
assert result == expect, (result, expect) + + result = get_name_extension(path) + expect = normpath(name_split_extension[1]) + assert result == expect, (result, expect) + + with working_dir(): + touch("adir/afile") + + check_exists("adir") + check_exists("adir/afile") + check_dir("adir") + check_file("adir/afile") + + with expect_error(): + check_exists("adir/notafile") + + with expect_error(): + check_file("adir/notafile") + + with expect_error(): + check_file("adir") + + with expect_error(): + check_dir("not-there") + + with expect_error(): + check_dir("adir/afile") + + await_exists("adir/afile") + + if not WINDOWS: + with expect_timeout(): + await_exists("adir/notafile", timeout=TINY_INTERVAL) + +@test +def port_operations(): + result = get_random_port() + assert result >= 49152 and result <= 65535, result + + server_port = get_random_port() + server_socket = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM) + + try: + try: + server_socket.bind(("localhost", server_port)) + except (OSError, PermissionError): # pragma: nocover + # Try one more time + server_port = get_random_port() + server_socket.bind(("localhost", server_port)) + + server_socket.listen(5) + + await_port(server_port) + await_port(str(server_port)) + + check_port(server_port) + + # Non-Linux platforms don't seem to produce the expected + # error. + if LINUX: + with expect_error(): + get_random_port(min=server_port, max=server_port) + finally: + server_socket.close() + + if not WINDOWS: + with expect_timeout(): + await_port(get_random_port(), timeout=TINY_INTERVAL) + +@test +def process_operations(): + result = get_process_id() + assert result, result + + proc = run("date") + assert proc is not None, proc + + print(repr(proc)) + + run("date", stash=True) + + run(["echo", 1, 2, 3]) + run(["echo", 1, 2, 3], shell=True) + + proc = run(["echo", "hello"], check=False) + assert proc.exit_code == 0, proc.exit_code + + proc = run("cat /uh/uh", check=False) + assert proc.exit_code > 0, proc.exit_code + + with expect_output() as out: + run("date", output=out) + + run("date", output=DEVNULL) + run("date", stdin=DEVNULL) + run("date", stdout=DEVNULL) + run("date", stderr=DEVNULL) + + run("echo hello", quiet=True) + run("echo hello | cat", shell=True) + run(["echo", "hello"], shell=True) + + with expect_error(): + run("/not/there") + + with expect_error(): + run("cat /whoa/not/really", stash=True) + + result = call("echo hello").strip() + expect = "hello" + assert result == expect, (result, expect) + + result = call("echo hello | cat", shell=True).strip() + expect = "hello" + assert result == expect, (result, expect) + + with expect_error(): + call("cat /whoa/not/really") + + proc = start("sleep 10") + + if not WINDOWS: + with expect_timeout(): + wait(proc, timeout=TINY_INTERVAL) + + proc = start("echo hello") + sleep(TINY_INTERVAL) + stop(proc) + + proc = start("sleep 10") + stop(proc) + + proc = start("sleep 10") + kill(proc) + sleep(TINY_INTERVAL) + stop(proc) + + proc = start("date --not-there") + sleep(TINY_INTERVAL) + stop(proc) + + with start("sleep 10"): + sleep(TINY_INTERVAL) + + with working_dir(): + touch("i") + + with start("date", stdin="i", stdout="o", stderr="e"): + pass + + with expect_system_exit(): + exit() + + with expect_system_exit(): + exit(verbose=True) + + with expect_system_exit(): + exit("abc") + + with expect_system_exit(): + exit("abc", verbose=True) + + with expect_system_exit(): + exit(Exception()) + + with expect_system_exit(): + exit(Exception(), verbose=True) + + with expect_system_exit(): + 
exit(123) + + with expect_system_exit(): + exit(123, verbose=True) + + with expect_system_exit(): + exit(-123) + + with expect_exception(PlanoException): + exit(object()) + +@test +def string_operations(): + result = string_replace_re("ab", "a", "b") + assert result == "bb", result + + result = string_replace_re("aba", "a", "b", count=1) + assert result == "bba", result + + result = string_matches_re("abc", "b") + assert result + + result = string_matches_re("abc", "^b") + assert not result + + result = string_matches_glob("abc", "*b*") + assert result + + result = string_matches_glob("abc", "b*") + assert not result + + result = shorten("abc", 2) + assert result == "ab", result + + result = shorten("abc", None) + assert result == "abc", result + + result = shorten("abc", 10) + assert result == "abc", result + + result = shorten("ellipsis", 6, ellipsis="...") + assert result == "ell...", result + + result = shorten(None, 6) + assert result == "", result + + result = plural(None) + assert result == "", result + + result = plural("") + assert result == "", result + + result = plural("test") + assert result == "tests", result + + result = plural("test", 1) + assert result == "test", result + + result = plural("bus") + assert result == "busses", result + + result = plural("bus", 1) + assert result == "bus", result + + result = plural("terminus", 2, "termini") + assert result == "termini", result + + result = capitalize(None) + assert result == "", result + + result = capitalize("") + assert result == "", result + + result = capitalize("hello, Frank") + assert result == "Hello, Frank", result + + encoded_result = base64_encode(b"abc") + decoded_result = base64_decode(encoded_result) + assert decoded_result == b"abc", decoded_result + + encoded_result = url_encode("abc=123&yeah!") + decoded_result = url_decode(encoded_result) + assert decoded_result == "abc=123&yeah!", decoded_result + + result = parse_url("http://example.net/index.html") + assert result.hostname == "example.net" + + append = StringBuilder() + + result = append.join() + assert result == "" + + append("alpha") + append("beta") + result = str(append) + assert result == "alpha\nbeta" + + append.clear() + append("abc") + append("123") + result = append.join() + assert result == "abc\n123" + + append.clear() + append() + append() + result = append.join() + assert result == "\n" + + append.clear() + append("xyz") + result = append.join() + assert result == "xyz" + + append.clear() + append("789") + + with temp_file() as f: + result = read(append.write(f)) + assert result == "789" + +@test +def temp_operations(): + system_temp_dir = get_system_temp_dir() + + result = make_temp_file() + assert result.startswith(system_temp_dir), result + + result = make_temp_file(suffix=".txt") + assert result.endswith(".txt"), result + + result = make_temp_dir() + assert result.startswith(system_temp_dir), result + + with temp_dir() as d: + assert is_dir(d), d + list_dir(d) + + with temp_file() as f: + assert is_file(f), f + write(f, "test") + + with working_dir() as d: + assert is_dir(d), d + list_dir(d) + + user_temp_dir = get_user_temp_dir() + assert user_temp_dir, user_temp_dir + + ENV.pop("XDG_RUNTIME_DIR", None) + + user_temp_dir = get_user_temp_dir() + assert user_temp_dir, user_temp_dir + +@test +def test_operations(): + with test_project(): + with working_module_path("src"): + import chucker + import chucker.tests + import chucker.moretests + + print_tests(chucker.tests) + + for verbose in (False, True): + # Module 'chucker' has no tests + 
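+            # Running its tests is therefore expected to raise an error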
with expect_error(): + run_tests(chucker, verbose=verbose) + + run_tests(chucker.tests, verbose=verbose) + run_tests(chucker.tests, exclude="*hello*", verbose=verbose) + run_tests(chucker.tests, enable="skipped", verbose=verbose) + + with expect_error(): + run_tests(chucker.tests, enable="skipped", unskip="*skipped*", verbose=verbose) + + with expect_error(): + run_tests(chucker.tests, enable="*badbye*", verbose=verbose) + + with expect_error(): + run_tests(chucker.tests, enable="*badbye*", fail_fast=True, verbose=verbose) + + with expect_error(): + run_tests([chucker.tests, chucker.moretests], enable="*badbye2*", fail_fast=True, verbose=verbose) + + with expect_exception(KeyboardInterrupt): + run_tests(chucker.tests, enable="keyboard-interrupt", verbose=verbose) + + with expect_error(): + run_tests(chucker.tests, enable="timeout", verbose=verbose) + + with expect_error(): + run_tests(chucker.tests, enable="process-error", verbose=verbose) + + with expect_error(): + run_tests(chucker.tests, enable="system-exit", verbose=verbose) + + with expect_system_exit(): + PlanoTestCommand().main(["--module", "nosuchmodule"]) + + def run_command(*args): + PlanoTestCommand(chucker.tests).main(args) + + run_command("--verbose") + run_command("--quiet") + run_command("--list") + + with expect_system_exit(): + run_command("--enable", "*badbye*") + + with expect_system_exit(): + run_command("--enable", "*badbye*", "--verbose") + + try: + with expect_exception(): + pass + raise Exception() # pragma: nocover + except AssertionError: + pass + + with expect_output(equals="abc123", contains="bc12", startswith="abc", endswith="123") as out: + write(out, "abc123") + +@test +def time_operations(): + start_time = get_time() + + sleep(TINY_INTERVAL) + + assert get_time() - start_time > TINY_INTERVAL + + start_datetime = get_datetime() + + sleep(TINY_INTERVAL) + + assert get_datetime() - start_datetime > _datetime.timedelta(seconds=TINY_INTERVAL) + + timestamp = format_timestamp() + result = parse_timestamp(timestamp) + assert format_timestamp(result) == timestamp + + result = parse_timestamp(None) + assert result is None + + earlier = get_datetime() + result = format_date() + later = _datetime.datetime.strptime(result, "%d %B %Y") + later = later.replace(tzinfo=_datetime.timezone.utc) + assert later - earlier < _datetime.timedelta(days=1) + + now = get_datetime() + result = format_date(now) + assert result == f"{now.day} {now.strftime('%B')} {now.strftime('%Y')}" + + now = get_datetime() + result = format_time() + later = _datetime.datetime.strptime(result, "%H:%M:%S") + later = later.replace(tzinfo=_datetime.timezone.utc) + assert later - earlier < _datetime.timedelta(seconds=1) + + now = get_datetime() + result = format_time(now) + assert result == f"{now.hour}:{now.strftime('%M')}:{now.strftime('%S')}" + + now = get_datetime() + result = format_time(now, precision="minute") + assert result == f"{now.hour}:{now.strftime('%M')}" + + result = format_duration(0.1) + assert result == "0.1s", result + + result = format_duration(1) + assert result == "1s", result + + result = format_duration(1, align=True) + assert result == "1.0s", result + + result = format_duration(60) + assert result == "60s", result + + result = format_duration(3600) + assert result == "1h", result + + with expect_system_exit(): + with start("sleep 10"): + from plano import _default_sigterm_handler + _default_sigterm_handler(_signal.SIGTERM, None) + + with Timer() as timer: + sleep(TINY_INTERVAL) + assert timer.elapsed_time > TINY_INTERVAL + + 
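+    # The elapsed time remains readable after the with block has exited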
assert timer.elapsed_time > TINY_INTERVAL + + if not WINDOWS: + with expect_timeout(): + with Timer(timeout=TINY_INTERVAL) as timer: + sleep(10) + +@test +def unique_id_operations(): + id1 = get_unique_id() + id2 = get_unique_id() + + assert id1 != id2, (id1, id2) + + result = get_unique_id(1) + assert len(result) == 2 + + result = get_unique_id(16) + assert len(result) == 32 + +@test +def value_operations(): + result = nvl(None, "a") + assert result == "a", result + + result = nvl("b", "a") + assert result == "b", result + + assert is_string("a") + assert not is_string(1) + + for value in (None, "", (), [], {}): + assert is_empty(value), value + + for value in (object(), " ", (1,), [1], {"a": 1}): + assert not is_empty(value), value + + result = pformat({"z": 1, "a": 2}) + assert result == "{'a': 2, 'z': 1}", result + + result = format_empty((), "[nothing]") + assert result == "[nothing]", result + + result = format_empty((1,), "[nothing]") + assert result == (1,), result + + result = format_not_empty("abc", "[{}]") + assert result == "[abc]", result + + result = format_not_empty({}, "[{}]") + assert result == {}, result + + result = format_repr(Namespace(a=1, b=2), limit=1) + assert result == "Namespace(a=1)", result + + result = Namespace(a=1, b=2) + assert result.a == 1, result + assert result.b == 2, result + assert "a" in result, result + assert "c" not in result, result + repr(result) + + other = Namespace(a=1, b=2, c=3) + assert result != other, (result, other) + +@test +def yaml_operations(): + try: + import yaml as _yaml + except ImportError: # pragma: nocover + raise PlanoTestSkipped("PyYAML is not available") + + with working_dir(): + input_data = { + "alpha": [1, 2, 3], + } + + file_a = write_yaml("a", input_data) + output_data = read_yaml(file_a) + + assert input_data == output_data, (input_data, output_data) + + yaml = read(file_a) + parsed_data = parse_yaml(yaml) + emitted_yaml = emit_yaml(input_data) + + assert input_data == parsed_data, (input_data, parsed_data) + assert yaml == emitted_yaml, (yaml, emitted_yaml) + + with expect_output(equals=emitted_yaml) as out: + with open(out, "w") as f: + print_yaml(input_data, file=f, end="") + +@command +def prancer(): + notice("Base prancer") + +@command +def vixen(): + prancer() + +@test +def plano_command(): + with working_dir(): + PlanoCommand().main([]) + + PlanoCommand(_sys.modules[__name__]).main([]) + + PlanoCommand().main(["-m", "plano.test"]) + + with expect_system_exit(): + PlanoCommand().main(["-m", "nosuchmodule"]) + + with working_dir(): + write(".plano.py", "garbage") + + with expect_system_exit(): + PlanoCommand().main([]) + + with expect_system_exit(): + PlanoCommand().main(["-f", "no-such-file"]) + + def run_command(*args): + PlanoCommand().main(["-f", test_project_dir] + list(args)) + + with test_project(): + run_command() + run_command("--help") + + with expect_system_exit(): + run_command("no-such-command") + + with expect_system_exit(): + run_command("no-such-command", "--help") + + with expect_system_exit(): + run_command("--help", "no-such-command") + + run_command("extended-command", "a", "b", "--omega", "z") + run_command("extended-command", "a", "b", "--omega", "z", "--verbose") + run_command("extended-command", "a", "b", "--omega", "z", "--quiet") + + with expect_system_exit(): + run_command("echo") + + with expect_exception(contains="Trouble"): + run_command("echo", "Hello", "--trouble") + + run_command("echo", "Hello", "--count", "5") + + run_command("echoecho", "Greetings") + + with 
expect_system_exit(): + run_command("echo", "Hello", "--count", "not-an-int") + + run_command("haberdash", "ballcap", "fedora", "hardhat", "--last", "turban") + result = read_json("haberdash.json") + assert result == ["ballcap", "fedora", "hardhat", "turban"], result + + run_command("haberdash", "ballcap", "--last", "turban") + result = read_json("haberdash.json") + assert result == ["ballcap", "turban"], result + + run_command("haberdash", "ballcap") + result = read_json("haberdash.json") + assert result == ["ballcap", "bowler"], result + + run_command("balderdash", "bunk", "poppycock") + result = read_json("balderdash.json") + assert result == ["bunk", "poppycock", "rubbish"], result + + run_command("balderdash", "bunk") + result = read_json("balderdash.json") + assert result == ["bunk", "malarkey", "rubbish"], result + + run_command("balderdash", "bunk", "--other", "bollocks") + result = read_json("balderdash.json") + assert result == ["bunk", "malarkey", "bollocks"], result + + run_command("splasher,balderdash", "claptrap") + result = read_json("splasher.json") + assert result == [1], result + result = read_json("balderdash.json") + assert result == ["claptrap", "malarkey", "rubbish"], result + + with expect_system_exit(): + run_command("no-such-command,splasher") + + with expect_system_exit(): + run_command("splasher,no-such-command-nope") + + run_command("dasher", "alpha", "--beta", "123") + + # Gamma is an unexpected arg + with expect_system_exit(): + run_command("dasher", "alpha", "--gamma", "123") + + # Args after "xyz" are extra passthrough args + run_command("dancer", "gamma", "--omega", "xyz", "extra1", "--extra2", "extra3") + result = read_json("dancer.json") + assert result == ["extra1", "--extra2", "extra3"], result + + # Ensure indirect calls (through parent commands) are specialized + run_command("vixen") + assert exists("prancer.json") + + with expect_system_exit(): + run_command("no-parent") + + run_command("feta", "--spinach", "oregano") + result = read_json("feta.json") + assert result == "oregano" + + run_command("invisible") + result = read_json("invisible.json") + assert result == "nothing" + + + +def main(): + PlanoTestCommand(_sys.modules[__name__]).main() + +if __name__ == "__main__": # pragma: nocover + main() diff --git a/external/skewer/external/plano/src/plano/command.py b/external/skewer/external/plano/src/plano/command.py new file mode 100644 index 0000000..219f964 --- /dev/null +++ b/external/skewer/external/plano/src/plano/command.py @@ -0,0 +1,511 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +from .main import * + +import argparse as _argparse +import importlib as _importlib +import inspect as _inspect +import os as _os +import sys as _sys +import traceback as _traceback + +class BaseCommand: + def parse_args(self, args): # pragma: nocover + raise NotImplementedError() + + def configure_logging(self, args): + return "warning", None + + def init(self, args): # pragma: nocover + raise NotImplementedError() + + def run(self): # pragma: nocover + raise NotImplementedError() + + def main(self, args=None): + if args is None: + args = ARGS[1:] + + args = self.parse_args(args) + + assert isinstance(args, _argparse.Namespace), args + + level, output = self.configure_logging(args) + + with logging_enabled(level=level, output=output): + try: + self.init(args) + self.run() + except KeyboardInterrupt: + pass + except PlanoError as e: + if PLANO_DEBUG: # pragma: nocover + error(e) + else: + error(str(e)) + + exit(1) + +class BaseArgumentParser(_argparse.ArgumentParser): + def __init__(self, **kwargs): + super().__init__(**kwargs) + + self.allow_abbrev = False + self.formatter_class = _argparse.RawDescriptionHelpFormatter + + _capitalize_help(self) + +_plano_command = None + +class PlanoCommand(BaseCommand): + def __init__(self, module=None, description="Run commands defined as Python functions", epilog=None): + self.module = module + self.bound_commands = dict() + self.running_commands = list() + self.passthrough_args = None + self.verbose = False + self.quiet = False + + assert self.module is None or _inspect.ismodule(self.module), self.module + + self.pre_parser = BaseArgumentParser(description=description, add_help=False) + self.pre_parser.add_argument("-h", "--help", action="store_true", + help="Show this help message and exit") + + if self.module is None: + self.pre_parser.add_argument("-f", "--file", help="Load commands from FILE (default '.plano.py')") + self.pre_parser.add_argument("-m", "--module", help="Load commands from MODULE") + + self.parser = _argparse.ArgumentParser(parents=(self.pre_parser,), + description=description, epilog=epilog, + add_help=False, allow_abbrev=False) + + # This is intentionally added after self.pre_parser is passed + # as parent to self.parser, since it is used only in the + # preliminary parsing. 
+ self.pre_parser.add_argument("command", nargs="?", help=_argparse.SUPPRESS) + + global _plano_command + _plano_command = self + + def parse_args(self, args): + pre_args, _ = self.pre_parser.parse_known_args(args) + + if self.module is None: + if pre_args.module is None: + self.module = self._load_file(pre_args.file) + else: + self.module = self._load_module(pre_args.module) + + if self.module is not None: + self._bind_commands(self.module) + + self._process_commands() + + self.preceding_commands = list() + + if pre_args.command is not None and "," in pre_args.command: + names = pre_args.command.split(",") + + for name in names[:-1]: + try: + self.preceding_commands.append(self.bound_commands[name]) + except KeyError: + self.parser.error(f"Command '{name}' is unknown") + + args[args.index(pre_args.command)] = names[-1] + + args, self.passthrough_args = self.parser.parse_known_args(args) + + return args + + def configure_logging(self, args): + if args.command is not None and not self.bound_commands[args.command].passthrough: + if args.verbose: + return "debug", None + + if args.quiet: + return "warning", None + + return "notice", None + + def init(self, args): + self.help = args.help + + self.selected_command = None + self.command_args = list() + self.command_kwargs = dict() + + if args.command is not None: + for command in self.preceding_commands: + command() + + self.selected_command = self.bound_commands[args.command] + + if not self.selected_command.passthrough and self.passthrough_args: + self.parser.error(f"unrecognized arguments: {' '.join(self.passthrough_args)}") + + for param in self.selected_command.parameters.values(): + if param.name == "passthrough_args": + continue + + if param.positional: + if param.multiple: + self.command_args.extend(getattr(args, param.name)) + else: + self.command_args.append(getattr(args, param.name)) + else: + self.command_kwargs[param.name] = getattr(args, param.name) + + if self.selected_command.passthrough: + self.command_kwargs["passthrough_args"] = self.passthrough_args + + def run(self): + if self.help or self.module is None or self.selected_command is None: + self.parser.print_help() + return + + with Timer() as timer: + self.selected_command(*self.command_args, **self.command_kwargs) + + if not self.quiet: + cprint("OK", color="green", file=_sys.stderr, end="") + cprint(" ({})".format(format_duration(timer.elapsed_time)), color="magenta", file=_sys.stderr) + + def _load_module(self, name): + try: + return _importlib.import_module(name) + except ImportError: + exit("Module '{}' not found", name) + + def _load_file(self, path): + if path is not None and is_dir(path): + path = self._find_file(path) + + if path is not None and not is_file(path): + exit("File '{}' not found", path) + + if path is None: + path = self._find_file(get_current_dir()) + + if path is None: + return + + debug("Loading '{}'", path) + + _sys.path.insert(0, join(get_parent_dir(path), "python")) + + spec = _importlib.util.spec_from_file_location("_plano", path) + module = _importlib.util.module_from_spec(spec) + _sys.modules["_plano"] = module + + try: + spec.loader.exec_module(module) + except Exception as e: + error(e) + exit("Failure loading {}: {}", path, str(e)) + + return module + + def _find_file(self, dir): + # Planofile and .planofile remain temporarily for backward compatibility + for name in (".plano.py", "Planofile", ".planofile"): + path = join(dir, name) + + if is_file(path): + return path + + def _bind_commands(self, module): + for var in vars(module).values(): 
+ if callable(var) and var.__class__.__name__ == "Command": + self.bound_commands[var.name] = var + + def _process_commands(self): + subparsers = self.parser.add_subparsers(title="commands", dest="command", metavar="{command}") + + for command in self.bound_commands.values(): + # This doesn't work yet, but in the future it might. + # https://bugs.python.org/issue22848 + # + # help = _argparse.SUPPRESS if command.hidden else command.help + + help = "[internal]" if command.hidden else command.help + add_help = False if command.passthrough else True + description = nvl(command.description, command.help) + + subparser = subparsers.add_parser(command.name, help=help, add_help=add_help, description=description, + formatter_class=_argparse.RawDescriptionHelpFormatter) + + if not command.passthrough: + subparser.add_argument("--verbose", action="store_true", + help="Print detailed logging to the console") + subparser.add_argument("--quiet", action="store_true", + help="Print no logging to the console") + + for param in command.parameters.values(): + if not command.passthrough and param.name in ("verbose", "quiet"): + continue + + if param.positional: + if param.multiple: + subparser.add_argument(param.name, metavar=param.metavar, type=param.type, help=param.help, + nargs="*") + elif param.optional: + subparser.add_argument(param.name, metavar=param.metavar, type=param.type, help=param.help, + nargs="?", default=param.default) + else: + subparser.add_argument(param.name, metavar=param.metavar, type=param.type, help=param.help) + else: + flag_args = list() + + if param.short_option is not None: + flag_args.append("-{}".format(param.short_option)) + + flag_args.append("--{}".format(param.display_name)) + + help = param.help + + if param.default not in (None, False): + if help is None: + help = "Default value is {}".format(repr(param.default)) + else: + help += " (default {})".format(repr(param.default)) + + if param.default is False: + subparser.add_argument(*flag_args, dest=param.name, default=param.default, action="store_true", + help=help) + else: + subparser.add_argument(*flag_args, dest=param.name, default=param.default, + metavar=param.metavar, type=param.type, help=help) + + _capitalize_help(subparser) + +_command_help = { + "build": "Build artifacts from source", + "clean": "Clean up the source tree", + "dist": "Generate distribution artifacts", + "install": "Install the built artifacts on your system", + "test": "Run the tests", + "coverage": "Run the tests and measure code coverage", +} + +def command(_function=None, name=None, parameters=None, parent=None, passthrough=False, hidden=False): + class Command: + def __init__(self, function): + self.function = function + self.module = _inspect.getmodule(self.function) + + self.name = name + self.parent = parent + + if self.parent is None: + # Strip leading and trailing underscores and convert + # remaining underscores to hyphens + default = self.function.__name__.strip("_").replace("_", "-") + + self.name = nvl(self.name, default) + self.parameters = self._process_parameters(parameters) + self.passthrough = passthrough + else: + assert parameters is None + + self.name = nvl(self.name, self.parent.name) + self.parameters = self.parent.parameters + self.passthrough = self.parent.passthrough + + doc = _inspect.getdoc(self.function) + + if doc is None: + self.help = _command_help.get(self.name) + self.description = self.help + else: + self.help = doc.split("\n")[0] + self.description = doc + + if self.parent is not None: + self.help = 
nvl(self.help, self.parent.help) + self.description = nvl(self.description, self.parent.description) + + self.hidden = hidden + + debug("Defining {}", self) + + for param in self.parameters.values(): + debug(" {}", str(param).capitalize()) + + def __repr__(self): + return "command '{}:{}'".format(self.module.__name__, self.name) + + def _process_parameters(self, cparams): + # CommandParameter objects from the @command decorator + cparams_in = {x.name: x for x in nvl(cparams, ())} + cparams_out = dict() + + # Parameter objects from the function signature + sig = _inspect.signature(self.function) + sparams = list(sig.parameters.values()) + + if len(sparams) == 2 and sparams[0].name == "args" and sparams[1].name == "kwargs": + # Don't try to derive command parameters from *args and **kwargs + return cparams_in + + for sparam in sparams: + try: + cparam = cparams_in[sparam.name] + except KeyError: + cparam = CommandParameter(sparam.name) + + if sparam.kind is sparam.POSITIONAL_ONLY: # pragma: nocover + if sparam.positional is None: + cparam.positional = True + elif sparam.kind is sparam.POSITIONAL_OR_KEYWORD and sparam.default is sparam.empty: + if cparam.positional is None: + cparam.positional = True + elif sparam.kind is sparam.POSITIONAL_OR_KEYWORD and sparam.default is not sparam.empty: + cparam.optional = True + cparam.default = sparam.default + elif sparam.kind is sparam.VAR_POSITIONAL: + if cparam.positional is None: + cparam.positional = True + cparam.multiple = True + elif sparam.kind is sparam.VAR_KEYWORD: + continue + elif sparam.kind is sparam.KEYWORD_ONLY: + cparam.optional = True + cparam.default = sparam.default + else: # pragma: nocover + raise NotImplementedError(sparam.kind) + + if cparam.type is None and cparam.default not in (None, False): # XXX why false? + cparam.type = type(cparam.default) + + cparams_out[cparam.name] = cparam + + return cparams_out + + def __call__(self, *args, **kwargs): + from .command import _plano_command, PlanoCommand + assert isinstance(_plano_command, PlanoCommand), _plano_command + + app = _plano_command + command = app.bound_commands[self.name] + + if command is not self: + # The command bound to this name has been overridden. + # This happens when a parent command invokes a peer + # command that is overridden. 
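+                # Delegate to the overriding command instead of running
+                # this one.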
+ + command(*args, **kwargs) + + return + + debug("Running {} {} {}".format(self, args, kwargs)) + + app.running_commands.append(self) + + if not app.quiet: + dashes = "--- " * (len(app.running_commands) - 1) + display_args = list(self._get_display_args(args, kwargs)) + + with console_color("magenta", file=_sys.stderr): + eprint("{}--> {}".format(dashes, self.name), end="") + + if display_args: + eprint(" ({})".format(", ".join(display_args)), end="") + + eprint() + + self.function(*args, **kwargs) + + if not app.quiet: + cprint("{}<-- {}".format(dashes, self.name), color="magenta", file=_sys.stderr) + + app.running_commands.pop() + + def _get_display_args(self, args, kwargs): + for i, param in enumerate(self.parameters.values()): + if param.positional: + if param.multiple: + for va in args[i:]: + yield repr(va) + elif param.optional: + value = args[i] + + if value == param.default: + continue + + yield repr(value) + else: + yield repr(args[i]) + else: + value = kwargs.get(param.name, param.default) + + if value == param.default: + continue + + if value in (True, False): + value = str(value).lower() + else: + value = repr(value) + + yield "{}={}".format(param.display_name, value) + + if _function is None: + return Command + else: + return Command(_function) + +def parent(*args, **kwargs): + try: + f_locals = _inspect.stack()[2].frame.f_locals + parent_fn = f_locals["self"].parent.function + except: + fail("Missing parent command") + + parent_fn(*args, **kwargs) + +class CommandParameter: + def __init__(self, name, display_name=None, type=None, metavar=None, help=None, short_option=None, default=None, positional=None): + self.name = name + self.display_name = nvl(display_name, self.name.replace("_", "-")) + self.type = type + self.metavar = nvl(metavar, self.display_name.upper()) + self.help = help + self.short_option = short_option + self.default = default + self.positional = positional + + self.optional = False + self.multiple = False + + def __repr__(self): + return "parameter '{}' (default {})".format(self.name, repr(self.default)) + +# Patch the default help text +def _capitalize_help(parser): + try: + for action in parser._actions: + if action.help and action.help is not _argparse.SUPPRESS: + action.help = capitalize(action.help) + except: # pragma: nocover + pass + +def _main(): # pragma: nocover + PlanoCommand().main() diff --git a/external/skewer/external/plano/src/plano/github.py b/external/skewer/external/plano/src/plano/github.py new file mode 100644 index 0000000..e1714b5 --- /dev/null +++ b/external/skewer/external/plano/src/plano/github.py @@ -0,0 +1,80 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +from .main import * + +_html_template = """ + + + + + + +
+ +@content@ + +
+ + +""".strip() + +def convert_github_markdown(markdown): + json = emit_json({"text": markdown}) + content = http_post("https://api.github.com/markdown", json, content_type="application/json") + + # Remove the "user-content-" prefix from internal anchors + content = content.replace("id=\"user-content-", "id=\"") + + return _html_template.replace("@content@", content) + +def update_external_from_github(dir, owner, repo, ref="main"): + dir = get_absolute_path(dir) + make_parent_dir(dir) + + url = f"https://github.com/{owner}/{repo}/archive/{ref}.tar.gz" + + with temp_file() as temp: + assert exists(temp) + + http_get(url, output_file=temp) + + with working_dir(quiet=True): + extract_archive(temp) + + extracted_dir = list_dir()[0] + assert is_dir(extracted_dir) + + replace(dir, extracted_dir) diff --git a/external/skewer/external/plano/src/plano/main.py b/external/skewer/external/plano/src/plano/main.py new file mode 100644 index 0000000..bdd7370 --- /dev/null +++ b/external/skewer/external/plano/src/plano/main.py @@ -0,0 +1,1843 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +import base64 as _base64 +import binascii as _binascii +import code as _code +import datetime as _datetime +import fnmatch as _fnmatch +import getpass as _getpass +import json as _json +import os as _os +import pprint as _pprint +import pkgutil as _pkgutil +import random as _random +import re as _re +import shlex as _shlex +import shutil as _shutil +import signal as _signal +import socket as _socket +import subprocess as _subprocess +import sys as _sys +import tempfile as _tempfile +import time as _time +import traceback as _traceback +import urllib as _urllib +import urllib.parse as _urllib_parse +import uuid as _uuid + +_max = max + +## Exceptions + +class PlanoException(Exception): + pass + +class PlanoError(PlanoException): + pass + +class PlanoTimeout(PlanoException): + pass + +## Global variables + +ENV = _os.environ +ARGS = _sys.argv + +STDIN = _sys.stdin +STDOUT = _sys.stdout +STDERR = _sys.stderr +DEVNULL = _os.devnull + +LINUX = _sys.platform == "linux" +WINDOWS = _sys.platform in ("win32", "cygwin") + +PLANO_DEBUG = "PLANO_DEBUG" in ENV +PLANO_COLOR = "PLANO_COLOR" in ENV + +## Archive operations + +def make_archive(input_dir, output_file=None, quiet=False): + check_program("tar") + + archive_stem = get_base_name(input_dir) + + if output_file is None: + # tar on Windows needs this + base = join(get_current_dir(), archive_stem) + base = base.replace("\\", "/") + + output_file = f"{base}.tar.gz" + + _notice(quiet, "Making archive {} from directory {}", repr(output_file), repr(input_dir)) + + with working_dir(get_parent_dir(input_dir), quiet=True): + run(f"tar -czf {output_file} {archive_stem}", quiet=True) + + return output_file + +def extract_archive(input_file, output_dir=None, quiet=False): + check_program("tar") + + if output_dir is None: + output_dir = get_current_dir() + + _notice(quiet, "Extracting archive {} to directory {}", repr(input_file), repr(output_dir)) + + input_file = get_absolute_path(input_file) + + # tar on Windows needs this + input_file = input_file.replace("\\", "/") + + with working_dir(output_dir, quiet=True): + run(f"tar -xf {input_file}", quiet=True) + + return output_dir + +def rename_archive(input_file, new_archive_stem, quiet=False): + _notice(quiet, "Renaming archive {} with stem {}", repr(input_file), repr(new_archive_stem)) + + output_dir = get_absolute_path(get_parent_dir(input_file)) + output_file = "{}.tar.gz".format(join(output_dir, new_archive_stem)) + + # tar on Windows needs this + output_file = output_file.replace("\\", "/") + + input_file = get_absolute_path(input_file) + + with working_dir(quiet=True): + extract_archive(input_file, quiet=True) + + input_name = list_dir()[0] + input_dir = move(input_name, new_archive_stem, quiet=True) + + make_archive(input_dir, output_file=output_file, quiet=True) + + remove(input_file, quiet=True) + + return output_file + +## Console operations + +def flush(): + _sys.stdout.flush() + _sys.stderr.flush() + +def eprint(*args, **kwargs): + print(*args, file=_sys.stderr, **kwargs) + +def pprint(*args, **kwargs): + args = [pformat(x) for x in args] + print(*args, **kwargs) + +_color_codes = { + "black": "\u001b[30", + "red": "\u001b[31", + "green": "\u001b[32", + "yellow": "\u001b[33", + "blue": "\u001b[34", + "magenta": "\u001b[35", + "cyan": "\u001b[36", + "white": "\u001b[37", + "gray": "\u001b[90", +} + +_color_reset = "\u001b[0m" + +def _get_color_code(color, bright): + elems = [_color_codes[color]] + + if bright: + elems.append(";1") + + elems.append("m") + + return "".join(elems) + +def 
_is_color_enabled(file): + return PLANO_COLOR or hasattr(file, "isatty") and file.isatty() + +class console_color: + def __init__(self, color=None, bright=False, file=_sys.stdout): + self.file = file + self.color_code = None + + if (color, bright) != (None, False): + self.color_code = _get_color_code(color, bright) + + self.enabled = self.color_code is not None and _is_color_enabled(self.file) + + def __enter__(self): + if self.enabled: + print(self.color_code, file=self.file, end="", flush=True) + + def __exit__(self, exc_type, exc_value, traceback): + if self.enabled: + print(_color_reset, file=self.file, end="", flush=True) + +def cformat(value, color=None, bright=False, file=_sys.stdout): + if (color, bright) != (None, False) and _is_color_enabled(file): + return "".join((_get_color_code(color, bright), value, _color_reset)) + else: + return value + +def cprint(*args, **kwargs): + color = kwargs.pop("color", "white") + bright = kwargs.pop("bright", False) + file = kwargs.get("file", _sys.stdout) + + with console_color(color, bright=bright, file=file): + print(*args, **kwargs) + +class output_redirected: + def __init__(self, output, quiet=False): + self.output = output + self.quiet = quiet + + def __enter__(self): + flush() + + _notice(self.quiet, "Redirecting output to file {}", repr(self.output)) + + if is_string(self.output): + output = open(self.output, "w") + + self.prev_stdout, self.prev_stderr = _sys.stdout, _sys.stderr + _sys.stdout, _sys.stderr = output, output + + def __exit__(self, exc_type, exc_value, traceback): + flush() + + _sys.stdout, _sys.stderr = self.prev_stdout, self.prev_stderr + +try: + breakpoint +except NameError: # pragma: nocover + def breakpoint(): + import pdb + pdb.set_trace() + +def repl(locals): # pragma: nocover + _code.InteractiveConsole(locals=locals).interact() + +def print_properties(props, file=None): + size = max([len(x[0]) for x in props]) + + for prop in props: + name = "{}:".format(prop[0]) + template = "{{:<{}}} ".format(size + 1) + + print(template.format(name), prop[1], end="", file=file) + + for value in prop[2:]: + print(" {}".format(value), end="", file=file) + + print(file=file) + +## Directory operations + +def find(dirs=None, include="*", exclude=[]): + if dirs is None: + dirs = "." 
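+    # dirs, include, and exclude each accept a single string or a list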
+ + if is_string(dirs): + dirs = [dirs] + + if is_string(include): + include = [include] + + if is_string(exclude): + exclude = [exclude] + + found = set() + + for dir in dirs: + for root, dir_names, file_names in _os.walk(dir, followlinks=True): + names = dir_names + file_names + + for include_pattern in include: + names = _fnmatch.filter(names, include_pattern) + + for exclude_pattern in exclude: + for name in _fnmatch.filter(names, exclude_pattern): + names.remove(name) + + root = root.removeprefix("./") + + if root == ".": + root = "" + + found.update([join(root, x) for x in names]) + + return sorted(found) + +def make_dir(dir, quiet=False): + if dir == "": + return dir + + if not exists(dir): + _notice(quiet, "Making directory '{}'", dir) + _os.makedirs(dir) + + return dir + +def make_parent_dir(path, quiet=False): + return make_dir(get_parent_dir(path), quiet=quiet) + +# Returns the current working directory so you can change it back +def change_dir(dir, quiet=False): + _debug(quiet, "Changing directory to {}", repr(dir)) + + prev_dir = get_current_dir() + + if not dir: + return prev_dir + + _os.chdir(dir) + + return prev_dir + +def list_dir(dir=None, include="*", exclude=[]): + if dir is None: + dir = get_current_dir() + else: + dir = expand(dir) + + assert is_dir(dir), dir + + if is_string(include): + include = [include] + + if is_string(exclude): + exclude = [exclude] + + names = _os.listdir(dir) + + for include_pattern in include: + names = _fnmatch.filter(names, include_pattern) + + for exclude_pattern in exclude: + for name in _fnmatch.filter(names, exclude_pattern): + names.remove(name) + + return sorted(names) + +def print_dir(dir=None, include="*", exclude=[]): + if dir is None: + dir = get_current_dir() + else: + dir = expand(dir) + + names = list_dir(dir=dir, include=include, exclude=exclude) + + print("{}:".format(get_absolute_path(dir))) + + if names: + for name in names: + print(f" {name}") + else: + print(" [none]") + +# No args constructor gets a temp dir +class working_dir: + def __init__(self, dir=None, quiet=False): + self.dir = dir + self.prev_dir = None + self.remove = False + self.quiet = quiet + + if self.dir is None: + self.dir = make_temp_dir() + self.remove = True + else: + self.dir = expand(self.dir) + + def __enter__(self): + if self.dir == ".": + return + + _notice(self.quiet, "Entering directory {}", repr(get_absolute_path(self.dir))) + + make_dir(self.dir, quiet=True) + + self.prev_dir = change_dir(self.dir, quiet=True) + + return self.dir + + def __exit__(self, exc_type, exc_value, traceback): + if self.dir == ".": + return + + _debug(self.quiet, "Returning to directory {}", repr(get_absolute_path(self.prev_dir))) + + change_dir(self.prev_dir, quiet=True) + + if self.remove: + remove(self.dir, quiet=True) + +## Environment operations + +def join_path_var(*paths): + return _os.pathsep.join(unique(skip(paths))) + +def get_current_dir(): + return _os.getcwd() + +def get_home_dir(user=None): + return _os.path.expanduser("~{}".format(user or "")) + +def get_user(): + return _getpass.getuser() + +def get_hostname(): + return _socket.gethostname() + +def get_program_name(command=None): + if command is None: + args = ARGS + else: + args = command.split() + + for arg in args: + if "=" not in arg: + return get_base_name(arg) + +def which(program_name): + return _shutil.which(program_name) + +def check_env(var, message=None): + if var not in _os.environ: + if message is None: + message = "Environment variable {} is not set".format(repr(var)) + + raise 
PlanoError(message) + +def check_module(module, message=None): + if _pkgutil.find_loader(module) is None: + if message is None: + message = "Python module {} is not found".format(repr(module)) + + raise PlanoError(message) + +def check_program(program, message=None): + if which(program) is None: + if message is None: + message = "Program {} is not found".format(repr(program)) + + raise PlanoError(message) + +class working_env: + def __init__(self, **vars): + self.amend = vars.pop("amend", True) + self.vars = vars + + def __enter__(self): + self.prev_vars = dict(_os.environ) + + if not self.amend: + for name, value in list(_os.environ.items()): + if name not in self.vars: + del _os.environ[name] + + for name, value in self.vars.items(): + _os.environ[name] = str(value) + + def __exit__(self, exc_type, exc_value, traceback): + for name, value in self.prev_vars.items(): + _os.environ[name] = value + + for name, value in self.vars.items(): + if name not in self.prev_vars: + del _os.environ[name] + +class working_module_path: + def __init__(self, path, amend=True): + if is_string(path): + if not is_absolute(path): + path = get_absolute_path(path) + + path = [path] + + if amend: + path = path + _sys.path + + self.path = path + + def __enter__(self): + self.prev_path = _sys.path + _sys.path = self.path + + def __exit__(self, exc_type, exc_value, traceback): + _sys.path = self.prev_path + +def print_env(file=None): + props = ( + ("ARGS", ARGS), + ("ENV['PATH']", ENV.get("PATH")), + ("ENV['PYTHONPATH']", ENV.get("PYTHONPATH")), + ("sys.executable", _sys.executable), + ("sys.path", _sys.path), + ("sys.version", _sys.version.replace("\n", "")), + ("get_current_dir()", get_current_dir()), + ("get_home_dir()", get_home_dir()), + ("get_hostname()", get_hostname()), + ("get_program_name()", get_program_name()), + ("get_user()", get_user()), + ("plano.__file__", __file__), + ("which('plano')", which("plano")), + ) + + print_properties(props, file=file) + +def print_stack(file=None): + _traceback.print_stack(file=file) + +## File operations + +def touch(file, quiet=False): + file = expand(file) + + _notice(quiet, "Touching {}", repr(file)) + + try: + _os.utime(file, None) + except OSError: + append(file, "") + + return file + +# symlinks=True - Preserve symlinks +# inside=True - Place from_path inside to_path if to_path is a directory +def copy(from_path, to_path, symlinks=True, inside=True, quiet=False): + from_path = expand(from_path) + to_path = expand(to_path) + + _notice(quiet, "Copying {} to {}", repr(from_path), repr(to_path)) + + if is_dir(to_path) and inside: + to_path = join(to_path, get_base_name(from_path)) + else: + make_parent_dir(to_path, quiet=True) + + if is_link(from_path) and symlinks: + make_link(to_path, read_link(from_path), quiet=True) + elif is_dir(from_path): + for name in list_dir(from_path): + copy(join(from_path, name), join(to_path, name), symlinks=symlinks, inside=False, quiet=True) + + _shutil.copystat(from_path, to_path) + else: + _shutil.copy2(from_path, to_path) + + return to_path + +# inside=True - Place from_path inside to_path if to_path is a directory +def move(from_path, to_path, inside=True, quiet=False): + from_path = expand(from_path) + to_path = expand(to_path) + + _notice(quiet, "Moving {} to {}", repr(from_path), repr(to_path)) + + to_path = copy(from_path, to_path, inside=inside, quiet=True) + remove(from_path, quiet=True) + + return to_path + +def replace(path, replacement, quiet=False): + path = expand(path) + replacement = expand(replacement) + + 
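+    # Move the replacement into place, keeping a backup of the original
+    # so it can be restored if the move fails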
_notice(quiet, "Replacing {} with {}", repr(path), repr(replacement)) + + with temp_dir() as backup_dir: + backup = join(backup_dir, "backup") + backup_created = False + + if exists(path): + move(path, backup, quiet=True) + backup_created = True + + try: + move(replacement, path, quiet=True) + except OSError: + notice("Removing") + remove(path, quiet=True) + + if backup_created: + move(backup, path, quiet=True) + + raise + + assert not exists(replacement), replacement + assert exists(path), path + + return path + +def remove(paths, quiet=False): + if is_string(paths): + paths = [paths] + + for path in paths: + path = expand(path) + + if not exists(path): + continue + + _debug(quiet, "Removing {}", repr(path)) + + if is_dir(path): + _shutil.rmtree(path, ignore_errors=True) + else: + _os.remove(path) + +def get_file_size(file): + file = expand(file) + return _os.path.getsize(file) + +## IO operations + +def read(file): + file = expand(file) + + with open(file) as f: + return f.read() + +def write(file, string): + file = expand(file) + + make_parent_dir(file, quiet=True) + + with open(file, "w") as f: + f.write(string) + + return file + +def append(file, string): + file = expand(file) + + make_parent_dir(file, quiet=True) + + with open(file, "a") as f: + f.write(string) + + return file + +def prepend(file, string): + file = expand(file) + + orig = read(file) + + return write(file, string + orig) + +def tail(file, count): + file = expand(file) + return "".join(tail_lines(file, count)) + +def read_lines(file): + file = expand(file) + + with open(file) as f: + return f.readlines() + +def write_lines(file, lines): + file = expand(file) + + make_parent_dir(file, quiet=True) + + with open(file, "w") as f: + f.writelines(lines) + + return file + +def append_lines(file, lines): + file = expand(file) + + make_parent_dir(file, quiet=True) + + with open(file, "a") as f: + f.writelines(lines) + + return file + +def prepend_lines(file, lines): + file = expand(file) + + orig_lines = read_lines(file) + + make_parent_dir(file, quiet=True) + + with open(file, "w") as f: + f.writelines(lines) + f.writelines(orig_lines) + + return file + +def tail_lines(file, count): + assert count >= 0, count + + lines = read_lines(file) + + return lines[-count:] + +def string_replace_in_file(file, old, new, count=0): + file = expand(file) + return write(file, read(file).replace(old, new, count)) + +def concatenate(file, input_files): + file = expand(file) + + assert file not in input_files + + make_parent_dir(file, quiet=True) + + with open(file, "wb") as f: + for input_file in input_files: + if not exists(input_file): + continue + + with open(input_file, "rb") as inf: + _shutil.copyfileobj(inf, f) + + return file + +## Iterable operations + +def unique(iterable): + return list(dict.fromkeys(iterable).keys()) + +def skip(iterable, values=(None, "", (), [], {})): + if is_scalar(values): + values = [values] + + items = list() + + for item in iterable: + if item not in values: + items.append(item) + + return items + +## JSON operations + +def read_json(file): + file = expand(file) + + with open(file) as f: + return _json.load(f) + +def write_json(file, data): + file = expand(file) + + make_parent_dir(file, quiet=True) + + with open(file, "w") as f: + _json.dump(data, f, indent=4, separators=(",", ": "), sort_keys=True) + + return file + +def parse_json(json): + return _json.loads(json) + +def emit_json(data): + return _json.dumps(data, indent=4, separators=(",", ": "), sort_keys=True) + +def print_json(data, **kwargs): + 
print(emit_json(data), **kwargs) + +## HTTP operations + +def _run_curl(method, url, content=None, content_file=None, content_type=None, output_file=None, + insecure=False, user=None, password=None, client_cert=None, client_key=None, server_cert=None, + quiet=False): + check_program("curl") + + _notice(quiet, f"Sending {method} request to '{url}'") + + args = ["curl", "-sfL"] + + if method != "GET": + args.extend(["-X", method]) + + if content is not None: + assert content_file is None + args.extend(["-H", "Expect:", "-d", "@-"]) + + if content_file is not None: + assert content is None, content + args.extend(["-H", "Expect:", "-d", f"@{content_file}"]) + + if content_type is not None: + args.extend(["-H", f"'Content-Type: {content_type}'"]) + + if output_file is not None: + args.extend(["-o", output_file]) + + if insecure: + args.append("--insecure") + + if user is not None: + assert password is not None + args.extend(["--user", f"{user}:{password}"]) + + if client_cert is not None: + args.extend(["--cert", client_cert]) + + if client_key is not None: + args.extend(["--key", client_key]) + + if server_cert is not None: + args.extend(["--cacert", server_cert]) + + args.append(url) + + if output_file is not None: + make_parent_dir(output_file, quiet=True) + + proc = run(args, stdin=_subprocess.PIPE, stdout=_subprocess.PIPE, stderr=_subprocess.PIPE, + input=content, check=False, quiet=True) + + if proc.exit_code > 0: + raise PlanoProcessError(proc) + + if output_file is None: + return proc.stdout_result + +def http_get(url, output_file=None, insecure=False, user=None, password=None, + client_cert=None, client_key=None, server_cert=None, + quiet=False): + return _run_curl("GET", url, output_file=output_file, insecure=insecure, user=user, password=password, + client_cert=client_cert, client_key=client_key, server_cert=server_cert, + quiet=quiet) + +def http_get_json(url, + insecure=False, user=None, password=None, + client_cert=None, client_key=None, server_cert=None, quiet=False): + return parse_json(http_get(url, insecure=insecure, user=user, password=password, + client_cert=client_cert, client_key=client_key, server_cert=server_cert, + quiet=quiet)) + +def http_put(url, content, content_type=None, insecure=False, user=None, password=None, + client_cert=None, client_key=None, server_cert=None, + quiet=False): + _run_curl("PUT", url, content=content, content_type=content_type, insecure=insecure, user=user, password=password, + client_cert=client_cert, client_key=client_key, server_cert=server_cert, + quiet=quiet) + +def http_put_file(url, content_file, content_type=None, insecure=False, user=None, password=None, + client_cert=None, client_key=None, server_cert=None, + quiet=False): + _run_curl("PUT", url, content_file=content_file, content_type=content_type, insecure=insecure, user=user, + password=password, client_cert=client_cert, client_key=client_key, server_cert=server_cert, + quiet=quiet) + +def http_put_json(url, data, insecure=False, user=None, password=None, + client_cert=None, client_key=None, server_cert=None, + quiet=False): + http_put(url, emit_json(data), content_type="application/json", insecure=insecure, user=user, password=password, + client_cert=client_cert, client_key=client_key, server_cert=server_cert, + quiet=quiet) + +def http_post(url, content, content_type=None, output_file=None, insecure=False, user=None, password=None, + client_cert=None, client_key=None, server_cert=None, + quiet=False): + return _run_curl("POST", url, content=content, content_type=content_type, 
output_file=output_file, + insecure=insecure, user=user, password=password, + client_cert=client_cert, client_key=client_key, server_cert=server_cert, + quiet=quiet) + +def http_post_file(url, content_file, content_type=None, output_file=None, insecure=False, user=None, password=None, + client_cert=None, client_key=None, server_cert=None, + quiet=False): + return _run_curl("POST", url, content_file=content_file, content_type=content_type, output_file=output_file, + insecure=insecure, user=user, password=password, + client_cert=client_cert, client_key=client_key, server_cert=server_cert, + quiet=quiet) + +def http_post_json(url, data, insecure=False, user=None, password=None, + client_cert=None, client_key=None, server_cert=None, + quiet=False): + return parse_json(http_post(url, emit_json(data), content_type="application/json", + insecure=insecure, user=user, password=password, + client_cert=client_cert, client_key=client_key, server_cert=server_cert, + quiet=quiet)) + +## Link operations + +def make_link(path: str, linked_path: str, quiet=False) -> str: + _notice(quiet, "Making symlink {} to {}", repr(path), repr(linked_path)) + + make_parent_dir(path, quiet=True) + remove(path, quiet=True) + + _os.symlink(linked_path, path) + + return path + +def read_link(path): + return _os.readlink(path) + +## Logging operations + +_logging_levels = ( + "debug", + "notice", + "warning", + "error", + "disabled", +) + +_DEBUG = _logging_levels.index("debug") +_NOTICE = _logging_levels.index("notice") +_WARNING = _logging_levels.index("warning") +_ERROR = _logging_levels.index("error") +_DISABLED = _logging_levels.index("disabled") + +_logging_output = None +_logging_threshold = _NOTICE +_logging_contexts = list() + +def enable_logging(level="notice", output=None, quiet=False): + assert level in _logging_levels, level + + _notice(quiet, "Enabling logging (level={}, output={})", repr(level), repr(nvl(output, "stderr"))) + + global _logging_threshold + _logging_threshold = _logging_levels.index(level) + + if is_string(output): + output = open(output, "w") + + global _logging_output + _logging_output = output + +def disable_logging(quiet=False): + _notice(quiet, "Disabling logging") + + global _logging_threshold + _logging_threshold = _DISABLED + +class logging_enabled: + def __init__(self, level="notice", output=None): + self.level = level + self.output = output + + def __enter__(self): + self.prev_level = _logging_levels[_logging_threshold] + self.prev_output = _logging_output + + if self.level == "disabled": + disable_logging(quiet=True) + else: + enable_logging(level=self.level, output=self.output, quiet=True) + + def __exit__(self, exc_type, exc_value, traceback): + if self.prev_level == "disabled": + disable_logging(quiet=True) + else: + enable_logging(level=self.prev_level, output=self.prev_output, quiet=True) + +class logging_disabled(logging_enabled): + def __init__(self): + super().__init__(level="disabled") + +class logging_context: + def __init__(self, name): + self.name = name + + def __enter__(self): + _logging_contexts.append(self.name) + + def __exit__(self, exc_type, exc_value, traceback): + _logging_contexts.pop() + +def fail(message, *args): + if isinstance(message, BaseException): + if not isinstance(message, PlanoError): + error(message) + + raise message + + if args: + message = message.format(*args) + + raise PlanoError(message) + +def error(message, *args): + log(_ERROR, message, *args) + +def warning(message, *args): + log(_WARNING, message, *args) + +def notice(message, *args): + 
log(_NOTICE, message, *args) + +def debug(message, *args): + log(_DEBUG, message, *args) + +def log(level, message, *args): + if is_string(level): + level = _logging_levels.index(level) + + if _logging_threshold <= level: + _print_message(level, message, args) + +def _print_message(level, message, args): + line = list() + out = nvl(_logging_output, _sys.stderr) + + program_text = "{}:".format(get_program_name()) + + line.append(cformat(program_text, color="gray")) + + level_text = "{}:".format(_logging_levels[level]) + level_color = ("white", "cyan", "yellow", "red", None)[level] + level_bright = (False, False, False, True, False)[level] + + line.append(cformat(level_text, color=level_color, bright=level_bright)) + + for name in _logging_contexts: + line.append(cformat("{}:".format(name), color="yellow")) + + if isinstance(message, BaseException): + exception = message + + line.append(str(exception)) + + print(" ".join(line), file=out) + + if hasattr(exception, "__traceback__"): + _traceback.print_exception(type(exception), exception, exception.__traceback__, file=out) + else: + message = str(message) + + if args: + message = message.format(*args) + + line.append(capitalize(message)) + + print(" ".join(line), file=out) + + out.flush() + +def _notice(quiet, message, *args): + if quiet: + debug(message, *args) + else: + notice(message, *args) + +def _debug(quiet, message, *args): + if not quiet: + debug(message, *args) + +## Path operations + +def expand(path): + path = _os.path.expanduser(path) + path = _os.path.expandvars(path) + + return path + +def get_absolute_path(path): + path = expand(path) + return _os.path.abspath(path) + +def normalize_path(path): + path = expand(path) + return _os.path.normpath(path) + +def get_real_path(path): + path = expand(path) + return _os.path.realpath(path) + +def get_relative_path(path, start=None): + path = expand(path) + return _os.path.relpath(path, start=start) + +def get_file_url(path): + path = expand(path) + return "file:{}".format(get_absolute_path(path)) + +def exists(path): + path = expand(path) + return _os.path.lexists(path) + +def is_absolute(path): + path = expand(path) + return _os.path.isabs(path) + +def is_dir(path): + path = expand(path) + return _os.path.isdir(path) + +def is_file(path): + path = expand(path) + return _os.path.isfile(path) + +def is_link(path): + path = expand(path) + return _os.path.islink(path) + +def join(*paths): + paths = [expand(x) for x in paths] + + path = _os.path.join(*paths) + path = normalize_path(path) + + return path + +def split(path): + path = expand(path) + path = normalize_path(path) + parent, child = _os.path.split(path) + + return parent, child + +def split_extension(path): + path = expand(path) + path = normalize_path(path) + root, ext = _os.path.splitext(path) + + return root, ext + +def get_parent_dir(path): + path = expand(path) + path = normalize_path(path) + parent, child = split(path) + + return parent + +def get_base_name(path): + path = expand(path) + path = normalize_path(path) + parent, name = split(path) + + return name + +def get_name_stem(file): + file = expand(file) + name = get_base_name(file) + + if name.endswith(".tar.gz"): + name = name[:-3] + + stem, ext = split_extension(name) + + return stem + +def get_name_extension(file): + file = expand(file) + name = get_base_name(file) + stem, ext = split_extension(name) + + return ext + +def _check_path(path, test_func, message): + path = expand(path) + + if not test_func(path): + parent_dir = get_parent_dir(path) + + if 
is_dir(parent_dir): + found_paths = ", ".join([repr(x) for x in list_dir(parent_dir)]) + message = "{}. The parent directory contains: {}".format(message.format(repr(path)), found_paths) + else: + message = "{}".format(message.format(repr(path))) + + raise PlanoError(message) + +def check_exists(path): + path = expand(path) + _check_path(path, exists, "File or directory {} not found") + +def check_file(path): + path = expand(path) + _check_path(path, is_file, "File {} not found") + +def check_dir(path): + path = expand(path) + _check_path(path, is_dir, "Directory {} not found") + +def await_exists(path, timeout=30, quiet=False): + path = expand(path) + + _notice(quiet, "Waiting for path {} to exist", repr(path)) + + timeout_message = "Timed out waiting for path {} to exist".format(path) + period = 0.03125 + + with Timer(timeout=timeout, timeout_message=timeout_message) as timer: + while True: + try: + check_exists(path) + except PlanoError: + sleep(period, quiet=True) + period = min(1, period * 2) + else: + return + +## Port operations + +def get_random_port(min=49152, max=65535): + ports = [_random.randint(min, max) for _ in range(3)] + + for port in ports: + try: + check_port(port) + except PlanoError: + return port + + raise PlanoError("Random ports unavailable") + +def check_port(port, host="localhost"): + sock = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM) + sock.setsockopt(_socket.SOL_SOCKET, _socket.SO_REUSEADDR, 1) + + if sock.connect_ex((host, port)) != 0: + raise PlanoError("Port {} (host {}) is not reachable".format(repr(port), repr(host))) + +def await_port(port, host="localhost", timeout=30, quiet=False): + _notice(quiet, "Waiting for port {}", port) + + if is_string(port): + port = int(port) + + timeout_message = "Timed out waiting for port {} to open".format(port) + period = 0.03125 + + with Timer(timeout=timeout, timeout_message=timeout_message) as timer: + while True: + try: + check_port(port, host=host) + except PlanoError: + sleep(period, quiet=True) + period = min(1, period * 2) + else: + return + +## Process operations + +def get_process_id(): + return _os.getpid() + +def _format_command(command, represent=True): + if is_string(command): + args = _shlex.split(command) + else: + args = command + + args = [expand(str(x)) for x in args] + command = " ".join(args) + + if represent: + return repr(command) + else: + return command + +# quiet=False - Don't log at notice level +# stash=False - No output unless there is an error +# output= - Send stdout and stderr to a file +# stdin= - XXX +# stdout= - Send stdout to a file +# stderr= - Send stderr to a file +# shell=False - XXX +def start(command, stdin=None, stdout=None, stderr=None, output=None, shell=False, stash=False, quiet=False): + _notice(quiet, "Starting a new process (command {})", _format_command(command)) + + if output is not None: + stdout, stderr = output, output + + if is_string(stdin): + stdin = expand(stdin) + stdin = open(stdin, "r") + + if is_string(stdout): + stdout = expand(stdout) + stdout = open(stdout, "w") + + if is_string(stderr): + stderr = expand(stderr) + stderr = open(stderr, "w") + + if stdin is None: + stdin = _sys.stdin + + if stdout is None: + stdout = _sys.stdout + + if stderr is None: + stderr = _sys.stderr + + stash_file = None + + if stash: + stash_file = make_temp_file() + out = open(stash_file, "w") + stdout = out + stderr = out + + if shell: + if is_string(command): + args = command + else: + args = " ".join(map(str, command)) + else: + if is_string(command): + args = 
_shlex.split(command) + else: + args = command + + args = [expand(str(x)) for x in args] + + try: + proc = PlanoProcess(args, stdin=stdin, stdout=stdout, stderr=stderr, shell=shell, close_fds=True, stash_file=stash_file) + except OSError as e: + raise PlanoError("Command {}: {}".format(_format_command(command), str(e))) + + _notice(quiet, "{} started", proc) + + return proc + +def stop(proc, timeout=None, quiet=False): + _notice(quiet, "Stopping {}", proc) + + if proc.poll() is not None: + if proc.exit_code == 0: + debug("{} already exited normally", proc) + elif proc.exit_code == -(_signal.SIGTERM): + debug("{} was already terminated", proc) + else: + debug("{} already exited with code {}", proc, proc.exit_code) + + return proc + + kill(proc, quiet=True) + + return wait(proc, timeout=timeout, quiet=True) + +def kill(proc, quiet=False): + _notice(quiet, "Killing {}", proc) + + proc.terminate() + +def wait(proc, timeout=None, check=False, quiet=False): + _notice(quiet, "Waiting for {} to exit", proc) + + try: + proc.wait(timeout=timeout) + except _subprocess.TimeoutExpired: + error("{} timed out after {} seconds", proc, timeout) + raise PlanoTimeout() + + if proc.exit_code == 0: + debug("{} exited normally", proc) + elif proc.exit_code < 0: + debug("{} was terminated by signal {}", proc, abs(proc.exit_code)) + else: + if check: + error("{} exited with code {}", proc, proc.exit_code) + else: + debug("{} exited with code {}", proc, proc.exit_code) + + if proc.stash_file is not None: + if proc.exit_code > 0: + eprint(read(proc.stash_file), end="") + + if not WINDOWS: + remove(proc.stash_file, quiet=True) + + if check and proc.exit_code > 0: + raise PlanoProcessError(proc) + + return proc + +# input= - Pipe to the process +def run(command, stdin=None, stdout=None, stderr=None, input=None, output=None, + stash=False, shell=False, check=True, quiet=False): + _notice(quiet, "Running command {}", _format_command(command)) + + if input is not None: + assert stdin in (None, _subprocess.PIPE), stdin + + input = input.encode("utf-8") + stdin = _subprocess.PIPE + + proc = start(command, stdin=stdin, stdout=stdout, stderr=stderr, output=output, + stash=stash, shell=shell, quiet=True) + + proc.stdout_result, proc.stderr_result = proc.communicate(input=input) + + if proc.stdout_result is not None: + proc.stdout_result = proc.stdout_result.decode("utf-8") + + if proc.stderr_result is not None: + proc.stderr_result = proc.stderr_result.decode("utf-8") + + return wait(proc, check=check, quiet=True) + +# input= - Pipe the given input into the process +def call(command, input=None, shell=False, quiet=False): + _notice(quiet, "Calling {}", _format_command(command)) + + proc = run(command, stdin=_subprocess.PIPE, stdout=_subprocess.PIPE, stderr=_subprocess.PIPE, + input=input, shell=shell, check=True, quiet=True) + + return proc.stdout_result + +def exit(arg=None, *args, **kwargs): + verbose = kwargs.get("verbose", False) + + if arg in (0, None): + if verbose: + notice("Exiting normally") + + _sys.exit() + + if is_string(arg): + if args: + arg = arg.format(*args) + + if verbose: + error(arg) + + _sys.exit(arg) + + if isinstance(arg, BaseException): + if verbose: + error(arg) + + _sys.exit(str(arg)) + + if isinstance(arg, int): + _sys.exit(arg) + + raise PlanoException("Illegal argument") + +_child_processes = list() + +class PlanoProcess(_subprocess.Popen): + def __init__(self, args, **options): + self.stash_file = options.pop("stash_file", None) + + super().__init__(args, **options) + + self.args = args + 
self.stdout_result = None + self.stderr_result = None + + _child_processes.append(self) + + @property + def exit_code(self): + return self.returncode + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, traceback): + stop(self) + + def __repr__(self): + return "process {} (command {})".format(self.pid, _format_command(self.args)) + +class PlanoProcessError(_subprocess.CalledProcessError, PlanoError): + def __init__(self, proc): + super().__init__(proc.exit_code, _format_command(proc.args, represent=False)) + +def _default_sigterm_handler(signum, frame): + for proc in _child_processes: + if proc.poll() is None: + kill(proc, quiet=True) + + exit(-(_signal.SIGTERM)) + +_signal.signal(_signal.SIGTERM, _default_sigterm_handler) + +## String operations + +def string_replace_re(string, pattern, replacement, count=0): + return _re.sub(pattern, replacement, string, count) + +def string_matches_re(string, pattern): + return _re.search(pattern, string) is not None + +def string_matches_glob(string, pattern): + return _fnmatch.fnmatchcase(string, pattern) + +def shorten(string, max, ellipsis=None): + assert max is None or isinstance(max, int) + + if string is None: + return "" + + if max is None or len(string) < max: + return string + else: + if ellipsis is not None: + string = string + ellipsis + end = _max(0, max - len(ellipsis)) + return string[0:end] + ellipsis + else: + return string[0:max] + +def plural(noun, count=0, plural=None): + if noun in (None, ""): + return "" + + if count == 1: + return noun + + if plural is None: + if noun.endswith("s"): + plural = "{}ses".format(noun) + else: + plural = "{}s".format(noun) + + return plural + +def capitalize(string): + if not string: + return "" + + return string[0].upper() + string[1:] + +def base64_encode(string): + return _base64.b64encode(string) + +def base64_decode(string): + return _base64.b64decode(string) + +def url_encode(string): + return _urllib_parse.quote_plus(string) + +def url_decode(string): + return _urllib_parse.unquote_plus(string) + +def parse_url(url): + return _urllib_parse.urlparse(url) + +# A class for building up long strings +# +# append = StringBuilder() +# append("abc") +# append() +# append("123") +# str(append) -> "abc\n\n123" +class StringBuilder: + def __init__(self): + self._items = list() + + def __call__(self, item=""): + self.append(item=item) + + def __str__(self): + return self.join() + + def append(self, item=""): + assert item is not None + self._items.append(str(item)) + + def join(self, separator="\n"): + return separator.join(self._items) + + def write(self, file, separator="\n"): + return write(file, self.join(separator=separator)) + + def clear(self): + self._items.clear() + +## Temp operations + +def get_system_temp_dir(): + return _tempfile.gettempdir() + +def get_user_temp_dir(): + try: + return _os.environ["XDG_RUNTIME_DIR"] + except KeyError: + return join(get_system_temp_dir(), get_user()) + +def make_temp_file(prefix="plano-", suffix="", dir=None): + if dir is None: + dir = get_system_temp_dir() + + return _tempfile.mkstemp(prefix=prefix, suffix=suffix, dir=dir)[1] + +def make_temp_dir(prefix="plano-", suffix="", dir=None): + if dir is None: + dir = get_system_temp_dir() + + return _tempfile.mkdtemp(prefix=prefix, suffix=suffix, dir=dir) + +class temp_file: + def __init__(self, prefix="plano-", suffix="", dir=None): + if dir is None: + dir = get_system_temp_dir() + + self.fd, self.file = _tempfile.mkstemp(prefix=prefix, suffix=suffix, dir=dir) + + def __enter__(self): + 
return self.file + + def __exit__(self, exc_type, exc_value, traceback): + _os.close(self.fd) + + if not WINDOWS: # XXX + remove(self.file, quiet=True) + +class temp_dir: + def __init__(self, prefix="plano-", suffix="", dir=None): + self.dir = make_temp_dir(prefix=prefix, suffix=suffix, dir=dir) + + def __enter__(self): + return self.dir + + def __exit__(self, exc_type, exc_value, traceback): + remove(self.dir, quiet=True) + +## Time operations + +# Unix time +def get_time(): + return _time.time() + +# Python UTC time +def get_datetime(): + return _datetime.datetime.now(tz=_datetime.timezone.utc) + +def parse_timestamp(timestamp, format="%Y-%m-%dT%H:%M:%SZ"): + if timestamp is None: + return None + + datetime = _datetime.datetime.strptime(timestamp, format) + datetime = datetime.replace(tzinfo=_datetime.timezone.utc) + + return datetime + +def format_timestamp(datetime=None, format="%Y-%m-%dT%H:%M:%SZ"): + if datetime is None: + datetime = get_datetime() + + return datetime.strftime(format) + +def format_date(datetime=None): + if datetime is None: + datetime = get_datetime() + + day = datetime.day + month = datetime.strftime("%B") + year = datetime.strftime("%Y") + + return f"{day} {month} {year}" + +def format_time(datetime=None, precision="second"): + if datetime is None: + datetime = get_datetime() + + assert precision in ("minute", "second"), "Illegal precision value" + + hour = datetime.hour + minute = datetime.strftime("%M") + second = datetime.strftime("%S") + + if precision == "second": + return f"{hour}:{minute}:{second}" + else: + return f"{hour}:{minute}" + +def format_duration(seconds, align=False): + assert seconds >= 0 + + if seconds >= 3600: + value = seconds / 3600 + unit = "h" + elif seconds >= 5 * 60: + value = seconds / 60 + unit = "m" + else: + value = seconds + unit = "s" + + if align: + return "{:.1f}{}".format(value, unit) + elif value > 10: + return "{:.0f}{}".format(value, unit) + else: + return "{:.1f}".format(value).removesuffix(".0") + unit + +def sleep(seconds, quiet=False): + _notice(quiet, "Sleeping for {} {}", seconds, plural("second", seconds)) + + _time.sleep(seconds) + +class Timer: + def __init__(self, timeout=None, timeout_message=None): + self.timeout = timeout + self.timeout_message = timeout_message + + if self.timeout is not None and not hasattr(_signal, "SIGALRM"): # pragma: nocover + self.timeout = None + + self.start_time = None + self.stop_time = None + + def start(self): + self.start_time = get_time() + + if self.timeout is not None: + self.prev_handler = _signal.signal(_signal.SIGALRM, self.raise_timeout) + self.prev_timeout, prev_interval = _signal.setitimer(_signal.ITIMER_REAL, self.timeout) + self.prev_timer_suspend_time = get_time() + + assert prev_interval == 0.0, "This case is not yet handled" + + def stop(self): + self.stop_time = get_time() + + if self.timeout is not None: + assert get_time() - self.prev_timer_suspend_time > 0, "This case is not yet handled" + + _signal.signal(_signal.SIGALRM, self.prev_handler) + _signal.setitimer(_signal.ITIMER_REAL, self.prev_timeout) + + def __enter__(self): + self.start() + return self + + def __exit__(self, exc_type, exc_value, traceback): + self.stop() + + @property + def elapsed_time(self): + assert self.start_time is not None + + if self.stop_time is None: + return get_time() - self.start_time + else: + return self.stop_time - self.start_time + + def raise_timeout(self, *args): + raise PlanoTimeout(self.timeout_message) + +## Unique ID operations + +# Length in bytes, renders twice as long in 
hex +def get_unique_id(bytes=16): + assert bytes >= 1 + assert bytes <= 16 + + uuid_bytes = _uuid.uuid4().bytes + uuid_bytes = uuid_bytes[:bytes] + + return _binascii.hexlify(uuid_bytes).decode("utf-8") + +## Value operations + +def nvl(value, replacement): + if value is None: + return replacement + + return value + +def is_string(value): + return isinstance(value, str) + +def is_scalar(value): + return value is None or isinstance(value, (str, int, float, complex, bool)) + +def is_empty(value): + return value in (None, "", (), [], {}) + +def pformat(value): + return _pprint.pformat(value, width=120) + +def format_empty(value, replacement): + if is_empty(value): + value = replacement + + return value + +def format_not_empty(value, template=None): + if not is_empty(value) and template is not None: + value = template.format(value) + + return value + +def format_repr(obj, limit=None): + attrs = ["{}={}".format(k, repr(v)) for k, v in obj.__dict__.items()] + return "{}({})".format(obj.__class__.__name__, ", ".join(attrs[:limit])) + +class Namespace: + def __init__(self, **kwargs): + for name in kwargs: + setattr(self, name, kwargs[name]) + + def __eq__(self, other): + return vars(self) == vars(other) + + def __contains__(self, key): + return key in self.__dict__ + + def __repr__(self): + return format_repr(self) + +## YAML operations + +def read_yaml(file): + check_module("yaml", "Python module 'yaml' is not found. To install it, run 'pip install pyyaml'.") + + import yaml as _yaml + + file = expand(file) + + with open(file) as f: + return _yaml.safe_load(f) + +def write_yaml(file, data): + check_module("yaml", "Python module 'yaml' is not found. To install it, run 'pip install pyyaml'.") + + import yaml as _yaml + + file = expand(file) + + make_parent_dir(file, quiet=True) + + with open(file, "w") as f: + _yaml.safe_dump(data, f) + + return file + +def parse_yaml(yaml): + check_module("yaml", "Python module 'yaml' is not found. To install it, run 'pip install pyyaml'.") + + import yaml as _yaml + + return _yaml.safe_load(yaml) + +def emit_yaml(data): + check_module("yaml", "Python module 'yaml' is not found. To install it, run 'pip install pyyaml'.") + + import yaml as _yaml + + return _yaml.safe_dump(data) + +def print_yaml(data, **kwargs): + print(emit_yaml(data), **kwargs) + +if PLANO_DEBUG: # pragma: nocover + enable_logging(level="debug") diff --git a/external/skewer/external/plano/src/plano/test.py b/external/skewer/external/plano/src/plano/test.py new file mode 100644 index 0000000..fb87d8d --- /dev/null +++ b/external/skewer/external/plano/src/plano/test.py @@ -0,0 +1,428 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +from .main import * +from .command import * + +import argparse as _argparse +import asyncio as _asyncio +import fnmatch as _fnmatch +import functools as _functools +import importlib as _importlib +import inspect as _inspect +import sys as _sys +import traceback as _traceback + +class PlanoTestCommand(BaseCommand): + def __init__(self, test_modules=[]): + self.test_modules = test_modules + + if _inspect.ismodule(self.test_modules): + self.test_modules = [self.test_modules] + + self.parser = BaseArgumentParser() + self.parser.add_argument("include", metavar="PATTERN", nargs="*", default=["*"], + help="Run tests with names matching PATTERN (default '*', all tests)") + self.parser.add_argument("-e", "--exclude", metavar="PATTERN", action="append", default=[], + help="Do not run tests with names matching PATTERN (repeatable)") + self.parser.add_argument("-m", "--module", action="append", default=[], + help="Collect tests from MODULE (repeatable)") + self.parser.add_argument("-l", "--list", action="store_true", + help="Print the test names and exit") + self.parser.add_argument("--enable", metavar="PATTERN", action="append", default=[], + help=_argparse.SUPPRESS) + self.parser.add_argument("--unskip", metavar="PATTERN", action="append", default=[], + help="Run skipped tests matching PATTERN (repeatable)") + self.parser.add_argument("--timeout", metavar="SECONDS", type=int, default=300, + help="Fail any test running longer than SECONDS (default 300)") + self.parser.add_argument("--fail-fast", action="store_true", + help="Exit on the first failure encountered in a test run") + self.parser.add_argument("--iterations", metavar="COUNT", type=int, default=1, + help="Run the tests COUNT times (default 1)") + self.parser.add_argument("--verbose", action="store_true", + help="Print detailed logging to the console") + self.parser.add_argument("--quiet", action="store_true", + help="Print no logging to the console") + + def parse_args(self, args): + return self.parser.parse_args(args) + + def configure_logging(self, args): + if args.verbose: + return "notice", None + + if args.quiet: + return "error", None + + return "warning", None + + def init(self, args): + self.list_only = args.list + self.include_patterns = args.include + self.exclude_patterns = args.exclude + self.enable_patterns = args.enable + self.unskip_patterns = args.unskip + self.timeout = args.timeout + self.fail_fast = args.fail_fast + self.iterations = args.iterations + self.verbose = args.verbose + self.quiet = args.quiet + + try: + for name in args.module: + self.test_modules.append(_importlib.import_module(name)) + except ImportError as e: + raise PlanoError(e) + + def run(self): + if self.list_only: + print_tests(self.test_modules) + return + + for i in range(self.iterations): + run_tests(self.test_modules, include=self.include_patterns, + exclude=self.exclude_patterns, + enable=self.enable_patterns, unskip=self.unskip_patterns, + test_timeout=self.timeout, fail_fast=self.fail_fast, + verbose=self.verbose, quiet=self.quiet) + +class PlanoTestSkipped(Exception): + pass + +def test(_function=None, name=None, module=None, timeout=None, disabled=False): + class Test: + def __init__(self, function): + self.function = function + self.name = name + self.module = module + self.timeout = timeout + self.disabled = disabled + + if self.name is None: + self.name = self.function.__name__.strip("_").replace("_", "-") + + if self.module is None: + self.module = _inspect.getmodule(self.function) + + if not hasattr(self.module, "_plano_tests"): + 
self.module._plano_tests = list() + + self.module._plano_tests.append(self) + + def __call__(self, test_run, unskipped): + try: + ret = self.function() + + if _inspect.iscoroutine(ret): + _asyncio.run(ret) + except SystemExit as e: + error(e) + raise PlanoError("System exit with code {}".format(e)) + + def __repr__(self): + return "test '{}:{}'".format(self.module.__name__, self.name) + + if _function is None: + return Test + else: + return Test(_function) + +def add_test(name, func, *args, **kwargs): + test(_functools.partial(func, *args, **kwargs), name=name, module=_inspect.getmodule(func)) + +def skip_test(reason=None): + if _inspect.stack()[2].frame.f_locals["unskipped"]: + return + + raise PlanoTestSkipped(reason) + +class expect_exception: + def __init__(self, exception_type=Exception, contains=None): + self.exception_type = exception_type + self.contains = contains + + def __enter__(self): + pass + + def __exit__(self, exc_type, exc_value, traceback): + if exc_value is None: + assert False, "Never encountered expected exception {}".format(self.exception_type.__name__) + + if self.contains is None: + return isinstance(exc_value, self.exception_type) + else: + return isinstance(exc_value, self.exception_type) and self.contains in str(exc_value) + +class expect_error(expect_exception): + def __init__(self, contains=None): + super().__init__(PlanoError, contains=contains) + +class expect_timeout(expect_exception): + def __init__(self, contains=None): + super().__init__(PlanoTimeout, contains=contains) + +class expect_system_exit(expect_exception): + def __init__(self, contains=None): + super().__init__(SystemExit, contains=contains) + +class expect_output(temp_file): + def __init__(self, equals=None, contains=None, startswith=None, endswith=None): + super().__init__() + self.equals = equals + self.contains = contains + self.startswith = startswith + self.endswith = endswith + + def __exit__(self, exc_type, exc_value, traceback): + result = read(self.file) + + if self.equals is None: + assert len(result) > 0, result + else: + assert result == self.equals, result + + if self.contains is not None: + assert self.contains in result, result + + if self.startswith is not None: + assert result.startswith(self.startswith), result + + if self.endswith is not None: + assert result.endswith(self.endswith), result + + super().__exit__(exc_type, exc_value, traceback) + +def print_tests(modules): + if _inspect.ismodule(modules): + modules = (modules,) + + for module in modules: + for test in module._plano_tests: + flags = "(disabled)" if test.disabled else "" + print(" ".join((str(test), flags)).strip()) + +def run_tests(modules, include="*", exclude=(), enable=(), unskip=(), test_timeout=300, + fail_fast=False, verbose=False, quiet=False): + if _inspect.ismodule(modules): + modules = (modules,) + + if is_string(include): + include = (include,) + + if is_string(exclude): + exclude = (exclude,) + + if is_string(enable): + enable = (enable,) + + if is_string(unskip): + enable = (unskip,) + + test_run = TestRun(test_timeout=test_timeout, fail_fast=fail_fast, verbose=verbose, quiet=quiet) + + if verbose: + notice("Starting {}", test_run) + elif not quiet: + cprint("=== Configuration ===", color="cyan") + + props = ( + ("Modules", format_empty(", ".join([x.__name__ for x in modules]), "[none]")), + ("Test timeout", format_duration(test_timeout)), + ("Fail fast", fail_fast), + ) + + print_properties(props) + print() + + stop = False + + for module in modules: + if stop: + break + + if verbose: + 
notice("Running tests from module {} (file {})", repr(module.__name__), repr(module.__file__)) + elif not quiet: + cprint("=== Module {} ===".format(repr(module.__name__)), color="cyan") + + if not hasattr(module, "_plano_tests"): + warning("Module {} has no tests", repr(module.__name__)) + continue + + for test in module._plano_tests: + if stop: + break + + if test.disabled and not any([_fnmatch.fnmatchcase(test.name, x) for x in enable]): + continue + + included = any([_fnmatch.fnmatchcase(test.name, x) for x in include]) + excluded = any([_fnmatch.fnmatchcase(test.name, x) for x in exclude]) + unskipped = any([_fnmatch.fnmatchcase(test.name, x) for x in unskip]) + + if included and not excluded: + test_run.tests.append(test) + stop = _run_test(test_run, test, unskipped) + + if not verbose and not quiet: + print() + + total = len(test_run.tests) + skipped = len(test_run.skipped_tests) + failed = len(test_run.failed_tests) + + if total == 0: + raise PlanoError("No tests ran") + + notes = "" + + if skipped != 0: + notes = "({} skipped)".format(skipped) + + if failed == 0: + result_message = "All tests passed {}".format(notes).strip() + else: + result_message = "{} {} failed {}".format(failed, plural("test", failed), notes).strip() + + if verbose: + if failed == 0: + notice(result_message) + else: + error(result_message) + elif not quiet: + cprint("=== Summary ===", color="cyan") + + props = ( + ("Total", total), + ("Skipped", skipped, format_not_empty(", ".join([x.name for x in test_run.skipped_tests]), "({})")), + ("Failed", failed, format_not_empty(", ".join([x.name for x in test_run.failed_tests]), "({})")), + ) + + print_properties(props) + print() + + cprint("=== RESULT ===", color="cyan") + + if failed == 0: + cprint(result_message, color="green") + else: + cprint(result_message, color="red", bright="True") + + print() + + if failed != 0: + raise PlanoError(result_message) + +def _run_test(test_run, test, unskipped): + if test_run.verbose: + notice("Running {}", test) + elif not test_run.quiet: + print("{:.<65} ".format(test.name + " "), end="") + + timeout = nvl(test.timeout, test_run.test_timeout) + + with temp_file() as output_file: + try: + with Timer(timeout=timeout) as timer: + if test_run.verbose: + test(test_run, unskipped) + else: + with output_redirected(output_file, quiet=True): + test(test_run, unskipped) + except KeyboardInterrupt: + raise + except PlanoTestSkipped as e: + test_run.skipped_tests.append(test) + + if test_run.verbose: + notice("{} SKIPPED ({})", test, format_duration(timer.elapsed_time)) + elif not test_run.quiet: + _print_test_result("SKIPPED", timer, "yellow") + print("Reason: {}".format(str(e))) + except Exception as e: + test_run.failed_tests.append(test) + + if test_run.verbose: + _traceback.print_exc() + + if isinstance(e, PlanoTimeout): + error("{} **FAILED** (TIMEOUT) ({})", test, format_duration(timer.elapsed_time)) + else: + error("{} **FAILED** ({})", test, format_duration(timer.elapsed_time)) + elif not test_run.quiet: + if isinstance(e, PlanoTimeout): + _print_test_result("**FAILED** (TIMEOUT)", timer, color="red", bright=True) + else: + _print_test_result("**FAILED**", timer, color="red", bright=True) + + _print_test_error(e) + _print_test_output(output_file) + + if test_run.fail_fast: + return True + else: + test_run.passed_tests.append(test) + + if test_run.verbose: + notice("{} PASSED ({})", test, format_duration(timer.elapsed_time)) + elif not test_run.quiet: + _print_test_result("PASSED", timer) + +def _print_test_result(status, timer, 
color="white", bright=False): + cprint("{:<7}".format(status), color=color, bright=bright, end="") + print("{:>6}".format(format_duration(timer.elapsed_time, align=True))) + +def _print_test_error(e): + cprint("--- Error ---", color="yellow") + + if isinstance(e, PlanoProcessError): + print("> {}".format(str(e))) + else: + lines = _traceback.format_exc().rstrip().split("\n") + lines = ["> {}".format(x) for x in lines] + + print("\n".join(lines)) + +def _print_test_output(output_file): + if get_file_size(output_file) == 0: + return + + cprint("--- Output ---", color="yellow") + + with open(output_file, "r") as out: + for line in out: + print("> {}".format(line), end="") + +class TestRun: + def __init__(self, test_timeout=None, fail_fast=False, verbose=False, quiet=False): + self.test_timeout = test_timeout + self.fail_fast = fail_fast + self.verbose = verbose + self.quiet = quiet + + self.tests = list() + self.skipped_tests = list() + self.failed_tests = list() + self.passed_tests = list() + + def __repr__(self): + return format_repr(self) + +def _main(): # pragma: nocover + PlanoTestCommand().main() diff --git a/external/skewer/plano b/external/skewer/plano new file mode 100755 index 0000000..476427d --- /dev/null +++ b/external/skewer/plano @@ -0,0 +1,28 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +import sys + +sys.path.insert(0, "python") + +from plano import PlanoCommand + +if __name__ == "__main__": + PlanoCommand().main() diff --git a/external/skewer/python/plano b/external/skewer/python/plano new file mode 120000 index 0000000..e9b6dc5 --- /dev/null +++ b/external/skewer/python/plano @@ -0,0 +1 @@ +../external/plano/src/plano \ No newline at end of file diff --git a/external/skewer/python/skewer/__init__.py b/external/skewer/python/skewer/__init__.py new file mode 100644 index 0000000..3324b21 --- /dev/null +++ b/external/skewer/python/skewer/__init__.py @@ -0,0 +1,20 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +from .main import * diff --git a/external/skewer/python/skewer/main.py b/external/skewer/python/skewer/main.py new file mode 100644 index 0000000..2c0202e --- /dev/null +++ b/external/skewer/python/skewer/main.py @@ -0,0 +1,781 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +import inspect + +from plano import * + +__all__ = [ + "generate_readme", "run_steps", "Minikube", +] + +standard_text = read_yaml(join(get_parent_dir(__file__), "standardtext.yaml")) +standard_steps = read_yaml(join(get_parent_dir(__file__), "standardsteps.yaml")) + +standard_steps_by_old_name = dict() + +for name, data in standard_steps.items(): + if "old_name" in data: + data["new_name"] = name + standard_steps_by_old_name[data["old_name"]] = data + +def check_environment(): + check_program("base64") + check_program("curl") + check_program("kubectl") + check_program("skupper") + +def resource_exists(resource): + return run(f"kubectl get {resource}", output=DEVNULL, check=False, quiet=True).exit_code == 0 + +def get_resource_json(resource, jsonpath=""): + return call(f"kubectl get {resource} -o jsonpath='{{{jsonpath}}}'", quiet=True) + +def await_resource(resource, timeout=300): + assert "/" in resource, resource + + start_time = get_time() + + while True: + notice(f"Waiting for {resource} to become available") + + if resource_exists(resource): + break + + if get_time() - start_time > timeout: + fail(f"Timed out waiting for {resource}") + + sleep(5, quiet=True) + + if resource.startswith("deployment/"): + try: + run(f"kubectl wait --for condition=available --timeout {timeout}s {resource}", quiet=True, stash=True) + except: + run(f"kubectl logs {resource}") + raise + +def await_ingress(service, timeout=300): + assert service.startswith("service/"), service + + start_time = get_time() + + await_resource(service, timeout=timeout) + + while True: + notice(f"Waiting for hostname or IP from {service} to become available") + + json = get_resource_json(service, ".status.loadBalancer.ingress") + + if json != "": + break + + if get_time() - start_time > timeout: + fail(f"Timed out waiting for hostnmae or external IP for {service}") + + sleep(5, quiet=True) + + data = parse_json(json) + + if len(data): + if "hostname" in data[0]: + return data[0]["hostname"] + + if "ip" in data[0]: + return data[0]["ip"] + + fail(f"Failed to get hostname or IP from {service}") + +def await_http_ok(service, url_template, user=None, password=None, timeout=300): + assert service.startswith("service/"), service + + start_time = get_time() + + ip = await_ingress(service, timeout=timeout) + + url = url_template.format(ip) + insecure = url.startswith("https") + + while True: + notice(f"Waiting for HTTP OK from {url}") + + try: + http_get(url, insecure=insecure, user=user, password=password, 
quiet=True) + except PlanoError: + if get_time() - start_time > timeout: + fail(f"Timed out waiting for HTTP OK from {url}") + + sleep(5, quiet=True) + else: + break + +def await_console_ok(): + await_resource("secret/skupper-console-users") + + password = get_resource_json("secret/skupper-console-users", ".data.admin") + password = base64_decode(password) + + await_http_ok("service/skupper", "https://{}:8010/", user="admin", password=password) + +def run_steps(skewer_file, kubeconfigs=[], work_dir=None, debug=False): + notice(f"Running steps (skewer_file='{skewer_file}')") + + check_environment() + + model = Model(skewer_file, kubeconfigs) + model.check() + + if work_dir is None: + work_dir = join(get_user_temp_dir(), "skewer") + remove(work_dir, quiet=True) + make_dir(work_dir, quiet=True) + + try: + for step in model.steps: + if step.name == "cleaning_up": + continue + + run_step(model, step, work_dir) + + if "SKEWER_DEMO" in ENV: + pause_for_demo(model) + except: + if debug: + print_debug_output(model) + + raise + finally: + for step in model.steps: + if step.name == "cleaning_up": + run_step(model, step, work_dir, check=False) + break + +def run_step(model, step, work_dir, check=True): + if not step.commands: + return + + notice(f"Running {step}") + + for site_name, commands in step.commands: + with dict(model.sites)[site_name] as site: + if site.platform == "kubernetes": + run(f"kubectl config set-context --current --namespace {site.namespace}", stdout=DEVNULL, quiet=True) + + for command in commands: + if command.apply == "readme": + continue + + if command.await_resource: + await_resource(command.await_resource) + + if command.await_ingress: + await_ingress(command.await_ingress) + + if command.await_http_ok: + await_http_ok(*command.await_http_ok) + + if command.await_console_ok: + await_console_ok() + + if command.await_port: + await_port(command.await_port, timeout=300) + + if command.run: + proc = run(command.run.replace("~", work_dir), shell=True, check=False) + + if command.expect_failure: + if proc.exit_code == 0: + fail("A command expected to fail did not fail") + + continue + + if check and proc.exit_code > 0: + raise PlanoProcessError(proc) + +def pause_for_demo(model): + notice("Pausing for demo time") + + first_site = [x for _, x in model.sites][0] + console_url = None + password = None + frontend_url = None + + if first_site.platform == "kubernetes": + with first_site: + if resource_exists("deployment/frontend"): + frontend_url = f"http://localhost:8080/" + + if resource_exists("secret/skupper-console-users"): + console_host = await_ingress("service/skupper") + console_url = f"https://{console_host}:8010/" + + await_resource("secret/skupper-console-users") + password = get_resource_json("secret/skupper-console-users", ".data.admin") + password = base64_decode(password).decode("ascii") + + print() + print("Demo time!") + print() + print("Sites:") + print() + + for _, site in model.sites: + if site.platform == "kubernetes": + kubeconfig = site.env["KUBECONFIG"] + print(f" {site.name}: export KUBECONFIG={kubeconfig}") + elif site.platform == "podman": + print(f" {site.name}: export SKUPPER_PLATFORM=podman") + + print() + + if frontend_url: + print(f"Frontend URL: {frontend_url}") + print() + + if console_url: + print(f"Console URL: {console_url}") + print( "Console user: admin") + print(f"Console password: {password}") + print() + + if "SKEWER_DEMO_NO_WAIT" not in ENV: + while input("Are you done (yes)? 
") != "yes": # pragma: nocover + pass + +def print_debug_output(model): + print("TROUBLE!") + print("-- Start of debug output") + + for _, site in model.sites: + print(f"---- Debug output for site '{site.name}'") + + with site: + if site.platform == "kubernetes": + run("kubectl get services", check=False) + run("kubectl get deployments", check=False) + run("kubectl get statefulsets", check=False) + run("kubectl get pods", check=False) + run("kubectl get events", check=False) + + run("skupper version", check=False) + run("skupper site status", check=False) + run("skupper link status", check=False) + run("skupper listener status", check=False) + run("skupper connector status", check=False) + + if site.platform == "kubernetes": + run("kubectl logs deployment/skupper-router", check=False) + # run("kubectl logs deployment/skupper-service-controller", check=False) + + print("-- End of debug output") + +def generate_readme(skewer_file, output_file): + notice(f"Generating the readme (skewer_file='{skewer_file}', output_file='{output_file}')") + + model = Model(skewer_file) + model.check() + + out = list() + + def generate_workflow_url(workflow): + result = parse_url(workflow) + + if result.scheme: + return workflow + + owner, repo = get_github_owner_repo() + + return f"https://github.com/{owner}/{repo}/actions/workflows/{workflow}" + + def generate_step_heading(step): + if step.numbered: + return f"Step {step.number}: {step.title}" + else: + return step.title + + def append_toc_entry(heading, condition=True): + if not condition: + return + + fragment = string_replace_re(heading, r"[ -]", "_") + fragment = string_replace_re(fragment, r"[\W]", "") + fragment = fragment.replace("_", "-") + fragment = fragment.lower() + + out.append(f"* [{heading}](#{fragment})") + + def append_section(heading, text): + if not text: + return + + out.append(f"## {heading}") + out.append("") + out.append(text) + out.append("") + + out.append("") + out.append("") + + out.append(f"# {model.title}") + out.append("") + + if model.workflow: + url = generate_workflow_url(model.workflow) + out.append(f"[![main]({url}/badge.svg)]({url})") + out.append("") + + if model.subtitle: + out.append(f"#### {model.subtitle}") + out.append("") + + out.append(standard_text["example_suite"].strip()) + out.append("") + out.append("#### Contents") + out.append("") + + append_toc_entry("Overview", model.overview) + append_toc_entry("Prerequisites", model.prerequisites) + + for step in model.steps: + append_toc_entry(generate_step_heading(step)) + + append_toc_entry("Summary", model.summary) + append_toc_entry("Next steps", model.next_steps) + append_toc_entry("About this example", model.about_this_example) + + out.append("") + + append_section("Overview", model.overview) + append_section("Prerequisites", model.prerequisites) + + for step in model.steps: + heading = generate_step_heading(step) + text = generate_readme_step(model, step) + + append_section(heading, text) + + append_section("Summary", model.summary) + append_section("Next steps", model.next_steps) + append_section("About this example", model.about_this_example) + + write(output_file, "\n".join(out).strip() + "\n") + +def generate_readme_step(model, step): + notice(f"Generating {step}") + + out = list() + + if step.preamble: + out.append(step.preamble.strip()) + out.append("") + + for site_name, commands in step.commands: + site = dict(model.sites)[site_name] + outputs = list() + + out.append(f"_**{site.title}:**_") + out.append("") + out.append("~~~ shell") + + for command in 
commands: + if command.apply == "test": + continue + + if command.run: + out.append(command.run) + + if command.output: + assert command.run + + outputs.append((command.run, command.output)) + + out.append("~~~") + out.append("") + + if outputs: + out.append("_Sample output:_") + out.append("") + out.append("~~~ console") + out.append("\n\n".join((f"$ {run}\n{output.strip()}" for run, output in outputs))) + out.append("~~~") + out.append("") + + if step.postamble: + out.append(step.postamble.strip()) + + return "\n".join(out).strip() + +def apply_kubeconfigs(model, kubeconfigs): + kube_sites = [x for _, x in model.sites if x.platform == "kubernetes"] + + if kubeconfigs and len(kubeconfigs) < len(kube_sites): + fail("The provided kubeconfigs are fewer than the number of Kubernetes sites") + + for site, kubeconfig in zip(kube_sites, kubeconfigs): + site.env["KUBECONFIG"] = kubeconfig + +def apply_standard_steps(model): + notice("Applying standard steps") + + for step in model.steps: + if "standard" not in step.data: + continue + + standard_step_name = step.data["standard"] + + try: + standard_step_data = standard_steps[standard_step_name] + except KeyError: + try: + standard_step_data = standard_steps_by_old_name[standard_step_name] + new_name = standard_step_data["new_name"] + + warning(f"Step '{standard_step_name}' has a new name: '{new_name}'") + except KeyError: + fail(f"Standard step '{standard_step_name}' not found") + + del step.data["standard"] + + def apply_attribute(name, default=None): + standard_value = standard_step_data.get(name, default) + value = step.data.get(name, standard_value) + + if is_string(value): + if standard_value is not None: + value = value.replace("@default@", str(nvl(standard_value, "")).strip()) + + for i, site in enumerate([x for _, x in model.sites]): + value = value.replace(f"@site{i}@", site.title) + + if site.namespace: + value = value.replace(f"@namespace{i}@", site.namespace) + + value = value.strip() + + step.data[name] = value + + apply_attribute("name") + apply_attribute("title") + apply_attribute("numbered", True) + apply_attribute("preamble") + apply_attribute("postamble") + + platform = standard_step_data.get("platform") + + if "commands" not in step.data and "commands" in standard_step_data: + step.data["commands"] = dict() + + for i, item in enumerate(dict(model.sites).items()): + site_name, site = item + + if platform and site.platform != platform: + continue + + if str(i) in standard_step_data["commands"]: + # Is a specific index in the standard commands? + commands = standard_step_data["commands"][str(i)] + step.data["commands"][site_name] = resolve_command_variables(commands, site) + elif "*" in standard_step_data["commands"]: + # Is "*" in the standard commands? 
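+                    # The "*" entry supplies the default commands, applied to any site that has no index-specific entry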
+ commands = standard_step_data["commands"]["*"] + step.data["commands"][site_name] = resolve_command_variables(commands, site) + else: + # Otherwise, omit commands for this site + continue + +def resolve_command_variables(commands, site): + resolved_commands = list() + + for command in commands: + resolved_command = dict(command) + + if "run" in command: + resolved_command["run"] = command["run"] + + if site.platform == "kubernetes": + resolved_command["run"] = resolved_command["run"].replace("@kubeconfig@", site.env["KUBECONFIG"]) + resolved_command["run"] = resolved_command["run"].replace("@namespace@", site.namespace) + + if "output" in command: + resolved_command["output"] = command["output"] + + if site.platform == "kubernetes": + resolved_command["output"] = resolved_command["output"].replace("@kubeconfig@", site.env["KUBECONFIG"]) + resolved_command["output"] = resolved_command["output"].replace("@namespace@", site.namespace) + + resolved_commands.append(resolved_command) + + return resolved_commands + +def get_github_owner_repo(): + check_program("git") + + url = call("git remote get-url origin", quiet=True) + result = parse_url(url) + + if result.scheme == "" and result.path.startswith("git@github.com:"): + path = result.path.removeprefix("git@github.com:") + path = path.removesuffix(".git") + + return path.split("/", 1) + + if result.scheme in ("http", "https") and result.netloc == "github.com": + path = result.path.removeprefix("/") + + return path.split("/", 1) + + fail("Unknown origin URL format") + +def object_property(name, default=None): + def get(obj): + value = obj.data.get(name, default) + + if is_string(value): + value = value.replace("@default@", str(nvl(default, "")).strip()) + value = value.strip() + + return value + + return property(get) + +def check_required_attributes(obj, *names): + for name in names: + if name not in obj.data: + fail(f"{obj} is missing required attribute '{name}'") + +def check_unknown_attributes(obj): + known_attributes = dict(inspect.getmembers(obj.__class__, lambda x: isinstance(x, property))) + + for name in obj.data: + if name not in known_attributes: + fail(f"{obj} has unknown attribute '{name}'") + +class Model: + title = object_property("title") + subtitle = object_property("subtitle") + workflow = object_property("workflow", "main.yaml") + overview = object_property("overview") + prerequisites = object_property("prerequisites", standard_text["prerequisites"]) + summary = object_property("summary") + next_steps = object_property("next_steps", standard_text["next_steps"]) + about_this_example = object_property("about_this_example", standard_text["about_this_example"]) + + def __init__(self, skewer_file, kubeconfigs=[]): + self.skewer_file = skewer_file + self.data = read_yaml(self.skewer_file) + + apply_kubeconfigs(self, kubeconfigs) + apply_standard_steps(self) + + def __repr__(self): + return f"model '{self.skewer_file}'" + + def check(self): + check_required_attributes(self, "title", "sites", "steps") + check_unknown_attributes(self) + + for _, site in self.sites: + site.check() + + for step in self.steps: + step.check() + + @property + def sites(self): + for name, data in self.data["sites"].items(): + yield name, Site(self, data, name) + + @property + def steps(self): + for data in self.data["steps"]: + yield Step(self, data) + +class Site: + platform = object_property("platform") + namespace = object_property("namespace") + env = object_property("env", dict()) + + def __init__(self, model, data, name): + assert name is not None + + 
self.model = model + self.data = data + self.name = name + + def __repr__(self): + return f"site '{self.name}'" + + def __enter__(self): + self._logging_context = logging_context(self.name) + self._working_env = working_env(**self.env) + + self._logging_context.__enter__() + self._working_env.__enter__() + + return self + + def __exit__(self, exc_type, exc_value, traceback): + self._working_env.__exit__(exc_type, exc_value, traceback) + self._logging_context.__exit__(exc_type, exc_value, traceback) + + def check(self): + check_required_attributes(self, "platform") + check_unknown_attributes(self) + + if self.platform not in ("kubernetes", "podman", None): + fail(f"{self} attribute 'platform' has an illegal value: {self.platform}") + + if self.platform == "kubernetes": + check_required_attributes(self, "namespace") + + if "KUBECONFIG" not in self.env: + fail(f"Kubernetes {self} has no KUBECONFIG environment variable") + + if self.platform == "podman": + if "SKUPPER_PLATFORM" not in self.env: + fail(f"Podman {self} has no SKUPPER_PLATFORM environment variable") + + platform = self.env["SKUPPER_PLATFORM"] + + if platform != "podman": + fail(f"Podman {self} environment variable SKUPPER_PLATFORM has an illegal value: {platform}") + + @property + def title(self): + return self.data.get("title", capitalize(self.name)) + +class Step: + numbered = object_property("numbered", True) + name = object_property("name") + title = object_property("title") + preamble = object_property("preamble") + postamble = object_property("postamble") + + def __init__(self, model, data): + self.model = model + self.data = data + + def __repr__(self): + return f"step {self.number} '{self.title}'" + + def check(self): + check_required_attributes(self, "title") + check_unknown_attributes(self) + + site_names = [x.name for _, x in self.model.sites] + + for site_name, commands in self.commands: + if site_name not in site_names: + fail(f"Unknown site name '{site_name}' in commands for {self}") + + for command in commands: + command.check() + + @property + def number(self): + return self.model.data["steps"].index(self.data) + 1 + + @property + def commands(self): + for site_name, commands in self.data.get("commands", dict()).items(): + yield site_name, [Command(self.model, data) for data in commands] + +class Command: + run = object_property("run") + expect_failure = object_property("expect_failure", False) + apply = object_property("apply") + output = object_property("output") + await_resource = object_property("await_resource") + await_ingress = object_property("await_ingress") + await_http_ok = object_property("await_http_ok") + await_console_ok = object_property("await_console_ok") + await_port = object_property("await_port") + + def __init__(self, model, data): + self.model = model + self.data = data + + def __repr__(self): + if self.run: + return f"command '{self.run.splitlines()[0]}'" + + return "command" + + def check(self): + check_unknown_attributes(self) + +class Minikube: + def __init__(self, skewer_file): + self.skewer_file = skewer_file + self.kubeconfigs = list() + self.work_dir = join(get_user_temp_dir(), "skewer") + + def __enter__(self): + notice("Starting Minikube") + + check_environment() + check_program("minikube") + + profile_data = parse_json(call("minikube profile list --output json", quiet=True)) + + for profile in profile_data.get("valid", []): + if profile["Name"] == "skewer": + fail("A Minikube profile 'skewer' already exists. 
Delete it using 'minikube delete -p skewer'.") + + remove(self.work_dir, quiet=True) + make_dir(self.work_dir, quiet=True) + + run("minikube start -p skewer --auto-update-drivers false") + + try: + tunnel_output_file = open(f"{self.work_dir}/minikube-tunnel-output", "w") + self.tunnel = start("minikube tunnel -p skewer", output=tunnel_output_file) + + try: + model = Model(self.skewer_file) + model.check() + + kube_sites = [x for _, x in model.sites if x.platform == "kubernetes"] + + for site in kube_sites: + kubeconfig = site.env["KUBECONFIG"] + kubeconfig = kubeconfig.replace("~", self.work_dir) + kubeconfig = expand(kubeconfig) + + site.env["KUBECONFIG"] = kubeconfig + + self.kubeconfigs.append(kubeconfig) + + with site: + run("minikube update-context -p skewer") + check_file(ENV["KUBECONFIG"]) + except: + stop(self.tunnel) + raise + except: + run("minikube delete -p skewer") + raise + + return self + + def __exit__(self, exc_type, exc_value, traceback): + notice("Stopping Minikube") + + stop(self.tunnel) + + run("minikube delete -p skewer") diff --git a/external/skewer/python/skewer/planocommands.py b/external/skewer/python/skewer/planocommands.py new file mode 100644 index 0000000..5a718b3 --- /dev/null +++ b/external/skewer/python/skewer/planocommands.py @@ -0,0 +1,91 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +from plano import * +from plano.github import * +from skewer import * + +_debug_param = CommandParameter("debug", help="Produce extra debug output on failure") + +@command +def generate(output="README.md"): + """ + Generate README.md from the data in skewer.yaml + """ + generate_readme("skewer.yaml", output) + +@command +def render(quiet=False): + """ + Render README.html from README.md + """ + generate() + + markdown = read("README.md") + html = convert_github_markdown(markdown) + + write("README.html", html) + + if not quiet: + print(f"file:{get_real_path('README.html')}") + +@command +def clean(): + remove(find(".", "__pycache__")) + remove("README.html") + +@command(parameters=[_debug_param]) +def run_(*kubeconfigs, debug=False): + """ + Run the example steps + + If no kubeconfigs are provided, Skewer starts a local Minikube + instance and runs the steps using it. 
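+
+    When kubeconfigs are given, provide one for each Kubernetes
+    site defined in skewer.yaml.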
+ """ + if not kubeconfigs: + with Minikube("skewer.yaml") as mk: + run_steps("skewer.yaml", kubeconfigs=mk.kubeconfigs, work_dir=mk.work_dir, debug=debug) + else: + run_steps("skewer.yaml", kubeconfigs=kubeconfigs, debug=debug) + +@command(parameters=[_debug_param]) +def demo(*kubeconfigs, debug=False): + """ + Run the example steps and pause for a demo before cleaning up + """ + with working_env(SKEWER_DEMO=1): + run_(*kubeconfigs, debug=debug) + +@command(parameters=[_debug_param]) +def test_(debug=False): + """ + Test README generation and run the steps on Minikube + """ + generate(output=make_temp_file()) + run_(debug=debug) + +@command +def update_skewer(): + """ + Update the embedded Skewer repo and GitHub workflow + + This results in local changes to review and commit. + """ + update_external_from_github("external/skewer", "skupperproject", "skewer", "v2") + copy("external/skewer/config/.github/workflows/main.yaml", ".github/workflows/main.yaml") diff --git a/external/skewer/python/skewer/standardsteps.yaml b/external/skewer/python/skewer/standardsteps.yaml new file mode 100644 index 0000000..a277f84 --- /dev/null +++ b/external/skewer/python/skewer/standardsteps.yaml @@ -0,0 +1,361 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +# +# Platform steps +# + +platform/access_your_kubernetes_clusters: + old_name: kubernetes/set_up_your_clusters + title: Access your Kubernetes clusters + platform: kubernetes + preamble: | + Skupper is designed for use with multiple Kubernetes clusters. + The `skupper` and `kubectl` commands use your + [kubeconfig][kubeconfig] and current context to select the cluster + and namespace where they operate. + + [kubeconfig]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/ + + This example uses multiple cluster contexts at once. The + `KUBECONFIG` environment variable tells `skupper` and `kubectl` + which kubeconfig to use. + + For each cluster, open a new terminal window. In each terminal, + set the `KUBECONFIG` environment variable to a different path and + log in to your cluster. + commands: + "*": + - run: export KUBECONFIG=@kubeconfig@ + - run: "" + apply: readme + postamble: | + **Note:** The login procedure varies by provider. +platform/access_your_kubernetes_cluster: + old_name: kubernetes/set_up_your_kubernetes_cluster + title: Access your Kubernetes cluster + platform: kubernetes + preamble: | + Open a new terminal window and log in to your cluster. + commands: + "*": + - run: "" + apply: readme + postamble: | + **Note:** The login procedure varies by provider. 
+platform/set_up_your_podman_environments: + title: Set up your Podman environments + platform: podman + preamble: | + For each host with a Podman environment, open a new terminal + window and set the `SKUPPER_PLATFORM` environment variable to + `podman`. This sets the Skupper platform to Podman for this + terminal session. + + Use `systemctl` to enable the Podman API service. + commands: + "*": + - run: export SKUPPER_PLATFORM=podman + - run: | + Want: skupper system install + Want: skupper system start + systemctl --user enable --now podman.socket + postamble: | + If the `systemctl` command doesn't work, you can try the `podman + system service` command instead: + + ~~~ + podman system service --time 0 unix://$XDG_RUNTIME_DIR/podman/podman.sock & + ~~~ +platform/set_up_your_podman_environment: + old_name: podman/set_up_your_podman_environment + title: Set up your Podman environment + platform: podman + preamble: | + Open a new terminal window and set the `SKUPPER_PLATFORM` + environment variable to `podman`. This sets the Skupper platform + to Podman for this terminal session. + + Use `systemctl` to enable the Podman API service. + commands: + "*": + - run: export SKUPPER_PLATFORM=podman + - run: | + Want: skupper system install + Want: skupper system start + systemctl --user enable --now podman.socket + postamble: | + If the `systemctl` command does not work, you can try the `podman + system service` command instead: + + ~~~ + podman system service --time 0 unix://$XDG_RUNTIME_DIR/podman/podman.sock & + ~~~ + +# +# Skupper steps +# + +platform/install_the_skupper_command_line_tool: + old_name: general/install_the_skupper_command_line_tool + title: Install the Skupper command-line tool + preamble: | + This example uses the Skupper command-line tool to create Skupper + resources. You need to install the `skupper` command only once + for each development environment. + + On Linux or Mac, you can use the install script (inspect it + [here][install-script]) to download and extract the command: + + ~~~ shell + curl https://skupper.io/install.sh | sh -s -- --version 2.0.0-preview-2 + ~~~ + + The script installs the command under your home directory. It + prompts you to add the command to your path if necessary. + + For Windows and other installation options, see [Installing + Skupper][install-docs]. + + [install-script]: https://github.com/skupperproject/skupper-website/blob/main/input/install.sh + [install-docs]: https://skupper.io/install/ +platform/install_skupper_on_your_kubernetes_clusters: + old_name: kubernetes/install_skupper_on_your_clusters + title: Install Skupper on your Kubernetes clusters + platform: kubernetes + preamble: | + Using Skupper on Kubernetes requires the installation of the + Skupper custom resource definitions (CRDs) and the Skupper + controller. + + For each cluster, use `kubectl apply` with the Skupper + installation YAML to install the CRDs and controller. + commands: + "*": + - run: kubectl apply -f https://skupper.io/v2/install.yaml +platform/install_skupper_on_your_kubernetes_cluster: + old_name: kubernetes/install_skupper_on_your_cluster + title: Install Skupper on your Kubernetes cluster + platform: kubernetes + preamble: | + Using Skupper on Kubernetes requires the installation of the + Skupper custom resource definitions (CRDs) and the Skupper + controller. + + Use `kubectl apply` with the Skupper installation YAML to install + the CRDs and controller. 
+ commands: + "*": + - run: kubectl apply -f https://skupper.io/v2/install.yaml +skupper/create_your_sites/kubernetes_cli: + old_name: kubernetes/create_your_sites + title: Create your sites + preamble: | + A Skupper _site_ is a location where your application workloads + are running. Sites are linked together to form a network for your + application. + + For each namespace, use `skupper site create` with a site name of + your choice. This creates the site resource and deploys the + Skupper router to the namespace. + + **Note:** If you are using Minikube, you need to [start minikube + tunnel][minikube-tunnel] before you run `skupper site create`. + + + + [minikube-tunnel]: https://skupper.io/start/minikube.html#running-minikube-tunnel + commands: + "0": + - run: skupper site create @namespace@ --enable-link-access --timeout 2m + output: | + Waiting for status... + Site "@namespace@" is configured. Check the status to see when it is ready + "*": + - run: skupper site create @namespace@ --timeout 2m + output: | + Waiting for status... + Site "@namespace@" is configured. Check the status to see when it is ready + postamble: | + You can use `skupper site status` at any time to check the status + of your site. +skupper/link_your_sites/kubernetes_cli: + old_name: kubernetes/link_your_sites + title: Link your sites + platform: kubernetes + preamble: | + A Skupper _link_ is a channel for communication between two sites. + Links serve as a transport for application connections and + requests. + + Creating a link requires the use of two Skupper commands in + conjunction: `skupper token issue` and `skupper token redeem`. + The `skupper token issue` command generates a secret token that + can be transferred to a remote site and redeemed for a link to the + issuing site. The `skupper token redeem` command uses the token + to create the link. + + **Note:** The link token is truly a *secret*. Anyone who has the + token can link to your site. Make sure that only those you trust + have access to it. + + First, use `skupper token issue` in @site0@ to generate the token. + Then, use `skupper token redeem` in @site1@ to link the sites. + commands: + "0": + - run: skupper token issue ~/secret.token + output: | + Waiting for token status ... + + Grant "west-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" is ready + Token file /run/user/1000/skewer/secret.token created + + Transfer this file to a remote site. At the remote site, + create a link to this site using the "skupper token redeem" command: + + skupper token redeem + + The token expires after 1 use(s) or after 15m0s. + "1": + - run: skupper token redeem ~/secret.token + output: | + Waiting for token status ... + Token "west-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" has been redeemed + You can now safely delete /run/user/1000/skewer/secret.token + postamble: | + If your terminal sessions are on different machines, you may need + to use `scp` or a similar tool to transfer the token securely. By + default, tokens expire after a single use or 15 minutes after + being issued. +skupper/cleaning_up/kubernetes_cli: + old_name: general/cleaning_up + name: cleaning_up + title: Cleaning up + numbered: false + preamble: | + To remove Skupper and the other resources from this exercise, use + the following commands. 
+ commands: + "*": + - run: skupper delete + +# +# Hello World steps +# + +hello_world/deploy_the_frontend_and_backend/kubernetes_cli: + old_name: hello_world/deploy_the_frontend_and_backend + title: Deploy the frontend and backend + preamble: | + This example runs the frontend and the backend in separate + Kubernetes namespaces, on different clusters. + + For each cluster, use `kubectl create namespace` and `kubectl + config set-context` to create the namespace you wish to use and + set the namespace on your current context. + + Then, use `kubectl create deployment` to deploy the frontend in + @site0@ and the backend in @site1@. + commands: + "0": + - run: kubectl create namespace @namespace@ + apply: readme + - run: kubectl create namespace @namespace@ --dry-run=client -o yaml | kubectl apply -f - + apply: test + - run: kubectl config set-context --current --namespace @namespace@ + - run: kubectl create deployment frontend --image quay.io/skupper/hello-world-frontend + "1": + - run: kubectl create namespace @namespace@ + apply: readme + - run: kubectl create namespace @namespace@ --dry-run=client -o yaml | kubectl apply -f - + apply: test + - run: kubectl config set-context --current --namespace @namespace@ + - run: kubectl create deployment backend --image quay.io/skupper/hello-world-backend --replicas 3 +hello_world/expose_the_backend_service/kubernetes_cli: + old_name: hello_world/expose_the_backend + title: Expose the backend service + preamble: | + We now have our sites linked to form a Skupper network, but no + services are exposed on it. + + Skupper uses _listeners_ and _connectors_ to expose services + across sites inside a Skupper network. A listener is a local + endpoint for client connections, configured with a routing key. A + connector exists in a remote site and binds a routing key to a + particular set of servers. Skupper routers forward client + connections from local listeners to remote connectors with + matching routing keys. + + In @site0@, use the `skupper listener create` command to create a + listener for the backend. In @site1@, use the `skupper connector + create` command to create a matching connector. + commands: + "0": + - run: skupper listener create backend 8080 + output: | + Waiting for create to complete... + Listener "backend" is ready + "1": + - run: skupper connector create backend 8080 + output: | + Waiting for create to complete... + Connector "backend" is ready + postamble: | + The commands shown above use the name argument, `backend`, to also + set the default routing key and pod selector. You can use the + `--routing-key` and `--selector` options to set specific values. + + +hello_world/access_the_frontend_service/kubernetes_cli: + old_name: hello_world/access_the_frontend + title: Access the frontend service + preamble: | + In order to use and test the application, we need external access + to the frontend. + + Use `kubectl port-forward` to make the frontend available at + `localhost:8080`. + commands: + "0": + - await_resource: deployment/frontend + - run: kubectl port-forward deployment/frontend 8080:8080 + apply: readme + - run: kubectl port-forward deployment/frontend 8080:8080 > /dev/null & + apply: test + - await_port: 8080 + - run: curl http://localhost:8080/api/health + apply: test + postamble: | + You can now access the web interface by navigating to + [http://localhost:8080](http://localhost:8080) in your browser. 
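The expose-the-backend postamble above notes that the name argument also sets the default routing key and pod selector. A hedged sketch of the fully explicit form follows, assuming the `app=backend` label that `kubectl create deployment backend` applies by default; the flag spellings are the `--routing-key` and `--selector` options named in that postamble.

~~~ shell
# Frontend site: listener with an explicit routing key
skupper listener create backend 8080 --routing-key backend

# Backend site: matching connector, selecting the backend pods by label
skupper connector create backend 8080 --routing-key backend --selector app=backend
~~~

Traffic arriving at the listener is forwarded to any remote connector whose routing key matches, which is what lets the frontend reach backend pods running in another cluster.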
+hello_world/cleaning_up/kubernetes_cli: + old_name: hello_world/cleaning_up + name: cleaning_up + title: Cleaning up + numbered: false + preamble: | + To remove Skupper and the other resources from this exercise, use + the following commands: + commands: + "0": + - run: skupper site delete --all + - run: kubectl delete deployment/frontend + "1": + - run: skupper site delete --all + - run: kubectl delete deployment/backend diff --git a/external/skewer/python/skewer/standardtext.yaml b/external/skewer/python/skewer/standardtext.yaml new file mode 100644 index 0000000..cdd5966 --- /dev/null +++ b/external/skewer/python/skewer/standardtext.yaml @@ -0,0 +1,62 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +example_suite: | + This example is part of a [suite of examples][examples] showing the + different ways you can use [Skupper][website] to connect services + across cloud providers, data centers, and edge sites. + + [website]: https://skupper.io/ + [examples]: https://skupper.io/examples/index.html +prerequisites: | + * Access to at least one Kubernetes cluster, from [any provider you + choose][kube-providers]. + + * The `kubectl` command-line tool, version 1.15 or later + ([installation guide][install-kubectl]). + + * The `skupper` command-line tool, version 2.0 or later. On Linux + or Mac, you can use the install script (inspect it + [here][cli-install-script]) to download and extract the command: + + ~~~ shell + curl https://skupper.io/install.sh | sh -s -- --version 2.0.0-preview-2 + ~~~ + + See [Installing the Skupper CLI][cli-install-docs] for more + information. + + [kube-providers]: https://skupper.io/start/kubernetes.html + [install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ + [cli-install-script]: https://github.com/skupperproject/skupper-website/blob/main/input/install.sh + [cli-install-docs]: https://skupper.io/install/ +next_steps: | + Check out the other [examples][examples] on the Skupper website. +about_this_example: | + This example was produced using [Skewer][skewer], a library for + documenting and testing Skupper examples. + + [skewer]: https://github.com/skupperproject/skewer + + Skewer provides utility functions for generating the README and + running the example steps. Use the `./plano` command in the project + root to see what is available. + + To quickly stand up the example using Minikube, try the `./plano demo` + command. diff --git a/external/skewer/python/skewer/tests.py b/external/skewer/python/skewer/tests.py new file mode 100644 index 0000000..7fa00b6 --- /dev/null +++ b/external/skewer/python/skewer/tests.py @@ -0,0 +1,67 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. 
See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +from plano import * +from skewer import * + +@test +def plano_commands(): + with working_dir("example"): + run("./plano") + run("./plano generate") + run("./plano render") + run("./plano clean") + +@test +def config_files(): + check_file("config/.github/workflows/main.yaml") + check_file("config/.gitignore") + check_file("config/.plano.py") + + parse_yaml(read("config/.github/workflows/main.yaml")) + +@test +def generate_readme_(): + with working_dir("example"): + generate_readme("skewer.yaml", "README.md") + check_file("README.md") + +@test +def run_steps_(): + with working_dir("example"): + with Minikube("skewer.yaml") as mk: + run_steps("skewer.yaml", kubeconfigs=mk.kubeconfigs, work_dir=mk.work_dir, debug=True) + +@test +def run_steps_demo(): + with working_dir("example"): + with Minikube("skewer.yaml") as mk: + run_steps("skewer.yaml", kubeconfigs=mk.kubeconfigs, work_dir=mk.work_dir, debug=True) + +@test +def run_steps_debug(): + with working_dir("example"): + with expect_error(): + with working_env(SKEWER_FAIL=1): + with Minikube("skewer.yaml") as mk: + run_steps("skewer.yaml", kubeconfigs=mk.kubeconfigs, work_dir=mk.work_dir, debug=True) + +if __name__ == "__main__": + import sys + run_tests(sys.modules[__name__]) diff --git a/plano b/plano new file mode 100755 index 0000000..476427d --- /dev/null +++ b/plano @@ -0,0 +1,28 @@ +#!/usr/bin/python3 +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# + +import sys + +sys.path.insert(0, "python") + +from plano import PlanoCommand + +if __name__ == "__main__": + PlanoCommand().main() diff --git a/python/plano b/python/plano new file mode 120000 index 0000000..431570b --- /dev/null +++ b/python/plano @@ -0,0 +1 @@ +../external/skewer/python/plano \ No newline at end of file diff --git a/python/skewer b/python/skewer new file mode 120000 index 0000000..0cc66e2 --- /dev/null +++ b/python/skewer @@ -0,0 +1 @@ +../external/skewer/python/skewer \ No newline at end of file diff --git a/resources-a/connectors.yaml b/resources-a/connectors.yaml new file mode 100644 index 0000000..9f477dd --- /dev/null +++ b/resources-a/connectors.yaml @@ -0,0 +1,21 @@ +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + name: productcatalogservice +spec: + port: 3550 + routingKey: productcatalogservice + selector: app=productcatalogservice + type: tcp + +--- +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + name: recommendationservice +spec: + port: 8080 + routingKey: recommendationservice + selector: app=recommendationservice + type: tcp + diff --git a/deployment-ms-a.yaml b/resources-a/deployment-ms-a.yaml similarity index 100% rename from deployment-ms-a.yaml rename to resources-a/deployment-ms-a.yaml diff --git a/resources-a/listeners.yaml b/resources-a/listeners.yaml new file mode 100644 index 0000000..055784b --- /dev/null +++ b/resources-a/listeners.yaml @@ -0,0 +1,70 @@ +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: adservice +spec: + host: adservice + port: 9555 + routingKey: adservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: cartservice +spec: + host: cartservice + port: 7070 + routingKey: cartservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: checkoutservice +spec: + host: checkoutservice + port: 5050 + routingKey: checkoutservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: currencyservice +spec: + host: currencyservice + port: 7000 + routingKey: currencyservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: productcatalogservice +spec: + host: productcatalogservice + port: 3550 + routingKey: productcatalogservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: recommendationservice +spec: + host: recommendationservice + port: 8080 + routingKey: recommendationservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: shippingservice +spec: + host: shippingservice + port: 50051 + routingKey: shippingservice + type: tcp diff --git a/resources-a/site.yaml b/resources-a/site.yaml new file mode 100644 index 0000000..2459bd4 --- /dev/null +++ b/resources-a/site.yaml @@ -0,0 +1,7 @@ +apiVersion: skupper.io/v2alpha1 +kind: Site +metadata: + name: grpc-a +spec: + linkAccess: default + diff --git a/resources-b/connectors.yaml b/resources-b/connectors.yaml new file mode 100644 index 0000000..818cca5 --- /dev/null +++ b/resources-b/connectors.yaml @@ -0,0 +1,54 @@ +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + name: checkoutservice +spec: + port: 5050 + routingKey: checkoutservice + selector: app=checkoutservice + type: tcp + +--- +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + name: cartservice +spec: + port: 7070 + routingKey: cartservice + selector: app=cartservice + type: tcp + +--- +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + 
name: currencyservice +spec: + port: 7000 + routingKey: currencyservice + selector: app=currencyservice + type: tcp + +--- +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + name: adservice +spec: + port: 9555 + routingKey: adservice + selector: app=adservice + type: tcp + +--- +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + name: redis-cart +spec: + port: 6379 + routingKey: redis-cart + selector: app=redis-cart + type: tcp + diff --git a/deployment-ms-b.yaml b/resources-b/deployment-ms-b.yaml similarity index 100% rename from deployment-ms-b.yaml rename to resources-b/deployment-ms-b.yaml diff --git a/resources-b/listeners.yaml b/resources-b/listeners.yaml new file mode 100644 index 0000000..cda0cfd --- /dev/null +++ b/resources-b/listeners.yaml @@ -0,0 +1,70 @@ +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: cartservice +spec: + host: cartservice + port: 7070 + routingKey: cartservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: currencyservice +spec: + host: currencyservice + port: 7000 + routingKey: currencyservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: emailservice +spec: + host: emailservice + port: 5000 + routingKey: emailservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: paymentservice +spec: + host: paymentservice + port: 50051 + routingKey: paymentservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: redis-cart +spec: + host: redis-cart + port: 6379 + routingKey: redis-cart + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: shippingservice +spec: + host: shippingservice + port: 50051 + routingKey: shippingservice + type: tcp +--- +apiVersion: skupper.io/v2alpha1 +kind: Listener +metadata: + name: productcatalogservice +spec: + host: productcatalogservice + port: 3550 + routingKey: productcatalogservice + type: tcp diff --git a/resources-b/site.yaml b/resources-b/site.yaml new file mode 100644 index 0000000..34c9532 --- /dev/null +++ b/resources-b/site.yaml @@ -0,0 +1,7 @@ +apiVersion: skupper.io/v2alpha1 +kind: Site +metadata: + name: grpc-b +spec: + linkAccess: default + diff --git a/resources-c/connectors.yaml b/resources-c/connectors.yaml new file mode 100644 index 0000000..01ad5de --- /dev/null +++ b/resources-c/connectors.yaml @@ -0,0 +1,32 @@ +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + name: emailservice +spec: + port: 5000 + routingKey: emailservice + selector: app=emailservice + type: tcp + +--- +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + name: paymentservice +spec: + port: 50051 + routingKey: paymentservice + selector: app=paymentservice + type: tcp + +--- +apiVersion: skupper.io/v2alpha1 +kind: Connector +metadata: + name: shippingservice +spec: + port: 50051 + routingKey: shippingservice + selector: app=shippingservice + type: tcp + diff --git a/deployment-ms-c.yaml b/resources-c/deployment-ms-c.yaml similarity index 100% rename from deployment-ms-c.yaml rename to resources-c/deployment-ms-c.yaml diff --git a/resources-c/site.yaml b/resources-c/site.yaml new file mode 100644 index 0000000..2530aaf --- /dev/null +++ b/resources-c/site.yaml @@ -0,0 +1,7 @@ +apiVersion: skupper.io/v2alpha1 +kind: Site +metadata: + name: grpc-c +spec: + linkAccess: default + diff --git a/skewer.yaml b/skewer.yaml new file mode 100644 index 0000000..dd2cf44 --- /dev/null +++ b/skewer.yaml @@ -0,0 +1,117 
@@ +title: Skupper Online Boutique +subtitle: A Cloud-Native gRPC microservice-based application deployed across multiple Kubernetes clusters using Skupper +overview: | + This tutorial demonstrates how to deploy the [Online + Boutique](https://github.com/GoogleCloudPlatform/microservices-demo/) + microservices demo application across multiple Kubernetes clusters that are + located in different public and private cloud providers. This project + contains a 10-tier microservices application developed by Google to + demonstrate the use of technologies like Kubernetes. + + In this tutorial, you will create a Virtual Application Network that enables + communications across the public and private clusters. You will then deploy a + subset of the application's grpc based microservices to each cluster. You + will then access the `Online Boutique` web interface to browse items, add + them to the cart and purchase them. +sites: + grpc-a: + title: gRPC A + platform: kubernetes + namespace: grpc-a + env: + KUBECONFIG: ~/.kube/config-grpc-a + grpc-b: + title: gRPC B + platform: kubernetes + namespace: grpc-b + env: + KUBECONFIG: ~/.kube/config-grpc-b + grpc-c: + title: gRPC C + platform: kubernetes + namespace: grpc-c + env: + KUBECONFIG: ~/.kube/config-grpc-c +steps: + - standard: platform/access_your_kubernetes_clusters + - standard: platform/install_skupper_on_your_kubernetes_clusters + - title: Apply Kubernetes Resources + preamble: | + Apply the application deployment resources alongside the skupper + resources describing the application network. + commands: + grpc-a: + - run: kubectl create namespace grpc-a + - run: kubectl apply -f resources-a + grpc-b: + - run: kubectl create namespace grpc-b + - run: kubectl apply -f resources-b + grpc-c: + - run: kubectl create namespace grpc-c + - run: kubectl apply -f resources-c + - title: Wait for Sites Ready + preamble: | + Before linking sites to form the network, wait for the Sites to be ready. + commands: + grpc-a: + - run: kubectl wait --for condition=Ready site/grpc-a --timeout 240s + grpc-b: + - run: kubectl wait --for condition=Ready site/grpc-b --timeout 120s + grpc-c: + - run: kubectl wait --for condition=Ready site/grpc-c --timeout 120s + - standard: platform/install_the_skupper_command_line_tool + - standard: skupper/link_your_sites/kubernetes_cli + commands: + grpc-a: + - run: skupper token issue ~/grpc-a.token --redemptions-allowed=2 + output: | + Waiting for token status ... + + Grant "grpc-a-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" is ready + Token file grpc-a.token created + + Transfer this file to a remote site. At the remote site, + create a link to this site using the "skupper token redeem" command: + + skupper token redeem + + The token expires after 1 use(s) or after 15m0s. + grpc-b: + - run: skupper token issue ~/grpc-b.token + - run: skupper token redeem ~/grpc-a.token + output: | + Waiting for token status ... + Token "grpc-a-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" has been redeemed + You can now safely delete /run/user/1000/skewer/secret.token + grpc-c: + - run: skupper token redeem ~/grpc-a.token + output: | + Waiting for token status ... + Token "grpc-a-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" has been redeemed + You can now safely delete /run/user/1000/skewer/secret.token + - run: skupper token redeem ~/grpc-b.token + output: | + Waiting for token status ... 
+ Token "grpc-b-cad4f72d-2917-49b9-ab66-cdaca4d6cf9c" has been redeemed + You can now safely delete /run/user/1000/skewer/secret.token + - standard: skupper/cleaning_up/kubernetes_cli + commands: + grpc-a: + - run: kubectl delete -f resources-a + grpc-b: + - run: kubectl delete -f resources-b + grpc-c: + - run: kubectl delete -f resources-c +summary: | + This example locates the many services that make up a microservice + application across three different namespaces on different clusters with no + modifications to the application. Without Skupper, it would normally take + careful network planning to avoid exposing these services over the public + internet. + + Introducing Skupper into each namespace allows us to create a virtual + application network that can connect services in different clusters. Any + service exposed on the application network is represented as a local service in + all of the linked namespaces. + + diff --git a/unexpose-deployments-a.sh b/unexpose-deployments-a.sh deleted file mode 100755 index 314d482..0000000 --- a/unexpose-deployments-a.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/bash -skupper unexpose deployment productcatalogservice --address productcatalogservice -skupper unexpose deployment recommendationservice --address recommendationservice diff --git a/unexpose-deployments-b.sh b/unexpose-deployments-b.sh deleted file mode 100755 index 82d6b27..0000000 --- a/unexpose-deployments-b.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/bash -skupper unexpose deployment checkoutservice --address checkoutservice -skupper unexpose deployment cartservice --address cartservice -skupper unexpose deployment currencyservice --address currencyservice -skupper unexpose deployment redis-cart --address redis-cart -skupper unexpose deployment adservice --address adservice diff --git a/unexpose-deployments-c.sh b/unexpose-deployments-c.sh deleted file mode 100755 index 22b9733..0000000 --- a/unexpose-deployments-c.sh +++ /dev/null @@ -1,4 +0,0 @@ -#!/bin/bash -skupper unexpose deployment emailservice --address emailservice -skupper unexpose deployment paymentservice --address paymentservice -skupper unexpose deployment shippingservice --address shippingservice
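The deleted `unexpose-deployments-*.sh` scripts were the Skupper v1 way of withdrawing services from the network. With the declarative resources added in this change, the rough equivalent is deleting the corresponding Connector objects; the sketch below uses the namespace and file names from the resources above and mirrors the Cleaning up step.

~~~ shell
# Withdraw site A's services from the network without touching the workloads,
# roughly what the removed unexpose-deployments-a.sh did with "skupper unexpose"
kubectl -n grpc-a delete -f resources-a/connectors.yaml

# Or remove everything declared for site A in one go, as the Cleaning up step does
kubectl -n grpc-a delete -f resources-a
~~~

Deleting only the connectors leaves the deployments and the site in place, so the services can be re-exposed later by reapplying the same YAML.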