Test Workflows - Content
Often you need to provide some input data for your tests - whether it's the test's source code or some fixtures.
All the examples here that use content use it directly under the spec.
Please note that you can also use it under a specific step - this way, such files won't be available
in other steps - see Schema Reference.
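For illustration, here is a minimal sketch of step-scoped content; the file path is a hypothetical example:
spec:
  steps:
    - content:
        files:
          - path: /etc/config/step-only.conf # hypothetical path, visible only within this step
            content: "available only in this step"
      shell: cat /etc/config/step-only.conf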
Remember that the working directory won't be affected by any of these settings.
If you want to change the default working directory, e.g. to the cloned repository,
you can use workingDir:
spec:
container:
workingDir: /data/repo
Shared Directories
/data
By default, the /data directory has an empty volume mounted that can be shared across the steps.
Please note that this directory is shared across the steps of a single execution. To share data across multiple executions, see Shared Directory Between Executions.
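As a quick sketch, one step can write a file into /data and a later step can read it; the file name is a hypothetical example:
spec:
  steps:
    - shell: echo "hello" > /data/message.txt # write into the shared volume
    - shell: cat /data/message.txt # the file is still available in the next step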
Custom Volumes
You may create additional directories that share data across steps using native Kubernetes volumes. To do so, create and mount an emptyDir volume. This may be useful for sharing dependencies across steps, for example.
spec:
pod:
volumes:
- name: golang-deps
emptyDir: {}
container:
env:
- name: GOCACHE
value: /go-cache
volumeMounts:
- name: golang-deps
mountPath: /go-cache
Shared Directory Between Executions
We are planning to prepare a built-in caching mechanism that will allow you to share data more easily.
Until then, if you want to cache some data between executions, e.g. to speed up execution, you may consider mounting the host's file system with hostPath.
spec:
pod:
volumes:
- name: golang-deps
hostPath:
path: "/tmp/go-cache-{{ workflow.name }}"
container:
env:
- name: GOCACHE
value: /go-cache
volumeMounts:
- name: golang-deps
mountPath: /go-cache
This way, executions of the Test Workflow that run on the same node will share the cache.
If you want to cache data across multiple nodes, you may consider:
- using an Object Storage API like MinIO, AWS S3 or GCP Cloud Storage, and saving/reading files through it, or
- creating a Persistent Volume and attaching it with pod.volumes and container.volumeMounts - see the sketch after this list.
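Here is a minimal sketch of the Persistent Volume approach; the claim name is a hypothetical example, and sharing across nodes assumes a pre-created PVC backed by a ReadWriteMany-capable storage class:
spec:
  pod:
    volumes:
      - name: golang-deps
        persistentVolumeClaim:
          claimName: go-cache-pvc # hypothetical, pre-created PVC
  container:
    env:
      - name: GOCACHE
        value: /go-cache
    volumeMounts:
      - name: golang-deps
        mountPath: /go-cache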
Static Files
To mount a static file, you can use content.files.
spec:
content:
files:
- path: /etc/nginx/nginx.conf
content: |
events {
worker_connections 1024;
}
http {
server {
listen 8888;
location / {
root /www;
}
}
}
- path: /www/index.html
content: "hello-there"
Mounting From a Secret or ConfigMap
If you need to mount a Secret as a file, you can use the contentFrom clause in content.files. It has the same interface as valueFrom in a native EnvVar.
spec:
content:
files:
- path: /some/path/secret.key
contentFrom:
secretKeyRef: # Mount from Secret
name: secret-name
key: key-name-in-secret
- path: /some/path/nginx.conf
contentFrom:
configMapKeyRef: # Mount from ConfigMap
          name: configmap-name
          key: key-name-in-configmap
Mounting Secret or ConfigMap as a Directory
As an alternative, you may use native Kubernetes volumes for mounting Secrets or ConfigMaps.
spec:
pod:
volumes:
# Create volume from the Secret
- name: example-secret-volume
secret:
secretName: secret-name
# Create volume from the ConfigMap
- name: example-configmap-volume
configMap:
name: configmap-name
container:
volumeMounts:
# Mount files from the Secret
- name: example-secret-volume
mountPath: /some/secret
# Mount files from the ConfigMap
- name: example-configmap-volume
mountPath: /some/config
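If you only need selected keys, you can also project them with the native Kubernetes items field; a sketch assuming the same Secret as above:
spec:
  pod:
    volumes:
      - name: example-secret-volume
        secret:
          secretName: secret-name
          items: # project only the selected key
            - key: key-name-in-secret
              path: secret.key
  container:
    volumeMounts:
      - name: example-secret-volume
        mountPath: /some/secret # the file appears as /some/secret/secret.key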
Working Directory Resolution
A few useful hints about workingDir resolution (illustrated in the sketch after this list):
- without workingDir (or with it empty) in a step - it will use the parent's working dir (down the workflow specification tree)
- without workingDir (or with it empty) anywhere in the workflow specification tree - it will use the working dir from the image default
- with a relative workingDir in a step - it will be resolved against the parent's working dir (down the workflow specification tree)
- with a relative workingDir in a step, but no other in the workflow specification tree - it will be resolved against the image default
- the parent is whatever is found up the workflow specification tree; for example, a Global Template's spec.container.workingDir propagates down to the Test Workflow's spec.container.workingDir (unless overridden).
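A minimal sketch illustrating the relative resolution; the directory names are hypothetical:
spec:
  container:
    workingDir: /data/repo
  steps:
    - shell: pwd # prints /data/repo (inherited from the parent)
    - container:
        workingDir: test/e2e # relative - resolves to /data/repo/test/e2e
      shell: pwd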
Git Repository
Testkube allows you to easily fetch a Git repository using content.git.
Remember that the working directory won't be affected by any of these settings.
If you want to change the default working directory, e.g. to the cloned repository,
you can use workingDir:
spec:
container:
workingDir: /data/repo
Custom Mount Path
By default, the Git repository contents are mounted in the /data/repo directory.
If you want to mount it in a different directory, you can use the mountPath property:
spec:
content:
git:
uri: https://github.com/kubeshop/testkube.git
mountPath: /custom/mount/path
steps:
- shell: tree /custom/mount/path
Public Repositories
To use a public repository, simply pass the URL of the repository and it will be automatically mounted in /data/repo.
spec:
content:
git:
uri: https://github.com/kubeshop/testkube.git
# You may also fetch different revision:
# revision: main
# revision: v1.7.30
# revision: 2457ba80c1e0ade682b202fbec7062d82107e12f
steps:
- shell: tree /data/repo
Using Username and Password/Token
Private repositories may need authentication with a username and password/token.
To authenticate that way, you can use the username/usernameFrom and token/tokenFrom properties.
The username and token properties take plain-text arguments. If you want to take the username and password/token from a Secret or ConfigMap,
you can use the usernameFrom and/or tokenFrom properties. They have the same interface as valueFrom in a native EnvVar.
spec:
content:
git:
uri: https://github.com/kubeshop/testkube.git
username: my-username
# usernameFrom:
# secretKeyRef:
# name: git-credentials
# key: username
token: my-token
# tokenFrom:
# secretKeyRef:
# name: git-credentials
# key: token
Using Authorization Header
The example above uses Basic Authorization. Some Git providers may require Bearer Authentication instead.
To provide a bearer token, you need to pass the authType: header property.
spec:
content:
git:
uri: https://github.com/kubeshop/testkube.git
authType: header
token: my-token
# tokenFrom:
# secretKeyRef:
# name: git-credentials
# key: token
Using GitHub App
GitHub allows authentication through GitHub Apps. Using a GitHub App simplifies managing tokens and auditing which repositories Testkube has access to. The execution pod also receives short-lived tokens from the Control Plane, which further improves security.
You will need to install a GitHub App in Testkube to use this authentication strategy.
spec:
content:
git:
uri: https://github.com/kubeshop/testkube.git
authType: github
Using SSH Key
You can use a private SSH key to access your repository, using either the plain-text sshKey property, or sshKeyFrom, which has the same interface as valueFrom in a native EnvVar.
spec:
content:
git:
uri: ssh://git@github.com/kubeshop/testkube.git
sshKeyFrom:
secretKeyRef:
name: git-credentials
key: ssh-key
# sshKey: |
# -----BEGIN OPENSSH PRIVATE KEY-----
# b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
# XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
# XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
# XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
# Py55nRNObUBBaG/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX=
# -----END OPENSSH PRIVATE KEY-----
Pulling Selected Files
You may pass file patterns in content.git.paths that should be fetched from the Git repository.
It will use a sparse checkout, so you will avoid transferring unnecessary files.
By default, it uses no-cone mode (so file paths need to be passed as full patterns).
To use cone mode and interpret file patterns as cone patterns (the default in Git since v2.37),
add the cone: true property under content.git.
spec:
content:
git:
uri: https://github.com/kubeshop/testkube.git
# cone: true # if you want to enable cone patterns
paths:
- test/cypress/executor-tests/cypress-12
Using Multiple Git Repositories
There are a few different ways to do it.
Use Multiple content.git Setups
The content.git clause may be nested to load multiple repositories.
spec:
content:
git:
uri: https://github.com/kubeshop/testkube
steps:
- content:
git:
uri: https://github.com/kubeshop/testkube-operator
mountPath: /operator
steps:
- shell: ls -lah /data/repo && ls -lah /operator
The cloned repositories will be available only to the nested steps. Explanation: under the hood, Testkube creates emptyDir volumes with volume mounts at /data/repo and /operator, and clones the repositories there. A potential future quality-of-life improvement would be a dedicated syntax for cloning multiple repositories, like clone.gits.
Custom Step to Load the Selected Repositories
You can simply add an initial step that loads the selected repositories. If you actually use it in multiple tests, it is likely best to keep it in a TestWorkflowTemplate.
spec:
pod:
volumes:
- name: repos
emptyDir: {}
container:
volumeMounts:
- name: repos
mountPath: /repos
setup:
- name: Load common repositories
run:
image: bitnami/git:2.49.0
shell: |
git clone --depth 1 https://github.com/kubeshop/testkube /repos/testkube
git clone --depth 1 https://github.com/kubeshop/testkube-operator /repos/operator
- shell: tree /repos
Explanation: this creates an emptyDir volume at /repos, loads the repositories into it, and keeps them available for all other steps.
Clone to the Common EmptyDir Volume
An emptyDir volume is already mounted at /data by default, so you can simply use it:
spec:
setup:
- name: Load common repositories
run:
image: bitnami/git:2.49.0
shell: |
git clone --depth 1 https://github.com/kubeshop/testkube /data/repos/testkube
git clone --depth 1 https://github.com/kubeshop/testkube-operator /data/repos/operator
- shell: tree /data/repos
Pure Steps
Assuming that you always use the same image, you need a user with permissions to write into the container's storage; you can then clone the repositories as above, but without a volume.
spec:
system:
pureByDefault: true
container:
securityContext:
runAsUser: 0 # ensure that the user has permissions to create the /repos directory
setup:
- name: Load common repositories
run:
image: bitnami/git:2.49.0
shell: |
git clone --depth 1 https://github.com/kubeshop/testkube /repos/testkube
git clone --depth 1 https://github.com/kubeshop/testkube-operator /repos/operator
- shell: tree /repos
Explanation: this clones the repositories into the ephemeral storage where the image is unpacked. It is rather not recommended - an emptyDir volume will be much more resilient and secure, with similar performance.
Getting Content From Another Place
If you don't actually need a repository, but e.g. some binary that you have available somewhere, there is a mechanism to fetch and untar such dependencies:
spec:
content:
tarball:
- url: https://github.com/kubeshop/testkube/releases/download/v2.1.131/testkube_2.1.131_Linux_x86_64.tar.gz
path: /testkube-binaries
steps:
- shell: tree /testkube-binaries
Explanation: this mounts an emptyDir volume if needed, downloads the tarball, and unpacks it to the selected directory.
Git Submodules
Your main repository could have all these repositories loaded as Git submodules.
spec:
content:
git:
uri: https://github.com/robocorp/example-use-git-submodule-for-shared-code
setup:
- name: Load submodules
run:
image: bitnami/git:2.49.0
workingDir: /data/repo
shell: git -c safe.directory=/data/repo submodule update --init --recursive
- shell: tree /data/repo
Explanation: Testkube loads the main repository, and the setup step makes the prepared submodules available too. A potential future quality-of-life improvement would be a submodules: true option for content.git that would do this automatically.
Customizable Git Branch
Often you may want to run the Test Workflow against different code revisions (e.g. running it on pull requests with a GitHub Action).
You can use Test Workflow Expressions there, so the revision can be customized with configuration variables:
spec:
config:
branch:
type: string
default: main
content:
git:
uri: https://github.com/kubeshop/testkube.git
revision: "{{ config.branch }}"
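You can then override the variable when scheduling the execution, e.g. from the CLI (assuming the testkube CLI's --config flag):
testkube run testworkflow my-workflow --config branch=feature-x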
Fetching Tarballs
Testkube has a simple built-in way to fetch tarball archives for your tests. You may use content.tarball to specify the files to download and unpack:
spec:
content:
tarball:
- url: https://github.com/kubeshop/testkube/releases/download/v1.17.53/testkube_1.17.53_Linux_arm64.tar.gz
path: /data/bin
steps:
- shell: tree /data/bin
Fetching OCI Artifacts
Sometimes, especially in air-gapped environments, you may want to provide the test data via OCI registries.
You may use ORAS to fetch artifacts from an OCI registry.
To do so, run a step with a Docker image that has ORAS installed, and pull the artifact into /data (or another known volume):
spec:
steps:
- run:
image: bitnami/oras:latest
workingDir: /data
shell: |
oras pull 12345678901234.dkr.ecr.us-west-2.amazonaws.com/some/artifact:latest
- shell: tree /data
OCI Images
Sometimes the registry won't support OCI Artifacts and, instead, the data is available as regular images.
To fetch the data from such an image, simply run a step using that image,
and copy all the expected contents into /data (or another known volume):
spec:
steps:
- run:
image: "12345678901234.dkr.ecr.us-west-2.amazonaws.com/some/image:latest"
shell: "cp -rf /some/test/content /data"
- shell: tree /data
Test Workflows automatically have access to some binaries (e.g. a shell) even on distroless images.
Thanks to that, you can use general shell commands (e.g. cp, tar or tree) even on images where they are not available.
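As a minimal sketch - the distroless image below is a hypothetical example; any image without a shell should behave the same:
spec:
  steps:
    - run:
        image: gcr.io/distroless/static:nonroot # hypothetical shell-less image
        shell: |
          # these binaries are injected by Testkube, not provided by the image
          echo "hello" > /data/hello.txt
          tree /data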
Object Storage
To download your test data from object storage, you may simply use a Docker image with a CLI that supports it, and fetch the data.
spec:
steps:
- run:
image: minio/mc:latest
env:
- name: STORAGE_URL
value: https://s3.us-south.cloud-object-storage.appdomain.cloud
- name: STORAGE_ACCESS_KEY
value: AKIAIOSFODNN7EXAMPLE
- name: STORAGE_SECRET_KEY
value: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
- name: STORAGE_BUCKET
value: artifacts
shell: |
/usr/bin/mc config host add storage "$STORAGE_URL" "$STORAGE_ACCESS_KEY" "$STORAGE_SECRET_KEY"
/usr/bin/mc cp "storage/$STORAGE_BUCKET/some-artifact.tar.gz" /data/some-artifact.tar.gz
tar -zxf /data/some-artifact.tar.gz
rm /data/some-artifact.tar.gz
- shell: tree /data
Custom Sources
Similarly to the Object Storage example above, you can perform any other action in a first step, copy the results to the shared directory, and then work on them in subsequent steps.
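For instance, here is a minimal sketch that fetches an archive over HTTP; the URL and file names are hypothetical:
spec:
  steps:
    - run:
        image: curlimages/curl:latest
        shell: |
          # hypothetical URL - download custom content into the shared /data volume
          curl -sSL https://example.com/fixtures.tar.gz -o /data/fixtures.tar.gz
          tar -zxf /data/fixtures.tar.gz -C /data
          rm /data/fixtures.tar.gz
    - shell: tree /data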
Reusing Content Between Multiple Test Workflows
You may have multiple Test Workflows that need access to the same test content. It's a good practice to abstract that with Test Workflow Templates.
- Template
- Test Workflow
apiVersion: testworkflows.testkube.io/v1
kind: TestWorkflowTemplate
metadata:
name: git-testkube
spec:
config:
revision:
type: string
default: main
content:
git:
uri: https://github.com/kubeshop/testkube.git
revision: "{{ config.revision }}"
sshKeyFrom:
secretKeyRef:
name: git-credentials
key: ssh-key
apiVersion: testworkflows.testkube.io/v1
kind: TestWorkflow
metadata:
name: example-workflow
spec:
use:
- name: git-testkube
config:
revision: develop
steps:
- shell: tree /data/repo