
Artifacts

warning

Features described in this document are being deprecated as part of the transition to Test Workflows - Read More.

Storing Test Artifacts

Testkube allows you to save supported files generated by your tests, which we call Artifacts.

The executor scrapes these files and stores them in MinIO. It creates a bucket named after the execution ID and collects all files stored in the location specific to each executor.

The available configuration parameters in Helm charts are:

| Parameter                               | Is optional | Default                              | Description                                           |
|-----------------------------------------|-------------|--------------------------------------|-------------------------------------------------------|
| testkube-api.storage.endpoint           | yes         | testkube-minio-service-testkube:9000 | URL of the S3 bucket                                  |
| testkube-api.storage.accessKeyId        | yes         | minio                                | Access Key ID                                         |
| testkube-api.storage.accessKey          | yes         | minio123                             | Access Key                                            |
| testkube-api.storage.location           | yes         |                                      | Region                                                |
| testkube-api.storage.token              | yes         |                                      | S3 Token                                              |
| testkube-api.storage.SSL                | yes         | false                                | Indicates whether SSL communication is to be enabled  |
| testkube-api.storage.scrapperEnabled    | yes         | true                                 | Indicates whether executors should scrape artifacts   |
| testkube-api.storage.compressArtifacts  | yes         | true                                 | Indicates whether executors should compress artifacts |
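
For example, these parameters can be set in a values file. Below is a minimal sketch; the endpoint, credentials, and region are placeholder values to replace with your own:

testkube-api:
  storage:
    endpoint: s3.example.com:9000   # placeholder S3/MinIO endpoint
    accessKeyId: my-access-key      # placeholder credentials
    accessKey: my-secret-key
    location: us-east-1             # placeholder region
    SSL: true
    scrapperEnabled: true
    compressArtifacts: true

Apply it with helm install -f values.yaml my-testkube kubeshop/testkube.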

The API Server accepts the following environment variables:

STORAGE_ENDPOINT
STORAGE_BUCKET
STORAGE_ACCESSKEYID
STORAGE_SECRETACCESSKEY
STORAGE_LOCATION
STORAGE_REGION
STORAGE_TOKEN
STORAGE_SSL
SCRAPPERENABLED
COMPRESSARTIFACTS

These can be set when installing with Helm:

helm install --create-namespace my-testkube kubeshop/testkube --set STORAGE_ENDPOINT=custom_value
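
Multiple values can be combined in the same way, for example (a sketch using the environment variables listed above, with placeholder values):

helm install --create-namespace my-testkube kubeshop/testkube \
  --set STORAGE_ENDPOINT=s3.example.com:9000 \
  --set STORAGE_BUCKET=testkube-artifacts \
  --set STORAGE_SSL=true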

Alternatively, these values can be read from a Kubernetes Secret:

env:
- name: STORAGE_ENDPOINT
  valueFrom:
    secretKeyRef:
      name: test-secret
      key: storage-endpoint # example key name within the Secret
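
A matching Secret could be created with kubectl, for example (a sketch; the storage-endpoint key name is only an illustration and must match the secretKeyRef key above):

kubectl create secret generic test-secret \
  --namespace testkube \
  --from-literal=storage-endpoint=s3.example.com:9000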

Collecting Test Artifacts

For executors that produce files during test execution, Testkube supports collecting (scraping) these artifacts and storing them in its S3-compatible file storage. For prebuilt Testkube executors, a pod data volume is used automatically for storing and scraping artifacts; for container executors, you must provide artifact volume parameters. You can also use an artifact volume for prebuilt Testkube executors if the default option does not suit you.

You can override the default volume mount path in the Test definition:

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  name: test
  namespace: testkube
spec:
  content:
    repository:
      authType: basic
      branch: main
      path: path/to/test/files
      type: git
      uri: https://git.repo.path.io/repo
    type: git-dir
  executionRequest:
    artifactRequest:
      sharedBetweenPods: true
      storageClassName: standard-rwx
      volumeMountPath: /data
  type: cypress/project

If you set volumeMountPath to /data, Testkube will use that volume as the main volume for all pods.

Just keep in mind that all values need to be set:

      sharedBetweenPods: true
      storageClassName: standard-rwx
      volumeMountPath: /data

storageClassName should be one that supports the ReadWriteMany access mode, such as NFS. Check your Kubernetes provider's documentation for which storage classes support it. Keep in mind that NFS-based volumes can be significantly slower than local storage.
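
For illustration, a ReadWriteMany-capable StorageClass backed by an NFS CSI driver might look like the sketch below; the provisioner name and server details are assumptions that depend on your cluster setup:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-rwx
provisioner: nfs.csi.k8s.io        # assumption: the NFS CSI driver is installed
parameters:
  server: nfs-server.example.com   # placeholder NFS server address
  share: /exports                  # placeholder exported path
reclaimPolicy: Delete
volumeBindingMode: Immediate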

You can omit an explicit storageClassName value in the Test definition and instead use:

      useDefaultStorageClassName: true

In this case, you must supply the defaultStorageClassName value in your Helm chart values during installation.
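
For example (a sketch; it assumes the value lives under testkube-api, next to the storage parameters above):

helm install --create-namespace my-testkube kubeshop/testkube \
  --set testkube-api.defaultStorageClassName=standard-rwx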

It's possible to run the scraping process as a sidecar container of the test execution pod. In that case, avoid using volume parameters and instead provide sidecarScraper in the Test definition, like:

      sidecarScraper: true
      dirs:
      - /data/test/reports
      masks:
      - .*\.txt,result.*
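
Put together, a Test using the sidecar scraper might look like the following sketch, assembled from the fragments above (the test name and type are placeholders):

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  name: test
  namespace: testkube
spec:
  type: cli/container
  executionRequest:
    artifactRequest:
      sidecarScraper: true
      dirs:
      - /data/test/reports
      masks:
      - .*\.txt,result.*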

You need to save test-related files into the specified directories on the dynamically created volume. They will be uploaded from there to Testkube file storage and will be available later for download using the standard Testkube CLI or Testkube Dashboard commands. For example:

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  name: cli-container
  namespace: testkube
spec:
  type: cli/container
  executionRequest:
    artifactRequest:
      storageClassName: standard
      volumeMountPath: /share
      dirs:
      - test/reports
      masks:
      - .*\.txt,result.*
      sharedBetweenPods: false

You have to define the storage class name, the volume mount path, and the directories on this volume that contain test artifacts. The default volume mount path is /data/artifacts and the default directory is . (the current directory). You can define masks to scrape only files with particular names or extensions. An artifact volume can be shared between multiple pods for distributed prebuilt Testkube executors, like JMeter; make sure the storage class supports the ReadWriteMany access mode, such as NFS. Also make sure your container executor definition includes the artifacts feature. For example:

apiVersion: executor.testkube.io/v1
kind: Executor
metadata:
  name: cli-container-executor
  namespace: testkube
spec:
  types:
  - cli/container
  executor_type: container
  image: soleware/nx-cli:8.5.2
  command:
  - /bin/bash
  - -c
  - pwd; echo 'Change dir to /share'; cd /share; echo 'create test/reports'; mkdir -p test/reports; echo 'test data' > test/reports/result.txt
  features:
  - artifacts

Run your test using the CLI command:

testkube run test cli-container

Then get the available artifacts for your test execution ID:

testkube get artifact 638a08b94ff1d2c694aeebf2
Expected output:
  NAME       | SIZE (KB)
-------------+------------
  result.txt |        10
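
To download the artifacts to your local machine, you can use the CLI's download command (shown with the execution ID from above; check testkube download --help for the exact options available in your CLI version):

testkube download artifacts 638a08b94ff1d2c694aeebf2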