As the architecture diagram above shows, Telescope streamlines the evaluation process through five key steps:
- Provision Resources
- Validate Resources
- Execute Tests
- Cleanup Resources
- Publish Results
The framework consists of three main reusable components:
- Terraform Modules (modules/terraform/) - Cloud infrastructure provisioning
- Python Modules (modules/python/) - Test tool integration and execution
- Pipeline Templates (pipelines/, jobs/, steps/) - Pipelines for test automation
The pipelines orchestrate the steps that carry out the benchmarking. They are designed to run in Azure DevOps (ADO) and are defined in ADO YAML syntax.
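As a hedged sketch of how such a pipeline might compose the five key steps (the template paths, variable names, and scenario values below are illustrative placeholders, not the repo's actual files):

```yaml
# Hypothetical ADO pipeline sketch; template paths and variables are made up.
trigger: none

variables:
  SCENARIO_TYPE: perf-eval     # illustrative value
  SCENARIO_NAME: my-scenario   # illustrative value

stages:
  - stage: benchmark
    jobs:
      - job: run
        steps:
          - template: steps/provision-resources.yml   # hypothetical path
          - template: steps/validate-resources.yml
          - template: steps/execute-tests.yml
          - template: steps/publish-results.yml
          - template: steps/cleanup-resources.yml
```

The point is the shape: each pipeline stitches together reusable step templates rather than inlining provisioning or test logic.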
The scenarios folder contains the scenarios for each test case, each focusing on a particular setup. Analogously, each scenario is a test case corresponding to the SCENARIO_NAME used in the pipeline definition.
The steps folder contains reusable templates invoked from the pipelines. The steps are organized functionally, e.g. to set up or clean up infrastructure, set up other resources, the testing framework, etc.
The modules folder contains the tailored code invoked from the steps. There are two main parts:
- python: python functions to integrate test engines/tools
- terraform: cloud agnostic way to setup test targets/resources
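A hedged sketch of what "cloud agnostic" can look like on the Terraform side (the variable, module names, and paths here are illustrative, not the repo's actual interface): a provider selector gates which cloud-specific submodule is instantiated.

```hcl
# Illustrative only: variable and module paths are hypothetical.
variable "cloud" {
  description = "Target cloud provider (e.g. azure, aws)"
  type        = string
}

# Instantiate the Azure cluster module only when targeting Azure.
module "aks" {
  source = "./azure/aks"                 # hypothetical submodule path
  count  = var.cloud == "azure" ? 1 : 0
  # ... cluster sizing inputs supplied by the scenario ...
}
```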
Python code is the entrypoint to test tools such as clusterloader2.
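A minimal sketch of what such an entrypoint can look like. The flags used (--testconfig, --provider, --kubeconfig, --nodes, --report-dir) are standard clusterloader2 CLI flags, but the wrapper function itself is hypothetical, not Telescope's actual module code:

```python
def build_cl2_command(binary, test_config, provider, kubeconfig, nodes, report_dir):
    """Assemble a clusterloader2 CLI invocation as an argument list.

    Hypothetical helper: Telescope's real Python modules may structure
    this differently, but the flags are clusterloader2's own.
    """
    return [
        binary,
        f"--testconfig={test_config}",
        f"--provider={provider}",
        f"--kubeconfig={kubeconfig}",
        f"--nodes={nodes}",
        f"--report-dir={report_dir}",
    ]

cmd = build_cl2_command(
    "clusterloader2", "config.yaml", "aks", "/root/.kube/config", 10, "/tmp/report"
)
# The pipeline step would then execute it, e.g.:
# subprocess.run(cmd, check=True)
```

Building the argument list separately from executing it keeps the command easy to log and unit-test inside the pipeline.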
The framework integrates with these performance testing tools:
- kperf - API server and ETCD performance testing
- kwok - Kubernetes simulation without kubelet
- clusterloader2 - Kubernetes cluster performance testing
- resource-consumer - Resource utilization testing
- iperf - Network performance testing
- fio - Storage I/O performance testing
CL2 (clusterloader2) uses its own template engine. For benchmarking, metrics are collected through Prometheus, with measurements defined by PromQL. There are CL2 out-of-the-box measurements as well as customized kubelet measurements. The metrics collected are Kubernetes metrics, and the measurements usually correspond to SLOs and other key performance SLIs.
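As a hedged example of a PromQL-backed measurement: CL2 ships a GenericPrometheusQuery measurement, and a config fragment for it might look like the following (the identifier, metric name, and threshold here are illustrative, not values from this repo):

```yaml
# Illustrative CL2 measurement fragment; names and threshold are made up.
- Identifier: APIServerLatency
  Method: GenericPrometheusQuery
  Params:
    action: start
    metricName: api_server_request_latency
    metricVersion: v1
    unit: s
    queries:
      - name: p99
        query: histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket[5m])) by (le))
        threshold: 1
```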