
Design

Architecture

As the architecture diagram above shows, Telescope streamlines the evaluation process through five key steps:

  1. Provision Resources
  2. Validate Resources
  3. Execute Tests
  4. Cleanup Resources
  5. Publish Results

The framework consists of three main reusable components:

  1. Terraform Modules (modules/terraform/) - Cloud infrastructure provisioning
  2. Python Modules (modules/python/) - Test tools integration and execution
  3. Pipeline Templates (pipelines/, jobs/, steps/) - Pipelines for test automation
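Mapped onto the repository layout, the components named above sit roughly as follows (a sketch based only on the paths mentioned in this document):

```
modules/
  terraform/   # cloud infrastructure provisioning
  python/      # test tool integration and execution
pipelines/     # pipeline definitions for test automation
jobs/          # pipeline job templates
steps/         # reusable step templates
scenarios/     # per-test-case scenario definitions
```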

Implementation

Components

Pipelines

The pipelines orchestrate the steps that carry out the benchmarking. They are designed to run in Azure DevOps (ADO) and are written in ADO YAML syntax.
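As a hedged illustration of how a pipeline could wire the five steps together (the template file names and parameters below are assumptions, not the repository's actual layout):

```yaml
# Hypothetical ADO pipeline sketch -- template paths and parameters are illustrative
trigger: none

stages:
  - stage: benchmark
    jobs:
      - job: run_scenario
        steps:
          - template: steps/provision-resources.yml   # assumed name
            parameters:
              scenario: $(SCENARIO_NAME)
          - template: steps/validate-resources.yml    # assumed name
          - template: steps/execute-tests.yml         # assumed name
          - template: steps/cleanup-resources.yml     # assumed name
            condition: always()                       # clean up even on failure
          - template: steps/publish-results.yml       # assumed name
```

Running cleanup under `condition: always()` is a common ADO pattern to avoid leaking cloud resources when a test step fails.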

Scenarios

The scenarios folder contains the scenario definitions for each test case, each focusing on a particular setup. Analogously, each scenario is a test case corresponding to the SCENARIO_NAME used in the pipeline definition.
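As a minimal sketch of how a pipeline definition would pin a scenario by name (the variable values are illustrative assumptions):

```yaml
# Hypothetical: selecting a scenario in a pipeline definition
variables:
  SCENARIO_NAME: my-scenario   # assumed to match a folder under scenarios/
```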

Steps

The steps folder contains reusable templates to be invoked from the pipelines. The steps are organized functionally, e.g. to set up or clean up infrastructure, to set up other resources or the testing framework, etc.
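A sketch of what such a functional step template might look like (the file name, parameters, and module path are assumptions; `terraform -chdir` is the standard Terraform CLI flag for running in another directory):

```yaml
# Hypothetical reusable step template, e.g. steps/setup-infrastructure.yml (name assumed)
parameters:
  - name: cloud
    type: string

steps:
  - script: |
      terraform -chdir=modules/terraform/${{ parameters.cloud }} init
      terraform -chdir=modules/terraform/${{ parameters.cloud }} apply -auto-approve
    displayName: "Provision ${{ parameters.cloud }} resources"
```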

Modules

The modules folder contains the tailored code invoked from the steps. There are two main parts:

  • python: Python functions that integrate the test engines/tools
  • terraform: a cloud-agnostic way to set up test targets/resources

The Python code is the entry point to test tools such as clusterloader2.
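A minimal sketch of such an entry point, assuming the Python module shells out to the tool's CLI (the function names and defaults below are illustrative, not Telescope's actual API; the flags follow clusterloader2's command line):

```python
# Hypothetical wrapper -- function names and defaults are illustrative, not Telescope's API
import subprocess
from typing import List


def build_cl2_command(testconfig: str, provider: str, kubeconfig: str,
                      report_dir: str) -> List[str]:
    """Assemble a clusterloader2 command line from pipeline inputs."""
    return [
        "clusterloader2",
        f"--testconfig={testconfig}",
        f"--provider={provider}",
        f"--kubeconfig={kubeconfig}",
        f"--report-dir={report_dir}",
    ]


def run_cl2(cmd: List[str]) -> int:
    """Run CL2 and surface its exit code to the calling pipeline step."""
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    print(build_cl2_command("config.yaml", "aks", "~/.kube/config", "results/"))
```

Keeping command construction separate from execution makes the wrapper easy to unit-test without a live cluster.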

Tools

The framework integrates with these performance testing tools:

  1. kperf - API server and etcd performance testing
  2. kwok - Kubernetes simulation without kubelet
  3. clusterloader2 - Kubernetes cluster performance testing
  4. resource-consumer - Resource utilization testing
  5. iperf - Network performance testing
  6. fio - Storage I/O performance testing

CL2 (clusterloader2) uses its own template engine. For benchmarking purposes, metrics are collected through Prometheus, with measurements defined by PromQL. There are CL2 out-of-the-box measurements as well as customised kubelet measurements. The metrics collected are Kubernetes metrics, and the measurements usually correspond to SLOs and other key performance SLIs.
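As an illustrative sketch of a PromQL-backed measurement (the identifier, metric name, and query are assumptions modeled on CL2's Prometheus-query measurement style, not a definition from this repository):

```yaml
# Hypothetical CL2 measurement driven by Prometheus + PromQL (values illustrative)
- Identifier: KubeletCPU
  Method: GenericPrometheusQuery
  Params:
    action: start
    metricName: kubelet_cpu_usage
    unit: cores
    queries:
      - name: p99_kubelet_cpu
        query: quantile(0.99, rate(process_cpu_seconds_total{job="kubelet"}[5m]))
```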