
RHDH Backstage Helm Chart for OpenShift

Version: 5.7.1 Type: application

A Helm chart for deploying Red Hat Developer Hub, which is a Red Hat supported version of Backstage.

The telemetry data collection feature is enabled by default. Red Hat Developer Hub sends telemetry data to Red Hat by using the backstage-plugin-analytics-provider-segment plugin. To disable this and to learn what data is being collected, see https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.6/html-single/telemetry_data_collection_and_analysis/index

Homepage: https://red.ht/rhdh

Productized RHDH

This repository now provides the productized RHDH chart. For the Generally Available version of this chart, see:

Maintainers

| Name | Email | Url |
|------|-------|-----|
| Red Hat | | https://redhat.com |

TL;DR

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add redhat-developer https://redhat-developer.github.io/rhdh-chart

helm install my-backstage redhat-developer/backstage --version 5.7.1
```

Introduction

This chart bootstraps a Backstage deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

Usage

Charts are available in two formats: a traditional Helm chart repository, and OCI artifacts.

Note

Up-to-date instructions on installing RHDH through the chart can be found in the installation docs.

Installing from the Chart Repository

The following command can be used to add the chart repository:

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add redhat-developer https://redhat-developer.github.io/rhdh-chart
```

Once the repository has been added, you can install this chart. Before doing so, please review the default values.yaml and adjust it as needed.

  • To get a proper connection between the frontend and backend of Backstage, update apps.example.com to match your cluster host:

    ```yaml
    global:
      clusterRouterBase: apps.example.com
    ```

    Tip: you can use `helm upgrade -i --set global.clusterRouterBase=apps.example.com ...` instead of a values file

  • If your cluster doesn't provide PVCs, disable PostgreSQL persistence via:

    ```yaml
    upstream:
      postgresql:
        primary:
          persistence:
            enabled: false
    ```

Then install the chart:

```shell
helm upgrade -i <release_name> redhat-developer/backstage
```

Installing from an OCI Registry

Charts are also available in OCI format. The list of available releases can be found here.

Install one of the available versions:

```shell
helm upgrade -i <release_name> oci://quay.io/rhdh/chart --version=<version>
```

Tip: list all releases using `helm list`

Testing a Release

Once a Helm release has been deployed, you can test it using the `helm test` command:

```shell
helm test <release_name>
```

This will run a simple Pod in the cluster to check that the application deployed is up and running.

You can disable this test pod or customize the image it uses. See the test.enabled and test.image parameters in the values.yaml file.

Tip: Disabling the test pod will not prevent the helm test command from passing later on. It will simply report that no test suite is available.

Below are a few examples:

Disabling the test pod:

```shell
helm install <release_name> <repo_or_oci_registry> \
  --set test.enabled=false
```

Customizing the test pod image:

```shell
helm install <release_name> <repo_or_oci_registry> \
  --set test.image.repository=curl/curl-base \
  --set test.image.tag=8.11.1
```

Uninstalling the Chart

To uninstall/delete the my-backstage-release deployment:

```shell
helm uninstall my-backstage-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

Requirements

Kubernetes: >= 1.27.0-0

| Repository | Name | Version |
|------------|------|---------|
| file://./vendor/backstage/charts/backstage/ | upstream(backstage) | 2.6.3 |
| https://charts.bitnami.com/bitnami | common | 2.38.0 |

Values

| Key | Description | Type | Default |
|-----|-------------|------|---------|
| global.auth | Enable service authentication within the Backstage instance | object | `{"backend":{"enabled":true,"existingSecret":"","value":""}}` |
| global.auth.backend | Backend service-to-service authentication. Ref: https://backstage.io/docs/auth/service-to-service-auth/ | object | `{"enabled":true,"existingSecret":"","value":""}` |
| global.auth.backend.enabled | Enable backend service-to-service authentication; unless configured otherwise, a secret value is generated | bool | `true` |
| global.auth.backend.existingSecret | Instead of generating a secret value, refer to an existing secret | string | `""` |
| global.auth.backend.value | Instead of generating a secret value, use the following value | string | `""` |
| global.catalogIndex | Catalog index configuration for automatic plugin discovery. The install-dynamic-plugins.py script pulls this image if the CATALOG_INDEX_IMAGE environment variable is set. The dynamic-plugins.default.yaml file will be extracted and written to the dynamic-plugins-root volume mount. | object | `{"image":{"registry":"quay.io","repository":"rhdh/plugin-catalog-index","tag":"1.10"}}` |
| global.clusterRouterBase | Shorthand for users who do not want to specify a custom HOSTNAME. Used ONLY with the DEFAULT upstream.backstage.appConfig value and with OCP Route enabled. | string | `"apps.example.com"` |
| global.dynamic.includes | Array of YAML files listing dynamic plugins to include with those listed in the plugins field. Relative paths are resolved from the working directory of the initContainer that will install the plugins (/opt/app-root/src). | list | `["dynamic-plugins.default.yaml"]` |
| global.dynamic.includes[0] | List of dynamic plugins included inside the rhdh container image, some of which are disabled by default. This file ONLY works with the rhdh container image. | string | `"dynamic-plugins.default.yaml"` |
| global.dynamic.plugins | List of dynamic plugins, possibly overriding the plugins listed in includes files. Every item defines the plugin package as an NPM package spec, an optional pluginConfig with plugin-specific Backstage configuration, and an optional disabled flag to disable/enable a plugin listed in includes files. It also includes an integrity field that is used to verify the plugin package integrity. | list | `[]` |
| global.host | Custom hostname shorthand; overrides global.clusterRouterBase, upstream.ingress.host, route.host, and url values in upstream.backstage.appConfig | string | `""` |
| nameOverride | | string | `"developer-hub"` |
| orchestrator.enabled | | bool | `false` |
| orchestrator.plugins | Orchestrator plugins and their configuration | list | `[{"disabled":false,"package":"oci://registry.access.redhat.com/rhdh/red-hat-developer-hub-backstage-plugin-orchestrator-backend:{{ \"{{inherit}}\" }}"},{"disabled":false,"package":"oci://registry.access.redhat.com/rhdh/red-hat-developer-hub-backstage-plugin-orchestrator-form-widgets:{{ \"{{inherit}}\" }}"},{"disabled":false,"package":"oci://registry.access.redhat.com/rhdh/red-hat-developer-hub-backstage-plugin-orchestrator:{{ \"{{inherit}}\" }}"},{"disabled":false,"package":"oci://registry.access.redhat.com/rhdh/red-hat-developer-hub-backstage-plugin-scaffolder-backend-module-orchestrator:{{ \"{{inherit}}\" }}"}]` |
| orchestrator.serverlessLogicOperator.enabled | | bool | `true` |
| orchestrator.serverlessOperator.enabled | | bool | `true` |
| orchestrator.sonataflowPlatform.createDBJobImage | Image for the container used by the create-db job | string | `"{{ .Values.upstream.postgresql.image.registry }}/{{ .Values.upstream.postgresql.image.repository }}:{{ .Values.upstream.postgresql.image.tag }}"` |
| orchestrator.sonataflowPlatform.dataIndexImage | Image for the container used by the SonataFlow data index; optional, used for disconnected environments | string | `""` |
| orchestrator.sonataflowPlatform.eventing.broker.name | | string | `""` |
| orchestrator.sonataflowPlatform.eventing.broker.namespace | | string | `""` |
| orchestrator.sonataflowPlatform.externalDBHost | Host for the user-configured external database | string | `""` |
| orchestrator.sonataflowPlatform.externalDBName | Name for the user-configured external database | string | `""` |
| orchestrator.sonataflowPlatform.externalDBPort | Port for the user-configured external database | string | `""` |
| orchestrator.sonataflowPlatform.externalDBsecretRef | Secret name for the user-created secret to connect an external DB | string | `""` |
| orchestrator.sonataflowPlatform.initContainerImage | Image for the init container used by the create-db job | string | `"{{ .Values.upstream.postgresql.image.registry }}/{{ .Values.upstream.postgresql.image.repository }}:{{ .Values.upstream.postgresql.image.tag }}"` |
| orchestrator.sonataflowPlatform.jobServiceImage | Image for the container used by the SonataFlow jobs service; optional, used for disconnected environments | string | `""` |
| orchestrator.sonataflowPlatform.monitoring.enabled | | bool | `true` |
| orchestrator.sonataflowPlatform.resources.limits.cpu | | string | `"500m"` |
| orchestrator.sonataflowPlatform.resources.limits.memory | | string | `"1Gi"` |
| orchestrator.sonataflowPlatform.resources.requests.cpu | | string | `"250m"` |
| orchestrator.sonataflowPlatform.resources.requests.memory | | string | `"64Mi"` |
| route | OpenShift Route parameters | object | `{"annotations":{},"enabled":true,"host":"{{ .Values.global.host }}","path":"/","tls":{"caCertificate":"","certificate":"","destinationCACertificate":"","enabled":true,"insecureEdgeTerminationPolicy":"Redirect","key":"","termination":"edge"},"wildcardPolicy":"None"}` |
| route.annotations | Route-specific annotations | object | `{}` |
| route.enabled | Enable the creation of the Route resource | bool | `true` |
| route.host | Set the host attribute to a custom value. If not set, OpenShift will generate it; please make sure it matches your baseUrl | string | `"{{ .Values.global.host }}"` |
| route.path | Path that the router watches for, to route traffic to the service | string | `"/"` |
| route.tls | Route TLS parameters. Ref: https://docs.openshift.com/container-platform/4.9/networking/routes/secured-routes.html | object | `{"caCertificate":"","certificate":"","destinationCACertificate":"","enabled":true,"insecureEdgeTerminationPolicy":"Redirect","key":"","termination":"edge"}` |
| route.tls.caCertificate | Cert authority certificate contents. Optional | string | `""` |
| route.tls.certificate | Certificate contents | string | `""` |
| route.tls.destinationCACertificate | Contents of the CA certificate of the final destination. When using reencrypt termination, this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure-generated certificates to be verified automatically. | string | `""` |
| route.tls.enabled | Enable TLS configuration for the host defined at the route.host parameter | bool | `true` |
| route.tls.insecureEdgeTerminationPolicy | Indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80. The only valid values are None, Redirect, or empty for disabled. | string | `"Redirect"` |
| route.tls.key | Key file contents | string | `""` |
| route.tls.termination | Specify TLS termination | string | `"edge"` |
| route.wildcardPolicy | Wildcard policy, if any, for the route. Currently only 'Subdomain' or 'None' is allowed. | string | `"None"` |
| test | Test pod parameters | object | `{"enabled":true,"image":{"registry":"quay.io","repository":"curl/curl","tag":"latest"},"injectTestNpmrcSecret":false}` |
| test.enabled | Whether to enable the test-connection pod used for testing the release via helm test | bool | `true` |
| test.image.registry | Test connection pod image registry | string | `"quay.io"` |
| test.image.repository | Test connection pod image repository. Note that the image needs to have both the sh and curl binaries in it. | string | `"curl/curl"` |
| test.image.tag | Test connection pod image tag. Note that the image needs to have both the sh and curl binaries in it. | string | `"latest"` |
| test.injectTestNpmrcSecret | Whether to inject a fake dynamic plugins npmrc secret. See RHDHBUGS-1893 and RHDHBUGS-1464 for the motivation behind this. This is only used for testing purposes and should not be used in production. Only relevant when the test.enabled field is set to true. | bool | `false` |
| upstream | Upstream Backstage chart configuration | object | Use OpenShift-compatible settings |
| upstream.backstage.extraVolumes[0] | Ephemeral volume that will contain the dynamic plugins installed by the initContainer below at start | object | `{"ephemeral":{"volumeClaimTemplate":{"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"5Gi"}}}}},"name":"dynamic-plugins-root"}` |
| upstream.backstage.extraVolumes[0].ephemeral.volumeClaimTemplate.spec.resources.requests.storage | Size of the volume that will contain the dynamic plugins. It should be large enough to contain all the plugins. | string | `"5Gi"` |
| upstream.backstage.extraVolumes[5] | Ephemeral volume used by the install-dynamic-plugins init container to extract catalog entities from the catalog index image. Mounted at the /extensions path in the backstage-backend main container for automatic discovery by the extension catalog backend providers. | object | `{"emptyDir":{},"name":"extensions-catalog"}` |
| upstream.backstage.initContainers[0].image | Image used by the initContainer to install dynamic plugins into the dynamic-plugins-root volume mount. It could be replaced by a custom image based on this one. | string | `quay.io/rhdh-community/rhdh:next` |
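For example, to supply the backend service-to-service authentication key from a pre-created Secret instead of letting the chart generate one, a values fragment along these lines could be used (the Secret name my-backend-auth is a placeholder and must exist in the release namespace):

```yaml
global:
  auth:
    backend:
      enabled: true
      # Placeholder name: a Secret you created beforehand in the release namespace
      existingSecret: my-backend-auth
```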

Opinionated Backstage deployment

This chart defaults to an opinionated deployment of Backstage that provides users with a usable Backstage instance out of the box.

Features enabled by the default chart configuration:

  1. Uses the rhdh image, which pre-loads many useful plugins and features
  2. Exposes a Route for easy access to the instance
  3. Enables OpenShift-compatible PostgreSQL database storage

For additional instance features please consult the documentation for rhdh.

Additional features can be enabled by extending the default configuration at:

```yaml
upstream:
  backstage:
    appConfig:
      # Inline app-config.yaml for the instance
    extraEnvVars:
      # Additional environment variables
```

Features

This chart defaults to using the RHDH image, which is OpenShift-compatible:

```
quay.io/rhdh-community/rhdh:next
```

Additionally, this chart enhances the upstream Backstage chart with the following OpenShift-specific features:

OpenShift Routes

This chart offers a drop-in replacement for the Ingress resource already provided by the upstream chart via an OpenShift Route.

OpenShift Routes are enabled by default. To use the chart without them, set route.enabled to false and switch to the Ingress resource via the upstream.ingress values.

Routes can be further configured via the route field.

To manually provide the Backstage pod with the right context, please add the following value:

```yaml
# values.yaml
global:
  clusterRouterBase: apps.example.com
```

Tip: you can use `helm upgrade -i --set global.clusterRouterBase=apps.example.com ...` instead of a values file

Custom hosts are also supported via the following shorthand:

```yaml
# values.yaml
global:
  host: backstage.example.com
```

Note: Setting either global.host or global.clusterRouterBase will disable the automatic hostname discovery. When both fields are set, global.host will take precedence. These are just templating shorthands. For full manual configuration please pay attention to values under the route key.
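To illustrate the precedence described above, a values sketch with both shorthands set (both hostnames are placeholders):

```yaml
# values.yaml
global:
  clusterRouterBase: apps.example.com   # ignored here, because global.host is also set
  host: backstage.example.com           # takes precedence
```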

Any custom modifications to how Backstage is exposed may require additional changes to values.yaml:

```yaml
# values.yaml
upstream:
  backstage:
    appConfig:
      app:
        baseUrl: 'https://{{- include "rhdh.hostname" . }}'
      backend:
        baseUrl: 'https://{{- include "rhdh.hostname" . }}'
        cors:
          origin: 'https://{{- include "rhdh.hostname" . }}'
```

Catalog Index Configuration

The chart supports automatic plugin discovery through a catalog index OCI image. This is configured via global.catalogIndex.image (with registry, repository, and tag fields) and lets you use a pre-defined set of dynamic plugins.

For detailed information on configuring the catalog index, including how to override the default image or use a private registry, see the Catalog Index Configuration documentation.
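As a sketch, the default catalog index image can be overridden through these fields (the my-org/my-plugin-catalog repository is a placeholder; the defaults shown in comments come from the Values table above):

```yaml
# values.yaml
global:
  catalogIndex:
    image:
      registry: quay.io                     # default: quay.io
      repository: my-org/my-plugin-catalog  # placeholder; default: rhdh/plugin-catalog-index
      tag: "1.10"                           # default: 1.10
```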

Vanilla Kubernetes compatibility mode

To deploy this chart on vanilla Kubernetes or any other non-OCP platform, apply the following changes. Note that further customizations might be required, depending on your exact Kubernetes setup:

```yaml
# values.yaml
global:
  host: # Specify your own Ingress host
route:
  enabled: false  # OpenShift Routes do not exist on vanilla Kubernetes
upstream:
  ingress:
    enabled: true  # Use Kubernetes Ingress instead of OpenShift Route
  backstage:
    podSecurityContext:  # Vanilla Kubernetes doesn't feature OpenShift default SCCs with dynamic UIDs, adjust accordingly to the deployed image
      runAsUser: 1001
      runAsGroup: 1001
      fsGroup: 1001
  postgresql:
    primary:
      podSecurityContext:
        enabled: true
        fsGroup: 26
        runAsUser: 26
    volumePermissions:
      enabled: true
```

Installing RHDH with Orchestrator on OpenShift

Orchestrator brings serverless workflows into Backstage, focusing on the journey for application migration to the cloud, onboarding developers, and user-made workflows of Backstage actions or external systems. Orchestrator is a flavor of RHDH and can be installed alongside RHDH in the same namespace, as follows:

  1. Have an admin install the orchestrator-infra Helm chart, which installs the prerequisites required to deploy the Orchestrator-flavored RHDH. This process includes installing cluster-wide resources, so it should be done with admin privileges:

     ```shell
     helm repo add bitnami https://charts.bitnami.com/bitnami
     helm repo add redhat-developer https://redhat-developer.github.io/rhdh-chart

     helm install <release_name> redhat-developer/redhat-developer-hub-orchestrator-infra
     ```

  2. Manually approve the Install Plans created by the chart, and wait for the OpenShift Serverless and OpenShift Serverless Logic Operators to be deployed. To do so, follow the post-install notes given by the chart, or see them here.

  3. Install the backstage chart with Helm, enabling the orchestrator:

     ```shell
     helm install <release_name> redhat-developer/backstage --set orchestrator.enabled=true
     ```

Note that serverlessLogicOperator and serverlessOperator are enabled by default. They can be disabled together or separately by passing the following flags: `--set orchestrator.serverlessLogicOperator.enabled=false --set orchestrator.serverlessOperator.enabled=false`
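Equivalently, as a values fragment (a sketch, assuming the operators are already installed or managed outside this chart):

```yaml
orchestrator:
  enabled: true
  # Skip installing the operators via this chart; assumes they are managed separately
  serverlessLogicOperator:
    enabled: false
  serverlessOperator:
    enabled: false
```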

Enablement of Notifications Plugin

Workflows running with Orchestrator may use the Notifications plugin. To support this, enable the Notifications and Signals plugins by adding the plugins listed below to the global.dynamic.plugins list in your Helm values file. Do this before installing the chart, or upgrade the Helm release with the new values file.

```yaml
- disabled: false
  package: "./dynamic-plugins/dist/backstage-plugin-notifications"
- disabled: false
  package: "./dynamic-plugins/dist/backstage-plugin-signals"
- disabled: false
  package: "./dynamic-plugins/dist/backstage-plugin-notifications-backend-dynamic"
- disabled: false
  package: "./dynamic-plugins/dist/backstage-plugin-signals-backend-dynamic"
```

Enabling these plugins will allow you to receive notifications from workflows running with Orchestrator.

Using Orchestrator while configuring an ExternalDB

To use the orchestrator with an external DB, please follow the instructions in our documentation and populate the following values under orchestrator.sonataflowPlatform in your values.yaml:

```yaml
orchestrator:
  sonataflowPlatform:
    externalDBsecretRef: <cred-secret>
    externalDBName: ""
    externalDBHost: ""
    externalDBPort: ""
```

The values for externalDBHost and externalDBPort should match the ones configured in the cred-secret.

Please note that externalDBName is the name of the user-configured existing database, not the database that the orchestrator and sonataflow resources will use. A Job will run to create the 'sonataflow' database in the external database for the workflows to use.

Finally, install the Helm Chart (including setting up the external DB):

```shell
helm install <release_name> redhat-developer/backstage \
  --set orchestrator.enabled=true \
  --set orchestrator.sonataflowPlatform.externalDBsecretRef=<cred-secret> \
  --set orchestrator.sonataflowPlatform.externalDBName=example \
  --set orchestrator.sonataflowPlatform.externalDBHost=example \
  --set orchestrator.sonataflowPlatform.externalDBPort=example
```