# Confluent Cloud Setup

This guide walks through setting up Confluent Cloud using Docker containers.

## Prerequisites

- Bash shell
  - Will not work in Cloud Shell or WSL1
- Java 11+ ([download](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html))
- Maven ([download](https://maven.apache.org/download.cgi))
- Docker ([download](https://www.docker.com/products/docker-desktop))
- Cosmos DB ([Setting up an Azure Cosmos DB Instance](CosmosDB_Setup.md))

## Setup

### Create Confluent Cloud Account and Set Up a Cluster
Go to [create account](https://www.confluent.io/get-started/) and fill out the appropriate fields.

---

Select Environments.

---

Select default, which is an environment automatically set up by Confluent.

---

Select Add cluster.

---

Select Azure, create the cluster, and choose the same region as the Cosmos DB instance you created.

---

Name the cluster, and then select Launch cluster.

### Create ksqlDB Cluster
From inside the cluster, select ksqlDB. Select Add cluster. Select Continue, name the cluster, and then select Launch.

### Update Configurations
- The cluster key and secret can be found under API keys in the cluster; choose the one for ksqlDB. Alternatively, generate a client config using the CLI and Tools.
- The `BOOTSTRAP_SERVERS` endpoint can be found in the cluster under cluster settings and endpoints. Alternatively, generate a client config using the CLI and Tools.
- The schema registry key and secret can be found at the bottom of the right panel inside the Confluent environment, under credentials.
- The schema registry URL can be found at the bottom of the right panel inside the Confluent environment, under Endpoint.
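Once collected, these values plug into a standard Kafka client configuration. The snippet below is a sketch of the shape those settings take; the hostnames and key/secret placeholders are illustrative, not real values:

```properties
# Illustrative values only -- substitute your own cluster and schema registry details.
bootstrap.servers=pkc-xxxxx.eastus2.azure.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<CLUSTER_API_KEY>" password="<CLUSTER_API_SECRET>";
# Schema registry credentials are separate from the cluster API key.
schema.registry.url=https://psrc-xxxxx.westus2.azure.confluent.cloud
basic.auth.credentials.source=USER_INFO
basic.auth.user.info=<SCHEMA_REGISTRY_KEY>:<SCHEMA_REGISTRY_SECRET>
```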
### Run Integration Tests
To run the integration tests against a Confluent Cloud cluster, create `~/kafka-cosmos-local.properties` with the following content:
```
ACCOUNT_HOST=[emulator endpoint or your cosmos endpoint]
ACCOUNT_KEY=[emulator masterKey or your cosmos masterKey]
ACCOUNT_TENANT_ID=[update if AAD auth is required in the integration tests]
ACCOUNT_AAD_CLIENT_ID=[update if AAD auth is required in the integration tests]
ACCOUNT_AAD_CLIENT_SECRET=[update if AAD auth is required in the integration tests]
SASL_JAAS=[credential configured on the confluent cloud cluster]
BOOTSTRAP_SERVER=[bootstrap server endpoint of the confluent cloud cluster]
SCHEMA_REGISTRY_URL=[schema registry url of the cloud cluster]
SCHEMA_REGISTRY_KEY=[schema registry key of the cloud cluster]
SCHEMA_REGISTRY_SECRET=[schema registry secret of the cloud cluster]
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=3
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=3
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=3
```
Integration tests have an `ITest` suffix. Use the following command to run them ([create the topic ahead of time](#create-topic-in-confluent-cloud-ui)):
```bash
mvn -e -Dgpg.skip -Dmaven.javadoc.skip=true -Dcodesnippet.skip=true -Dspotbugs.skip=true -Dcheckstyle.skip=true -Drevapi.skip=true -pl ,azure-cosmos-kafka-connect test package -Pkafka-integration
```

### Run a Local Sink/Source Workload Using Confluent Platform
- Follow [Install Confluent Platform using ZIP and TAR](https://docs.confluent.io/platform/current/installation/installing_cp/zip-tar.html#prod-kafka-cli-install) to download the platform.
- Copy `src/docker/resources/sink.example.json` into the unzipped Confluent folder.
- Copy `src/docker/resources/source.example.json` into the unzipped Confluent folder.
- Update `sink.example.json` and `source.example.json` with your Cosmos DB endpoint.
- Build the Cosmos DB Kafka connector jar:
```bash
mvn -e -DskipTests -Dgpg.skip -Dmaven.javadoc.skip=true -Dcodesnippet.skip=true -Dspotbugs.skip=true -Dcheckstyle.skip=true -Drevapi.skip=true -pl ,azure-cosmos,azure-cosmos-tests -am clean install
mvn -e -DskipTests -Dgpg.skip -Dmaven.javadoc.skip=true -Dcodesnippet.skip=true -Dspotbugs.skip=true -Dcheckstyle.skip=true -Drevapi.skip=true -pl ,azure-cosmos-kafka-connect clean install
```
- Copy the built connector jar into the plugin path folder (the path is the `plugin.path` config in `etc/distributed.properties`).
- `cd` into the unzipped Confluent folder.
- Update the `etc/distributed.properties` file with your Confluent Cloud cluster config.
- Run `./bin/connect-distributed ./etc/distributed.properties`.
- Start your sink or source connector: `curl -s -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> http://localhost:8083/connectors/ | jq .`
- Monitor the logs for exceptions, and monitor throughput and other metrics from your Confluent Cloud cluster.
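The `@<path-to-JSON-config-file>` payload is a standard Kafka Connect connector definition. As a minimal sketch (the name, connector class, and config keys below are placeholders, not the real contents of `sink.example.json`), the point to note is that the top-level `name` field is what the REST API later uses to address the connector:

```shell
# Write a minimal placeholder connector definition; real configs carry many more keys.
cat > /tmp/sink-demo.json <<'EOF'
{
  "name": "cosmosdb-sink-connector-v2",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.CosmosSinkConnector",
    "tasks.max": "1"
  }
}
EOF
# The REST API addresses the connector by this name, e.g. DELETE /connectors/<name>.
jq -r .name /tmp/sink-demo.json
```

The printed name is exactly the path segment used in the delete and restart commands below.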

> If you want to delete your connector: `curl -X DELETE http://localhost:8083/connectors/cosmosdb-source-connector-v2`. The connector name should match the one in your JSON config.

> If you want to restart your connector: `curl -s -H "Content-Type: application/json" -X POST http://localhost:8083/connectors/cosmosdb-source-connector-v2/restart | jq .`

> Follow [Kafka Connect REST Interface for Confluent Platform](https://docs.confluent.io/platform/current/connect/references/restapi.html) to check other options.

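When monitoring for exceptions, the Connect REST status endpoint (`GET /connectors/<name>/status`) is often quicker than scanning logs. The response below is a fabricated sample for illustration only; the `jq` filter pulls out any task that is not in the `RUNNING` state:

```shell
# Fabricated sample of a GET /connectors/<name>/status response (illustration only).
status='{"name":"cosmosdb-source-connector-v2","connector":{"state":"RUNNING"},"tasks":[{"id":0,"state":"RUNNING"},{"id":1,"state":"FAILED"}]}'
# Print the ids of tasks that are not RUNNING.
echo "$status" | jq -r '.tasks[] | select(.state != "RUNNING") | .id'
```

Against a live cluster you would pipe `curl -s http://localhost:8083/connectors/<name>/status` into the same filter.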
### Create Topic in Confluent Cloud UI
For some cluster types, you will need to create the topic ahead of time. You can use the UI or the [Confluent CLI](https://docs.confluent.io/cloud/current/client-apps/topics/manage.html#:~:text=Confluent%20CLI%20Follow%20these%20steps%20to%20create%20a,aren%E2%80%99t%20any%20topics%20created%20yet%2C%20click%20Create%20topic.) (requires installing the Confluent CLI first).
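If you take the CLI route, topic creation is a single command. A sketch, assuming you have already run `confluent login` and selected your environment and cluster; the topic name and partition count here are examples, not required values:

```shell
# Requires a prior `confluent login` and cluster selection; example values only.
confluent kafka topic create sink-test-topic --partitions 6
```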

Inside the Cluster Overview, scroll down and select topics and partitions.

---

Select Add topic.

---

Name the topic and select create with defaults. Afterward, a prompt will appear about creating a schema. This can be skipped, as the tests will create the schemas.

## Resources to Improve Infrastructure
- [Docker Configurations](https://docs.confluent.io/platform/current/installation/docker/config-reference.html)
- [Configuration Options](https://docs.confluent.io/platform/current/installation/configuration/index.html)
- [Connect Confluent Platform Components to Confluent Cloud](https://docs.confluent.io/cloud/current/cp-component/index.html)