
Commit 1d88f82

xinlian12 and annie-mac authored
AddDocs (#42758)
* add docs --------- Co-authored-by: annie-mac <xinlian@microsoft.com>
1 parent 4cf3d9f commit 1d88f82

35 files changed

Lines changed: 938 additions & 7 deletions

eng/ignore-links.txt

Lines changed: 6 additions & 0 deletions
@@ -1 +1,7 @@
https://github.com/Azure/azure-sdk-tools/blob/main/eng/common/testproxy/transition-scripts/generate-assets-json.ps1

# Confluent platform local addresses
http://localhost:9021/
http://localhost:9000/
http://localhost:9001/
http://localhost:9004/
Lines changed: 132 additions & 0 deletions
@@ -0,0 +1,132 @@
# Confluent Cloud Setup

This guide walks through setting up Confluent Cloud.

## Prerequisites

- Bash shell
  - Will not work in Cloud Shell or WSL1
- Java 11+ ([download](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html))
- Maven ([download](https://maven.apache.org/download.cgi))
- Docker ([download](https://www.docker.com/products/docker-desktop))
- Cosmos DB ([Setting up an Azure Cosmos DB Instance](CosmosDB_Setup.md))

## Setup

### Create Confluent Cloud Account and Setup Cluster

Go to [create account](https://www.confluent.io/get-started/) and fill out the appropriate fields.

![SignupConfluentCloud](images/SignUpConfluentCloud.png)

---

Select Environments.

![EnvironmentClick](images/environment-click.png)

---

Select default, which is an environment automatically set up by Confluent.

![DefaultClick](images/click-default.png)

---

Select Add cluster.

![Add Cluster](images/click-add-cluster.png)

---

Select Azure to create the cluster, and choose the same region as the Cosmos DB instance you created.

![Select Azure](images/select-azure.png)

---

Name the cluster, and then select Launch cluster.

![Name and Launch](images/select-name-launch.png)

### Create ksqlDB Cluster

From inside the cluster, select ksqlDB. Select Add cluster. Select Continue, name the cluster, and then select Launch.

![ksqlDB](images/select-ksqlDB.png)

### Update Configurations

- The cluster key and secret can be found under API keys in the cluster. Choose the one for ksqlDB. Alternatively, generate a client config using CLI and Tools. ![CLI and Tools](images/cli-and-tools.png)
- The `BOOTSTRAP_SERVERS` endpoint can be found in the cluster under Cluster settings and then Endpoints. Alternatively, generate a client config using CLI and Tools.
- The schema registry key and secret can be found at the bottom of the right panel inside the Confluent environment, under Credentials.
- The schema registry URL can be found at the bottom of the right panel inside the Confluent environment, under Endpoint.

![Schema Registry url](images/schema-registry.png)
![Schema Registry key and secret](images/schema-key-and-secret.png)
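
The cluster API key and secret from this section feed the `SASL_JAAS` value used by the integration tests below. As a sketch, the value follows Confluent's standard SASL/PLAIN JAAS format (substitute your own key and secret):

```
org.apache.kafka.common.security.plain.PlainLoginModule required username="<CLUSTER_API_KEY>" password="<CLUSTER_API_SECRET>";
```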

### Run Integration Tests

To run the integration tests against a Confluent Cloud cluster, create `~/kafka-cosmos-local.properties` with the following content:

```
ACCOUNT_HOST=[emulator endpoint or your Cosmos DB account endpoint]
ACCOUNT_KEY=[emulator masterKey or your cosmos masterKey]
ACCOUNT_TENANT_ID=[update if AAD auth is required in the integration tests]
ACCOUNT_AAD_CLIENT_ID=[update if AAD auth is required in the integration tests]
ACCOUNT_AAD_CLIENT_SECRET=[update if AAD auth is required in the integration tests]
SASL_JAAS=[credential configured on the confluent cloud cluster]
BOOTSTRAP_SERVER=[bootstrap server endpoint of the confluent cloud cluster]
SCHEMA_REGISTRY_URL=[schema registry url of the cloud cluster]
SCHEMA_REGISTRY_KEY=[schema registry key of the cloud cluster]
SCHEMA_REGISTRY_SECRET=[schema registry secret of the cloud cluster]
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=3
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=3
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=3
```
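
For a purely local run against the Cosmos DB emulator, the first two values might look like the following; the endpoint and key shown are the emulator's publicly documented defaults, not real secrets:

```
ACCOUNT_HOST=https://localhost:8081
ACCOUNT_KEY=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
```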

Integration tests have the `ITest` suffix. Use the following command to run them ([create the topic ahead of time](#create-topic-in-confluent-cloud-ui)):

```bash
mvn -e -Dgpg.skip -Dmaven.javadoc.skip=true -Dcodesnippet.skip=true -Dspotbugs.skip=true -Dcheckstyle.skip=true -Drevapi.skip=true -pl ,azure-cosmos-kafka-connect test package -Pkafka-integration
```
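
To scope the run to a single suite, Maven Surefire's standard `-Dtest` filter can be added to the same command; the class name below is hypothetical:

```bash
# run one integration test class (hypothetical name) with the same profile
mvn -e -Dgpg.skip -Dmaven.javadoc.skip=true -Dcodesnippet.skip=true -Dspotbugs.skip=true -Dcheckstyle.skip=true -Drevapi.skip=true -pl ,azure-cosmos-kafka-connect -Dtest=CosmosSourceConnectorITest test -Pkafka-integration
```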

### Run a local sink/source workload using Confluent Platform

- Follow [Install Confluent Platform using ZIP and TAR](https://docs.confluent.io/platform/current/installation/installing_cp/zip-tar.html#prod-kafka-cli-install) to download the library
- Copy src/docker/resources/sink.example.json to the unzipped Confluent folder
- Copy src/docker/resources/source.example.json to the unzipped Confluent folder
- Update sink.example.json and source.example.json with your Cosmos DB endpoint
- Build the Cosmos Kafka connector jar

```bash
mvn -e -DskipTests -Dgpg.skip -Dmaven.javadoc.skip=true -Dcodesnippet.skip=true -Dspotbugs.skip=true -Dcheckstyle.skip=true -Drevapi.skip=true -pl ,azure-cosmos,azure-cosmos-tests -am clean install
mvn -e -DskipTests -Dgpg.skip -Dmaven.javadoc.skip=true -Dcodesnippet.skip=true -Dspotbugs.skip=true -Dcheckstyle.skip=true -Drevapi.skip=true -pl ,azure-cosmos-kafka-connect clean install
```

- Copy the built Cosmos Kafka connector jar to the plugin path folder (you can find it via the `plugin.path` config in etc/distributed.properties)
- `cd` into the unzipped Confluent folder
- Update the etc/distributed.properties file with your Confluent Cloud cluster config
- Run `./bin/connect-distributed ./etc/distributed.properties`
- Start your sink connector or source connector: ```curl -s -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> http://localhost:8083/connectors/ | jq .```
- Monitor the logs for exceptions, and monitor the throughput and other metrics from your Confluent Cloud cluster
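
After posting the config, the Kafka Connect REST API is a quick way to confirm the connector actually started; the connector name below is the one from the example configs, so replace it with yours:

```bash
# list all connectors registered on the local Connect worker
curl -s http://localhost:8083/connectors | jq .

# check the state of a specific connector and its tasks
curl -s http://localhost:8083/connectors/cosmosdb-source-connector-v2/status | jq .
```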

> If you want to delete your connector: ```curl -X DELETE http://localhost:8083/connectors/cosmosdb-source-connector-v2```. The connector name should match the one in your JSON config.

> If you want to restart your connector: ```curl -s -H "Content-Type: application/json" -X POST http://localhost:8083/connectors/cosmosdb-source-connector-v2/restart | jq .```

> Follow [Kafka Connect REST Interface for Confluent Platform](https://docs.confluent.io/platform/current/connect/references/restapi.html) to check other options.

### Create Topic in Confluent Cloud UI

For some cluster types, you will need to create the topic ahead of time. You can use the UI or the [Confluent CLI](https://docs.confluent.io/cloud/current/client-apps/topics/manage.html#:~:text=Confluent%20CLI%20Follow%20these%20steps%20to%20create%20a,aren%E2%80%99t%20any%20topics%20created%20yet%2C%20click%20Create%20topic.) (requires installing the Confluent CLI first), as sketched below.
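
If you take the CLI route, topic creation is a one-liner once you are logged in and have selected your environment and cluster; the topic name and partition count below are illustrative:

```bash
# authenticate, then create the topic (name and partition count are illustrative)
confluent login
confluent kafka topic create test-topic --partitions 6
```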

Inside the Cluster Overview, scroll down and select Topics and partitions.

![topic-partition](images/Topics-Partitions.png)

---

Select Add topic.

![add-topic](images/add-topic.png)

---

Name the topic and select Create with defaults. Afterward, a prompt will appear about creating a schema. This can be skipped, as the tests will create the schemas.

## Resources to Improve Infrastructure

- [Docker Configurations](https://docs.confluent.io/platform/current/installation/docker/config-reference.html)
- [Configuration Options](https://docs.confluent.io/platform/current/installation/configuration/index.html)
- [Connect Confluent Platform Components to Confluent Cloud](https://docs.confluent.io/cloud/current/cp-component/index.html)
Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@

# Confluent Platform Setup

This guide walks through setting up Confluent Platform using Docker containers.

## Prerequisites

- Bash shell
  - Will not work in Cloud Shell or WSL1
- Java 11+ ([download](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html))
- Maven ([download](https://maven.apache.org/download.cgi))
- Docker ([download](https://www.docker.com/products/docker-desktop))
- Powershell (optional) ([download](https://learn.microsoft.com/powershell/scripting/install/installing-powershell))

### Startup

> Running either script for the first time may take several minutes in order to download the Docker images for the Confluent Platform components.

```bash
cd $REPO_ROOT/src/docker

# Option 1: Use the bash script to setup
./startup.sh

# Option 2: Use the powershell script to setup
pwsh startup.ps1

# verify the services are up and running
docker-compose ps
```
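
While the containers come up, tailing one service's logs is a quick way to spot startup failures. The service name `connect` here is an assumption about the compose file; check `docker-compose ps` for the actual names:

```bash
# follow the logs of a single service; replace "connect" with a name
# reported by `docker-compose ps` if the compose file differs
docker-compose logs -f connect
```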

> You may need to increase the memory allocation for Docker to 3 GB or more.
>
> Rerun the startup script to reinitialize the docker containers.

Your Confluent Platform setup is now ready to use!

### Running Kafka Connect in standalone mode

The Kafka Connect container included with the Confluent Platform setup runs Kafka Connect in `distributed` mode. Using `distributed` mode is *recommended*, since you can interact with connectors using the Control Center UI.

If you would instead like to run Kafka Connect in `standalone` mode, which is useful for quick testing, continue through this section. For more information on Kafka Connect standalone and distributed modes, refer to these [Confluent docs](https://docs.confluent.io/home/connect/userguide.html#standalone-vs-distributed-mode).

### Access Confluent Platform components

| Name | Address | Description |
| --- | --- | --- |
| Control Center | <http://localhost:9021> | The main webpage for all Confluent services, where you can create topics, configure connectors, interact with the Connect cluster (only for distributed mode), and more. |
| Kafka Topics UI | <http://localhost:9000> | Useful for viewing Kafka topics and the messages within them. |
| Schema Registry UI | <http://localhost:9001> | Can view and create new schemas, ideal for interacting with Avro data. |
| ZooNavigator | <http://localhost:9004> | Web interface for Zookeeper. Refer to the [docs](https://zoonavigator.elkozmon.com/en/stable/) for more information. |
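
To sanity-check that the UIs listed above are actually serving, a small probe loop works, assuming the compose file maps the ports as shown in the table:

```bash
# probe each UI port; HTTP 200 or a redirect means the service is up
for port in 9021 9000 9001 9004; do
  curl -s -o /dev/null -w "localhost:$port -> HTTP %{http_code}\n" "http://localhost:$port/"
done
```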

### Cleanup

Tear down the Confluent Platform setup and clean up any unneeded resources:

```bash
cd $REPO_ROOT/src/docker

# bring down all docker containers
docker-compose down

# remove dangling volumes and networks
docker system prune -f --volumes --filter "label=io.confluent.docker"
```
Lines changed: 95 additions & 0 deletions
@@ -0,0 +1,95 @@

# Setting up an Azure Cosmos DB Instance

## Prerequisites

- Azure subscription with permissions to create:
  - Resource Groups, Cosmos DB
- Bash shell (tested on Visual Studio Codespaces, Cloud Shell, Mac, Ubuntu, Windows with WSL2)
  - Will not work with WSL1
- Azure CLI ([download](https://learn.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest))

## Create Azure Cosmos DB Instance, Database and Container

Log in to Azure and select your subscription.

```bash
az login

# show your Azure accounts
az account list -o table

# select the Azure subscription if necessary
az account set -s {subscription name or Id}
```

Create a new Azure Resource Group for this quickstart, then add to it a Cosmos DB Account, Database and Container using the Azure CLI.

> The `az cosmosdb sql` extension is currently in preview and is subject to change

```bash
# replace with a unique name
# do not use punctuation or uppercase (a-z, 0-9)
export Cosmos_Name={your Cosmos DB name}

# if this returns true, change the name to avoid a DNS failure on create
az cosmosdb check-name-exists -n ${Cosmos_Name}

# set environment variables
export Cosmos_Location="centralus"
export Cosmos_Database="kafkaconnect"
export Cosmos_Container="kafka"

# Resource Group Name
export Cosmos_RG=${Cosmos_Name}-rg-cosmos

# create a new resource group
az group create -n $Cosmos_RG -l $Cosmos_Location

# create the Cosmos DB server
# this command takes several minutes to run
az cosmosdb create -g $Cosmos_RG -n $Cosmos_Name

# create the database
# 400 is the minimum --throughput (RUs)
az cosmosdb sql database create -a $Cosmos_Name -n $Cosmos_Database -g $Cosmos_RG --throughput 400

# create the container
# /id is the partition key (case sensitive)
az cosmosdb sql container create -p /id -g $Cosmos_RG -a $Cosmos_Name -d $Cosmos_Database -n $Cosmos_Container

# OPTIONAL: Enable Time to Live (TTL) on the container
export Cosmos_Container_TTL=1000
az cosmosdb sql container update -g $Cosmos_RG -a $Cosmos_Name -d $Cosmos_Database -n $Cosmos_Container --ttl=$Cosmos_Container_TTL
```
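
To verify what was created, the same CLI extension can list the databases and containers using the variables from above:

```bash
# list databases in the account
az cosmosdb sql database list -a $Cosmos_Name -g $Cosmos_RG -o table

# list containers in the database
az cosmosdb sql container list -a $Cosmos_Name -d $Cosmos_Database -g $Cosmos_RG -o table
```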

With the Azure Cosmos DB instance set up, you will need to get the Cosmos DB endpoint URI and primary connection key. These values will be used to set up the Cosmos DB Source and Sink connectors.

```bash
# Keep note of both of the following values as they will be used later

# get Cosmos DB endpoint URI
echo https://${Cosmos_Name}.documents.azure.com:443/

# get Cosmos DB primary connection key
az cosmosdb keys list -n $Cosmos_Name -g $Cosmos_RG --query primaryMasterKey -o tsv
```
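
Alternatively, the endpoint can be read straight off the account resource instead of being constructed by hand:

```bash
# get the Cosmos DB endpoint URI from the account resource
az cosmosdb show -n $Cosmos_Name -g $Cosmos_RG --query documentEndpoint -o tsv
```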

### Cleanup

Remove the Cosmos DB instance and the associated resource group:

```bash
# delete Cosmos DB instance
az cosmosdb delete -g $Cosmos_RG -n $Cosmos_Name

# delete Cosmos DB resource group
az group delete --no-wait -y -n $Cosmos_RG
```
