This repository was archived by the owner on Mar 8, 2022. It is now read-only.

Added schema and tests for LogStreams#270

Merged
alexkappa merged 35 commits into alexkappa:master from mcalster:feature/log-stream
Nov 16, 2020

Conversation

@mcalster
Contributor

@mcalster mcalster commented Sep 27, 2020

Proposed Changes

    resource "auth0_log_stream" "my_log_stream" {
      name = "Acceptance-Test-LogStream-http"
      type = "http"
      sink {
        http_endpoint       = "https://example.com/webhook/logs"
        http_content_type   = "application/json"
        http_content_format = "JSONLINES"
        http_authorization  = "AKIAXXXXXXXXXXXXXXXX"
      }
    }
  • Add new LogStream Resource
  • Update auth0 client version.

Fixes #264
Blocked by go-auth0/auth0#144
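
The resource supports other stream types as well; a hypothetical EventBridge configuration, assuming the same schema shape as the HTTP example above (attribute names taken from the plan output discussed later in this thread, values made up), might look like:

```hcl
# Sketch only: aws_partner_event_source is computed by Auth0 after
# creation, so it is not set here.
resource "auth0_log_stream" "my_eventbridge_stream" {
  name = "Acceptance-Test-LogStream-eventbridge"
  type = "eventbridge"
  sink {
    aws_account_id = "123456789012" # hypothetical account ID
    aws_region     = "us-east-1"
  }
}
```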

Acceptance Test Output

$ make testacc TESTS=TestAccLogStream*
==> Checking that code complies with gofmt requirements...
?       github.com/alexkappa/terraform-provider-auth0   [no test files]
=== RUN   TestAccLogStreamHttp
--- PASS: TestAccLogStreamHttp (2.60s)
=== RUN   TestAccLogStreamEventBridge
--- PASS: TestAccLogStreamEventBridge (3.43s)
=== RUN   TestAccLogStreamEventGrid
    resource_auth0_log_stream_test.go:100: this test requires an active subscription
--- SKIP: TestAccLogStreamEventGrid (0.00s)
=== RUN   TestAccLogStreamDatadog
--- PASS: TestAccLogStreamDatadog (1.50s)
=== RUN   TestAccLogStreamSplunk
--- PASS: TestAccLogStreamSplunk (1.80s)
PASS
coverage: 10.2% of statements
ok      github.com/alexkappa/terraform-provider-auth0/auth0     7.991s  coverage: 10.2% of statements
?       github.com/alexkappa/terraform-provider-auth0/auth0/internal/debug      [no test files]
testing: warning: no tests to run
PASS
coverage: 0.0% of statements
ok      github.com/alexkappa/terraform-provider-auth0/auth0/internal/random     0.222s  coverage: 0.0% of statements [no tests to run]
testing: warning: no tests to run
PASS
coverage: 0.0% of statements
ok      github.com/alexkappa/terraform-provider-auth0/auth0/internal/validation 0.108s  coverage: 0.0% of statements [no tests to run]
?       github.com/alexkappa/terraform-provider-auth0/version   [no test files]

Community Note

  • Please vote on this pull request by adding a 👍 reaction to the original pull request comment to help the community and maintainers prioritize this request
  • Please do not leave "+1" comments, they generate extra noise for pull request followers and do not help prioritize the request

Kim Alster Glimberg added 5 commits September 27, 2020 23:27
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
@mcalster
Contributor Author

@alexkappa I think this is good to go

@kpurdon
Contributor

kpurdon commented Nov 3, 2020

Just came here looking for this feature. Things look stalled from a review standpoint. Any update, @alexkappa? Thanks for your work!

@abulford
Contributor

abulford commented Nov 6, 2020

@mcalster thanks for this, I'm also interested in getting this functionality in the provider.

I've tried this locally and it seems to work well for me! I'm using the Amazon EventBridge type, so can't comment on the other types. The build I used was slightly different from this branch because I merged your OAuth2 PR into this branch locally, and updated your auth0 PR with v4.7.0 locally, because I also need the OAuth2 connection functionality. Those merges went fine with no conflicts.

I've tried creating, updating (by changing the AWS region, which actually results in a re-create), deleting and creating again and all looks good.

The only strange thing I noticed is that when I modified the region it told me that the http_custom_headers and splunk_secure properties would be deleted, but I never had those set:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # auth0_log_stream.log_stream must be replaced
-/+ resource "auth0_log_stream" "log_stream" {
      ~ aws_partner_event_source = (sensitive value)
      ~ id                       = "<REDACTED>" -> (known after apply)
        name                     = "Auth0Events"
      ~ status                   = "active" -> (known after apply)
        type                     = "eventbridge"

      ~ sink {
            aws_account_id      = (sensitive value)
          ~ aws_region          = (sensitive value)
          - http_custom_headers = [] -> null
          - splunk_secure       = false -> null
        }
    }

Plan: 1 to add, 0 to change, 1 to destroy.

It didn't cause any problems but maybe means something isn't quite set up correctly?

Also, I need the aws_partner_event_source property as an output so I can use it to set up the event bus in AWS. Locally I've added this as a top-level computed property which returns the sink's AWSPartnerEventSource property if the sink is of type *management.EventBridgeSink, or nil otherwise. This seems to work for me though I've not tried it with other types.

I'm not sure what best to do with this - I could submit it as a PR to your PR, or wait until this gets merged then create a new PR in this repo. Or if you're happy to add it into this PR yourself that's great too, I can send you a diff or something if you like.

@mcalster
Contributor Author

mcalster commented Nov 6, 2020

@abulford Thanks for looking into it! I've only tested it with Azure Event Grid. You are more than welcome to PR my PR. I will look at it and merge it ASAP (probably tonight or over the weekend).
It will make things much faster for me :-)

@abulford
Contributor

abulford commented Nov 6, 2020

@mcalster no problem, thanks for creating it! OK, I'll try to get that done today.

If you're using Azure, would you like me to add in the azure_partner_topic property as well? Would that be considered sensitive?
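
For context, an Azure Event Grid stream would presumably be configured with the azure_* sink attributes (names as they appear in this PR's state output later in the thread; all values here are hypothetical):

```hcl
resource "auth0_log_stream" "eventgrid" {
  name = "Auth0Events"
  type = "eventgrid"
  sink {
    azure_subscription_id = "00000000-0000-0000-0000-000000000000" # hypothetical
    azure_resource_group  = "auth0-logs"                           # hypothetical
    azure_region          = "westeurope"
  }
  # azure_partner_topic would be computed by Auth0 after creation,
  # which is why exposing it as an output is being discussed here.
}
```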

@mcalster
Contributor Author

mcalster commented Nov 7, 2020

@abulford let me know if you are close with a PR. Otherwise I'll probably look at it tonight. So about 10 hours from now 🙂

@abulford
Contributor

abulford commented Nov 7, 2020

@mcalster sorry, I didn't get to this last night but submitted the PR now.

There were lots of different ways I could have done this and I'm not sure what I went with was necessarily best, so feel free to merge and change as you see fit, or if you let me know what you think and I can make modifications within the PR.

Thanks!

Kim Alster Glimberg added 9 commits November 7, 2020 17:25
…omputed attributes

Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
…omputed attributes

Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
@mcalster
Contributor Author

mcalster commented Nov 7, 2020

@abulford thx for the PR, it was much easier for me to get going. I copied some of your changes instead of merging the PR.

The azure_partner_topic and aws_partner_event_source were already mapped in the specific flatten methods, so I didn't move those. They were missing in the schema though, so I used those parts. Could I ask you to take another round of review?

@mcalster
Contributor Author

mcalster commented Nov 7, 2020

The only strange thing I noticed is that when I modified the region it told me that the http_custom_headers and splunk_secure properties would be deleted, but I never had those set:

I can't see how this can happen, can you reproduce with the latest version of this PR?

@abulford
Contributor

abulford commented Nov 8, 2020

@mcalster thanks for incorporating some of my changes.

Sorry, I should have explained the rationale for my change better. I did initially just expose the existing properties by including them in the schema, but found that I then had to access the property in my Terraform code like this:

auth0_log_stream.log_stream.sink[0].aws_partner_event_source

I wasn't particularly keen on the need to access the first element of the list of sinks, and generally it didn't feel consistent with how I'm used to accessing properties in other resources. Given that this resource represents a single log stream in Auth0, and there's no option to provide multiple sinks within a single log stream resource, it felt appropriate to expose these properties as top level elements which will be set depending on the type specified.

So I instead declared the properties at the top level of the resource, which meant I couldn't take advantage of the work already done in the flatten and expand functions, but means I can access the properties directly in my Terraform code, like this:

auth0_log_stream.log_stream.aws_partner_event_source

This is how I'd instinctively expect to be able to access this property, though maybe others would expect something different.

Thinking about it, might it make sense to do this with all the sink properties...? Currently the Terraform resource directly reflects the structure of the data used by the Auth0 management API, but does it need to? It might make the resource easier to use if all properties were at the top level, with properties conflicting with each other so only valid combinations can be set.

I don't really mind which way this is implemented and am happy to update my Terraform code to match, but wanted to explain my rationale in case you hadn't realised how it would mean the property needs to be accessed. What do you think?
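
Concretely, the two access patterns under discussion look like this in consuming Terraform code (the output names are hypothetical, invented for illustration):

```hcl
# Current layout: sink is a list block, so the first element must be indexed.
output "partner_event_source_nested" {
  value = auth0_log_stream.log_stream.sink[0].aws_partner_event_source
}

# Proposed layout: the computed value is promoted to a top-level attribute.
output "partner_event_source_flat" {
  value = auth0_log_stream.log_stream.aws_partner_event_source
}
```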

@abulford
Contributor

abulford commented Nov 8, 2020

The only strange thing I noticed is that when I modified the region it told me that the http_custom_headers and splunk_secure properties would be deleted, but I never had those set:

I can't see how this can happen, can you reproduce with the latest version of this PR?

I'm afraid the issue still exists, and actually causes more of a problem than I'd realised. Previously I tested updates by modifying the AWS region, which actually resulted in a re-creation of the resource. This time I modified the name which allowed the provider to attempt an update rather than a re-creation, which fails.

So, the plan shows:

Terraform will perform the following actions:

  # auth0_log_stream.log_stream will be updated in-place
  ~ resource "auth0_log_stream" "log_stream" {
        id     = "<REDACTED>"
      ~ name   = "Auth0Events" -> "Auth0Events1"
        status = "active"
        type   = "eventbridge"

        sink {
            aws_account_id           = (sensitive value)
            aws_partner_event_source = "<REDACTED>"
            aws_region               = "eu-west-1"
            http_custom_headers      = []
            splunk_secure            = false
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

When attempting to apply the update, it fails:

Error: 400 Bad Request: Payload validation error: 'Data does not match any schemas from 'oneOf'' on property sink.

  on auth0_log_stream.tf line 1, in resource "auth0_log_stream" "log_stream":
   1: resource "auth0_log_stream" "log_stream" {

Which I expect is down to the unexpected properties being present.

mcalster and others added 6 commits November 13, 2020 23:43
Co-authored-by: Alex Kalyvitis <alex.kalyvitis@gmail.com>
Co-authored-by: Alex Kalyvitis <alex.kalyvitis@gmail.com>
Co-authored-by: Alex Kalyvitis <alex.kalyvitis@gmail.com>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
@mcalster
Contributor Author

@alexkappa I got a bit lost in the changes you made when looking at them in the GitHub interface. But now I think I got them all.

@abulford I think the issue you experienced should be solved now. Azure and Amazon sinks will trigger a new resource when changed. (Updates are not supported for these two in the API.)

@abulford
Contributor

I've changed the sensitivity as requested :-)

@mcalster thanks :)

@abulford I think the issue you experienced should be solved now. Azure and Amazon sinks will trigger a new resource when changed. (Updates are not supported for these two in the API.)

Thank you! I've tried this out and updates do indeed seem to be fixed - I can update the name of the log stream and the update takes place successfully, I can update the AWS region (not tried account ID but expect it to behave the same) and the re-creation is successful, too.

I don't know how important it is, but something does still seem to be up with the other properties.
Here's how it looks when I create the resource:

  # auth0_log_stream.log_stream will be created
  + resource "auth0_log_stream" "log_stream" {
      + id     = (known after apply)
      + name   = "Auth0Events"
      + status = (known after apply)
      + type   = "eventbridge"

      + sink {
          + aws_account_id           = (sensitive value)
          + aws_partner_event_source = (sensitive value)
          + aws_region               = "us-east-1"
          + azure_partner_topic      = (sensitive value)
          + splunk_secure            = false
        }
    }

All looks good, though azure_partner_topic and splunk_secure are unexpectedly shown.

Once created, the state looks like this:

{
  "mode": "managed",
  "type": "auth0_log_stream",
  "name": "log_stream",
  "provider": "provider.auth0",
  "instances": [
    {
      "schema_version": 0,
      "attributes": {
        "id": "<REDACTED>",
        "name": "Auth0Events",
        "sink": [
          {
            "aws_account_id": "<REDACTED>",
            "aws_partner_event_source": "<REDACTED>",
            "aws_region": "us-east-1",
            "azure_partner_topic": "",
            "azure_region": "",
            "azure_resource_group": "",
            "azure_subscription_id": "",
            "datadog_api_key": "",
            "datadog_region": "",
            "http_authorization": "",
            "http_content_format": "",
            "http_content_type": "",
            "http_custom_headers": null,
            "http_endpoint": "",
            "splunk_domain": "",
            "splunk_port": "",
            "splunk_secure": false,
            "splunk_token": ""
          }
        ],
        "status": "active",
        "type": "eventbridge"
      },
      "private": "<REDACTED>"
    }
  ]
}

Changing the name causes an update, as expected:

  # auth0_log_stream.log_stream will be updated in-place
  ~ resource "auth0_log_stream" "log_stream" {
        id     = "<REDACTED>"
      ~ name   = "Auth0Events" -> "Auth0Events1"
        status = "active"
        type   = "eventbridge"

        sink {
            aws_account_id           = (sensitive value)
            aws_partner_event_source = (sensitive value)
            aws_region               = "us-east-1"
            http_custom_headers      = []
            splunk_secure            = false
        }
    }

We're now seeing http_custom_headers as well as splunk_secure.

Looking at the updated state, as well as the name having changed as we'd expect, http_custom_headers is now set to [] instead of null:

11c11
<         "name": "Auth0Events",
---
>         "name": "Auth0Events1",
26c26
<             "http_custom_headers": null,
---
>             "http_custom_headers": [],

Changing the region causes a re-create, as we'd expect:

  # auth0_log_stream.log_stream must be replaced
-/+ resource "auth0_log_stream" "log_stream" {
      ~ id     = "<REDACTED>" -> (known after apply)
        name   = "Auth0Events1"
      ~ status = "active" -> (known after apply)
        type   = "eventbridge"

      ~ sink {
            aws_account_id           = (sensitive value)
          ~ aws_partner_event_source = (sensitive value)
          ~ aws_region               = "us-east-1" -> "eu-west-1" # forces replacement
          + azure_partner_topic      = (sensitive value)
          - http_custom_headers      = [] -> null
            splunk_secure            = false
        }
    }

Though again the plan is unexpectedly showing azure_partner_topic, http_custom_headers and splunk_secure.

Finally, destroying the resource works fine:

  # auth0_log_stream.log_stream will be destroyed
  - resource "auth0_log_stream" "log_stream" {
      - id     = "<REDACTED>" -> null
      - name   = "Auth0Events1" -> null
      - status = "active" -> null
      - type   = "eventbridge" -> null

      - sink {
          - aws_account_id           = (sensitive value)
          - aws_partner_event_source = (sensitive value)
          - aws_region               = "eu-west-1" -> null
          - http_custom_headers      = [] -> null
          - splunk_secure            = false -> null
        }
    }

I don't know if this causes any more harm than a bit of confusion when looking at the diffs, so maybe not worth holding up the PR for, but I thought it worth documenting the behaviour I'm currently seeing. While this seems fine with the AWS EventBridge type, I don't know if it could cause issues with other types.

We could work around this by defining a default value for the field (i.e. false).

@alexkappa false feels like an unsafe default for this property, given that this tells Auth0 whether to verify TLS or not. If we can't validate this easily on the client side it might be preferable to let Auth0 return an error if the user hasn't set this. Alternatively could it default to true? Or is there any way we can avoid setting a default?

Another option could be to structure the schema a bit differently, so there are eventbridge_sink, eventgrid_sink, etc blocks rather than a single sink block. This way those blocks could conflict with each other but within the blocks all the properties could be set as required or optional as necessary, if I'm understanding things right.
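
Under that alternative structure, the resource might read something like this (a sketch of the proposal only, not the schema that was merged; values hypothetical):

```hcl
resource "auth0_log_stream" "log_stream" {
  name = "Auth0Events"

  # Exactly one of the *_sink blocks would be allowed, each conflicting
  # with the others, with its own required/optional attributes. The type
  # attribute could then be inferred from which block is present.
  eventbridge_sink {
    aws_account_id = "123456789012" # hypothetical
    aws_region     = "us-east-1"
  }
}
```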

@mcalster
Contributor Author

All looks good, though azure_partner_topic and splunk_secure are unexpectedly shown.

I think the splunk_secure appears because of the default value of a Boolean.
I tried changing splunk_secure to a string that only accepts true/false. That seems to clean up the state. But I don't really like that solution. Another approach is to use a pointer but that doesn't make sense in the schema. Any suggestions are welcome.

The azure_partner_topic is weird though, I will try to look into that.

@abulford
Contributor

Another approach is to use a pointer but that doesn't make sense in the schema.

Yeah, I think ultimately this would ideally be stored as a pointer to a bool, so it could be nil, true or false, but like you say, that doesn't seem supported by the schema.

I'm experimenting with the alternative format and it seems to work quite nicely, I'll submit a PR shortly to show you what I mean.

@abulford
Contributor

@mcalster and @alexkappa I've created a PR with the modified layout I was proposing, I've put the detail in there so won't repeat it all here.

It's closer to the existing structure than my earlier suggestion to flatten all the properties, but admittedly still not completely consistent with the rest of the provider. It does seem to bring a lot of benefit in validation and in simplifying the situation with these unexpected properties. If you're happy to go ahead I'll try to solve the not-so-nice situation where the type property doesn't need to match the sink.

@alexkappa
Owner

alexkappa commented Nov 15, 2020

@alexkappa how should this be implemented? It seems all changes should result in a new resource in this case then? By setting ForceNew on Azure and Amazon sink values?

That seems like a reasonable thing to do.

@alexkappa false feels like an unsafe default for this property, given that this tells Auth0 whether to verify TLS or not. If we can't validate this easily on the client side it might be preferable to let Auth0 return an error if the user hasn't set this. Alternatively could it default to true? Or is there any way we can avoid setting a default?

That's a fair point on having a default value or not. Having a default may not be the best idea, as it will have the side effect of showing up when we don't need it.

Perhaps RequiredWith would be better in the way that we can mark it as required conditionally?

Edit: here's an example from the aws provider doing something similar.

@mcalster and @alexkappa I've created a PR with the modified layout I was proposing, I've put the detail in there so won't repeat it all here.

I would prefer to keep things consistent as much as possible. I do appreciate the power it gives us to define a schema per sink type, but it's inconsistent with the provider at this stage. Thanks for suggesting it though, as this approach hadn't crossed my mind before.

@abulford
Contributor

That's a fair point on having a default value or not. Having a default may not be the best idea, as it will have the side effect of showing up when we don't need it.

It seems that this property is showing up in the state whether there's a default value or not, and I don't know if setting validation will make a difference either, though I don't think I'll get a chance to experiment with that today. Ideally no default would be set (though we'll end up with false in the state) when the property isn't relevant, but when it is relevant, the default would be true. Could DefaultFunc be used for this? If not, then maybe just defaulting to true would be OK, to prevent accidentally disabling TLS verification when using the log stream with Splunk.

Perhaps RequiredWith would be better in the way that we can mark it as required conditionally?

Edit: here's an example from the aws provider doing something similar.

Ooh, I was about to reply to say that I'd tried RequiredWith for some other properties but it didn't seem to work within the nested structure, but I had been trying sink.aws_account_id rather than sink.0.aws_account_id. That opens up more validation options than I'd realised were possible with the existing structure 👍.

I would prefer to keep things consistent as much as possible. I do appreciate the power it gives us to define schema per sink type, but its inconsistent with the provider at this stage. Thanks for suggesting it though, as this approach hasn't crossed my mind before.

Fair enough :), I agree it would be a departure from the existing convention. It might be worth considering something like this if there's any plan to refactor for a major release as I think it could improve usability and address some state migration issues I've found previously, but yeah, this PR probably isn't the place for it!

Kim Alster Glimberg and others added 4 commits November 15, 2020 12:06
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
Co-authored-by: Alex Kalyvitis <alex.kalyvitis@gmail.com>
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
@mcalster
Contributor Author

@alexkappa I've added RequiredWith to the sink values, and removed the default value from splunk_secure

@abulford
Contributor

@mcalster I've tested your latest changes and all looks good! I created, updated, recreated and deleted an AWS EventBridge resource and it all worked fine. The splunk_secure and http_custom_headers properties are still showing up but it doesn't actually break anything.

I also tried removing the aws_region property and validation correctly picked that up. I tried adding an unexpected property, aws_partner_event_source, and validation didn't pick that up, so Auth0 ends up returning a 400 Bad Request with Payload validation error: 'Data does not match any schemas from 'oneOf''. This could be addressed with ConflictsWith, or maybe some fancy validation checking the type property, but maybe this extra validation could be added in a later PR.

I'm also unsure if all the RequiredWith constraints are necessary in the HTTP and Splunk streams - are some of those optional properties in the Auth0 management API? But again maybe not worth worrying about now as it's a minor detail.

@alexkappa
Owner

@mcalster thanks for making the changes, I would say this is good to merge. Last thing before I do that, why did you decide to define aws_account_id, aws_partner_event_source, azure_subscription_id, azure_partner_topic and datadog_region as sensitive? If I understand correctly these aren't particularly sensitive values right?

@mcalster
Contributor Author

mcalster commented Nov 16, 2020 via email

@alexkappa
Owner

Great, I will make the change and merge it in then!

@abulford
Contributor

why did you decide to define aws_account_id, aws_partner_event_source, azure_subscription_id, azure_partner_topic and datadog_region as sensitive? If I understand correctly these aren't particularly sensitive values right?

I started out with having them as non sensitive, but got some review comments on that approach.

My feedback was to say that I wasn't sure that the aws_account_id or aws_region should be sensitive. On the basis that the aws_account_id remained marked as sensitive I suggested erring on the side of sensitive for the aws_partner_event_source and azure_partner_topic properties as well, but not especially convinced either way. I'm sorry if I didn't explain myself properly.

Would certainly be happy for all those properties mentioned to not be marked as sensitive.

@alexkappa alexkappa merged commit 6fe3c2a into alexkappa:master Nov 16, 2020
@mcalster mcalster deleted the feature/log-stream branch March 31, 2021 21:17

Development

Successfully merging this pull request may close these issues.

Add support for LogStream configuration
