Added schema and tests for LogStreams #270
Conversation
Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
@alexkappa I think this is good to go

Just came here looking for this feature. Things look stalled from a review standpoint. Any update @alexkappa? Thanks for your work!
@mcalster thanks for this, I'm also interested in getting this functionality in the provider. I've tried this locally and it seems to work well for me! I'm using the Amazon EventBridge type, so I can't comment on the other types. The build I used was slightly different from this branch, because I merged your OAuth2 PR into this branch locally and updated your auth0 PR with […].

I've tried creating, updating (by changing the AWS region, which actually results in a re-create), deleting and creating again, and all looks good. The only strange thing I noticed is that when I modified the region it told me that the […]. It didn't cause any problems, but maybe it means something isn't quite set up correctly?

Also, I need the […]. I'm not sure what's best to do with this: I could submit it as a PR to your PR, or wait until this gets merged and then create a new PR in this repo. Or if you're happy to add it into this PR yourself, that's great too; I can send you a diff or something if you like.
@abulford Thanks for looking into it! I've only tested it with Azure Event Grid. You are more than welcome to PR my PR. I will look at it and merge it ASAP (probably tonight or over the weekend).
@mcalster no problem, thanks for creating it! OK, I'll try to get that done today. If you're using Azure, would you like me to add in the […]?
@abulford let me know if you are close with a PR. Otherwise I'll probably look at it tonight, so about 10 hours from now 🙂
@mcalster sorry, I didn't get to this last night, but I've submitted the PR now. There were lots of different ways I could have done this and I'm not sure what I went with was necessarily best, so feel free to merge and change as you see fit, or let me know what you think and I can make modifications within the PR. Thanks!
…omputed attributes Signed-off-by: Kim Alster Glimberg <kagli@kims-mbp.lan>
@abulford thanks for the PR, it made it much easier for me to get going. I copied some of your changes instead of merging the PR. The azure_partner_topic and aws_partner_event_source were already mapped in the specific flatten methods, so I didn't move those. They were missing in the schema though, so I used those parts. Could I ask you to take another round of review?
I can't see how this can happen; can you reproduce with the latest version of this PR?
@mcalster thanks for incorporating some of my changes. Sorry, I should have explained the rationale for my change better. I did initially just expose the existing properties by including them in the schema, but found that I then had to access the property in my Terraform code like this: […]

I wasn't particularly keen on the need to access the first element of the list of sinks, and generally it didn't feel consistent with how I'm used to accessing properties in other resources. Given that this resource represents a single log stream in Auth0, and there's no option to provide multiple sinks within a single log stream resource, it felt appropriate to expose these properties as top-level elements which will be set depending on the type specified. So I instead declared the properties at the top level of the resource, which meant I couldn't take advantage of the work already done in the flatten and expand functions, but means I can access the properties directly in my Terraform code, like this: […]

This is how I'd instinctively expect to be able to access this property, though maybe others would expect something different. Thinking about it, might it make sense to do this with all the sink properties? Currently the Terraform resource directly reflects the structure of the data used by the Auth0 management API, but does it need to? It might make the resource easier to use if all properties were at the top level, with properties conflicting with each other so only valid combinations can be set.

I don't really mind which way this is implemented and am happy to update my Terraform code to match, but I wanted to explain my rationale in case you hadn't realised how it would mean the property needs to be accessed. What do you think?
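To make the two access patterns concrete, here is a rough sketch. The resource and attribute names are assumed for illustration and may not match this PR exactly:

```hcl
# Sink properties nested in a block: the caller has to index into
# the list of sinks, even though only one sink is ever allowed.
output "partner_topic_nested" {
  value = auth0_log_stream.example.sink[0].azure_partner_topic
}

# Sink properties promoted to the top level of the resource:
# direct attribute access, as with most other resources.
output "partner_topic_flat" {
  value = auth0_log_stream.example.azure_partner_topic
}
```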
I'm afraid the issue still exists, and actually causes more of a problem than I'd realised. Previously I tested updates by modifying the AWS region, which actually resulted in a re-creation of the resource. This time I modified the name, which allowed the provider to attempt an update rather than a re-creation, and that fails. So, the plan shows: […]

When attempting to apply the update, it fails: […]

Which I expect is down to the unexpected properties being present.
Co-authored-by: Alex Kalyvitis <alex.kalyvitis@gmail.com>
@alexkappa I got a bit lost in the changes you made when just looking in the GitHub interface, but now I think I got them all. @abulford I think the issue you experienced should be solved now: Azure and Amazon sinks will trigger a new resource when changed. (Updates are not supported for these two in the API.)
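As a sketch of the resulting behaviour (the attribute layout is assumed, not taken verbatim from this PR): since the Auth0 API cannot update AWS or Azure sinks in place, changing one of their fields should now plan a destroy-and-create rather than an in-place update.

```hcl
resource "auth0_log_stream" "example" {
  name = "aws-eventbridge-logs"
  type = "eventbridge"

  aws_account_id = "123456789012"

  # Changing this (or the account ID) forces a new resource,
  # because the Auth0 API does not support updating AWS sinks.
  aws_region = "eu-west-1"
}
```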
@mcalster thanks :)
Thank you! I've tried this out and updates do indeed seem to be fixed: I can update the name of the log stream and the update takes place successfully, and I can update the AWS region (I've not tried the account ID but expect it to behave the same) and the re-creation is successful, too. I don't know how important it is, but something does still seem to be up with the other properties. All looks good, though […].

Once created, the state looks like this: […]

Changing the name causes an update, as expected: […]

We're now seeing […]. Looking at the updated state, as well as the […].

Changing the region causes a re-create, as we'd expect: […]

Though again the plan is unexpectedly showing […].

Finally, destroying the resource works fine: […]

I don't know if this causes any more harm than a bit of confusion when looking at the diffs, so maybe it's not worth holding up the PR for, but I thought it worth documenting the behaviour I'm currently seeing. While this seems fine with the AWS EventBridge type, I don't know if it could cause issues with other types.
@alexkappa false feels like an unsafe default for this property, given that it tells Auth0 whether to verify TLS or not. If we can't validate this easily on the client side, it might be preferable to let Auth0 return an error if the user hasn't set it. Alternatively, could it default to true? Or is there any way we can avoid setting a default? Another option could be to structure the schema a bit differently, so there are […]
I think the splunk_secure appears because of the default value of a Boolean. The azure_partner_topic is weird though, I will try to look into that. |
Yeah, I think ultimately this would ideally be stored as a pointer to a bool, so it could be […]. I'm experimenting with the alternative format and it seems to work quite nicely; I'll submit a PR shortly to show you what I mean.
@mcalster and @alexkappa I've created a PR with the modified layout I was proposing; I've put the detail in there so I won't repeat it all here. It's closer than my suggestion to flatten all the properties, but admittedly still not completely consistent with the rest of the provider. It does seem to bring a lot of benefit in validation and in simplifying the situation with these unexpected properties. If you're happy to go ahead, I'll try to solve the not-so-nice situation where the […]
That seems like a reasonable thing to do.
That's a fair point on having a default value or not. Having a default may not be the best idea, as it will have the side effect of showing up when we don't need it. Perhaps […].

Edit: here's an example from the AWS provider doing something similar.
I would prefer to keep things as consistent as possible. I do appreciate the power it gives us to define a schema per sink type, but it's inconsistent with the provider at this stage. Thanks for suggesting it though, as this approach hadn't crossed my mind before.
It seems that this property is showing up in the state whether there's a default value or not, and I don't know if setting validation will make a difference either, though I don't think I'll get a chance to experiment with that today. Ideally no default would be set (though we'll end up with […]
Ooh, I was about to reply to say that I'd tried […]
Fair enough :) I agree it would be a departure from the existing convention. It might be worth considering something like this if there's any plan to refactor for a major release, as I think it could improve usability and address some state migration issues I've found previously, but yeah, this PR probably isn't the place for it!
@alexkappa I've added RequiredWith to the sink values, and removed the default value from splunk_secure.
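Under those changes, a Splunk log stream config might look roughly like the sketch below (the attribute names are assumed for illustration): splunk_secure now has no default, so omitting it leaves the decision to Auth0, and the RequiredWith constraints mean the Splunk sink fields must be supplied together.

```hcl
resource "auth0_log_stream" "splunk" {
  name = "splunk-logs"
  type = "splunk"

  # Assumed to be declared RequiredWith each other, so supplying
  # one without the others fails at plan time.
  splunk_domain = "example.splunkcloud.com"
  splunk_token  = var.splunk_token
  splunk_port   = "8088"

  # No default any more: set explicitly, or omit to let Auth0 decide.
  splunk_secure = true
}
```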
@mcalster I've tested your latest changes and all looks good! I created, updated, recreated and deleted an AWS EventBridge resource and it all worked fine. The […].

I also tried removing the […].

I'm also unsure if all the […]
@mcalster thanks for making the changes, I would say this is good to merge. Last thing before I do that: why did you decide to define aws_account_id, aws_partner_event_source, azure_subscription_id, azure_partner_topic and datadog_region as sensitive? If I understand correctly, these aren't particularly sensitive values, right?
I started out with having them as non-sensitive, but got some review comments on that approach. It will be fine by me to remove it.
Great, I will make the change and merge it in then!
My feedback was to say that I wasn't sure that the […]. Would certainly be happy for all those properties mentioned to not be marked as sensitive.
Proposed Changes
Fixes #264
Blocked by go-auth0/auth0#144
Acceptance Test Output
Community Note