Using version 1.0.4 with the nexus_373_plus changes and an S3 blobstore defined as:
nexus3_blobstore { 'yum-proxied':
  ensure            => 'present',
  type              => 'S3',
  bucket            => $s3_yum_proxied_bucket,
  access_key_id     => $s3_blob_access_key,
  secret_access_key => $s3_blob_secret_key,
  region            => 'us-west-1',
  endpoint          => $s3_endpoint,
  forcepathstyle    => true,
}
every run results in two things:
- session_token is set (it should be omitted entirely) and is re-set on every run
- secret_access_key is re-set on every run
Notice: /Stage[main]/Profiles::Nexus::Repository/Nexus3_blobstore[yum-proxied]/secret_access_key: secret_access_key changed '_14' to 'REDACTED-SECRET-ACCESS-KEY-HERE' (corrective)
Notice: /Stage[main]/Profiles::Nexus::Repository/Nexus3_blobstore[yum-proxied]/session_token: session_token changed '_15' to '' (corrective)
The "_14" and "_15" increment every run. My best guess is that these are "sensitive" parameters whose "get" returns placeholder data rather than the real secret; the placeholder never matches the value in the manifest, so Puppet attempts a corrective change on every run. This is a headache for alerting/auditing on corrective changes ("Puppet made a corrective change, why was the environment not already in the intended state?") since it now fires on every single run.
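To illustrate the suspected mechanism, here is a minimal Ruby sketch (all names hypothetical, not the module's actual code): if the Nexus API never returns the real secret and the provider's "get" yields a changing placeholder like "_14", a naive equality check against the manifest value will always report the resource as out of sync.

```ruby
# Hypothetical stand-in for the provider's "get" on a sensitive property:
# the API redacts the secret, so all we ever see is a placeholder that
# changes between runs (mimicking the observed "_14"/"_15" values).
def fake_remote_secret(counter)
  "_#{counter}"
end

# Naive comparison, as a provider without special sensitive-value handling
# might do it: any difference triggers a corrective change.
def needs_corrective_change?(remote_value, manifest_value)
  remote_value != manifest_value
end

manifest_secret = 'REDACTED-SECRET-ACCESS-KEY-HERE'

# The placeholder never equals the manifest value, so every run "corrects" it.
(14..16).each do |run|
  remote = fake_remote_secret(run)
  puts "run #{run}: corrective change? #{needs_corrective_change?(remote, manifest_secret)}"
end
```

A fix would presumably need the provider to treat a redacted/placeholder remote value as in sync (or skip comparison for write-only secrets), rather than comparing it literally to the manifest.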