diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index ddec2039bad..d81dce63bfe 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -9,7 +9,7 @@ requirements and recommendations.
## Sign the CLA
Before you can contribute, you will need to sign the [Contributor License
-Agreement](https://identity.linuxfoundation.org/projects/cncf).
+Agreement](https://easycla.lfx.linuxfoundation.org/).
## Proposing a change
@@ -56,7 +56,7 @@ Be sure to clearly define the specification requirements using appropriate
keywords as defined in [Notation Conventions and
Compliance](./specification/README.md#notation-conventions-and-compliance),
while making sure to heed the guidance laid out in
-[RFC2119](https://tools.ietf.org/html/rfc2119) about the sparing use of
+[RFC2119](https://datatracker.ietf.org/doc/html/rfc2119) about the sparing use of
imperatives:
> Imperatives of the type defined in this memo must be used with care
@@ -211,7 +211,7 @@ Everyone is welcome to contribute to the OpenTelemetry specification via GitHub
pull requests (PRs).
To [create a new
-PR](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request),
+PR](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request),
fork the project in GitHub and clone the upstream repo:
```sh
@@ -236,7 +236,7 @@ $ git push fork feature
Open a pull request against the main `opentelemetry-specification` repo.
If the PR is not ready for review, please mark it as
-[`draft`](https://github.blog/2019-02-14-introducing-draft-pull-requests/).
+[`draft`](https://github.blog/news-insights/product-news/introducing-draft-pull-requests/).
For non-trivial changes, please update the [CHANGELOG](./CHANGELOG.md).
diff --git a/README.md b/README.md
index 4e261a0ea00..13e973b64d1 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@ For details, see [CONTRIBUTING.md](CONTRIBUTING.md), in particular read
Questions that need additional attention can be brought to the regular
specifications meeting. EU and US timezone friendly meeting is held every
Tuesday at 8 AM Pacific time. Meeting notes are held in the [Google
-doc](https://docs.google.com/document/d/1pdvPeKjA8v8w_fGKAN68JjWBmVJtPCpqdi9IZrd6eEo).
+doc](https://docs.google.com/document/d/1pdvPeKjA8v8w_fGKAN68JjWBmVJtPCpqdi9IZrd6eEo/edit).
APAC timezone friendly meetings are held on request. See
[OpenTelemetry calendar](https://github.com/open-telemetry/community#calendar).
diff --git a/development/trace/zpages.md b/development/trace/zpages.md
index f47afc057e1..8e5f8ce2b22 100644
--- a/development/trace/zpages.md
+++ b/development/trace/zpages.md
@@ -24,7 +24,7 @@
zPages are an in-process alternative to external exporters. When included, they collect and aggregate tracing and metrics information in the background; this data is served on web pages when requested.
-The idea of "zPages" originates from one of OpenTelemetry's predecessors, [OpenCensus](https://opencensus.io/). You can read more about zPages from the OpenCensus docs [here](https://opencensus.io/zpages) or the OTEP [here](../../oteps/0110-z-pages.md). OpenCensus has different zPage implementations in [Java](https://opencensus.io/zpages/java/), [Go](https://opencensus.io/zpages/go/), and [Node](https://opencensus.io/zpages/node/) and there has been similar internal solutions developed at companies like Uber. Within OpenTelemetry, zPages are available in Go and Rust. The OTel Collector also has [an implementation](https://github.com/open-telemetry/opentelemetry-collector/tree/master/extension/zpagesextension) of zPages.
+The idea of "zPages" originates from one of OpenTelemetry's predecessors, [OpenCensus](https://opencensus.io/). You can read more about zPages from the OpenCensus docs [here](https://opencensus.io/zpages/) or the OTEP [here](../../oteps/0110-z-pages.md). OpenCensus has different zPage implementations in [Java](https://opencensus.io/zpages/java/), [Go](https://opencensus.io/zpages/go/), and [Node](https://opencensus.io/zpages/node/), and similar internal solutions have been developed at companies like Uber. Within OpenTelemetry, zPages are available in Go and Rust. The OTel Collector also has [an implementation](https://github.com/open-telemetry/opentelemetry-collector/tree/main/extension/zpagesextension) of zPages.
zPages are uniquely useful in a couple of different ways. One is that they're more lightweight and quicker compared to installing external tracing systems like Jaeger and Zipkin, yet they still share useful ways to debug and gain insight into instrumented applications; these uses depend on the type of zPage, which is detailed below. For high throughput applications, zPages can also analyze more telemetry with the limited set of supported scenarios than external exporters; this is because zPages are in-memory while external exporters are typically configured to send a subset of telemetry for reach analysis to save costs.
diff --git a/oteps/0001-telemetry-without-manual-instrumentation.md b/oteps/0001-telemetry-without-manual-instrumentation.md
index 8e4d2d9b1bc..e02e2a0fe3e 100644
--- a/oteps/0001-telemetry-without-manual-instrumentation.md
+++ b/oteps/0001-telemetry-without-manual-instrumentation.md
@@ -89,7 +89,7 @@ There are some languages that will have OpenTelemetry support before they have D
### Governance of the auto-instrumentation libraries
-Each `auto-instr-foo` repository must have at least one [Maintainer](https://github.com/open-telemetry/community/blob/master/community-membership.md#maintainer) in common with the main `opentelemetry-foo` language repository. There are no other requirements or constraints about the set of maintainers/approvers for the main language repository and the respective auto-instrumentation repository; in particular, there may be maintainers/approvers of the main language repository that are not maintainers/approvers for the auto-instrumentation repository, and vice versa.
+Each `auto-instr-foo` repository must have at least one [Maintainer](https://github.com/open-telemetry/community/blob/main/community-membership.md#maintainer) in common with the main `opentelemetry-foo` language repository. There are no other requirements or constraints about the set of maintainers/approvers for the main language repository and the respective auto-instrumentation repository; in particular, there may be maintainers/approvers of the main language repository that are not maintainers/approvers for the auto-instrumentation repository, and vice versa.
### Mini-FAQ about this proposal
diff --git a/oteps/0007-no-out-of-band-reporting.md b/oteps/0007-no-out-of-band-reporting.md
index 0addd963e0c..6b38527db66 100644
--- a/oteps/0007-no-out-of-band-reporting.md
+++ b/oteps/0007-no-out-of-band-reporting.md
@@ -57,5 +57,5 @@ instance configured with it's own `Resource`.
* [opentelemetry-specification/62](https://github.com/open-telemetry/opentelemetry-specification/issues/62)
* [opentelemetry-specification/61](https://github.com/open-telemetry/opentelemetry-specification/issues/61)
-[otelsvc-receiver]: https://github.com/open-telemetry/opentelemetry-service#config-receivers
-[create-metric]: https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/metrics/api.md#create-metric
+[otelsvc-receiver]: https://github.com/open-telemetry/opentelemetry-collector#config-receivers
+[create-metric]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#create-metric
diff --git a/oteps/0016-named-tracers.md b/oteps/0016-named-tracers.md
index a6e918f261a..d25734f2489 100644
--- a/oteps/0016-named-tracers.md
+++ b/oteps/0016-named-tracers.md
@@ -18,7 +18,7 @@ For an operator of an application using OpenTelemetry, there is currently no way
### Instrumentation library identification
-If an instrumentation library hasn't implemented [semantic conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md#semantic-conventions) correctly or those conventions change over time, it's currently hard to interpret and sanitize data produced by it selectively. The produced Spans or Metrics cannot later be associated with the library which reported them, either in the processing pipeline or the backend.
+If an instrumentation library hasn't implemented [semantic conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md#semantic-conventions) correctly or those conventions change over time, it's currently hard to interpret and sanitize data produced by it selectively. The produced Spans or Metrics cannot later be associated with the library which reported them, either in the processing pipeline or the backend.
### Disable instrumentation of pre-instrumented libraries
@@ -50,7 +50,7 @@ Meter meter = OpenTelemetry.getMeterProvider().getMeter("io.opentelemetry.contri
These factories (`TracerProvider` and `MeterProvider`) replace the global `Tracer` / `Meter` singleton objects as ubiquitous points to request Tracer and Meter instances.
- The _name_ used to create a Tracer or Meter must identify the _instrumentation_ libraries (also referred to as _integrations_) and not the library being instrumented. These instrumentation libraries could be libraries developed in an OpenTelemetry repository, a 3rd party implementation, or even auto-injected code (see [Open Telemetry Without Manual Instrumentation OTEP](https://github.com/open-telemetry/oteps/blob/master/text/0001-telemetry-without-manual-instrumentation.md)). See also the examples for identifiers at the end.
+ The _name_ used to create a Tracer or Meter must identify the _instrumentation_ libraries (also referred to as _integrations_) and not the library being instrumented. These instrumentation libraries could be libraries developed in an OpenTelemetry repository, a 3rd party implementation, or even auto-injected code (see [Open Telemetry Without Manual Instrumentation OTEP](https://github.com/open-telemetry/oteps/blob/main/text/0001-telemetry-without-manual-instrumentation.md)). See also the examples for identifiers at the end.
If a library (or application) has instrumentation built-in, it is both the instrumenting and instrumented library and should pass its own name here. In all other cases (and to distinguish them from that case), the distinction between instrumenting and instrumented library is very important. For example, if an HTTP library `com.example.http` is instrumented by either `io.opentelemetry.contrib.examplehttp`, then it is important that the Tracer is not named `com.example.http`, but `io.opentelemetry.contrib.examplehttp` after the actual instrumentation library.
If no name (null or empty string) is specified, following the suggestions in ["error handling proposal"](https://github.com/open-telemetry/opentelemetry-specification/pull/153), a "smart default" will be applied and a default Tracer / Meter implementation is returned.
diff --git a/oteps/0035-opentelemetry-protocol.md b/oteps/0035-opentelemetry-protocol.md
index c3578849c50..946be06c267 100644
--- a/oteps/0035-opentelemetry-protocol.md
+++ b/oteps/0035-opentelemetry-protocol.md
@@ -45,7 +45,7 @@ OTLP is a general-purpose telemetry data delivery protocol designed in the scope
OTLP defines the encoding of telemetry data and the protocol used to exchange data between the client and the server.
-This specification defines how OTLP is implemented over [gRPC](https://grpc.io/) and specifies corresponding [Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview) schema. Future extensions to OTLP may define implementations over other transports. For details of gRPC service definition see section [gRPC Transport](#grpc-service-definition).
+This specification defines how OTLP is implemented over [gRPC](https://grpc.io/) and specifies corresponding [Protocol Buffers](https://protobuf.dev/overview/) schema. Future extensions to OTLP may define implementations over other transports. For details of gRPC service definition see section [gRPC Transport](#grpc-service-definition).
OTLP is a request/response style protocol: clients send requests, the server replies with corresponding responses. This document defines one request and response type: `Export`.
@@ -93,7 +93,7 @@ When an error is returned by the server it falls into 2 broad categories: retrya
- Not-retryable errors indicate that processing of telemetry data failed and the client MUST NOT retry sending the same telemetry data. The telemetry data MUST be dropped. This can happen, for example, when the request contains bad data and cannot be deserialized or otherwise processed by the server. The client SHOULD maintain a counter of such dropped data.
-When using gRPC transport the server SHOULD indicate retryable errors using code [Unavailable](https://godoc.org/google.golang.org/grpc/codes) and MAY supply additional [details via status](https://godoc.org/google.golang.org/grpc/status#Status.WithDetails) using [RetryInfo](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L40) containing 0 value of RetryDelay. Here is a sample Go code to illustrate:
+When using gRPC transport the server SHOULD indicate retryable errors using code [Unavailable](https://pkg.go.dev/google.golang.org/grpc/codes) and MAY supply additional [details via status](https://pkg.go.dev/google.golang.org/grpc/status#Status.WithDetails) using [RetryInfo](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L40) containing a 0 value of RetryDelay. Here is sample Go code to illustrate:
```go
// Do this on server side.
@@ -106,7 +106,7 @@ When using gRPC transport the server SHOULD indicate retryable errors using code
return st.Err()
```
-To indicate not-retryable errors the server is recommended to use code [InvalidArgument](https://godoc.org/google.golang.org/grpc/codes) and MAY supply additional [details via status](https://godoc.org/google.golang.org/grpc/status#Status.WithDetails) using [BadRequest](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L119). Other gRPC status code may be used if it is more appropriate. Here is a sample Go code to illustrate:
+To indicate not-retryable errors the server is recommended to use code [InvalidArgument](https://pkg.go.dev/google.golang.org/grpc/codes) and MAY supply additional [details via status](https://pkg.go.dev/google.golang.org/grpc/status#Status.WithDetails) using [BadRequest](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L119). Other gRPC status codes may be used if more appropriate. Here is sample Go code to illustrate:
```go
// Do this on server side.
@@ -148,7 +148,7 @@ OTLP allows backpressure signalling.
If the server is unable to keep up with the pace of data it receives from the client then it SHOULD signal that fact to the client. The client MUST then throttle itself to avoid overwhelming the server.
-To signal backpressure when using gRPC transport, the server SHOULD return an error with code [Unavailable](https://godoc.org/google.golang.org/grpc/codes) and MAY supply additional [details via status](https://godoc.org/google.golang.org/grpc/status#Status.WithDetails) using [RetryInfo](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L40). Here is a sample Go code to illustrate:
+To signal backpressure when using gRPC transport, the server SHOULD return an error with code [Unavailable](https://pkg.go.dev/google.golang.org/grpc/codes) and MAY supply additional [details via status](https://pkg.go.dev/google.golang.org/grpc/status#Status.WithDetails) using [RetryInfo](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L40). Here is sample Go code to illustrate:
```go
// Do this on server side.
@@ -265,7 +265,7 @@ Both FlatBuffers and Capnproto are worth to be re-evaluated for future versions
It is also worth researching transports other than gRPC. Other transports are not included in this RFC due to time limitations.
-Experimental implementation of OTLP over WebSockets exists and was researched as an alternate. WebSockets were not chosen as the primary transport for OTLP due to lack or immaturity of certain capabilities (such as [lack of universal support](https://github.com/gorilla/websocket#gorilla-websocket-compared-with-other-packages) for [RFC 7692](https://tools.ietf.org/html/rfc7692) message compression extension). Despite limitations the experimental implementation demonstrated good performance and WebSocket transport will be considered for inclusion in a future OTLP Extensions RFC.
+An experimental implementation of OTLP over WebSockets exists and was researched as an alternative. WebSockets were not chosen as the primary transport for OTLP due to lack or immaturity of certain capabilities (such as [lack of universal support](https://github.com/gorilla/websocket#gorilla-websocket-compared-with-other-packages) for the [RFC 7692](https://datatracker.ietf.org/doc/html/rfc7692) message compression extension). Despite limitations the experimental implementation demonstrated good performance and WebSocket transport will be considered for inclusion in a future OTLP Extensions RFC.
## Open Questions
diff --git a/oteps/0038-version-semantic-attribute.md b/oteps/0038-version-semantic-attribute.md
index 9c7f28797b4..0eac5c00bb4 100644
--- a/oteps/0038-version-semantic-attribute.md
+++ b/oteps/0038-version-semantic-attribute.md
@@ -29,4 +29,4 @@ to construct schema-appropriate values.
## Prior art and alternatives
-Tagging service resources with their version is generally suggested by analysis tools -- see [JAEGER_TAGS](https://www.jaegertracing.io/docs/1.8/client-features/) for an example -- but lacks standardization.
+Tagging service resources with their version is generally suggested by analysis tools -- see [JAEGER_TAGS](https://www.jaegertracing.io/docs/1.8/client-libraries/client-features/) for an example -- but lacks standardization.
diff --git a/oteps/0099-otlp-http.md b/oteps/0099-otlp-http.md
index bbeeb571e72..6a18571843f 100644
--- a/oteps/0099-otlp-http.md
+++ b/oteps/0099-otlp-http.md
@@ -91,7 +91,7 @@ specific failure cases and HTTP status codes that should be used.
Response body for all `HTTP 4xx` and `HTTP 5xx` responses MUST be a
ProtoBuf-encoded
-[Status](https://godoc.org/google.golang.org/genproto/googleapis/rpc/status#Status)
+[Status](https://pkg.go.dev/google.golang.org/genproto/googleapis/rpc/status#Status)
message that describes the problem.
This specification does not use `Status.code` field and the server MAY omit
@@ -121,7 +121,7 @@ response.
If the server receives more requests than the client is allowed or the server is
overloaded the server SHOULD respond with `HTTP 429 Too Many Requests` or
`HTTP 503 Service Unavailable` and MAY include
-["Retry-After"](https://tools.ietf.org/html/rfc7231#section-7.1.3) header with a
+a ["Retry-After"](https://datatracker.ietf.org/doc/html/rfc7231#section-7.1.3) header with a
recommended time interval in seconds to wait before retrying.
The client SHOULD honour the waiting interval specified in "Retry-After" header
diff --git a/oteps/0111-auto-resource-detection.md b/oteps/0111-auto-resource-detection.md
index c4fad09e728..8af2c14e88a 100644
--- a/oteps/0111-auto-resource-detection.md
+++ b/oteps/0111-auto-resource-detection.md
@@ -33,7 +33,7 @@ A default implementation of a detector that reads resource data from the
`OTEL_RESOURCE` environment variable will be included in the SDK. The
environment variable will contain a list of key value pairs, and these are
expected to be represented in a format similar to the [W3C
-Baggage](https://github.com/w3c/baggage/blob/master/baggage/HTTP_HEADER_FORMAT.md#header-content),
+Baggage](https://github.com/w3c/baggage/blob/main/baggage/HTTP_HEADER_FORMAT.md#header-content),
except that additional semi-colon delimited metadata is not supported, i.e.:
`key1=value1,key2=value2`. If the user does not specify any resource, this
detector will be run by default.
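
The environment-variable format described above amounts to splitting on commas and the first `=` in each pair. A minimal Go sketch, assuming this simplified format; the `parseResource` helper is hypothetical, not an SDK function:

```go
package main

import (
	"fmt"
	"strings"
)

// parseResource parses a W3C-Baggage-like list of key=value pairs
// (without semicolon-delimited metadata), e.g. "key1=value1,key2=value2".
func parseResource(env string) map[string]string {
	attrs := make(map[string]string)
	for _, pair := range strings.Split(env, ",") {
		kv := strings.SplitN(strings.TrimSpace(pair), "=", 2)
		if len(kv) == 2 && kv[0] != "" {
			attrs[kv[0]] = kv[1]
		}
	}
	return attrs
}

func main() {
	fmt.Println(parseResource("service.name=checkout,service.version=1.2.3"))
}
```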
@@ -149,7 +149,7 @@ specification](https://github.com/census-instrumentation/opencensus-specs/blob/m
### Existing OpenTelemetry implementations
- Resource detection implementation in JS SDK
- [here](https://github.com/open-telemetry/opentelemetry-js/tree/master/packages/opentelemetry-resources):
+ [here](https://github.com/open-telemetry/opentelemetry-js/tree/main/packages/opentelemetry-resources):
The JS implementation is very similar to this proposal. This proposal states
that the SDK will allow detectors to be passed into telemetry providers
directly instead of just having a global `DetectResources` function which the
diff --git a/oteps/0119-standard-system-metrics.md b/oteps/0119-standard-system-metrics.md
index 5e79f8c38be..4ca2e3f2887 100644
--- a/oteps/0119-standard-system-metrics.md
+++ b/oteps/0119-standard-system-metrics.md
@@ -21,10 +21,10 @@ There are already a few implementations of system and/or runtime metric collecti
* Collects system metrics for CPU, memory, swap, disks, filesystems, network, and load.
* There are plans to collect process metrics for CPU, memory, and disk I/O.
* Makes good use of labels rather than defining individual metrics.
- * [Overview of collected metrics](https://docs.google.com/spreadsheets/d/11qSmzD9e7PnzaJPYRFdkkKbjTLrAKmvyQpjBjpJsR2s).
+ * [Overview of collected metrics](https://docs.google.com/spreadsheets/d/11qSmzD9e7PnzaJPYRFdkkKbjTLrAKmvyQpjBjpJsR2s/edit).
- **Go**
- * Go [has instrumentation](https://github.com/open-telemetry/opentelemetry-go-contrib/tree/master/instrumentation/runtime) to collect runtime metrics for GC, heap use, and goroutines.
+ * Go [has instrumentation](https://github.com/open-telemetry/opentelemetry-go-contrib/tree/main/instrumentation/runtime) to collect runtime metrics for GC, heap use, and goroutines.
* This package does not export metrics with labels, instead exporting individual metrics.
* [Overview of collected metrics](https://docs.google.com/spreadsheets/d/1r50cC9ass0A8SZIg2ZpLdvZf6HmQJsUSXFOu-rl4yaY/edit#gid=0).
- **Python**
diff --git a/oteps/0122-otlp-http-json.md b/oteps/0122-otlp-http-json.md
index fe0b10cadd0..1a655e379cb 100644
--- a/oteps/0122-otlp-http-json.md
+++ b/oteps/0122-otlp-http-json.md
@@ -27,7 +27,7 @@ OTLP/HTTP+JSON will be consistent with the [OTLP/HTTP](0099-otlp-http.md) specif
### JSON Mapping
-Use proto3 standard defined [JSON Mapping](https://developers.google.com/protocol-buffers/docs/proto3#json) for mapping between protobuf and json. `trace_id` and `span_id` is base64 encoded in OTLP/HTTP+JSON, not hex.
+Use the proto3-standard [JSON Mapping](https://protobuf.dev/programming-guides/proto3/#json) for mapping between protobuf and JSON. `trace_id` and `span_id` are base64-encoded in OTLP/HTTP+JSON, not hex.
### Request
@@ -78,7 +78,7 @@ specific failure cases and HTTP status codes that should be used.
Response body for all `HTTP 4xx` and `HTTP 5xx` responses MUST be a
JSON-encoded
-[Status](https://godoc.org/google.golang.org/genproto/googleapis/rpc/status#Status)
+[Status](https://pkg.go.dev/google.golang.org/genproto/googleapis/rpc/status#Status)
message that describes the problem.
This specification does not use `Status.code` field and the server MAY omit
@@ -108,7 +108,7 @@ response.
If the server receives more requests than the client is allowed or the server is
overloaded the server SHOULD respond with `HTTP 429 Too Many Requests` or
`HTTP 503 Service Unavailable` and MAY include
-["Retry-After"](https://tools.ietf.org/html/rfc7231#section-7.1.3) header with a
+a ["Retry-After"](https://datatracker.ietf.org/doc/html/rfc7231#section-7.1.3) header with a
recommended time interval in seconds to wait before retrying.
The client SHOULD honour the waiting interval specified in "Retry-After" header
diff --git a/oteps/0149-exponential-histogram.md b/oteps/0149-exponential-histogram.md
index 5a76ed129ce..48a057f9f07 100644
--- a/oteps/0149-exponential-histogram.md
+++ b/oteps/0149-exponential-histogram.md
@@ -78,14 +78,14 @@ For exponential histograms, if base1 = base2 ^ N, where N is an integer, the two
base = referenceBase ^ (2 ^ baseScale)
```
-Any two histograms using bases from the series can be merged without artifact. This approach is well known and in use in multiple vendors, including [Google internal use](https://github.com/open-telemetry/opentelemetry-proto/pull/226#issuecomment-737496026), [New Relic Distribution Metric](https://docs.newrelic.com/docs/telemetry-data-platform/ingest-manage-data/understand-data/metric-data-type). It is also described in the [UDDSketch paper](https://arxiv.org/pdf/2004.08604.pdf).
+Any two histograms using bases from the series can be merged without artifact. This approach is well known and in use at multiple vendors, including [Google internal use](https://github.com/open-telemetry/opentelemetry-proto/pull/226#issuecomment-737496026) and the [New Relic Distribution Metric](https://docs.newrelic.com/docs/data-apis/understand-data/metric-data/metric-data-type/). It is also described in the [UDDSketch paper](https://arxiv.org/pdf/2004.08604).
Such "2 to 1" binary merge has the following benefits:
* Any 2 histograms in the series can be merged without artifacts. This is a very attractive property.
* A single histogram may be shrunk by 2x using a 2 to 1 merge, at the cost of increasing base to base^2. When facing the choice between "reduced histogram resolution" and "blowing up application memory", shrinking is the obvious choice.
-A histogram producer may implement "auto scale" to control memory cost. With a reasonable default config on target relative error and max number of buckets, the producer could operate in an "automagic" fashion. The producer can start with a base at target resolution, and dynamically change the scale if incoming data's range would make the histogram exceed the memory limit. [New Relic](https://docs.newrelic.com/docs/telemetry-data-platform/ingest-manage-data/understand-data/metric-data-type) and [Google](https://github.com/open-telemetry/opentelemetry-proto/pull/226#issuecomment-737496026) have implemented such logic for internal use. Open source versions from these companies are in plan.
+A histogram producer may implement "auto scale" to control memory cost. With a reasonable default config on target relative error and max number of buckets, the producer could operate in an "automagic" fashion. The producer can start with a base at target resolution, and dynamically change the scale if incoming data's range would make the histogram exceed the memory limit. [New Relic](https://docs.newrelic.com/docs/data-apis/understand-data/metric-data/metric-data-type/) and [Google](https://github.com/open-telemetry/opentelemetry-proto/pull/226#issuecomment-737496026) have implemented such logic for internal use. Open-source versions from these companies are planned.
The main disadvantage of scaled exponential histogram is not supporting arbitrary base. The base can only increase by square, or decrease by square root. Unless a user's target relative error is exactly on the series, they have to choose the next smaller base, which costs more space for the target. But in return, you get universally mergeable histograms, which seems like a reasonable trade off. As shown in discussions below, typically, the user has the choice around 1%, 2%, or 4% errors. Since error target is rarely precise science, choosing from the limited menu does not add much burden to the user.
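
The "2 to 1" merge property above can be checked numerically: squaring the base halves each bucket index, so bucket pairs (2k, 2k+1) at base b collapse into bucket k at base b². A small Go sketch; the `bucketIndex` helper is illustrative, not taken from any SDK:

```go
package main

import (
	"fmt"
	"math"
)

// bucketIndex returns the exponential-histogram bucket for value at the
// given base: the largest i with base^i <= value (value > 0, base > 1).
func bucketIndex(value, base float64) int {
	return int(math.Floor(math.Log(value) / math.Log(base)))
}

func main() {
	base := 1.02 // roughly a 2% relative-error target
	v := 123.45

	i := bucketIndex(v, base)      // index at the fine base
	j := bucketIndex(v, base*base) // index after a 2-to-1 merge
	fmt.Println(i, j, i/2 == j)
}
```

Away from bucket boundaries (where floating-point rounding can bite), `i/2 == j` holds for any positive value, which is exactly why any two histograms from the base series merge without artifact.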
diff --git a/oteps/0155-external-modules.md b/oteps/0155-external-modules.md
index 1a045d892b0..b24007f9690 100644
--- a/oteps/0155-external-modules.md
+++ b/oteps/0155-external-modules.md
@@ -13,7 +13,7 @@ while still providing our end-users with some way to discover all available Open
## Explanation
-The [OpenTelemetry Registry](https://opentelemetry.io/registry/) serves as a central catalogue of all known OpenTelemetry components,
+The [OpenTelemetry Registry](https://opentelemetry.io/ecosystem/registry/) serves as a central catalogue of all known OpenTelemetry components,
both provided by core maintainers of the project and any third party.
In order for a component to be included into Registry its authors have to fill a [self-assessment form](#registry-self-assessment-form).
diff --git a/oteps/0156-columnar-encoding.md b/oteps/0156-columnar-encoding.md
index 0186b5f9625..10c6f58b51f 100644
--- a/oteps/0156-columnar-encoding.md
+++ b/oteps/0156-columnar-encoding.md
@@ -12,7 +12,7 @@ in instances when one of the endpoints does not support the OTelArrow protocol.
**Reference implementation**: The [OTel Arrow Adapter](https://github.com/f5/otel-arrow-adapter) Go library specifies
the protobuf spec, and implements the OTel Arrow Encoder/Decoder (main contributor [Laurent Querel](https://github.com/lquerel)).
-An [experimental OTel Collector](https://github.com/open-telemetry/experimental-arrow-collector) has been implemented to
+An [experimental OTel Collector](https://github.com/open-telemetry/otel-arrow-collector) has been implemented to
expose the new gRPC endpoint and to provide OTel Arrow support via the previous library (main contributor [Joshua MacDonald](https://github.com/jmacd)).
## Table of contents
@@ -646,7 +646,7 @@ exactly as the OTLP exporter.
The mechanism as described is vulnerable to partial failure scenarios. When some of the streams are succeeding but some
have failed with Arrow unsupported, the collector performance will be degraded because callers are blocked waiting for
available streams. The exact signal used to signal that Arrow and downgrade mechanism is seen as an area for future
-development. [See the prototype's test for whether to downgrade.](https://github.com/open-telemetry/experimental-arrow-collector/blob/30e0ffb230d3d2f1ad9645ec54a90bbb7b9878c2/exporter/otlpexporter/internal/arrow/stream.go#L152)
+development. [See the prototype's test for whether to downgrade.](https://github.com/open-telemetry/otel-arrow-collector/blob/30e0ffb230d3d2f1ad9645ec54a90bbb7b9878c2/exporter/otlpexporter/internal/arrow/stream.go#L152)
### Batch ID Generation
@@ -779,7 +779,7 @@ in order to include OTel-Arrow in standard regression testing of the Collector.
### Extending into other parts of the Arrow ecosystem
A SQL support for telemetry data processing remains an open question in the current Go collector. The main OTelArrow query
-engine [Datafusion](https://github.com/apache/arrow-datafusion) is implemented in Rust. Several solutions can be
+engine [Datafusion](https://github.com/apache/datafusion) is implemented in Rust. Several solutions can be
considered: 1) create a Go wrapper on top of Datafusion, 2) implement a Rust collector dedicated to the end-to-end
support of OTel Arrow, 3) implement a SQL/Arrow engine in Go (big project). A proof of concept using Datafusion has been
implemented in Rust and has shown very good results.
diff --git a/oteps/0178-mapping-to-otlp-anyvalue.md b/oteps/0178-mapping-to-otlp-anyvalue.md
index 705b2fa6008..d6101813750 100644
--- a/oteps/0178-mapping-to-otlp-anyvalue.md
+++ b/oteps/0178-mapping-to-otlp-anyvalue.md
@@ -91,9 +91,9 @@ field.
Values that represent ordered sequences of other values (such as
[arrays](https://docs.oracle.com/javase/specs/jls/se7/html/jls-10.html),
-[vectors](https://en.cppreference.com/w/cpp/container/vector), ordered
+[vectors](https://en.cppreference.com/w/cpp/container/vector.html), ordered
[lists](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists),
-[slices](https://golang.org/ref/spec#Slice_types)) SHOULD be converted to
+[slices](https://go.dev/ref/spec#Slice_types)) SHOULD be converted to
AnyValue's
[array_value](https://github.com/open-telemetry/opentelemetry-proto/blob/38b5b9b6e5257c6500a843f7fdacf89dd95833e8/opentelemetry/proto/common/v1/common.proto#L35)
field. String Values and Byte Sequences are an exception to this rule (see
@@ -180,7 +180,7 @@ AnyValue{
Unordered collections of non-duplicate values (such as
[Java Sets](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Set.html),
-[C++ sets](https://en.cppreference.com/w/cpp/container/set),
+[C++ sets](https://en.cppreference.com/w/cpp/container/set.html),
[Python Sets](https://docs.python.org/3/tutorial/datastructures.html#sets)) SHOULD be
converted to AnyValue's
[array_value](https://github.com/open-telemetry/opentelemetry-proto/blob/38b5b9b6e5257c6500a843f7fdacf89dd95833e8/opentelemetry/proto/common/v1/common.proto#L35)
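The sequence and set rules above can be sketched with a small, self-contained model of AnyValue. Plain dicts stand in for the protobuf messages here, and the helper `to_any_value` is illustrative only, not an SDK or proto API:

```python
def to_any_value(value):
    """Illustrative mapping of in-memory values to an AnyValue-shaped dict,
    following the OTEP rules above (not an official SDK API)."""
    if isinstance(value, bool):  # bool before int: bool is an int subclass
        return {"bool_value": value}
    if isinstance(value, int):
        return {"int_value": value}
    if isinstance(value, float):
        return {"double_value": value}
    if isinstance(value, str):
        return {"string_value": value}
    if isinstance(value, (list, tuple, set, frozenset)):
        # Ordered sequences and sets alike map to array_value; for sets,
        # any iteration order is acceptable since they are unordered.
        return {"array_value": {"values": [to_any_value(v) for v in value]}}
    raise TypeError(f"no mapping for {type(value).__name__}")

# Nested sequences recurse into nested array_values.
nested = to_any_value([1, ["a", True]])
```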
diff --git a/oteps/0199-support-elastic-common-schema-in-opentelemetry.md b/oteps/0199-support-elastic-common-schema-in-opentelemetry.md
index 484d0f27944..930b78f1388 100644
--- a/oteps/0199-support-elastic-common-schema-in-opentelemetry.md
+++ b/oteps/0199-support-elastic-common-schema-in-opentelemetry.md
@@ -2,7 +2,7 @@
## Introduction
-This proposal is to merge the Elastic Common Schema (ECS) with the OpenTelemetry Semantic Conventions (SemConv) and provide full interoperability in OpenTelemetry component implementations. We propose to implement this by aligning the OpenTelemetry Semantic Conventions with [ECS FieldSets](https://www.elastic.co/guide/en/ecs/current/ecs-field-reference.html#ecs-fieldsets) and vice versa where feasible. The long-term goal is to achieve convergence of ECS and OTel Semantic Conventions into a single open schema so that OpenTelemetry Semantic Conventions truly is a successor of the Elastic Common Schema.
+This proposal is to merge the Elastic Common Schema (ECS) with the OpenTelemetry Semantic Conventions (SemConv) and provide full interoperability in OpenTelemetry component implementations. We propose to implement this by aligning the OpenTelemetry Semantic Conventions with [ECS FieldSets](https://www.elastic.co/docs/reference/ecs/ecs-field-reference#ecs-fieldsets) and vice versa where feasible. The long-term goal is to achieve convergence of ECS and OTel Semantic Conventions into a single open schema so that OpenTelemetry Semantic Conventions truly is a successor of the Elastic Common Schema.
## The Goal
@@ -19,8 +19,8 @@ ECS and OTel SemConv have some overlap today, but also significant areas of mutu
-1. `A`: ECS comes with a rich set of fields that cover broad logging, observability and security use cases. Many fields are additive to the OTel SemConv and would enrich the OTel SemConv without major conflicts. Examples are [Geo information fields](https://www.elastic.co/guide/en/ecs/current/ecs-geo.html), [Threat Fields](https://www.elastic.co/guide/en/ecs/current/ecs-threat.html), and many others.
-2. `B`: Conversely, there are attributes in the OTel SemConv that do not exist in ECS and would be an enrichment to ECS. Examples are the [Messaging semantic conventions](https://opentelemetry.io/docs/reference/specification/trace/semantic_conventions/messaging/) or technology-specific conventions, such as the [AWS SDK conventions](https://opentelemetry.io/docs/reference/specification/trace/semantic_conventions/instrumentation/aws-sdk/).
+1. `A`: ECS comes with a rich set of fields that cover broad logging, observability and security use cases. Many fields are additive to the OTel SemConv and would enrich the OTel SemConv without major conflicts. Examples are [Geo information fields](https://www.elastic.co/docs/reference/ecs/ecs-geo), [Threat Fields](https://www.elastic.co/docs/reference/ecs/ecs-threat), and many others.
+2. `B`: Conversely, there are attributes in the OTel SemConv that do not exist in ECS and would be an enrichment to ECS. Examples are the [Messaging semantic conventions](https://opentelemetry.io/docs/specs/semconv/registry/attributes/messaging/) or technology-specific conventions, such as the [AWS SDK conventions](https://opentelemetry.io/docs/specs/semconv/registry/attributes/aws/).
3. `C`: There is some significant area of overlap between ECS and OTel SemConv. The area `C` represents overlapping fields/attributes that are very similar in ECS and OTel SemConv. The field conflicts in `C` can be resolved through simple field renames and simple transformations.
4. `D`: For some of the fields and attributes there will be conflicts that cannot be resolved through simple renaming or transformation and would require introducing breaking changes on ECS or OTel SemConv side for the purpose of merging the schemas.
@@ -64,9 +64,9 @@ OpenTelemetry has the potential to grow exponentially if the data from these oth
### What is ECS?
-The [Elastic Common Schema (ECS)](https://github.com/elastic/ecs) is an open source specification, developed with support from Elastic's user community. ECS defines a common set of fields to be used when storing data in Elasticsearch, such as logs, metrics, and security and audit events. The goal of ECS is to enable and encourage users of Elasticsearch to normalize their event data, so that they can better analyze, visualize, and correlate the data represented in their events. Learn more at: [https://www.elastic.co/guide/en/ecs/current/ecs-reference.html](https://www.elastic.co/guide/en/ecs/current/ecs-reference.html)
+The [Elastic Common Schema (ECS)](https://github.com/elastic/ecs) is an open source specification, developed with support from Elastic's user community. ECS defines a common set of fields to be used when storing data in Elasticsearch, such as logs, metrics, and security and audit events. The goal of ECS is to enable and encourage users of Elasticsearch to normalize their event data, so that they can better analyze, visualize, and correlate the data represented in their events. Learn more at: [https://www.elastic.co/docs/reference/ecs](https://www.elastic.co/docs/reference/ecs)
-The coverage of ECS is very broad including in depth support for logs, security, and network events such as "[logs.* fields](https://www.elastic.co/guide/en/ecs/current/ecs-log.html)" , "[geo.* fields](https://www.elastic.co/guide/en/ecs/current/ecs-geo.html)", "[tls.* fields](https://www.elastic.co/guide/en/ecs/current/ecs-tls.html)", "[dns.* fields](https://www.elastic.co/guide/en/ecs/current/ecs-dns.html)", or "[vulnerability.* fields](https://www.elastic.co/guide/en/ecs/current/ecs-vulnerability.html)".
+The coverage of ECS is very broad, including in-depth support for logs, security, and network events such as "[logs.* fields](https://www.elastic.co/docs/reference/ecs/ecs-log)", "[geo.* fields](https://www.elastic.co/docs/reference/ecs/ecs-geo)", "[tls.* fields](https://www.elastic.co/docs/reference/ecs/ecs-tls)", "[dns.* fields](https://www.elastic.co/docs/reference/ecs/ecs-dns)", or "[vulnerability.* fields](https://www.elastic.co/docs/reference/ecs/ecs-vulnerability)".
ECS has the following guiding principles:
@@ -147,25 +147,25 @@ Example of a Nginx Access Log entry structured with ECS
## Principles
-| Description | [OTel Logs and Event Record](../specification/logs/data-model.md#log-and-event-record-definition) | [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current/ecs-reference.html) |
+| Description | [OTel Logs and Event Record](../specification/logs/data-model.md#log-and-event-record-definition) | [Elastic Common Schema (ECS)](https://www.elastic.co/docs/reference/ecs) |
|-------------|-------------|--------|
| Metadata shared by all the Log Messages / Spans / Metrics of an application instance | Resource Attributes | ECS fields |
| Metadata specific to each Log Message / Span / Metric data point | Attributes | ECS Fields |
-| Message of log events | Body | [message field](https://www.elastic.co/guide/en/ecs/current/ecs-base.html#field-message) |
+| Message of log events | Body | [message field](https://www.elastic.co/docs/reference/ecs/ecs-base#field-message) |
| Naming convention | Dotted names | Dotted names |
| Reusability of namespaces | Namespaces are intended to be composed | Namespaces are intended to be composed |
| Extensibility | Attributes can be extended by either adding a user-defined field to an existing namespace or introducing new namespaces. | Extra attributes can be added in each namespace and users can create their own namespaces |
## Data Types
-| Category | OTel Logs and Event Record (all or a subset of GRPC data types) | ECS Data Types |
+| Category | OTel Logs and Event Record (all or a subset of GRPC data types) | ECS Data Types |
|---|---|---|
-| Text | string | text, match_only_text, keyword constant_keyword, wildcard |
-| Dates | uint64 nanoseconds since Unix epoch | date, date_nanos |
-| Numbers | number | long, double, scaled_float, boolean… |
-| Objects | uint32, uint64… | object (JSON object), flattened (An entire JSON object as a single field value) |
-| Structured Objects | No complex semantic data type specified for the moment (e.g. string is being used for ip addresses rather than having an "ip" data structure in OTel). Note that OTel supports arrays and nested objects. | ip, geo_point, geo_shape, version, long_range, date_range, ip_range |
-| Binary data | Byte sequence | binary |
+| Text | string | text, match_only_text, keyword, constant_keyword, wildcard |
+| Dates | uint64 nanoseconds since Unix epoch | date, date_nanos |
+| Numbers | number | long, double, scaled_float, boolean… |
+| Objects | uint32, uint64… | object (JSON object), flattened (An entire JSON object as a single field value) |
+| Structured Objects | No complex semantic data type specified for the moment (e.g. string is being used for ip addresses rather than having an "ip" data structure in OTel). Note that OTel supports arrays and nested objects. | ip, geo_point, geo_shape, version, long_range, date_range, ip_range |
+| Binary data | Byte sequence | binary |
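To make the Dates row concrete, a uint64 nanoseconds-since-epoch OTel timestamp can be rendered as an ISO 8601 string suitable for an ECS `date` field. This is a minimal sketch; the helper name is ours, and an ECS `date_nanos` field would keep the full nanosecond precision instead of truncating to milliseconds:

```python
from datetime import datetime, timezone

def otel_nanos_to_ecs_date(nanos: int) -> str:
    # Split into whole seconds and leftover nanoseconds.
    seconds, rem = divmod(nanos, 1_000_000_000)
    dt = datetime.fromtimestamp(seconds, tz=timezone.utc)
    # ECS `date` carries millisecond precision; truncate the remainder.
    return dt.strftime("%Y-%m-%dT%H:%M:%S") + f".{rem // 1_000_000:03d}Z"

print(otel_nanos_to_ecs_date(0))  # 1970-01-01T00:00:00.000Z
```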
## Known Differences
@@ -179,7 +179,7 @@ As the markdown code of the tables is hard to read and maintain with very long l
| OTel Logs and Event Record
|
- Elastic Common Schema (ECS)
+ | Elastic Common Schema (ECS)
|
Description
|
@@ -187,7 +187,7 @@ As the markdown code of the tables is hard to read and maintain with very long l
| Timestamp (uint64 nanoseconds since Unix epoch)
|
- @timestamp (date)
+ | @timestamp (date)
|
|
@@ -195,7 +195,7 @@ As the markdown code of the tables is hard to read and maintain with very long l
| TraceId (byte sequence), SpanId (byte sequence)
|
- trace.id (keyword), span.id (keyword)
+ | trace.id (keyword), span.id (keyword)
|
|
@@ -203,7 +203,7 @@ As the markdown code of the tables is hard to read and maintain with very long l
| N/A
|
- Transaction.id (keyword)
+ | Transaction.id (keyword)
|
|
@@ -211,7 +211,7 @@ As the markdown code of the tables is hard to read and maintain with very long l
| SeverityText (string)
|
- log.syslog.severity.name (keyword), log.level (keyword)
+ | log.syslog.severity.name (keyword), log.level (keyword)
|
|
@@ -219,7 +219,7 @@ As the markdown code of the tables is hard to read and maintain with very long l
| SeverityNumber (number)
|
- log.syslog.severity.code
+ | log.syslog.severity.code
|
|
@@ -227,7 +227,7 @@ As the markdown code of the tables is hard to read and maintain with very long l
| Body (any)
|
- message (match_only_text)
+ | message (match_only_text)
|
|
@@ -239,7 +239,7 @@ As the markdown code of the tables is hard to read and maintain with very long l
system.cpu.utilization
- host.cpu.usage (scaled_float) with a slightly different measurement than what OTel metrics measure
+ | host.cpu.usage (scaled_float) with a slightly different measurement than what OTel metrics measure
|
Note that most metrics have slightly different names and semantics between ECS and OpenTelemetry
|
@@ -260,8 +260,8 @@ While ECS covers many different use cases and scenarios, in the following, we ou
The author of the "OTel Collector Access logs file receiver for web server XXX" would find in the OTel Semantic Convention specifications all
the guidance to map the fields of the web server logs, not only the attributes that the OTel Semantic Conventions has specified today for
[HTTP calls](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.9.0/specification/trace/semantic_conventions/http.md),
-but also attributes for the [User Agent](https://www.elastic.co/guide/en/ecs/current/ecs-user_agent.html)
-or the [Geo Data](https://www.elastic.co/guide/en/ecs/current/ecs-geo.html).
+but also attributes for the [User Agent](https://www.elastic.co/docs/reference/ecs/ecs-user_agent)
+or the [Geo Data](https://www.elastic.co/docs/reference/ecs/ecs-geo).
This completeness of the mapping will help the author of the integration to produce OTel Log messages that will be compatible with access logs
of other web components (web servers, load balancers, L7 firewalls...) allowing turnkey integration with observability solutions
@@ -270,7 +270,7 @@ and enabling richer correlations.
### Other Examples
- [Logs with sessions (VPN Logs, Network Access Sessions, RUM sessions, etc.)](https://github.com/elastic/ecs/blob/main/rfcs/text/0004-session.md#usage)
-- [Logs from systems processing files](https://www.elastic.co/guide/en/ecs/current/ecs-file.html)
+- [Logs from systems processing files](https://www.elastic.co/docs/reference/ecs/ecs-file)
## Alternatives / Discussion
diff --git a/oteps/0225-configuration.md b/oteps/0225-configuration.md
index c48d49c264d..dfb4042b640 100644
--- a/oteps/0225-configuration.md
+++ b/oteps/0225-configuration.md
@@ -299,7 +299,7 @@ In choosing to recommend JSON schema, the working group looked at the following
* [Cue](https://cuelang.org/) - A promising simpler language to define a schema, the working group decided against CUE because:
* Tooling available for validating CUE files in languages outside of Go was limited.
* Familiarity and learning curve would create problems for both users and contributors of OpenTelemetry.
-* [Protobuf](https://developers.google.com/protocol-buffers) - With protobuf already used heavily in OpenTelemetry, the format was worth investigating as an option to define the schema. The working group decided against Protobuf because:
+* [Protobuf](https://protobuf.dev) - With protobuf already used heavily in OpenTelemetry, the format was worth investigating as an option to define the schema. The working group decided against Protobuf because:
* Validation errors are the result of serialization errors, which can be difficult to interpret.
* Limitations in the schema definition language result in poor ergonomics if type safety is to be retained.
diff --git a/oteps/0232-maturity-of-otel.md b/oteps/0232-maturity-of-otel.md
index 3ce9726f756..660406fccfc 100644
--- a/oteps/0232-maturity-of-otel.md
+++ b/oteps/0232-maturity-of-otel.md
@@ -2,7 +2,7 @@
On 08 Mar 2023, the OpenTelemetry GC and TC held an OpenTelemetry Leadership summit, discussing various topics. One of the themes we discussed was establishing standard rules for describing the maturity of the OpenTelemetry project. This OTEP summarizes what was discussed there and is intended to have the wider community provide feedback.
-This OTEP builds on what was previously communicated by the project, especially the [Versioning and stability for OpenTelemetry clients](https://opentelemetry.io/docs/reference/specification/versioning-and-stability).
+This OTEP builds on what was previously communicated by the project, especially the [Versioning and stability for OpenTelemetry clients](https://opentelemetry.io/docs/specs/otel/versioning-and-stability/).
The Collector's [stability levels](https://github.com/open-telemetry/opentelemetry-collector#stability-levels) inspired the maturity levels.
@@ -66,7 +66,7 @@ This OTEP allows SIG maintainers to declare the maturity of the SIG's deliverabl
## Open questions
-* Should SDKs be required to fully implement the specification before they can be marked as stable? See [open-telemetry/opentelemetry-specification#3673](https://github.com/open-telemetry/opentelemetry-specification/issues/3673)
+* Should SDKs be required to fully implement the specification before they can be marked as stable? See [open-telemetry/community#2097](https://github.com/open-telemetry/community/issues/2097)
* Should this OTEP define a file name to be adopted by all repositories to declare their deliverables and their maturity levels?
## Future possibilities
diff --git a/oteps/0243-app-telemetry-schema-vision-roadmap.md b/oteps/0243-app-telemetry-schema-vision-roadmap.md
index 355a9ea66d2..e3dbfe04ba0 100644
--- a/oteps/0243-app-telemetry-schema-vision-roadmap.md
+++ b/oteps/0243-app-telemetry-schema-vision-roadmap.md
@@ -97,7 +97,7 @@ Examples of use cases include:
* Triggering schema-driven transformations or processing in stream processors.
* And more.
-This recent [paper](https://arxiv.org/pdf/2311.07509.pdf#:~:text=The%20results%20of%20the%20benchmark%20provide%20evidence%20that%20supports%20our,LLM%20without%20a%20Knowledge%20Graph)
+This recent [paper](https://arxiv.org/pdf/2311.07509#:~:text=The%20results%20of%20the%20benchmark%20provide%20evidence%20that%20supports%20our,LLM%20without%20a%20Knowledge%20Graph)
from [data.world](https://data.world/home/), along with
the [MetricFlow framework](https://docs.getdbt.com/docs/build/about-metricflow)
which underpins the [dbt Semantic Layer](https://www.getdbt.com/product/semantic-layer),
@@ -382,5 +382,5 @@ of the Resolved Telemetry Schema.
## Links
- [Positional Paper: Schema-First Application Telemetry](https://research.facebook.com/publications/positional-paper-schema-first-application-telemetry/)
-- [A benchmark to understand the role of knowledge graphs on Large Language Model's accuracy for question answering on enterprise sql databases](https://arxiv.org/pdf/2311.07509.pdf#:~:text=The%20results%20of%20the%20benchmark%20provide%20evidence%20that%20supports%20our,LLM%20without%20a%20Knowledge%20Graph)
+- [A benchmark to understand the role of knowledge graphs on Large Language Model's accuracy for question answering on enterprise sql databases](https://arxiv.org/pdf/2311.07509#:~:text=The%20results%20of%20the%20benchmark%20provide%20evidence%20that%20supports%20our,LLM%20without%20a%20Knowledge%20Graph)
- [MetricFlow framework](https://docs.getdbt.com/docs/build/about-metricflow)
diff --git a/oteps/README.md b/oteps/README.md
index be1bd33da22..fe0578b8794 100644
--- a/oteps/README.md
+++ b/oteps/README.md
@@ -55,7 +55,7 @@ For example, an OTEP proposing configurable sampling *and* various samplers shou
### Writing an OTEP
-- First, [fork](https://help.github.com/en/articles/fork-a-repo) this repo.
+- First, [fork](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo) this repo.
- Copy [`0000-template.md`](./0000-template.md) to `0000-my-OTEP.md`, where `my-OTEP` is a title relevant to your proposal, and `0000` is the OTEP ID.
Leave the number as is for now. Once a Pull Request is made, update this ID to match the PR ID.
- Fill in the template. Put care into the details: It is important to present convincing motivation, demonstrate an understanding of the design's impact, and honestly assess the drawbacks and potential alternatives.
@@ -87,7 +87,7 @@ Have suggestions? Concerns? Questions? **Please** raise an issue or raise the ma
## Background on the OpenTelemetry OTEP process
-Our OTEP process borrows from the [Rust RFC](https://github.com/rust-lang/rfcs) and [Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements) processes, the former also being [very influential](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/0000-kep-process#prior-art) on the latter; as well as the [OpenTracing OTEP](https://github.com/opentracing/specification/tree/master/rfc_process.md) process. Massive kudos and thanks to the respective authors and communities for providing excellent prior art!
+Our OTEP process borrows from the [Rust RFC](https://github.com/rust-lang/rfcs) and [Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements) processes, the former also being [very influential](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/0000-kep-process#prior-art) on the latter; as well as the [OpenTracing OTEP](https://github.com/opentracing/specification/blob/master/rfc_process.md) process. Massive kudos and thanks to the respective authors and communities for providing excellent prior art!
[slack-image]: https://img.shields.io/badge/Slack-4A154B?style=for-the-badge&logo=slack&logoColor=white
[slack-url]: https://cloud-native.slack.com/archives/C01N7PP1THC
diff --git a/oteps/logs/0091-logs-vocabulary.md b/oteps/logs/0091-logs-vocabulary.md
index b9934a3c363..4cb9aeb6719 100644
--- a/oteps/logs/0091-logs-vocabulary.md
+++ b/oteps/logs/0091-logs-vocabulary.md
@@ -46,7 +46,7 @@ Key/value pairs contained in a `Log Record`.
Logs that are recorded in a format which has a well-defined structure that allows
one to differentiate between different elements of a Log Record (e.g. the Timestamp,
-the Attributes, etc). The _Syslog protocol_ ([RFC 5425](https://tools.ietf.org/html/rfc5424)),
+the Attributes, etc.). The _Syslog protocol_ ([RFC 5424](https://datatracker.ietf.org/doc/html/rfc5424)),
for example, defines a `structured-data` format.
### Flat File Logs
diff --git a/oteps/logs/0097-log-data-model.md b/oteps/logs/0097-log-data-model.md
index 5457e92bee7..a5936524ac0 100644
--- a/oteps/logs/0097-log-data-model.md
+++ b/oteps/logs/0097-log-data-model.md
@@ -61,9 +61,9 @@ record is, what data needs to be recorded, transferred, stored and interpreted
by a logging system.
This proposal defines a data model for [Standalone
-Logs](https://github.com/open-telemetry/oteps/blob/master/text/logs/0091-logs-vocabulary.md#standalone-log).
+Logs](https://github.com/open-telemetry/oteps/blob/main/text/logs/0091-logs-vocabulary.md#standalone-log).
Relevant parts of it may be adopted for
-[Embedded Logs](https://github.com/open-telemetry/oteps/blob/master/text/logs/0091-logs-vocabulary.md#embedded-log)
+[Embedded Logs](https://github.com/open-telemetry/oteps/blob/main/text/logs/0091-logs-vocabulary.md#embedded-log)
in a future OTEP.
## Design Notes
@@ -544,7 +544,7 @@ better than this one.
### RFC5424 Syslog
-[RFC5424](https://tools.ietf.org/html/rfc5424) defines structured log data
+[RFC5424](https://datatracker.ietf.org/doc/html/rfc5424) defines structured log data
format and protocol. The protocol is ubiquitous (although unfortunately many
implementations don't follow structured data recommendations). Here are some
drawbacks that do not make Syslog a serious contender for a data model:
@@ -1327,7 +1327,7 @@ It may contain what hostname returns on Unix systems, the fully qualified, or a
[OpenTelemetry resource semantic convention](https://github.com/open-telemetry/semantic-conventions/tree/main/docs/resource).
This is a selection of the most relevant fields. See
-[for the full reference](https://www.elastic.co/guide/en/ecs/current/ecs-field-reference.html)
+[the full reference](https://www.elastic.co/docs/reference/ecs/ecs-field-reference)
for an exhaustive list.
## Appendix B: `SeverityNumber` example mappings
diff --git a/oteps/metrics/0049-metric-label-set.md b/oteps/metrics/0049-metric-label-set.md
index b2c7096aa69..185d718dc34 100644
--- a/oteps/metrics/0049-metric-label-set.md
+++ b/oteps/metrics/0049-metric-label-set.md
@@ -98,7 +98,7 @@ A key distinction between `LabelSet` and similar concepts in existing metrics li
## Prior art and alternatives
-Some existing metrics APIs support this concept. For example, see `Scope` in the [Tally metric API for Go](https://godoc.org/github.com/uber-go/tally#Scope).
+Some existing metrics APIs support this concept. For example, see `Scope` in the [Tally metric API for Go](https://pkg.go.dev/github.com/uber-go/tally#Scope).
Some libraries take `LabelSet` one step further. In the future, we may add to the `LabelSet` API a method to extend the label set with additional labels. For example:
diff --git a/oteps/metrics/0072-metric-observer.md b/oteps/metrics/0072-metric-observer.md
index eeecb8537ab..9b0d42aff70 100644
--- a/oteps/metrics/0072-metric-observer.md
+++ b/oteps/metrics/0072-metric-observer.md
@@ -44,7 +44,7 @@ purpose. If the simpler alternative suggested above--registering
non-instrument-specific callbacks--were implemented instead, callers
would demand a way to ask whether an instrument was "recording" or
not, similar to the [`Span.IsRecording`
-API](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/api.md#isrecording).
+API](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#isrecording).
Observer instruments are semantically equivalent to gauge instruments,
except they support callbacks instead of a `Set()` operation.
diff --git a/oteps/metrics/0088-metric-instrument-optional-refinements.md b/oteps/metrics/0088-metric-instrument-optional-refinements.md
index bc82a594609..cc623cc7f2b 100644
--- a/oteps/metrics/0088-metric-instrument-optional-refinements.md
+++ b/oteps/metrics/0088-metric-instrument-optional-refinements.md
@@ -331,7 +331,7 @@ No API changes are called for in this proposal.
#### Prometheus
The Prometheus system defines four kinds of [synchronous metric
-instrument](https://prometheus.io/docs/concepts/metric_types).
+instrument](https://prometheus.io/docs/concepts/metric_types/).
| System | Metric Kind | Operation | Aggregation | Notes |
| ---------- | ------------ | ------------------- | -------------------- | ------------------- |
diff --git a/oteps/metrics/0131-otlp-export-behavior.md b/oteps/metrics/0131-otlp-export-behavior.md
index 29aeedeb5b8..4788ccfd153 100644
--- a/oteps/metrics/0131-otlp-export-behavior.md
+++ b/oteps/metrics/0131-otlp-export-behavior.md
@@ -36,7 +36,7 @@ A discussed solution is to convert deltas to cumulative in the Collector both as
As stated in the previous section, delta to cumulative conversion in the Collector is needed to support Prometheus type backends. This may be necessary in the Collector in the future because the Collector may also accept metrics from other sources that report delta values. On the other hand, if sources are reporting cumulative values, cumulative to delta conversion is needed to support Statsd type backends.
-The future implementation for conversions in the Collector is still under discussion. There is a proposal is to add a [Metric Aggregation Processor](https://github.com/open-telemetry/opentelemetry-collector/issues/1422) in the Collector which recommends a solution for delta to cumulative conversion.
+The future implementation for conversions in the Collector is still under discussion. There is a proposal to add a [Metric Aggregation Processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/4968) in the Collector which recommends a solution for delta to cumulative conversion.
## Future possibilities
diff --git a/oteps/profiles/0212-profiling-vision.md b/oteps/profiles/0212-profiling-vision.md
index 783340856bc..3cc35abb8c0 100644
--- a/oteps/profiles/0212-profiling-vision.md
+++ b/oteps/profiles/0212-profiling-vision.md
@@ -36,7 +36,7 @@ terms as follows:
## How profiling aligns with the OpenTelemetry vision
The [OpenTelemetry
-vision](https://opentelemetry.io/mission/#vision-mdash-the-world-we-imagine-for-otel-end-users)
+vision](https://opentelemetry.io/community/mission/#vision-mdash-the-world-we-imagine-for-otel-end-users)
states:
_Effective observability is powerful because it enables developers to innovate
diff --git a/oteps/trace/0006-sampling.md b/oteps/trace/0006-sampling.md
index 92ec4c9573d..b1bb9ca020a 100644
--- a/oteps/trace/0006-sampling.md
+++ b/oteps/trace/0006-sampling.md
@@ -341,6 +341,6 @@ recommend the trace be sampled or not sampled until mid-way through execution;
* [opentelemetry-specification/32](https://github.com/open-telemetry/opentelemetry-specification/issues/32)
* [opentelemetry-specification/31](https://github.com/open-telemetry/opentelemetry-specification/issues/31)
-[trace-flags]: https://github.com/w3c/trace-context/blob/master/spec/20-http_request_header_format.md#trace-flags
+[trace-flags]: https://github.com/w3c/trace-context/blob/main/spec/20-http_request_header_format.md#trace-flags
[specs-pr-216]: https://github.com/open-telemetry/opentelemetry-specification/pull/216
-[span-creation]: https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/api.md#span-creation
+[span-creation]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#span-creation
diff --git a/oteps/trace/0136-error_flagging.md b/oteps/trace/0136-error_flagging.md
index d55d928bf7b..8929820e883 100644
--- a/oteps/trace/0136-error_flagging.md
+++ b/oteps/trace/0136-error_flagging.md
@@ -2,7 +2,7 @@
This proposal reduces the number of status codes to three, adds a new field to identify status codes set by application developers and operators, and adds a mapping of semantic conventions to status codes. This clarifies how error reporting should work in OpenTelemetry.
-Note: The term **end user** in this document is defined as the application developers and operators of the system running OpenTelemetry. The term **instrumentation** refers to [instrumentation libraries](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/glossary.md#instrumentation-library) for common code shared between different systems, such as web frameworks and database clients.
+Note: The term **end user** in this document is defined as the application developers and operators of the system running OpenTelemetry. The term **instrumentation** refers to [instrumentation libraries](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/glossary.md#instrumentation-library) for common code shared between different systems, such as web frameworks and database clients.
## Motivation
diff --git a/oteps/trace/0168-sampling-propagation.md b/oteps/trace/0168-sampling-propagation.md
index 00d470c5967..5711280ca8e 100644
--- a/oteps/trace/0168-sampling-propagation.md
+++ b/oteps/trace/0168-sampling-propagation.md
@@ -39,7 +39,7 @@ This proposal uses 6 bits of information to propagate each of these
and does not depend on built-in TraceID randomness, which is not
sufficiently specified for probability sampling at this time. This
proposal closely follows [research by Otmar
-Ertl](https://arxiv.org/pdf/2107.07703.pdf).
+Ertl](https://arxiv.org/pdf/2107.07703).
### Adjusted count
@@ -248,7 +248,7 @@ The reasoning behind restricting the set of sampling rates is that it:
- Makes math involving partial traces tractable.
[An algorithm for making statistical inference from partially-sampled
-traces has been published](https://arxiv.org/pdf/2107.07703.pdf) that
+traces has been published](https://arxiv.org/pdf/2107.07703) that
explains how to work with a limited number of power-of-2 sampling rates.
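The power-of-2 restriction can be sketched as follows (the helper name is ours, not from the specification): each configured probability is rounded to the nearest 2**-k, and the adjusted count of a sampled span is then the reciprocal 2**k:

```python
import math

def nearest_power_of_two_probability(probability: float) -> tuple[int, float]:
    """Round a sampling probability to the nearest power of two.
    Returns (k, 2**-k); the adjusted count of a sampled span is 2**k.
    Illustrative sketch, not the normative sampler algorithm."""
    if not 0.0 < probability <= 1.0:
        raise ValueError("probability must be in (0, 1]")
    k = round(-math.log2(probability))  # nearest exponent
    return k, 2.0 ** -k

# A requested rate of 0.3 snaps to 2**-2 = 0.25, so each sampled
# span represents 4 spans in expectation.
k, p = nearest_power_of_two_probability(0.3)
```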
### Behavior of the `TraceIDRatioBased` Sampler
diff --git a/oteps/trace/0170-sampling-probability.md b/oteps/trace/0170-sampling-probability.md
index 8f325323291..38e1e05a2cf 100644
--- a/oteps/trace/0170-sampling-probability.md
+++ b/oteps/trace/0170-sampling-probability.md
@@ -489,7 +489,7 @@ coordinated decision ensures that some traces will be complete.
Traces are complete when the TraceID ratio falls below the minimum
Sampler probability across the whole trace. Techniques have been
developed for [analysis of partial traces that are compatible with
-TraceID ratio sampling](https://arxiv.org/pdf/2107.07703.pdf).
+TraceID ratio sampling](https://arxiv.org/pdf/2107.07703).
The `TraceIDRatio` Sampler has another difficulty with testing for
completeness. It is impossible to know whether there are missing leaf
@@ -507,7 +507,7 @@ named `sampler.adjusted_count`.
##### Dapper's "Inflationary" Sampler
-Google's [Dapper](https://research.google/pubs/pub36356/) tracing
+Google's [Dapper](https://research.google/pubs/dapper-a-large-scale-distributed-systems-tracing-infrastructure/) tracing
system describes the use of sampling to control the cost of trace
collection at scale. Dapper's early Sampler algorithm, referred to as
an "inflationary" approach (although not published in the paper), is
@@ -872,7 +872,7 @@ K. Thompson](https://www.wiley.com/en-us/Sampling%2C+3rd+Edition-p-9780470402313
[Stream sampling for variance-optimal estimation of subset sums](https://arxiv.org/abs/0803.0473).
-[Estimation from Partially Sampled Distributed Traces](https://arxiv.org/pdf/2107.07703.pdf), 2021 Dynatrace Research report, Otmar Ertl
+[Estimation from Partially Sampled Distributed Traces](https://arxiv.org/pdf/2107.07703), 2021 Dynatrace Research report, Otmar Ertl
## Acknowledgements
diff --git a/oteps/trace/0174-http-semantic-conventions.md b/oteps/trace/0174-http-semantic-conventions.md
index 9d18833562f..e1b913421a2 100644
--- a/oteps/trace/0174-http-semantic-conventions.md
+++ b/oteps/trace/0174-http-semantic-conventions.md
@@ -149,7 +149,7 @@ There is a lot of user feedback that they want it, but
 * Reading/writing body may happen outside of HTTP client API (e.g. through
   network streams) – how users can track it too?
-Related issue: [open-telemetry/opentelemetry-specification#1284](https://github.com/open-telemetry/opentelemetry-specification/issues/1284).
+Related issue: [open-telemetry/semantic-conventions#1219](https://github.com/open-telemetry/semantic-conventions/issues/1219).
## Out of scope
diff --git a/oteps/trace/0220-messaging-semantic-conventions-span-structure.md b/oteps/trace/0220-messaging-semantic-conventions-span-structure.md
index e9fa9c736a1..8c159117243 100644
--- a/oteps/trace/0220-messaging-semantic-conventions-span-structure.md
+++ b/oteps/trace/0220-messaging-semantic-conventions-span-structure.md
@@ -409,7 +409,7 @@ OTEP.
However, interest was expressed from many sides to also achieve some
consistency for the instrumentation of "Process" operations. Therefore,
-[#3395](https://github.com/open-telemetry/opentelemetry-specification/issues/3395)
+[open-telemetry/semantic-conventions#657](https://github.com/open-telemetry/semantic-conventions/issues/657)
covers the effort to define conventions for "Process" operations, which will
build on the foundation that this OTEP lays.
diff --git a/oteps/trace/0250-Composite_Samplers.md b/oteps/trace/0250-Composite_Samplers.md
index bc1a519d4af..39b18643008 100644
--- a/oteps/trace/0250-Composite_Samplers.md
+++ b/oteps/trace/0250-Composite_Samplers.md
@@ -37,7 +37,7 @@ Also see Draft PR 3910 [Probability Samplers based on W3C Trace Context Level 2]
## Motivation
-The need for configuring head sampling has been explicitly or implicitly indicated in several discussions, both within the [Sampling SIG](https://docs.google.com/document/d/1gASMhmxNt9qCa8czEMheGlUW2xpORiYoD7dBD7aNtbQ) and in the wider community.
+The need for configuring head sampling has been explicitly or implicitly indicated in several discussions, both within the [Sampling SIG](https://docs.google.com/document/d/1gASMhmxNt9qCa8czEMheGlUW2xpORiYoD7dBD7aNtbQ/edit) and in the wider community.
Some of the discussions are going back a number of years, see for example
- issue [173](https://github.com/open-telemetry/opentelemetry-specification/issues/173): Way to ignore healthcheck traces when using automatic tracer across all languages?
@@ -316,6 +316,6 @@ A number of composite samplers are already available as independent contribution
[Stratified Sampling](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/stratified-sampling-example),
LinksBasedSampler [for Java](https://github.com/open-telemetry/opentelemetry-java-contrib/blob/main/samplers/src/main/java/io/opentelemetry/contrib/sampler/LinksBasedSampler.java)
and [for DOTNET](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/links-based-sampler)).
-Also, historically, some Span categorization was introduced by [JaegerRemoteSampler](https://www.jaegertracing.io/docs/1.54/sampling/#remote-sampling).
+Also, historically, some Span categorization was introduced by [JaegerRemoteSampler](https://www.jaegertracing.io/docs/1.54/architecture/sampling/#remote-sampling).
This proposal aims at generalizing these ideas, and at providing a bit more formal specification for the behavior of the composite samplers.
diff --git a/specification/README.md b/specification/README.md
index 5d587fdf99f..d37f22e457d 100644
--- a/specification/README.md
+++ b/specification/README.md
@@ -57,9 +57,9 @@ path_base_for_github_subdir:
The keywords "MUST", "MUST NOT", "REQUIRED", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in the
[specification][] are to be interpreted as described in [BCP
-14](https://tools.ietf.org/html/bcp14)
-[[RFC2119](https://tools.ietf.org/html/rfc2119)]
-[[RFC8174](https://tools.ietf.org/html/rfc8174)] when, and only when, they
+14](https://www.rfc-editor.org/info/bcp14)
+[[RFC2119](https://datatracker.ietf.org/doc/html/rfc2119)]
+[[RFC8174](https://datatracker.ietf.org/doc/html/rfc8174)] when, and only when, they
appear in all capitals, as shown here.
An implementation of the [specification][] is not compliant if it fails to
diff --git a/specification/baggage/api.md b/specification/baggage/api.md
index 67d4aa74afb..3246319c7bd 100644
--- a/specification/baggage/api.md
+++ b/specification/baggage/api.md
@@ -41,7 +41,7 @@ specific `Propagator`s that are used to transmit baggage entries across
component boundaries may impose their own restrictions on baggage names.
For example, the [W3C Baggage specification](https://www.w3.org/TR/baggage/#key)
restricts the baggage keys to strings that satisfy the `token` definition
-from [RFC7230, Section 3.2.6](https://tools.ietf.org/html/rfc7230#section-3.2.6).
+from [RFC7230, Section 3.2.6](https://datatracker.ietf.org/doc/html/rfc7230#section-3.2.6).
For maximum compatibility, alpha-numeric names are strongly recommended
to be used as baggage names.
@@ -202,4 +202,4 @@ If a new name/value pair is added and its name is the same as an existing name,
than the new pair MUST take precedence. The value is replaced with the added
value (regardless if it is locally generated or received from a remote peer).
-[w3c]: https://www.w3.org/TR/baggage
+[w3c]: https://www.w3.org/TR/baggage/
diff --git a/specification/common/attribute-type-mapping.md b/specification/common/attribute-type-mapping.md
index 851d386158c..e936eb1695c 100644
--- a/specification/common/attribute-type-mapping.md
+++ b/specification/common/attribute-type-mapping.md
@@ -125,9 +125,9 @@ field.
Values that represent ordered sequences of other values (such as
[arrays](https://docs.oracle.com/javase/specs/jls/se7/html/jls-10.html),
-[vectors](https://en.cppreference.com/w/cpp/container/vector), ordered
+[vectors](https://en.cppreference.com/w/cpp/container/vector.html), ordered
[lists](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists),
-[slices](https://golang.org/ref/spec#Slice_types)) SHOULD be converted to
+[slices](https://go.dev/ref/spec#Slice_types)) SHOULD be converted to
AnyValue's
[array_value](https://github.com/open-telemetry/opentelemetry-proto/blob/38b5b9b6e5257c6500a843f7fdacf89dd95833e8/opentelemetry/proto/common/v1/common.proto#L35)
field. String values and byte sequences are an exception from this rule (see
@@ -223,7 +223,7 @@ of the associative array.
Unordered collections of unique values (such as
[Java Sets](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Set.html),
-[C++ sets](https://en.cppreference.com/w/cpp/container/set),
+[C++ sets](https://en.cppreference.com/w/cpp/container/set.html),
[Python Sets](https://docs.python.org/3/tutorial/datastructures.html#sets)) SHOULD be
converted to AnyValue's
[array_value](https://github.com/open-telemetry/opentelemetry-proto/blob/38b5b9b6e5257c6500a843f7fdacf89dd95833e8/opentelemetry/proto/common/v1/common.proto#L35)
diff --git a/specification/configuration/data-model.md b/specification/configuration/data-model.md
index 84d04c05b89..e6b4dad8183 100644
--- a/specification/configuration/data-model.md
+++ b/specification/configuration/data-model.md
@@ -55,7 +55,7 @@ YAML configuration files MUST use file extensions `.yaml` or `.yml`.
Configuration files support environment variables substitution for references,
defined using
-the [Augmented Backus-Naur Form](https://tools.ietf.org/html/rfc5234):
+the [Augmented Backus-Naur Form](https://datatracker.ietf.org/doc/html/rfc5234):
```abnf
SUBSTITUTION-REF = "${" ["env:"] ENV-NAME [":-" DEFAULT-VALUE] "}"; valid substitution reference
diff --git a/specification/configuration/sdk-environment-variables.md b/specification/configuration/sdk-environment-variables.md
index 27bd8e6cc82..67fe69feaa1 100644
--- a/specification/configuration/sdk-environment-variables.md
+++ b/specification/configuration/sdk-environment-variables.md
@@ -201,7 +201,7 @@ Depending on the value of `OTEL_TRACES_SAMPLER`, `OTEL_TRACES_SAMPLER_ARG` may b
- For `traceidratio` and `parentbased_traceidratio` samplers: Sampling probability, a number in the [0..1] range, e.g. "0.25". Default is 1.0 if unset.
- For `jaeger_remote` and `parentbased_jaeger_remote`: The value is a comma separated list:
- - `endpoint`: the endpoint in form of `scheme://host:port` of gRPC server that serves the sampling strategy for the service ([sampling.proto](https://github.com/jaegertracing/jaeger-idl/blob/master/proto/api_v2/sampling.proto)).
+ - `endpoint`: the endpoint in form of `scheme://host:port` of gRPC server that serves the sampling strategy for the service ([sampling.proto](https://github.com/jaegertracing/jaeger-idl/blob/main/proto/api_v2/sampling.proto)).
- `pollingIntervalMs`: in milliseconds indicating how often the sampler will poll the backend for updates to sampling strategy.
- `initialSamplingRate`: in the [0..1] range, which is used as the sampling probability when the backend cannot be reached to retrieve a sampling strategy. This value stops having an effect once a sampling strategy is retrieved successfully, as the remote strategy will be used until a new update is retrieved.
- Example: `endpoint=http://localhost:14250,pollingIntervalMs=5000,initialSamplingRate=0.25`
diff --git a/specification/context/api-propagators.md b/specification/context/api-propagators.md
index fe6e73d5974..5e173ccfa74 100644
--- a/specification/context/api-propagators.md
+++ b/specification/context/api-propagators.md
@@ -120,7 +120,7 @@ The carrier of propagated data on both the client (injector) and server (extract
usually an HTTP request.
In order to increase compatibility, the key/value pairs MUST only consist of US-ASCII characters
-that make up valid HTTP header fields as per [RFC 9110](https://tools.ietf.org/html/rfc9110/#name-fields).
+that make up valid HTTP header fields as per [RFC 9110](https://datatracker.ietf.org/doc/html/rfc9110/#name-fields).
`Getter` and `Setter` are optional helper components used for extraction and injection respectively,
and are defined as separate objects from the carrier to avoid runtime allocations,
@@ -352,9 +352,9 @@ Required parameters:
The official list of propagators that MUST be maintained by the OpenTelemetry
organization and MUST be distributed as OpenTelemetry extension packages:
-* [W3C TraceContext](https://www.w3.org/TR/trace-context). MAY alternatively
+* [W3C TraceContext](https://www.w3.org/TR/trace-context/). MAY alternatively
be distributed as part of the OpenTelemetry API.
-* [W3C Baggage](https://www.w3.org/TR/baggage). MAY alternatively
+* [W3C Baggage](https://www.w3.org/TR/baggage/). MAY alternatively
be distributed as part of the OpenTelemetry API.
* [B3](https://github.com/openzipkin/b3-propagation).
* [Jaeger](https://www.jaegertracing.io/sdk-migration/#propagation-format).
diff --git a/specification/context/env-carriers.md b/specification/context/env-carriers.md
index 188c2881b87..275be713a07 100644
--- a/specification/context/env-carriers.md
+++ b/specification/context/env-carriers.md
@@ -85,7 +85,7 @@ Environment variable names used for context propagation:
Environment variable values used for context propagation:
- MUST only use characters that are valid in HTTP header fields per [RFC
- 9110](https://tools.ietf.org/html/rfc9110)
+ 9110](https://datatracker.ietf.org/doc/html/rfc9110)
- MUST follow the format requirements of the specific propagation protocol
(e.g., W3C Trace Context specification for `TRACEPARENT` values)
- SHOULD NOT contain sensitive information
@@ -96,7 +96,7 @@ Implementations SHOULD follow platform-specific environment variable size
limitations:
- Windows: Maximum 32,767 characters for name=value pairs according to
- [Microsoft Documentation](https://docs.microsoft.com/windows/win32/api/winbase/nf-winbase-setenvironmentvariable)
+ [Microsoft Documentation](https://learn.microsoft.com/windows/win32/api/winbase/nf-winbase-setenvironmentvariable)
- UNIX: System-dependent limits exist and are typically lower than Windows.
When truncation is required due to size limitations, implementations MUST
diff --git a/specification/glossary.md b/specification/glossary.md
index b29b6f67e35..35b9d58b95b 100644
--- a/specification/glossary.md
+++ b/specification/glossary.md
@@ -210,7 +210,7 @@ Key/value pairs contained in a `Log Record`.
Logs that are recorded in a format which has a well-defined structure that allows
to differentiate between different elements of a Log Record (e.g. the Timestamp,
-the Attributes, etc). The *Syslog protocol* ([RFC 5424](https://tools.ietf.org/html/rfc5424)),
+the Attributes, etc). The *Syslog protocol* ([RFC 5424](https://datatracker.ietf.org/doc/html/rfc5424)),
for example, defines a `structured-data` format.
### Flat File Logs
diff --git a/specification/logs/README.md b/specification/logs/README.md
index 990a1ef3dab..65eb4e2e23d 100644
--- a/specification/logs/README.md
+++ b/specification/logs/README.md
@@ -107,7 +107,7 @@ data models, send the data through OpenTelemetry Collector, where it can be
enriched and processed in a uniform manner. For example, Collector can add to
all telemetry data coming from a Kubernetes Pod several attributes that describe
the pod and it can be done automatically using
-[k8sattributesprocessor](https://pkg.go.dev/github.com/open-telemetry/opentelemetry-collector-contrib/processor/k8sattributesprocessor?tab=doc)
+[k8sattributesprocessor](https://pkg.go.dev/github.com/open-telemetry/opentelemetry-collector-contrib/processor/k8sattributesprocessor)
without the need for the Application to do anything special. Most importantly
such enrichment is completely uniform for all 3 signals. The Collector
guarantees that logs, traces and metrics have precisely the same attribute names
@@ -228,7 +228,7 @@ data.
OpenTelemetry Collector can read system logs (link TBD) and automatically enrich
them with Resource information using the
-[resourcedetection](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/master/processor/resourcedetectionprocessor)
+[resourcedetection](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor)
processor.
### Infrastructure Logs
@@ -268,7 +268,7 @@ OpenTelemetry recommends to collect application logs using Collector's
[filelog receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver).
Alternatively, another log collection agent, such as FluentBit, can collect
logs,
-[then send](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/master/receiver/fluentforwardreceiver)
+[then send](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver)
to OpenTelemetry Collector where the logs can be further processed and enriched.
### Legacy First-Party Applications Logs
@@ -291,7 +291,7 @@ auto-instrumenting solutions that modify trace logging libraries used by the
application to automatically output the trace context such as the trace id or
span id with every log statement. The trace context can be automatically
extracted from incoming requests if standard compliant request propagation is
-used, e.g. via [W3C TraceContext](https://www.w3.org/TR/trace-context). In
+used, e.g. via [W3C TraceContext](https://www.w3.org/TR/trace-context/). In
addition, the requests outgoing from the application may be injected with the
same trace context data, thus resulting in context propagation through the
application and creating an opportunity to have full trace context in logs
@@ -322,7 +322,7 @@ using OpenTelemetry
Alternatively, if the Collector does not have the necessary file reading and
parsing capabilities, another log collection agent, such as FluentBit can
collect the logs,
-[then send the logs](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/master/receiver/fluentforwardreceiver)
+[then send the logs](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver)
to OpenTelemetry Collector.
@@ -338,7 +338,7 @@ see [Trace Context in Non-OTLP Log Formats](../compatibility/logging_trace_conte
The second approach is to modify the application so that the logs are output via
a network protocol, e.g. via
-[OTLP](https://github.com/open-telemetry/opentelemetry-proto/blob/master/opentelemetry/proto/logs/v1/logs.proto).
+[OTLP](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/logs/v1/logs.proto).
The most convenient way to achieve this is to provide addons or extensions to
the commonly used logging libraries. The addons implement sending over such
network protocols, which would then typically require small, localized changes
@@ -438,7 +438,7 @@ auto-instrumented logging statements will do the following:
statements.
This is possible to do for certain languages (e.g. in Java) and we can reuse
-[existing open-source libraries](https://docs.datadoghq.com/tracing/connect_logs_and_traces/java/?tab=log4j2)
+[existing open-source libraries](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces/java/)
that do this.
A further optional modification would be to auto-instrument loggers to send logs
diff --git a/specification/logs/data-model-appendix.md b/specification/logs/data-model-appendix.md
index 3b845fcd720..ebd7c5abeca 100644
--- a/specification/logs/data-model-appendix.md
+++ b/specification/logs/data-model-appendix.md
@@ -798,7 +798,7 @@ All other fields | |
[OpenTelemetry resource semantic convention](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/resource/README.md).
This is a selection of the most relevant fields. See
-[for the full reference](https://www.elastic.co/guide/en/ecs/current/ecs-field-reference.html)
+[for the full reference](https://www.elastic.co/docs/reference/ecs/ecs-field-reference)
for an exhaustive list.
## Appendix B: `SeverityNumber` example mappings
diff --git a/specification/metrics/api.md b/specification/metrics/api.md
index 257be9b508e..7af17c0a3c9 100644
--- a/specification/metrics/api.md
+++ b/specification/metrics/api.md
@@ -200,7 +200,7 @@ identifying fields are equal.
#### Instrument name syntax
The instrument name syntax is defined below using the [Augmented Backus-Naur
-Form](https://tools.ietf.org/html/rfc5234):
+Form](https://datatracker.ietf.org/doc/html/rfc5234):
```abnf
instrument-name = ALPHA 0*254 ("_" / "." / "-" / "/" / ALPHA / DIGIT)
diff --git a/specification/metrics/data-model.md b/specification/metrics/data-model.md
index 797bf095df5..6c6eb30e235 100644
--- a/specification/metrics/data-model.md
+++ b/specification/metrics/data-model.md
@@ -187,7 +187,7 @@ in scope for key design decisions:
- Using OTLP as an intermediary format between two non-compatible formats
- Importing [statsd](https://github.com/statsd/statsd) => Prometheus PRW
- - Importing [collectd](https://collectd.org/wiki/index.php/Binary_protocol)
+ - Importing [collectd](https://www.collectd.org/wiki/index.php/Binary_protocol)
=> Prometheus PRW
- Importing Prometheus endpoint scrape => [statsd push | collectd | opencensus]
- Importing OpenCensus "oca" => any non OC or OTel format
diff --git a/specification/overview.md b/specification/overview.md
index 61cc81b5994..b0da25136f7 100644
--- a/specification/overview.md
+++ b/specification/overview.md
@@ -264,7 +264,7 @@ function that is invoked on demand by the SDK.
### Metrics data model and SDK
The Metrics data model is [specified here](metrics/data-model.md) and is based on
-[metrics.proto](https://github.com/open-telemetry/opentelemetry-proto/blob/master/opentelemetry/proto/metrics/v1/metrics.proto).
+[metrics.proto](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/metrics/v1/metrics.proto).
This data model defines three semantics: An Event model used by the API, an
in-flight data model used by the SDK and OTLP, and a TimeSeries model which
denotes how exporters should interpret the in-flight model.
@@ -365,7 +365,7 @@ running locally with the application) and Collector (a standalone running
service).
Read more at OpenTelemetry Service [Long-term
-Vision](https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/vision.md).
+Vision](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/vision.md).
## Instrumentation Libraries
diff --git a/specification/protocol/README.md b/specification/protocol/README.md
index ad30cb7d36a..5620363a42c 100644
--- a/specification/protocol/README.md
+++ b/specification/protocol/README.md
@@ -9,7 +9,7 @@ path_base_for_github_subdir:
The OpenTelemetry protocol (OTLP) design goals, requirements, and
[specification] have moved to
-[github.com/open-telemetry/opentelemetry-proto/docs](https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/).
+[github.com/open-telemetry/opentelemetry-proto/docs](https://github.com/open-telemetry/opentelemetry-proto/tree/main/docs).
You can also view the specification from the OpenTelemetry website, see [OTLP][specification].
diff --git a/specification/trace/api.md b/specification/trace/api.md
index 29593591492..ff15e36b619 100644
--- a/specification/trace/api.md
+++ b/specification/trace/api.md
@@ -256,7 +256,7 @@ overridable.
The API MUST allow retrieving the `TraceId` and `SpanId` in the following forms:
-* Hex - returns the lowercase [hex encoded](https://tools.ietf.org/html/rfc4648#section-8)
+* Hex - returns the lowercase [hex encoded](https://datatracker.ietf.org/doc/html/rfc4648#section-8)
`TraceId` (result MUST be a 32-hex-character lowercase string) or `SpanId`
(result MUST be a 16-hex-character lowercase string).
* Binary - returns the binary representation of the `TraceId` (result MUST be a
diff --git a/specification/trace/sdk.md b/specification/trace/sdk.md
index 868302411e8..b1bc294baeb 100644
--- a/specification/trace/sdk.md
+++ b/specification/trace/sdk.md
@@ -431,7 +431,7 @@ specification, the Sampler decision is more nuanced: only a portion of
the identifier is used, after checking whether the OpenTelemetry
TraceState field contains an explicit randomness value.
-[W3CCONTEXTMAIN]: https://www.w3.org/TR/trace-context-2
+[W3CCONTEXTMAIN]: https://www.w3.org/TR/trace-context-2/
##### `TraceIdRatioBased` sampler configuration
@@ -529,9 +529,9 @@ The following configuration properties should be available when creating the sam
* polling interval - polling interval for getting configuration from remote
* initial sampler - initial sampler that is used before the first configuration is fetched
-[jaeger-remote-sampling]: https://www.jaegertracing.io/docs/1.41/sampling/#remote-sampling
-[jaeger-remote-sampling-api]: https://www.jaegertracing.io/docs/1.41/apis/#remote-sampling-configuration-stable
-[jaeger-adaptive-sampling]: https://www.jaegertracing.io/docs/1.41/sampling/#adaptive-sampling
+[jaeger-remote-sampling]: https://www.jaegertracing.io/docs/1.41/architecture/sampling/#remote-sampling
+[jaeger-remote-sampling-api]: https://www.jaegertracing.io/docs/1.41/architecture/apis/#remote-sampling-configuration-stable
+[jaeger-adaptive-sampling]: https://www.jaegertracing.io/docs/1.41/architecture/sampling/#adaptive-sampling
### Sampling Requirements
@@ -548,8 +548,8 @@ OpenTelemetry defines an optional [explicit randomness value][OTELRVALUE] encode
This specification recommends the use of either TraceID randomness or explicit randomness,
which ensures that samplers always have sufficient randomness when using W3C Trace Context propagation.
-[W3CCONTEXTMAIN]: https://www.w3.org/TR/trace-context-2
-[W3CCONTEXTLEVEL1]: https://www.w3.org/TR/trace-context
+[W3CCONTEXTMAIN]: https://www.w3.org/TR/trace-context-2/
+[W3CCONTEXTLEVEL1]: https://www.w3.org/TR/trace-context/
[W3CCONTEXTTRACEID]: https://www.w3.org/TR/trace-context-2/#randomness-of-trace-id
[W3CCONTEXTTRACESTATE]: https://www.w3.org/TR/trace-context-2/#tracestate-header
[W3CCONTEXTSAMPLEDFLAG]: https://www.w3.org/TR/trace-context-2/#sampled-flag