
Conversation

@felix-hilden commented Sep 20, 2025

Fixes #7261

Hi @mholt, if I may ping you, thanks for labeling the associated issue! I decided to try to contribute a fix, as the scope is quite well defined.

First, I think my initial suggestion in the ticket of using a flat directive would be improved by using a map instead, which is more consistent with what the Caddyfile already does (a parsing sketch follows the example below):

tracing {
  span my-span-name
  span_attributes {
    attr1 value1
    attr2 {placeholder}
  }
}
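
For reference, here's roughly how I'd expect that nested block to be parsed with the dispenser. This is only a sketch of the idea, not necessarily the exact code in the PR, and the SpanAttributes field name (and its JSON tag) is my placeholder:

````go
package tracing

import (
	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

// Tracing is a sketch of the handler struct; SpanAttributes is the field
// this PR would add (the exact name and tag are my assumption).
type Tracing struct {
	// SpanName already exists in the module.
	SpanName string `json:"span,omitempty"`
	// SpanAttributes maps attribute keys to values; values may contain
	// placeholders that are resolved per request.
	SpanAttributes map[string]string `json:"span_attributes,omitempty"`
}

// UnmarshalCaddyfile parses:
//
//	tracing {
//	    span <name>
//	    span_attributes {
//	        <key> <value>
//	    }
//	}
func (t *Tracing) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {
	for d.Next() {
		for d.NextBlock(0) {
			switch d.Val() {
			case "span":
				if !d.NextArg() {
					return d.ArgErr()
				}
				t.SpanName = d.Val()
			case "span_attributes":
				if t.SpanAttributes == nil {
					t.SpanAttributes = make(map[string]string)
				}
				for d.NextBlock(1) {
					key := d.Val()
					if !d.NextArg() {
						return d.ArgErr()
					}
					t.SpanAttributes[key] = d.Val()
				}
			default:
				return d.Errf("unrecognized subdirective %q", d.Val())
			}
		}
	}
	return nil
}
````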

I'm disclosing that I used Claude to generate most of the code in this PR, as this is my first time touching Go in general, although naturally I reviewed the output and tried to understand it. So if holding my hand through the PR is too much, feel free to discard it altogether in favor of a better solution!

If not, there are a couple of things I wanted to ask about. Given the tests, I think the PR is already in a reasonable place. Building from that, on a practical level, if you have quick pointers:

  • Is the use of placeholders implemented and tested reasonably?
  • Do we need to do anything special to support JSON configuration? I know the module's struct already has some JSON handling, but I'm asking just in case (a rough sketch of what I mean follows this list).
  • For documentation, I created "WIP: Add span attributes to tracing documentation" (website#496). Do we need anything more, e.g. in code?
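
On the JSON question, my understanding is that the struct tags above should be enough and no special handling is needed; a quick check along these lines would confirm it (again assuming the field names from my sketch):

````go
package tracing

import (
	"encoding/json"
	"testing"
)

// TestSpanAttributesJSON checks that the JSON form of the config unmarshals
// through the struct tags sketched above; nothing beyond the tags should be
// required for JSON support.
func TestSpanAttributesJSON(t *testing.T) {
	raw := []byte(`{
		"span": "my-span-name",
		"span_attributes": {
			"attr1": "value1",
			"attr2": "{http.request.method}"
		}
	}`)

	var tr Tracing
	if err := json.Unmarshal(raw, &tr); err != nil {
		t.Fatal(err)
	}
	if tr.SpanAttributes["attr2"] != "{http.request.method}" {
		t.Fatalf("unexpected span attributes: %v", tr.SpanAttributes)
	}
}
````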

Many thanks! I'm very excited to try out Caddy for real and hopefully switch to it for good.

@CLAassistant commented Sep 20, 2025

CLA assistant check
All committers have signed the CLA.


@felix-hilden (Author) commented

````bash
$ go test ./...
$ go test ./modules/caddyhttp/tracing/
````

Also, I thought that for newbies like me (if you want to attract them 😅), this is a helpful first pointer after the build.

@mholt (Member) commented Sep 24, 2025

Thanks for the contribution! I'll ask if @hairyhenderson, who I know is very busy, might have a chance to look this over, since I don't know much about the metrics/tracing stuff.

@felix-hilden (Author) commented

Thank you! That would be much appreciated. That said, I think the tracing part is the simple one: based on the test, I believe the span attributes are already recorded correctly. The hard part is the replacements. I haven't found a good example of how response placeholders are generated and used. I pushed a test expecting a default span attribute from OTel, but you can see it fail there. I couldn't really use the other response placeholders out of the box either. So I wonder whether I'm doing something wrong, either in setting up the test realistically or in the middleware code that makes them unavailable.
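
For context, here's roughly how I understand the placeholder resolution is supposed to work in the handler. This is a sketch of the idea rather than the exact code in the PR, and setSpanAttributes is just an illustrative helper name:

````go
package tracing

import (
	"net/http"

	"github.com/caddyserver/caddy/v2"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

// setSpanAttributes resolves placeholders against the per-request replacer
// and attaches the results to the active span.
func setSpanAttributes(r *http.Request, attrs map[string]string) {
	// Caddy stores the replacer in the request context under ReplacerCtxKey.
	repl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)
	// otelhttp starts the server span and puts it into the same context.
	span := trace.SpanFromContext(r.Context())
	for key, val := range attrs {
		// Request placeholders resolve here; response placeholders are not
		// populated yet at this point in the middleware chain.
		span.SetAttributes(attribute.String(key, repl.ReplaceAll(val, "")))
	}
}
````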

@felix-hilden (Author) commented

Hi! Gently pinging this: is there anyone else who might be able to lend a hand with a review, @mholt? Perhaps implementers of other modules that have used placeholders, etc. 🙏

@mholt (Member) commented Oct 30, 2025

Well, the use of placeholders / replacer looks good IMO. I'm just not qualified to approve the rest of the code changes 😅

@felix-hilden (Author) commented

Alright! I found the Caddy documentation for placeholders, so I fixed the failing test, which assumed response placeholders that aren't actually supposed to be available there. Then I decided to try the setup live before spending more of your time. So here's a trial of the functionality using a binary built from the branch:

Setup

To test trace output, I'm spinning up an OpenTelemetry Collector with some debug output.

Collector configuration
# collector.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch:
exporters:
  debug:
    verbosity: detailed
  debug/ignore:
    verbosity: basic
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug/ignore]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug/ignore]

Current Caddy span name

To establish a baseline, here's what the latest Caddy outputs when tracing the built-in metrics endpoint.

Caddyfile
{
    auto_https off
    admin off
    grace_period 1s
    metrics {
        per_host
    }
}

metrics.localhost:80 {
    tracing {
        span my_metrics_span_name
    }
    metrics
}
Docker compose
# docker-compose.yaml
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    volumes:
      - ./mycaddy:/etc/caddy
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: http://collector:4317
      OTEL_SERVICE_NAME: caddy
    depends_on:
      - collector
    ports:
      - "80:80"
  collector:
    restart: unless-stopped
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config", "/etc/otelcol/config.yaml"]
    volumes:
      - ./collector.yaml:/etc/otelcol/config.yaml:ro

When running docker compose up and visiting metrics.localhost,
the output of the collector is as follows:

2025-11-22T12:16:41.129Z info    Traces  {"resource": {"service.instance.id": "fabc0088-b767-45c6-9e42-acc9c1d29bb5", "service.name": "otelcol-contrib", "service.version": "0.135.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces", "resource spans": 1, "spans": 1}
2025-11-22T12:16:41.129Z info    ResourceSpans #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.34.0
Resource attributes:
     -> service.name: Str(caddy)
     -> telemetry.sdk.language: Str(go)
     -> telemetry.sdk.name: Str(opentelemetry)
     -> telemetry.sdk.version: Str(1.37.0)
     -> webengine.name: Str(Caddy)
     -> webengine.version: Str(v2.10.2)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp 0.61.0
Span #0
    Trace ID       : b37740f20f154780fe5df7420d039807
    Parent ID      :
    ID             : 96f3b95258151ebf
    Name           : my_metrics_span_name
    Kind           : Server
    Start time     : 2025-11-22 12:16:40.518916511 +0000 UTC
    End time       : 2025-11-22 12:16:40.520369428 +0000 UTC
    Status code    : Unset
    Status message :
Attributes:
     -> server.address: Str(metrics.localhost)
     -> http.request.method: Str(GET)
     -> url.scheme: Str(http)
     -> network.peer.address: Str(192.168.65.1)
     -> network.peer.port: Int(51976)
     -> user_agent.original: Str(Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:144.0) Gecko/20100101 Firefox/144.0)
     -> client.address: Str(192.168.65.1)
     -> url.path: Str(/)
     -> network.protocol.version: Str(1.1)
     -> http.response.body.size: Int(2251)
     -> http.response.status_code: Int(200)
 {"resource": {"service.instance.id": "fabc0088-b767-45c6-9e42-acc9c1d29bb5", "service.name": "otelcol-contrib", "service.version": "0.135.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}

The custom metrics span name is written out as expected.

Span attributes addition

Next, I tested setting static span attributes, with and without placeholders.
Naturally, for this I had to run Caddy locally instead,
keeping only the collector in the Docker Compose setup.

Caddyfile
{
    auto_https off
    admin off
    grace_period 1s
    metrics {
        per_host
    }
}

metrics.localhost:80 {
    tracing {
        span my_metrics_span_name
        span_attributes {
            my_attribute my_attr_value
            my_method "requesting with {http.request.method}"
        }
    }
    metrics
}
Docker compose
services:
  collector:
    restart: unless-stopped
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config", "/etc/otelcol/config.yaml"]
    volumes:
      - ./collector.yaml:/etc/otelcol/config.yaml:ro

Output

When running docker compose up separately for the collector and then running the branch build locally with
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 OTEL_SERVICE_NAME=caddy ./caddy run --config mycaddy/Caddyfile
the trace attributes are written successfully:

Attributes:
     -> server.address: Str(metrics.localhost)
     -> http.request.method: Str(GET)
     -> url.scheme: Str(http)
     -> network.peer.address: Str(127.0.0.1)
     -> network.peer.port: Int(64931)
     -> user_agent.original: Str(Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:144.0) Gecko/20100101 Firefox/144.0)
     -> client.address: Str(127.0.0.1)
     -> url.path: Str(/)
     -> network.protocol.version: Str(1.1)
     -> my_attribute: Str(my_attr_value)
     -> my_method: Str(requesting with GET)
     -> http.response.body.size: Int(2222)
     -> http.response.status_code: Int(200)

Conclusion

I would consider this done as far as I'm concerned, given that you've signed off on the placeholder implementation and the integration with OpenTelemetry works as expected 🙏 But of course, if you need another sign-off, let's wait for that! Is there anyone besides Dave we could ask to provide it?

Or if you have any questions that would make you feel more comfortable with merging it yourself, I'm happy to answer as well!

@felix-hilden changed the title from "WIP: add span attributes to tracing module" to "Add span attributes to tracing module" on Nov 30, 2025