
Console exporter initialization via console value for OTEL_TRACES_EXPORTER is not documented. #4262

Closed
dnutels opened this issue Nov 8, 2023 · 6 comments
Labels
bug Something isn't working document Documentation-related pkg:sdk-node priority:p4 Bugs and spec inconsistencies which do not fall into a higher prioritization waiting-for-spec

Comments

dnutels commented Nov 8, 2023

What happened?

Steps to Reproduce

Application from: https://opentelemetry.io/docs/instrumentation/js/getting-started/nodejs/
Module configuration from: https://opentelemetry.io/docs/instrumentation/js/automatic/

Expected Result

Traces sent to console.

Actual Result

Traces sent to console, but the console value for OTEL_TRACES_EXPORTER is not documented here:

https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#exporter-selection

despite appearing here:

https://github.com/open-telemetry/opentelemetry-js-contrib/blob/main/metapackages/auto-instrumentations-node/test/register.test.ts

This works:

export OTEL_LOG_LEVEL="debug"
export OTEL_TRACES_EXPORTER="console"
export OTEL_METRICS_EXPORTER="none"
export OTEL_LOGS_EXPORTER="none"
export OTEL_NODE_RESOURCE_DETECTORS="none"
export OTEL_SERVICE_NAME="first-svc"
export NODE_OPTIONS="--require @opentelemetry/auto-instrumentations-node/register"
node lib/
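
For context, the undocumented console value selects the SDK's console span exporter. Below is a minimal programmatic sketch of roughly the same setup, assuming the @opentelemetry/sdk-node and @opentelemetry/sdk-trace-base packages (option names as of current SDK releases; this is illustrative, not necessarily how the env-var wiring is implemented internally):

const { NodeSDK } = require('@opentelemetry/sdk-node');
const { ConsoleSpanExporter } = require('@opentelemetry/sdk-trace-base');

const sdk = new NodeSDK({
  serviceName: 'first-svc',                 // mirrors OTEL_SERVICE_NAME above
  traceExporter: new ConsoleSpanExporter(), // finished spans are printed to stdout
});

sdk.start();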

Additional Details

OpenTelemetry Setup Code

No response

package.json

No response

Relevant log output

No response

@dnutels dnutels added bug Something isn't working triage labels Nov 8, 2023
@pichlermarc (Member)

Hmm, interesting - thank you for opening this issue. It's not part of opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#exporter-selection because the specification does not include 'console' as a known value to instantiate a console exporter.

Strictly speaking we could remove it, but I think it is a useful feature to have. There are a few options for how we could handle this:

  • do nothing (it's not part of the spec, but I don't think adding additional values for this env var is prohibited. Further, 'console' will likely not be used for anything other than the console exporter if it's ever added to the spec)
    • documenting it may be helpful, but we'd need a specific place in the OTel docs to do so.
  • try to get it specified (which will add it to the linked doc)
  • remove that feature and mark the change as breaking in @opentelemetry/sdk-node

@dnutels do you have a specific preference for one of the solutions listed above? 🤔

@pichlermarc pichlermarc added priority:p4 Bugs and spec inconsistencies which do not fall into a higher prioritization document Documentation-related and removed triage labels Nov 8, 2023
dnutels (Author) commented Nov 9, 2023

I would certainly be against removing it... My semi-coherent thoughts to follow.

Consider an adoption process of OTEL in an application. It's quite possible that having the telemetry signals be sent to the console during the initial (and perhaps even past that) development is easier than having them sent to a local collector (docker-composed or otherwise).

In some languages (JavaScript/Python, at least) this would also simplify debugging as you adopt/develop (since it's all in-proc).

Finally, if you do use a local collector over Docker (which would probably be the default for most) ... you pay with your machine resources as Docker needs to be running.

So, having said that, and considering that we do have other quite odd values permitted (zipkin and prometheus, I assume due to the OpenTracing & OpenCensus merger back in the day), my preference would be to add console as part of the specification. It is global enough to warrant this.

It would be especially useful for logs, IMO.

I wonder now if it should be a more general file instead, with ENDPOINT becoming the file path. Then you'd cover odd cases that do not allow direct OTLP emission and you must go through the filesystem with, say, a hardened agent or some such...

Then, you'd be able to cover console with ENDPOINT being empty, use /dev/null to discard and those types of things...

Of course, it may be awkward due to the OTEL_EXPORTER_OTLP_ENDPOINT being a single definition across traces, metrics and logs, so we wouldn't be able to send logs to console, metrics and traces to the collector. All or nothing.

@pichlermarc (Member)

I would certainly be against removing it... My semi-coherent thoughts to follow.

Consider an adoption process of OTEL in an application. It's quite possible that having the telemetry signals be sent to the console during the initial (and perhaps even past that) development is easier than having them sent to a local collector (docker-composed or otherwise).

Yep, I can tell from experience that this is the usual way.

In some languages (JavaScript/Python, at least) this would also simplify debugging as you adopt/develop (since it's all in-proc).

Certainly.

Finally, if you do use a local collector over Docker (which would probably be the default for most) ... you pay with your machine resources as Docker needs to be running.

I agree needing a collector to run locally is overkill.

So, having said that, and considering that we do have other quite odd values permitted (zipkin and prometheus, I assume due to the OpenTracing & OpenCensus merger back in the day), my preference would be to add console as part of the specification. It is global enough to warrant this.

I think this is a great idea. There's currently no specification for a trace or logs console exporter, so that is something that would have to be done as a prerequisite for that change. I was only able to find the spec for the metrics stdout exporter: https://github.com/open-telemetry/opentelemetry-specification/blob/d855249b210ecd832889fc4ce91a4444f82e252a/specification/metrics/sdk_exporters/stdout.md?plain=1#L2

I currently do not have the bandwidth to drive this topic as we have other higher-priority items on the agenda, but I'd appreciate it if you'd open an issue on the spec repo (or comment on an existing one, if applicable) and add the case there.

It would be especially useful for logs, IMO.

Do you mean having support for setting the exporters via environment variables for all signals? There's currently #3871 to track this if I understand this sentence correctly.

I wonder now if it should be a more general file instead, with ENDPOINT becoming the file path. Then you'd cover odd cases that do not allow direct OTLP emission and you must go through the filesystem with, say, a hardened agent or some such...

Then, you'd be able to cover console with ENDPOINT being empty, use /dev/null to discard and those types of things...

Do you mean an OTLP file exporter? There is a spec for that, but I think it's mainly intended for the collector. If there are enough people who would be interested in this, and a good description of use-cases, we could certainly discuss adding it to this repo in a new issue.

Of course, it may be awkward due to the OTEL_EXPORTER_OTLP_ENDPOINT being a single definition across traces, metrics and logs, so we wouldn't be able to send logs to console, metrics and traces to the collector. All or nothing.

You don't have to set only OTEL_EXPORTER_OTLP_ENDPOINT; you can set different endpoints per signal using OTEL_EXPORTER_OTLP_TRACES_ENDPOINT, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT, and OTEL_EXPORTER_OTLP_LOGS_ENDPOINT - so that could be a non-issue.
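
To illustrate that mix (a sketch for illustration only, not something prescribed by this issue): traces printed to the console while metrics go to a collector, using the current Node SDK packages. The metricReader option, the exporter packages, and the example URL below are assumptions based on current SDK releases.

const { NodeSDK } = require('@opentelemetry/sdk-node');
const { ConsoleSpanExporter } = require('@opentelemetry/sdk-trace-base');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');

const sdk = new NodeSDK({
  // traces: printed to stdout, like OTEL_TRACES_EXPORTER="console"
  traceExporter: new ConsoleSpanExporter(),
  // metrics: sent to a collector; setting OTEL_EXPORTER_OTLP_METRICS_ENDPOINT would
  // target the same endpoint from the environment (the URL is only an example)
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({ url: 'http://localhost:4318/v1/metrics' }),
  }),
});

sdk.start();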

dnutels (Author) commented Nov 9, 2023

Already there: open-telemetry/opentelemetry-specification#3742

So perhaps close this once the PR drops? Your call.

@pichlermarc (Member)

Already there: open-telemetry/opentelemetry-specification#3742

Great 🙂

So perhaps close this once the PR drops? Your call.

Let's do that 🙂 I applied the waiting-for-spec label; we can close this issue once the PR is merged.
Then the rest of the work would be adding support for the other signals, which can be tracked under feature #3871.

@pichlermarc (Member)

Closing as the spec PR has merged 🙂
