
Target Allocator - ServiceMonitor scheme #1669

Open
rhysxevans opened this issue Apr 20, 2023 · 50 comments · Fixed by #1710
Labels: area:target-allocator (Issues for target-allocator), bug (Something isn't working)

Comments

@rhysxevans

Hi

I have the Target Allocator (TA) running with PrometheusCR enabled; I am evaluating a migration to the OTel Collector for more of our scraping.

At present I am getting data as expected, except from ServiceMonitors (to be fair, my test lab only has a couple of these set up) that use a non-default scheme and carry authentication data, e.g.:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  creationTimestamp: "2023-04-06T18:07:21Z"
  generation: 1
  labels:
    app: first-cluster
  name: first-cluster
  namespace: opensearch-first-cluster
  resourceVersion: "991661"
  uid: cfad90db-7361-4b05-96de-b89b36537dbc
spec:
  endpoints:
  - basicAuth:
      password:
        key: password
        name: first-cluster-opensearch-monitoring
      username:
        key: username
        name: first-cluster-opensearch-monitoring
    interval: 30s
    path: /_prometheus/metrics
    port: http
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: opensearch_first-cluster
  namespaceSelector:
    matchNames:
    - opensearch-first-cluster
  selector:
    matchLabels:
      opster.io/opensearch-cluster: first-cluster
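For comparison, this is roughly the scrape config that prometheus-operator itself renders for the endpoint above; the basic_auth values are resolved from the referenced Secret at config-generation time (the placeholder values here are illustrative, not actual output):

```yaml
# Sketch of the prometheus-operator-rendered config for the endpoint above.
# The username/password come from the Secret first-cluster-opensearch-monitoring.
scrape_configs:
  - job_name: serviceMonitor/opensearch-first-cluster/first-cluster/0
    scheme: https
    metrics_path: /_prometheus/metrics
    scrape_interval: 30s
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: <value of key "username" in the Secret>
      password: <value of key "password" in the Secret>
```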
apiVersion: v1
kind: Service
metadata:
  annotations:
    banzaicloud.com/last-applied: UEsDBBQACAAIAAAAAAAAAAAAAAAAAAAAAAAIAAAAb3JpZ2luYWyUks2O2zAMhN+FZ3nrrGO362O39wRN0UuRAy3TjRBFEig6PQR690JO4Py0QLA3ezTzYUjwBBjMT+JovIMWjgtQsDeuhxY2xEejCRQcSLBHQWhPYLEjG/OXD1GIX4z/5AO5SMh6V2g7ZhVaGAxHmf+TAocH+kc/yzGgzm83oEeb/+OIv9NATE5ThPbXQ/Vr9uXabJqns17vVzn/jSzJZBceSYH2Tthbmwuflcvsq0BuM8He74r+p/9ocqBrqhL1siwWr8NbsWw+98WXCrFoqnro+mW5oK6GtE0KYiCd1xc8y3mMC3gnEkBNOrRvr2WpILAXr72FFn68r0GBIP8mWc+WpOa4MLo4hWdG9ZxR3TEOJGx0vBKa54TmjsAab9L183Rdpq2CSJa0eP74XeWNCso4naT12H9Fi05n6yml9DcAAP//UEsHCDhHq9BCAQAA4AIAAFBLAQIUABQACAAIAAAAAAA4R6vQQgEAAOACAAAIAAAAAAAAAAAAAAAAAAAAAABvcmlnaW5hbFBLBQYAAAAAAQABADYAAAB4AQAAAAA=
  creationTimestamp: "2023-04-06T18:07:23Z"
  labels:
    opster.io/opensearch-cluster: first-cluster
  name: first-cluster
  namespace: opensearch-first-cluster
  ownerReferences:
  - apiVersion: opensearch.opster.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: OpenSearchCluster
    name: first-cluster
    uid: b630ac40-12f9-467d-83aa-635fbd401eb5
  resourceVersion: "991678"
  uid: add9fb87-651a-4898-8a10-ccc29ed0b043
spec:
  clusterIP: 192.168.5.15
  clusterIPs:
  - 192.168.5.15
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  - name: metrics
    port: 9600
    protocol: TCP
    targetPort: 9600
  - name: rca
    port: 9650
    protocol: TCP
    targetPort: 9650
  selector:
    opster.io/opensearch-cluster: first-cluster
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

This config works with Prometheus, but via the OTel Collector (fed by the TA) I get:

2023-04-20T20:04:56.248Z warn internal/transaction.go:121 Failed to scrape Prometheus endpoint {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1682021096246, "target_labels": "{__name__=\"up\", container=\"opensearch\", endpoint=\"http\", instance=\"100.64.141.115:9200\", job=\"first-cluster-masters\", namespace=\"opensearch-first-cluster\", pod=\"first-cluster-masters-1\", service=\"first-cluster-masters\"}"}

Just looking to find out whether this is a known issue, or whether (as is probably the case) I am doing something wrong. Any help is appreciated.

This is all deployed using the opentelemetry-operator Helm chart, version 0.26.3.

Target Allocator

apiVersion: v1
data:
  targetallocator.yaml: |
    allocation_strategy: least-weighted
    config:
      scrape_configs:
      - job_name: otel-collector
        scrape_interval: 10s
        static_configs:
        - targets:
          - 0.0.0.0:8888
    label_selector:
      app.kubernetes.io/component: opentelemetry-collector
      app.kubernetes.io/instance: otel.allocator
      app.kubernetes.io/managed-by: opentelemetry-operator
kind: ConfigMap
metadata:
  creationTimestamp: "2023-04-20T18:37:48Z"
  labels:
    app.kubernetes.io/component: opentelemetry-targetallocator
    app.kubernetes.io/instance: otel.allocator
    app.kubernetes.io/managed-by: opentelemetry-operator
    app.kubernetes.io/name: allocator-targetallocator
    app.kubernetes.io/part-of: opentelemetry
    app.kubernetes.io/version: latest
  name: allocator-targetallocator
  namespace: otel
  ownerReferences:
  - apiVersion: opentelemetry.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: OpenTelemetryCollector
    name: allocator
    uid: 012622e4-db73-4c9c-8f7c-39e47cdabb8f
  resourceVersion: "9749302"
  uid: a591df05-6f7e-4267-8953-338363472f87

Collector

apiVersion: v1
data:
  collector.yaml: |
    exporters:
      logging:
        loglevel: debug
      prometheusremotewrite:
        endpoint: http://prometheus.kube-prometheus-stack.svc.cluster.local:9090/api/v1/write
        external_labels:
          scraper: ${POD_NAME}
          source: otel-allocator
    extensions:
      memory_ballast:
        size_in_percentage: 20
    processors:
      batch:
        send_batch_max_size: 1000
        send_batch_size: 800
        timeout: 30s
      memory_limiter:
        check_interval: 1s
        limit_percentage: 70
        spike_limit_percentage: 30
    receivers:
      hostmetrics:
        collection_interval: 30s
        scrapers:
          cpu: null
          load:
            cpu_average: true
          memory: null
      prometheus:
        config:
          global:
            scrape_interval: 1m
            scrape_timeout: 10s
            evaluation_interval: 1m
          scrape_configs:
          - job_name: otel-collector
            honor_timestamps: true
            scrape_interval: 10s
            scrape_timeout: 10s
            metrics_path: /metrics
            scheme: http
            follow_redirects: true
            enable_http2: true
            http_sd_configs:
            - follow_redirects: false
              enable_http2: false
              url: http://allocator-targetallocator:80/jobs/otel-collector/targets?collector_id=$POD_NAME
        target_allocator:
          collector_id: ${POD_NAME}
          endpoint: http://allocator-targetallocator
          http_sd_config:
            refresh_interval: 60s
          interval: 30s
    service:
      extensions:
      - memory_ballast
      pipelines:
        metrics:
          exporters:
          - prometheusremotewrite
          processors:
          - memory_limiter
          - batch
          receivers:
          - prometheus
kind: ConfigMap
metadata:
  creationTimestamp: "2023-04-20T18:37:48Z"
  labels:
    app.kubernetes.io/component: opentelemetry-collector
    app.kubernetes.io/instance: otel.allocator
    app.kubernetes.io/managed-by: opentelemetry-operator
    app.kubernetes.io/name: allocator-collector
    app.kubernetes.io/part-of: opentelemetry
    app.kubernetes.io/version: latest
  name: allocator-collector
  namespace: otel
  ownerReferences:
  - apiVersion: opentelemetry.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: OpenTelemetryCollector
    name: allocator
    uid: 012622e4-db73-4c9c-8f7c-39e47cdabb8f
  resourceVersion: "9771170"
  uid: db4349e1-0f44-4b71-af3e-7467b80b7696
@rhysxevans
Author

So I did some digging, and I think the problem is not the scheme but rather access to the Secret that holds the login details for the service being monitored.
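If Secret access is the culprit, the TA's ServiceAccount would need read access to Secrets in the monitored namespaces. A minimal, purely hypothetical Role for the namespace above might look like (names are illustrative):

```yaml
# Hypothetical RBAC sketch: grants the target allocator's ServiceAccount
# read access to Secrets in the monitored namespace. Names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ta-secret-reader
  namespace: opensearch-first-cluster
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
```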

@jaronoff97
Contributor

Could you send the response from the target allocator's scrape_configs endpoint (i.e. curl http://allocator-targetallocator:80/scrape_configs)? That would help us confirm the configuration is being propagated correctly. I wonder whether the TA needs to learn how to pull secret data referenced by a ServiceMonitor, if the prometheus-operator doesn't do that automatically here. Could you also send any target allocator logs that exist?

@jaronoff97 added the area:target-allocator label on Apr 25, 2023
@rhysxevans
Author

Hi

Below is the relevant section, I think (I have removed the other discovered resources).

curl http://allocator-targetallocator:80/scrape_configs | jq
{
  "otelcol": {
    "enable_http2": true,
    "follow_redirects": true,
    "honor_timestamps": true,
    "job_name": "otelcol",
    "metrics_path": "/metrics",
    "scheme": "http",
    "scrape_interval": "10s",
    "scrape_timeout": "10s",
    "static_configs": [
      {
        "targets": [
          "0.0.0.0:8888"
        ]
      }
    ]
  },
  "serviceMonitor/opensearch-first-cluster/first-cluster/0": {
    "basic_auth": {
      "username": ""
    },
    "enable_http2": true,
    "follow_redirects": true,
    "honor_timestamps": true,
    "job_name": "serviceMonitor/opensearch-first-cluster/first-cluster/0",
    "kubernetes_sd_configs": [
      {
        "enable_http2": true,
        "follow_redirects": true,
        "kubeconfig_file": "",
        "namespaces": {
          "names": [
            "opensearch-first-cluster"
          ],
          "own_namespace": false
        },
        "role": "endpointslice"
      }
    ],
    "metrics_path": "/_prometheus/metrics",
    "relabel_configs": [
      {
        "action": "replace",
        "regex": "(.*)",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "job"
        ],
        "target_label": "__tmp_prometheus_job_name"
      },
      {
        "action": "keep",
        "regex": "(first-cluster);true",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_service_label_opster_io_opensearch_cluster",
          "__meta_kubernetes_service_labelpresent_opster_io_opensearch_cluster"
        ]
      },
      {
        "action": "keep",
        "regex": "http",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_endpointslice_port_name"
        ]
      },
      {
        "action": "replace",
        "regex": "Node;(.*)",
        "replacement": "${1}",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_endpointslice_address_target_kind",
          "__meta_kubernetes_endpointslice_address_target_name"
        ],
        "target_label": "node"
      },
      {
        "action": "replace",
        "regex": "Pod;(.*)",
        "replacement": "${1}",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_endpointslice_address_target_kind",
          "__meta_kubernetes_endpointslice_address_target_name"
        ],
        "target_label": "pod"
      },
      {
        "action": "replace",
        "regex": "(.*)",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_namespace"
        ],
        "target_label": "namespace"
      },
      {
        "action": "replace",
        "regex": "(.*)",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_service_name"
        ],
        "target_label": "service"
      },
      {
        "action": "replace",
        "regex": "(.*)",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_pod_name"
        ],
        "target_label": "pod"
      },
      {
        "action": "replace",
        "regex": "(.*)",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_pod_container_name"
        ],
        "target_label": "container"
      },
      {
        "action": "drop",
        "regex": "(Failed|Succeeded)",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_pod_phase"
        ]
      },
      {
        "action": "replace",
        "regex": "(.*)",
        "replacement": "${1}",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_service_name"
        ],
        "target_label": "job"
      },
      {
        "action": "replace",
        "regex": "(.+)",
        "replacement": "${1}",
        "separator": ";",
        "source_labels": [
          "__meta_kubernetes_service_label_opensearch_first_cluster"
        ],
        "target_label": "job"
      },
      {
        "action": "replace",
        "regex": "(.*)",
        "replacement": "http",
        "separator": ";",
        "target_label": "endpoint"
      },
      {
        "action": "hashmod",
        "modulus": 1,
        "regex": "(.*)",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "__address__"
        ],
        "target_label": "__tmp_hash"
      },
      {
        "action": "keep",
        "regex": "$(SHARD)",
        "replacement": "$1",
        "separator": ";",
        "source_labels": [
          "__tmp_hash"
        ]
      }
    ],
    "scheme": "https",
    "scrape_interval": "30s",
    "scrape_timeout": "10s",
    "tls_config": {
      "insecure_skip_verify": true
    }
  }
}
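Note the empty "username" in the basic_auth block above: the TA is emitting the ServiceMonitor's basic_auth stanza without resolving the Secret. A quick way to flag jobs in this state, sketched here against a hypothetical sample mirroring the response above (in practice you would load the saved output of curl -s .../scrape_configs):

```python
import json

# Sample mirroring the /scrape_configs response above; in practice load it
# with json.load(open("scrape_configs.json")).
scrape_configs = json.loads("""
{
  "otelcol": {"scheme": "http"},
  "serviceMonitor/opensearch-first-cluster/first-cluster/0": {
    "scheme": "https",
    "basic_auth": {"username": ""}
  }
}
""")

def jobs_with_unresolved_auth(configs: dict) -> list[str]:
    """Return job names whose basic_auth block is present but has
    empty/missing credentials (i.e. the Secret was never resolved)."""
    bad = []
    for job, cfg in configs.items():
        auth = cfg.get("basic_auth")
        if auth is not None and not (auth.get("username") and auth.get("password")):
            bad.append(job)
    return bad

print(jobs_with_unresolved_auth(scrape_configs))
# → ['serviceMonitor/opensearch-first-cluster/first-cluster/0']
```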

Target Allocator logs

kubectl -n otel logs pod/allocator-targetallocator-8445664fc6-sbxwr
{"level":"info","ts":"2023-04-21T12:22:55Z","msg":"Starting the Target Allocator"}
{"level":"info","ts":"2023-04-21T12:22:55Z","logger":"allocator","msg":"Unrecognized filter strategy; filtering disabled"}
{"level":"info","ts":"2023-04-21T12:22:55Z","msg":"Waiting for caches to sync for servicemonitors\n"}
{"level":"info","ts":"2023-04-21T12:22:55Z","logger":"allocator","msg":"Starting server..."}
{"level":"info","ts":"2023-04-21T12:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T12:22:55Z","msg":"Caches are synced for servicemonitors\n"}
{"level":"info","ts":"2023-04-21T12:22:55Z","msg":"Waiting for caches to sync for podmonitors\n"}
{"level":"info","ts":"2023-04-21T12:22:55Z","msg":"Caches are synced for podmonitors\n"}
{"level":"info","ts":"2023-04-21T12:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T12:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T13:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T13:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T13:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T13:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T14:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T14:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T14:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T14:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T15:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T15:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T15:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T15:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T16:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T16:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T16:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T16:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T17:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T17:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T17:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T17:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T18:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T18:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T18:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T18:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T19:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T19:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T19:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T19:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T20:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T20:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T20:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T20:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T21:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T21:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T21:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T21:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T22:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T22:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T22:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T22:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T23:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T23:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T23:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-21T23:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T00:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T00:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T00:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T00:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T01:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T01:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T01:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T01:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T02:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T02:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T02:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T02:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T03:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T03:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T03:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T03:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T04:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T04:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T04:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T04:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T05:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T05:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T05:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T05:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T06:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T06:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T06:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T06:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T07:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T07:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T07:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T07:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T08:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T08:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T08:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T08:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T09:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T09:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T09:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T09:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T10:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T10:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T10:37:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T10:52:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T11:07:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-22T11:22:55Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
[... identical "Successfully started a collector pod watcher" entries repeated every 15 minutes through 2023-04-24T17:52:55Z, trimmed for brevity ...]
{"level":"info","ts":"2023-04-24T17:59:44Z","logger":"allocator","msg":"No event found. Restarting watch routine","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T17:59:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T18:02:44Z","logger":"allocator","msg":"No event found. Restarting watch routine","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T18:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T18:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T18:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T18:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T19:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T19:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T19:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T19:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T20:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T20:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T20:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T20:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T21:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T21:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T21:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T21:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T22:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T22:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T22:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T22:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T23:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T23:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T23:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-24T23:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T00:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T00:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T00:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T00:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T01:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T01:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T01:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T01:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T02:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T02:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T02:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T02:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T03:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T03:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T03:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T03:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T04:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T04:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T04:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T04:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T05:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T05:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T05:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T05:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T06:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T06:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T06:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T06:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T07:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T07:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T07:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T07:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T08:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T08:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T08:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T08:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T09:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T09:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T09:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T09:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T10:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T10:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T10:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T10:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T11:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T11:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T11:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T11:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T12:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T12:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T12:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T12:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T13:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T13:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T13:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T13:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T14:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T14:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T14:32:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T14:47:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T15:02:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}
{"level":"info","ts":"2023-04-25T15:17:44Z","logger":"allocator","msg":"Successfully started a collector pod watcher","component":"opentelemetry-targetallocator"}

Collector logs

2023-04-25T14:48:11.275Z	warn	internal/transaction.go:121	Failed to scrape Prometheus endpoint	{"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1682434091272, "target_labels": "{__name__=\"up\", container=\"opensearch\", endpoint=\"http\", instance=\"100.64.139.12:9200\", job=\"first-cluster\", namespace=\"opensearch-first-cluster\", pod=\"first-cluster-masters-0\", service=\"first-cluster\"}"}
2023-04-25T14:48:12.637Z	warn	internal/transaction.go:121	Failed to scrape Prometheus endpoint	{"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1682434092634, "target_labels": "{__name__=\"up\", container=\"opensearch\", endpoint=\"http\", instance=\"100.64.141.115:9200\", job=\"first-cluster-masters\", namespace=\"opensearch-first-cluster\", pod=\"first-cluster-masters-1\", service=\"first-cluster-masters\"}"}

If you need anything else let me know

Thanks

@rhysxevans
Author

Service Monitor setup

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  creationTimestamp: "2023-04-06T18:07:21Z"
  generation: 1
  labels:
    app: first-cluster
  name: first-cluster
  namespace: opensearch-first-cluster
  resourceVersion: "991661"
  uid: cfad90db-7361-4b05-96de-b89b36537dbc
spec:
  endpoints:
  - basicAuth:
      password:
        key: password
        name: first-cluster-opensearch-monitoring
      username:
        key: username
        name: first-cluster-opensearch-monitoring
    interval: 30s
    path: /_prometheus/metrics
    port: http
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: opensearch_first-cluster
  namespaceSelector:
    matchNames:
    - opensearch-first-cluster
  selector:
    matchLabels:
      opster.io/opensearch-cluster: first-cluster

@rhysxevans
Author

rhysxevans commented Apr 25, 2023

first-cluster-opensearch-monitoring is a secret

apiVersion: v1
data:
  password: b3BlbnNlasdasdaJjaA==
  username: bW9uaasdascmluZw==
kind: Secret
metadata:
  creationTimestamp: "2023-04-06T18:07:19Z"
  name: first-cluster-opensearch-monitoring
  namespace: opensearch-first-cluster
  resourceVersion: "991640"
  uid: 7a62728f-336d-40cd-a954-99c114761e46
type: kubernetes.io/basic-auth

@jaronoff97
Contributor

You may want to hide that secret...

@rhysxevans
Author

It is not a valid one

@jaronoff97
Contributor

It seems that

    "basic_auth": {
      "username": ""
    },

in the scrape config is the issue... I think this may be a target allocator bug. At first I thought we may need to let the TA mount a secret, but it's doing target discovery fine, so I think the real issue is something related to how we marshal the config, which is out of our control. Either way, this is going to take a bit of investigative work.

@rhysxevans
Author

Ok, let me know if you need me to do anything. And thanks for your help

@jaronoff97 jaronoff97 added the bug Something isn't working label Apr 25, 2023
@jaronoff97
Contributor

Alright, I found the problematic lines! We aren't setting the secrets in the store, which is what gets used by the prometheus-operator config generation here and here. I believe the fix is going to be in setting the store, though I'm not sure how to go about that right now.

@jaronoff97
Contributor

Ah okay, prometheus-operator has a bunch of functions we need to call to make this work, something like this block. I'd say this work is possible, but it's not a tiny change... I don't have a ton of capacity to work on it currently, unfortunately. I'll ask around in the next operator SIG meeting and in the Slack and see if anyone else can take it.

@rhysxevans
Author

Ok, thanks for your help

@jaronoff97
Contributor

@matej-g offered to help out here, thank you! Please let me know if you need any more clarification here.

@matej-g
Contributor

matej-g commented May 4, 2023

Thanks for the pointers @jaronoff97, sorry for the delay 🙂 - PR is here #1710

@matej-g
Contributor

matej-g commented May 11, 2023

Some additional context, I also chatted with @jaronoff97.

It seems that, currently, we won't be able to support credential fields coming from secrets without further work, possibly on both the target allocator and the receiver.

When we specify a secret in the scrape config, it gets redacted to <secret> in the /scrape_configs response (see my screenshot here - #1710 (comment)), making it impossible for the receiver to know the actual secret. I think we'd first need to find a way to communicate which secret the collector needs to look into in order to properly build credentials on the collector side of things.

On the other hand, this should not affect credentials that can be provided via a file path - this would only require that the collector has that particular secret mounted.
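As a sketch of the file-based path, a hypothetical ServiceMonitor endpoint that relies only on files (the mount paths are made up for illustration; the user would have to mount the corresponding Secret at exactly these paths in the collector pod):

```yaml
endpoints:
- port: http
  scheme: https
  # File references survive marshalling, unlike inline secret values.
  bearerTokenFile: /etc/scrape-secrets/opensearch/token
  tlsConfig:
    caFile: /etc/scrape-secrets/opensearch/ca.crt
```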

@matej-g
Contributor

matej-g commented Jun 12, 2023

Sorry for the silence on this one. It looks like we're not the only project dealing with this issue; potentially we might have a solution from prometheus/common#487 (we might have to warn our users that their secrets could be exposed via the target allocator endpoint, but that is the downside if they want to use credentials).

In order to move forward with #1710 (it also includes the addition of some unit tests, which are useful outside of this issue as well), @jaronoff97 would you be fine going with your suggestion - i.e. we state that, for now, we support only those types of credentials that can be provided in a file, document this, and revisit this topic once prometheus/common#487 moves forward?

This will unfortunately not unblock the present issue instantly, but it allows us to at least provide a partial solution.

@matej-g
Contributor

matej-g commented Jun 28, 2023

I think this was mistakenly marked as resolved; #1710 provided only a partial solution, so this should be re-opened.

@matej-g
Contributor

matej-g commented Jan 10, 2024

So this popped up again on my radar and I would like to address this, together with #1844 and build on changes incoming in #2328.

But I think we need some alignment on the next steps. It seems we will need to deal with two distinct use cases:

  • Mounting files for authorization / TLS - These need to be mounted into the collector pod(s) so that, when they are referenced in the endpoint config, the collector can use them. We could require users to mount them themselves, as suggested previously, or, to be more user friendly, the operator could ensure that they are mounted automatically.

  • Referencing a secret / configmap - As I described above, the problem here is that we need to communicate the content of the secret between the TA and the collector. By default, based on the upstream Prometheus code, a secret is redacted during marshalling, so when the collector pulls the scrape configs, the value only shows up as <secret>. I see two ways to work around this:

    1. Override the marshalling and expose the secret values (à la the solution referenced in config: allow exposing real secret value through marshal prometheus/common#487 (comment)), making the collector capable of directly "reading" them. Obviously, this opens up the possibility of leaking secrets, since they will be present in plain text in the scrape_config.
    2. Instead of providing secrets directly, communicate which secret / configmap keys and values should be used for a particular endpoint's authorization. The collector would then need to pull the secret, get the actual value, and use that in the authorization config. This would require more work and adjustments on the collector, but would avoid having to provide secrets directly.

cc @jaronoff97

@jaronoff97
Contributor

@matej-g i've been thinking about this as well... let me address each of the cases:

  1. Mounting is fine; I think the issue is that the operator may not know about the configs that need to be mounted. The scrape configs generated by the ServiceMonitors are served by the TA directly to the collector, which means the operator isn't aware of what needs to be present.
  2. I think the first option is going to be easier architecturally than the second. The second option has the same problem as the first: the operator isn't aware of which secrets / configmaps need to be mounted.

All of this brings me to an unfortunate conclusion... I think we are reaching the limit of what is possible with this servicemonitor CRD architecture. The decision to have the CRDs be pulled by the target allocator was one that was made prior to me joining the project and it didn't feel right to critique it at the time. Given the issue we are discussing is one of many that stems from this architecture choice and that we are discussing the next version of our own architecture, I think now is the time to figure out if there is a better way to architect this functionality.

In my opinion, we should move the CRD functionality entirely to the operator. The operator would take on the onus to pull the prometheus CRDs, translate their YAML to a scrape config and simply write it as part of the collector's (and TA's) config map. This would allow the operator to provide much more convenience when it comes to secrets and configs because we would no longer need to deal with the secrets being marshalled. Furthermore, moving this functionality out of the target allocator and on to the operator would reduce the scope of the TA (and probably improve its performance) and improve the overall user experience for the prom CRDs. The only drawback of doing this is that non-operator users would no longer be able to take advantage of this. Given that doing so has always been done outside our recommendations, I think this is a worthwhile tradeoff. ex:

sequenceDiagram
    User->>+Operator: Applies collector w/ promCRD enabled
    Operator->>+Operator: Reconciles collector CRD
    Operator-->>-Operator: Pulls prom CRDs that match selector
    Operator->>+Target-Allocator: Creates TA w/ CRD config
    Operator->>+Collector: Creates Collector w/ CRD config
    Collector->>Target-Allocator: Requests Targets

On any change from the operator's prom CRD watcher, it simply rolls out a new version of the collector and target allocator, which both will have a hash of the scrape config as an annotation. Slotting into the above configuration for the issue at hand would look like this:

sequenceDiagram
    User->>+Operator: Applies collector w/ promCRD enabled
    Operator->>+Operator: Reconciles collector CRD
    Operator-->>-Operator: Pulls prom CRDs that match selector
    Operator->>+User: Propagates warnings for missing secrets/config mounts

After this exists, we could visit the possibility of doing this automatically for a user as well, though I'm sure that's going to be a bit thornier. Let me know your thoughts! Thank you 🙇

cc @swiatekm-sumo who may have some better ideas about this.

@swiatekm-sumo
Contributor

For reference, what prometheus-operator does here is simply writing the whole configuration, credentials included, into a Secret, which is then mounted in the Prometheus Pod and config-reloader takes care of making Prometheus load the new configuration. I mention this because if we want to do anything outside of this loop, we'll need to add it ourselves.

A similar architecture (which is how I understand Jacob's proposal) requires that we deal with the configuration reload problem, which is quite thorny and is the reason prometheus-operator uses a config-reloader program in the first place. Restarting everything every time this configuration changes is a brute-force solution, and I don't think it would scale well to large clusters. It certainly feels wrong to me to force these restarts on users who don't need them (because they don't need authorization for their scrapes). The alternative is to use something like config-reloader, but we'd need it for both the collector and the target allocator, and from what I know it's relatively expensive to reload the collector.

Making users mount the Secrets manually in the Collector seems like it could be a reasonable workaround for the time being, but we'd also need to develop a convention for file naming: ServiceMonitor only allows a Secret name to be specified for authorization, so the Target Allocator would need to rewrite these as file path references, and the user would need to mount the Secrets under exactly those file paths.

Maybe the mid-term solution is simply to secure the connection between the Collector and Target Allocator? It wouldn't be particularly difficult to set up client TLS authentication for this connection - the operator can easily generate the Secret and ensure it's mounted in all the Pods. This also has some negative consequences, like making troubleshooting TA harder, but along with custom unmarshaling, it solves all the problems, and is only marginally more complicated than the filename approach.

@jaronoff97
Contributor

Yes, the configuration reload problem is definitely thorny. I was thinking with the rate limiting you put in place for the TA, we would use that to limit how often we're rolling out new workloads.

I think setting up TLS between the Collector and TA is a good thing to do regardless of this issue and should be possible given we have a dependency on cert-manager already.

Let's discuss these ideas at next week's SIG meeting.

@swiatekm-sumo
Contributor

Yes, the configuration reload problem is definitely thorny. I was thinking with the rate limiting you put in place for the TA, we would use that to limit how often we're rolling out new workloads.

I don't think any amount of rate limiting will fix this. We're adding a per-node allocation strategy in #2430, and when using that you'd have to recreate all the Pods of a DaemonSet, which can be hundreds in a large cluster. Putting the usability of this aside, users have come to expect performance similar to prometheus-operator, which takes under a minute from a ServiceMonitor change to the first new scrape.

@jaronoff97
Contributor

jaronoff97 commented Jan 12, 2024

Yeah, that's very true. Ugh. I can check in with collector SIG people and see if their efforts for dynamic reloading have made progress. I see you already have an open issue 😮‍💨

@alita1991

I hit this issue today while trying to find a solution for scraping protected metrics endpoints: I created a ServiceMonitor with basic_auth credentials, and the collector is unable to scrape the endpoint.

Still waiting for prometheus/common#487 to be merged; I know it's not the perfect solution, but it can unblock me.

My goal is to reduce the manual work on the collectors and the ServiceMonitor/PodMonitor scrape is the key solution.

@swiatekm-sumo
Contributor

FYI @alita1991 I'm fine using the kind of "horrible workaround" mentioned in that PR. But in order to actually do this, we'd need to secure the connection between the collector and the target allocator. My idea for implementing this was:

  • Add an optional https server to the target allocator
  • Generate a TLS cert for the target allocator
  • Mount the TLS cert to the collector, use it for authenticating the connection to the target allocator
  • Mount the TLS cert to the target allocator, set up client-side TLS auth
  • If serving over https, use custom marshalling to include secrets in the scrape config

@rashmichandrashekar
Contributor

FYI @alita1991 I'm fine using the kind of "horrible workaround" mentioned in that PR. But in order to actually do this, we'd need to secure the connection between the collector and the target allocator. My idea for implementing this was:

  • Add an optional https server to the target allocator
  • Generate a TLS cert for the target allocator
  • Mount the TLS cert to the collector, use it for authenticating the connection to the target allocator
  • Mount the TLS cert to the target allocator, set up client-side TLS auth
  • If serving over https, use custom marshalling to include secrets in the scrape config

@swiatekm-sumo Will this be available anytime soon, now that the latest version of prometheus/common with the fix has been picked up by the operator?

@swiatekm-sumo
Contributor

@rashmichandrashekar I don't think anyone is actively working on this at the moment. If you or anyone else would like to, I can provide more detailed guidance on how to proceed.

@ItielOlenick
Contributor

@swiatekm-sumo I'd like to work on this feature

@swiatekm-sumo
Contributor

Great, thanks for picking this up! My suggestion would be to start by adding the https server, then enable communication over mtls between the collector and target allocator, and finally the secret marshalling change. All of this together sounds like a lot for a single PR, and smaller PRs get reviewed faster on average.

@ItielOlenick
Contributor

@swiatekm-sumo After familiarizing myself with the codebase, I've started adding the HTTPS server and experimented with the functionality introduced in prometheus/common#487.

I have some questions and ideas:

  1. It makes sense to have only one of the HTTPS/HTTP servers running, since we probably don't want to keep two versions of the scrape_config (one with the secrets revealed and one without) plus the rest of the server's logic.
  2. Readiness and liveness probes: since we are serving with mTLS, we cannot use HTTPGetAction, even with the scheme set to HTTPS. I'm thinking of either creating a small executable that uses the mounted certs to make the necessary call to the server via ExecAction, or creating a separate server dedicated to the probes.
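For the ExecAction variant, the probe could look roughly like this; the `/probe` binary, port, endpoint path, and cert paths are all hypothetical:

```yaml
livenessProbe:
  exec:
    command:
      - /probe                          # hypothetical helper binary in the TA image
      - --url=https://localhost:8443/livez
      - --ca=/tls/ca.crt                # mounted cert paths are placeholders
      - --cert=/tls/tls.crt
      - --key=/tls/tls.key
```

The helper would exit non-zero on a failed or unauthenticated request, which is all the kubelet needs.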

@swiatekm-sumo
Contributor

I actually wanted to keep both servers, because hiding everything behind auth makes debugging a lot more difficult. One of the simplest ways of troubleshooting issues with the target allocator is to forward the http port and just check the endpoints in a web browser. Probes are another reason to keep the http server.

Is it a big problem to keep both versions of the serialized scrape configs?

@ItielOlenick
Contributor

I agree.
It will be simpler to have both servers running from that perspective. I'll look into that.

@ItielOlenick
Contributor

ItielOlenick commented May 8, 2024

Regarding the next steps, this is what I had in mind:

  • Adding logic to the collector to enable using mTLS when connecting to the TA.
  • Taking advantage of the existing certificate manager to manage the CA, server, and client certificates. Should we reuse the existing issuer already configured?
  • Adding relevant flags/configurations to mount certificates to both the client and server and utilize mTLS.

@swiatekm-sumo
Contributor

Regarding the next steps, this is what I had in mind:

* Adding logic to the collector to enable using mTLS when connecting to the TA.

This was already done in open-telemetry/opentelemetry-collector-contrib#31449, we just need to configure the receiver correctly.
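As I understand that PR, the receiver side would then look roughly like this; the endpoint, interval, and cert paths are placeholders, and the exact field names should be checked against the contrib receiver's docs:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs: []
    target_allocator:
      endpoint: https://collector-targetallocator:8443
      interval: 30s
      collector_id: ${POD_NAME}
      tls:
        ca_file: /tls/ca.crt
        cert_file: /tls/tls.crt
        key_file: /tls/tls.key
```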

* Taking advantage of the existing certificate manager to manage the CA, server, and client certificates. Should we reuse the existing issuer already configured?

This might be a can of worms. Technically, we require cert-manager, but this is only for webhooks. The Operator Helm chart has an option to generate the certs statically without cert-manager, so if we put a hard dependency on it here, we'll be locking those users out of this functionality. I'd prefer not to do that, if possible, but that decision depends on how complex the non-cert-manager implementation will be. @jaronoff97 @pavolloffay wdyt?

* Adding relevant flags/configurations to mount certificates to both the client and server and utilize mTLS.

👍

@jaronoff97
Contributor

I definitely agree with the above for the first and third points. For the middle point, I think we could take a similar approach to what we now do with autodetection of RBAC permissions: if you give the operator permission to create and manage certs AND you have cert-manager installed, we will automatically secure the connection for you. I don't think this needs to be in the initial version; until then we could simply provide some example configs showing how this would be done.

@swiatekm-sumo
Contributor

That's fair enough for an initial implementation, I agree.

@ItielOlenick
Contributor

For the initial version I was thinking of this addition to the CRD API:

spec:
  config:
    targetAllocator:
      mtls:
        enabled: true
        targetAllocatorCerts:
          ca_file: ""
          cert_file: ""
          key_file: ""
        collectorCerts:
          ca_file: ""
          cert_file: ""
          key_file: ""

What do you think?

@thefirstofthe300

My two cents: the underscores should be removed and the fields renamed to lowerCamelCase.

From the end-user perspective, what purpose do these files serve? Are they references to files mounted by a secret? If so, how would I define the secret? If they're managed by the OTEL operator, why do I need to care about the certificate configuration at all?

@ItielOlenick
Contributor

ItielOlenick commented May 14, 2024 via email

@swiatekm-sumo
Contributor

I was actually thinking that we should start with what Jacob mentioned - if the cert-manager CRDs are installed and the operator has permission to create them, then use cert-manager to provision the certificates. I'd rather avoid adding fields to the Collector CRDs until we're confident we know how this should work. And ideally, I'd rather avoid making this configurable at all.

@ItielOlenick
Contributor

Since @jaronoff97 suggested the initial version of this change doesn't need to include managing certs, and that we can provide example configs in the meantime, I was going in this direction.
I'm open to suggestions on how to get the relevant configs to the Collector/TA without changing the CRDs, unless we do want to manage certs with cert-manager from the start.

@swiatekm-sumo
Contributor

I'd personally rather do the no-config version first, which means using cert-manager. There isn't really any benefit to the user orchestrating the certs themselves, other than us not needing to do it, and like I mentioned, I really don't want to add fields to the CRD unless absolutely necessary.

@ItielOlenick
Contributor

If the readme specifically states that cert-manager needs to be installed in the cluster, why do we need to check whether the CRDs exist for this functionality?

@swiatekm-sumo
Contributor

That, in my opinion, is a documentation failure we should fix. Cert-manager is used by the default kustomize manifests for provisioning webhook certificates, but it's very much possible to not use it. For example, the official Helm Chart can work without cert-manager by manually creating the necessary Secrets.
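For illustration, a cert-manager-less install of the chart looks something like the following values; the exact keys should be checked against the chart's values schema, which may change:

```yaml
admissionWebhooks:
  certManager:
    enabled: false      # don't create cert-manager Issuer/Certificate resources
  autoGenerateCert:
    enabled: true       # let the chart generate a self-signed cert instead
```

Users can also supply their own webhook cert Secret instead of auto-generating one.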

I don't think we should be making the dependency on cert-manager deeper.

@ItielOlenick
Contributor

Got it. Thanks!

Labels
area:target-allocator Issues for target-allocator bug Something isn't working
8 participants