
I cannot extract the vulnerability report from trivy operator CRD #1996

Open · rukender opened this issue Apr 10, 2024 · 12 comments

Labels: kind/bug (Categorizes issue or PR as related to a bug.)

rukender commented Apr 10, 2024

I'm not able to see vulnerability reports in one of my clusters, even though all the other report types show up fine. How can I fix this?

I'm using Trivy Operator version 0.19.1.

# kubectl get configauditreports -o wide -n trivy
NAME                                                          SCANNER   AGE     CRITICAL   HIGH   MEDIUM   LOW
limitrange-trivy                                              Trivy     17h     0          0      0        1
replicaset-8fd58f485                                          Trivy     3h49m   0          0      2        4
resourcequota-trivy                                           Trivy     17h     0          0      0        1
service-ops-k8s-secops-trivy-operator-trivy-operator-shared   Trivy     17h     0          0      0        1

These are all the CRDs:
# kubectl get crd|grep aqua
clustercompliancereports.aquasecurity.github.io             2024-04-09T16:34:17Z
clusterconfigauditreports.aquasecurity.github.io            2024-04-09T16:34:17Z
clusterinfraassessmentreports.aquasecurity.github.io        2024-04-09T16:34:17Z
clusterrbacassessmentreports.aquasecurity.github.io         2024-04-09T16:34:17Z
clustersbomreports.aquasecurity.github.io                   2024-04-09T16:34:17Z
configauditreports.aquasecurity.github.io                   2024-04-09T16:34:17Z
exposedsecretreports.aquasecurity.github.io                 2024-04-09T16:34:17Z
infraassessmentreports.aquasecurity.github.io               2024-04-09T16:34:17Z
rbacassessmentreports.aquasecurity.github.io                2024-04-09T16:34:17Z
sbomreports.aquasecurity.github.io                          2024-04-09T16:34:17Z
vulnerabilityreports.aquasecurity.github.io                 2024-04-09T16:34:17Z

# kubectl get vulnerabilityreports.aquasecurity.github.io -o wide -A
No resources found

# kubectl get vulnerabilityreports.aquasecurity.github.io -o wide -n trivy
No resources found in trivy namespace.

It's not reporting any vulnerabilities.
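
As a diagnostic sketch (namespace and deployment name taken from this cluster; adjust as needed), one way to check whether any vulnerability scan jobs are being created at all, and what the operator log says about them:

# kubectl get jobs -n trivy
# kubectl logs deployment/trivy-operator-trivy-operator-shared -n trivy | grep -i vuln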

Here is the trivy-operator ConfigMap:

# kubectl get configmap trivy-operator -n trivy
NAME             DATA   AGE
trivy-operator   10     21h

# kubectl describe configmap trivy-operator -n trivy
Name:         trivy-operator
Namespace:    trivy
Labels:       app.kubernetes.io/instance=ops-k8s-secops-trivy-operator
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=trivy-operator-shared
              app.kubernetes.io/version=0.17.1
              argocd.argoproj.io/instance=ops-k8s-secops-trivy-operator
              helm.sh/chart=trivy-operator-shared-0.19.1
Annotations:  <none>

Data
====
compliance.failEntriesLimit:
----
10
configAuditReports.scanner:
----
Trivy
nodeCollector.volumes:
----
[{"hostPath":{"path":"/var/lib/etcd"},"name":"var-lib-etcd"},{"hostPath":{"path":"/var/lib/kubelet"},"name":"var-lib-kubelet"},{"hostPath":{"path":"/var/lib/kube-scheduler"},"name":"var-lib-kube-scheduler"},{"hostPath":{"path":"/var/lib/kube-controller-manager"},"name":"var-lib-kube-controller-manager"},{"hostPath":{"path":"/etc/systemd"},"name":"etc-systemd"},{"hostPath":{"path":"/lib/systemd"},"name":"lib-systemd"},{"hostPath":{"path":"/etc/kubernetes"},"name":"etc-kubernetes"},{"hostPath":{"path":"/etc/cni/net.d/"},"name":"etc-cni-netd"}]
report.recordFailedChecksOnly:
----
true
scanJob.compressLogs:
----
true
scanJob.podTemplateContainerSecurityContext:
----
{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":true}
node.collector.imageRef:
----
registry.example.com:5000/aquasecurity/node-collector:0.0.6
nodeCollector.volumeMounts:
----
[{"mountPath":"/var/lib/etcd","name":"var-lib-etcd","readOnly":true},{"mountPath":"/var/lib/kubelet","name":"var-lib-kubelet","readOnly":true},{"mountPath":"/var/lib/kube-scheduler","name":"var-lib-kube-scheduler","readOnly":true},{"mountPath":"/var/lib/kube-controller-manager","name":"var-lib-kube-controller-manager","readOnly":true},{"mountPath":"/etc/systemd","name":"etc-systemd","readOnly":true},{"mountPath":"/lib/systemd/","name":"lib-systemd","readOnly":true},{"mountPath":"/etc/kubernetes","name":"etc-kubernetes","readOnly":true},{"mountPath":"/etc/cni/net.d/","name":"etc-cni-netd","readOnly":true}]
scanJob.podTemplatePodSecurityContext:
----
{"FsGroup":10000,"RunAsGroup":10000,"RunAsNonRoot":true,"RunAsUser":10000,"SupplementalGroups":[10000]}
vulnerabilityReports.scanner:
----
Trivy

BinaryData
====

Events:  <none>
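
Note that vulnerabilityReports.scanner is set to Trivy above, so the scanner itself is configured. As a sketch (deployment name taken from this cluster), the operator-side switch can be checked as well:

# kubectl get deployment trivy-operator-trivy-operator-shared -n trivy \
    -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="OPERATOR_VULNERABILITY_SCANNER_ENABLED")].value}'
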
rukender added the kind/bug label on Apr 10, 2024
chen-keinan (Collaborator) commented:

@rukender are you using helm install ... to deploy?

rukender (Author) commented:

> @rukender are you using helm install ... to deploy?

Yes, we are using the Helm chart to deploy to our k8s cluster via Argo CD.

chen-keinan (Collaborator) commented:

> @rukender are you using helm install ... to deploy?
>
> Yes, we are using the Helm chart to deploy to our k8s cluster via Argo CD.

can you please do the following:

  1. helm uninstall trivy-operator -n trivy-system
  2. delete all the CRDs:
     kubectl delete crd vulnerabilityreports.aquasecurity.github.io
     kubectl delete crd exposedsecretreports.aquasecurity.github.io
     kubectl delete crd configauditreports.aquasecurity.github.io
     kubectl delete crd clusterconfigauditreports.aquasecurity.github.io
     kubectl delete crd rbacassessmentreports.aquasecurity.github.io
     kubectl delete crd infraassessmentreports.aquasecurity.github.io
     kubectl delete crd clusterrbacassessmentreports.aquasecurity.github.io
     kubectl delete crd clustercompliancereports.aquasecurity.github.io
     kubectl delete crd clusterinfraassessmentreports.aquasecurity.github.io
     kubectl delete crd sbomreports.aquasecurity.github.io
     kubectl delete crd clustersbomreports.aquasecurity.github.io
     kubectl delete crd clustervulnerabilityreports.aquasecurity.github.io
  3. install the latest trivy-operator again via helm install ... (a quick verification sketch follows below)
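
After the reinstall, a quick sanity check (a sketch; both commands also appear elsewhere in this thread) that the CRDs came back and vulnerability reports start appearing:

kubectl get crd | grep aquasecurity.github.io
kubectl get vulnerabilityreports -A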

chen-keinan (Collaborator) commented:

@rukender any update on this issue?

rukender (Author) commented Apr 16, 2024

> @rukender any update on this issue?

@chen-keinan sorry for the delay; here is the update:

# kubectl get crds | grep aqua | awk '{print $1}' | xargs kubectl delete crd
customresourcedefinition.apiextensions.k8s.io "clustercompliancereports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "clusterconfigauditreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "clusterinfraassessmentreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "clusterrbacassessmentreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "clustersbomreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "configauditreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "exposedsecretreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "infraassessmentreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "rbacassessmentreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "sbomreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "vulnerabilityreports.aquasecurity.github.io" deleted

# kubectl get pods -n trivy
NAME                                                              READY   STATUS      RESTARTS   AGE
node-collector-7d4f56f9fc-dp2sh                                   0/1     Completed   0          10m
trivy-operator-trivy-operator-shared-75c77ck6dhg   1/1     Running     0          10m

# kubectl get vulnerabilityreports.aquasecurity.github.io -o wide
No resources found in default namespace.

I can still see the same issue after deleting the CRDs. Also, looking at the Argo CD UI, Trivy is only generating configauditreports.

I'm seeing error messages like this for different resources:

{"level":"error"
"ts":"2024-04-16T14:14:55Z"
"msg":"Reconciler error"
"controller":"resourcequota"
"controllerGroup":""
"controllerKind":"ResourceQuota"
"ResourceQuota":{"name":"example-cluster"
"namespace":"example-cluster"}
"namespace":"example-cluster"
"name":"example-cluster"
"reconcileID":"ba815386-1cd7-400d-a8a9-fde4c53ee8dc"
"error":"the server could not find the requested resource (post configauditreports.aquasecurity.github.io)"
"stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:227"}

rukender (Author) commented:

More details:

W0416 14:08:19.293929       1 reflector.go:535] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: failed to list *v1alpha1.VulnerabilityReport: the server could not find the requested resource (get vulnerabilityreports.aquasecurity.github.io)
E0416 14:08:19.293956       1 reflector.go:147] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: Failed to watch *v1alpha1.VulnerabilityReport: failed to list *v1alpha1.VulnerabilityReport: the server could not find the requested resource (get vulnerabilityreports.aquasecurity.github.io)
W0416 14:08:19.294003       1 reflector.go:535] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: failed to list *v1alpha1.ClusterConfigAuditReport: the server could not find the requested resource (get clusterconfigauditreports.aquasecurity.github.io)
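
These reflector failures usually mean the operator's client caches were built while the CRDs were absent (or the CRDs were deleted out from under a running operator), so it keeps requesting types the API server no longer serves. Restarting the operator forces rediscovery; a sketch, using the deployment name from this cluster:

# kubectl rollout restart deployment trivy-operator-trivy-operator-shared -n trivy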

chen-keinan (Collaborator) commented Apr 16, 2024

@rukender do you mind doing a simple test: spin up a local kind cluster and deploy trivy-operator to it with default settings via helm install:

helm install trivy-operator aqua/trivy-operator \
     --namespace trivy-system \
     --create-namespace \
     --version 0.21.4

This is just to make sure you are able to get vulnerability reports there, which would confirm whether the problem is environment-related, so we can think about other directions.
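
For completeness, a sketch of the full sequence (assuming kind is installed locally; the repo URL is the documented Aqua Helm charts repository):

kind create cluster
helm repo add aqua https://aquasecurity.github.io/helm-charts/
helm repo update
helm install trivy-operator aqua/trivy-operator \
     --namespace trivy-system \
     --create-namespace \
     --version 0.21.4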

rukender (Author) commented:

> @rukender do you mind doing a simple test: spin up a local kind cluster and deploy trivy-operator to it with default settings via helm install:
>
> helm install trivy-operator aqua/trivy-operator \
>      --namespace trivy-system \
>      --create-namespace \
>      --version 0.21.4
>
> This is just to make sure you are able to get vulnerability reports there, which would confirm whether the problem is environment-related, so we can think about other directions.

This is the ReplicaSet for trivy-operator:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: '1'
    deployment.kubernetes.io/max-replicas: '1'
    deployment.kubernetes.io/revision: '1'
  creationTimestamp: '2024-04-16T14:08:02Z'
  generation: 1
  labels:
    app.kubernetes.io/instance: trivy-operator
    app.kubernetes.io/name: trivy-operator-shared
    pod-template-hash: 75c77c8cd9
  name: trivy-operator-trivy-operator-shared-75c77c8cd9
  namespace: trivy
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: Deployment
      name: trivy-operator-trivy-operator-shared
      uid: 6ff98613-3352-4484-b7a8-1203177faa68
  resourceVersion: '660578862'
  uid: ce89ad8c-1dfc-42da-be75-dab9c8f31481
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: trivy-operator
      app.kubernetes.io/name: trivy-operator-shared
      pod-template-hash: 75c77c8cd9
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: trivy-operator
        app.kubernetes.io/name: trivy-operator-shared
        pod-template-hash: 75c77c8cd9
    spec:
      automountServiceAccountToken: true
      containers:
        - env:
            - name: OPERATOR_NAMESPACE
              value: trivy
            - name: OPERATOR_TARGET_NAMESPACES
            - name: OPERATOR_EXCLUDE_NAMESPACES
            - name: OPERATOR_TARGET_WORKLOADS
              value: >-
                pod,replicaset,replicationcontroller,statefulset,daemonset,cronjob,job
            - name: OPERATOR_SERVICE_ACCOUNT
              value: trivy-operator-trivy-operator-shared
            - name: OPERATOR_LOG_DEV_MODE
              value: 'false'
            - name: OPERATOR_SCAN_JOB_TTL
            - name: OPERATOR_SCAN_JOB_TIMEOUT
              value: 5m
            - name: OPERATOR_CONCURRENT_SCAN_JOBS_LIMIT
              value: '9'
            - name: OPERATOR_CONCURRENT_NODE_COLLECTOR_LIMIT
              value: '1'
            - name: OPERATOR_SCAN_JOB_RETRY_AFTER
              value: 30s
            - name: OPERATOR_BATCH_DELETE_LIMIT
              value: '10'
            - name: OPERATOR_BATCH_DELETE_DELAY
              value: 10s
            - name: OPERATOR_METRICS_BIND_ADDRESS
              value: ':8080'
            - name: OPERATOR_METRICS_FINDINGS_ENABLED
              value: 'true'
            - name: OPERATOR_METRICS_VULN_ID_ENABLED
              value: 'false'
            - name: OPERATOR_HEALTH_PROBE_BIND_ADDRESS
              value: ':9090'
            - name: OPERATOR_VULNERABILITY_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_SBOM_GENERATION_ENABLED
              value: 'true'
            - name: OPERATOR_VULNERABILITY_SCANNER_SCAN_ONLY_CURRENT_REVISIONS
              value: 'true'
            - name: OPERATOR_SCANNER_REPORT_TTL
              value: 24h
            - name: OPERATOR_CACHE_REPORT_TTL
              value: 120h
            - name: CONTROLLER_CACHE_SYNC_TIMEOUT
              value: 5m
            - name: OPERATOR_CONFIG_AUDIT_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_RBAC_ASSESSMENT_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_INFRA_ASSESSMENT_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_CONFIG_AUDIT_SCANNER_SCAN_ONLY_CURRENT_REVISIONS
              value: 'true'
            - name: OPERATOR_EXPOSED_SECRET_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_METRICS_EXPOSED_SECRET_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_METRICS_CONFIG_AUDIT_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_METRICS_RBAC_ASSESSMENT_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_METRICS_INFRA_ASSESSMENT_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_METRICS_IMAGE_INFO_ENABLED
              value: 'true'
            - name: OPERATOR_METRICS_CLUSTER_COMPLIANCE_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_WEBHOOK_BROADCAST_URL
            - name: OPERATOR_WEBHOOK_BROADCAST_TIMEOUT
              value: 30s
            - name: OPERATOR_SEND_DELETED_REPORTS
              value: 'false'
            - name: OPERATOR_PRIVATE_REGISTRY_SCAN_SECRETS_NAMES
              value: '{}'
            - name: OPERATOR_ACCESS_GLOBAL_SECRETS_SERVICE_ACCOUNTS
              value: 'true'
            - name: OPERATOR_BUILT_IN_TRIVY_SERVER
              value: 'false'
            - name: TRIVY_SERVER_HEALTH_CHECK_CACHE_EXPIRATION
              value: 10h
            - name: OPERATOR_MERGE_RBAC_FINDING_WITH_CONFIG_AUDIT
              value: 'false'
            - name: OPERATOR_CLUSTER_COMPLIANCE_ENABLED
              value: 'true'
          image: 'registry.example.com:5000/aquasecurity/trivy-operator:0.17.1'
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 10
            httpGet:
              path: /healthz/
              port: probes
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: trivy-operator-shared
          ports:
            - containerPort: 8080
              name: metrics
              protocol: TCP
            - containerPort: 9090
              name: probes
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz/
              port: probes
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: '5'
              memory: 10Gi
            requests:
              cpu: '5'
              memory: 10Gi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 2000
        runAsGroup: 2000
        runAsUser: 2000
        supplementalGroups:
          - 2000
      serviceAccount: trivy-operator-trivy-operator-shared
      serviceAccountName: trivy-operator-trivy-operator-shared
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1

chen-keinan (Collaborator) commented Apr 17, 2024

@rukender have you tested trivy-operator on a local kind cluster?

rukender (Author) commented Apr 17, 2024

@chen-keinan I ran it locally and it is working fine.

% kubectl get vulnerabilityreports --all-namespaces -o wide
NAMESPACE            NAME                                                   REPOSITORY                       TAG                  SCANNER   AGE   CRITICAL   HIGH   MEDIUM   LOW   UNKNOWN
kube-system          daemonset-kindnet-kindnet-cni                          kindest/kindnetd                 v20240202-8f1494ea   Trivy     87s   0          4      19       24    0
kube-system          daemonset-kube-proxy-kube-proxy                        kube-proxy                       v1.29.2              Trivy     81s   0          2      6        17    0
kube-system          pod-8b4f55974                                          kube-controller-manager          v1.29.2              Trivy     78s   0          2      2        0     0
kube-system          pod-etcd-kind-control-plane-etcd                       etcd                             3.5.10-0             Trivy     83s   0          4      8        0     0
kube-system          pod-kube-apiserver-kind-control-plane-kube-apiserver   kube-apiserver                   v1.29.2              Trivy     88s   0          1      2        0     0
kube-system          pod-kube-scheduler-kind-control-plane-kube-scheduler   kube-scheduler                   v1.29.2              Trivy     87s   0          1      2        0     0
kube-system          replicaset-coredns-76f75df574-coredns                  coredns/coredns                  v1.11.1              Trivy     83s   0          3      5        0     0
local-path-storage   replicaset-5cbdfd7595                                  kindest/local-path-provisioner   v20240202-8f1494ea   Trivy     77s   0          2      11       13    0
trivy-system         replicaset-trivy-operator-84b86599cb-trivy-operator    aquasecurity/trivy-operator      0.19.4               Trivy     78s   0          0      1        2     0

Does this mean we have an issue with the Helm chart in my cluster? Or is there any other possible cause you can think of?

chen-keinan (Collaborator) commented Apr 17, 2024

@rukender I suspect it is related to the cluster environment or configuration. Have you tried running a default helm install ... on your cluster, or do you use different settings?
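
One way to compare (a sketch; the release name here is taken from the Argo CD instance label above, and helm get values only works if the release is actually registered with Helm, which may not be the case when Argo CD renders the chart itself) is to diff the deployed values against the chart defaults:

helm get values ops-k8s-secops-trivy-operator -n trivy
helm show values aqua/trivy-operator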

rukender (Author) commented Apr 17, 2024 via email
