Repeated creation of scan-vulnerabilityreport pods #1800

Open
szEvEz opened this issue Jan 25, 2024 · 2 comments
Labels
kind/bug (Categorizes issue or PR as related to a bug), priority/backlog (Higher priority than priority/awaiting-more-evidence), target/kubernetes (Issues relating to kubernetes cluster scanning)

Comments


szEvEz commented Jan 25, 2024

Hi,

After upgrading to the latest version of the operator, we've stumbled upon the following behaviour:

  • vulnerability-report pods were spawned continuously
$ kubectl get pods

NAME                                                             READY   STATUS      RESTARTS   AGE
scan-vulnerabilityreport-54cfcf859c-pzj92                        0/1     Init:0/1    0          42s
scan-vulnerabilityreport-66fc6b4dcc-cpqbc                        0/1     Init:0/1    0          46s
scan-vulnerabilityreport-6b44d4cb75-w9g8n                        1/1     Running     0          12s
scan-vulnerabilityreport-76f5956dbf-gk5bx                        0/1     Completed   0          46s
scan-vulnerabilityreport-7f8d7cf6fd-dhpgj                        0/1     Completed   0          49s
scan-vulnerabilityreport-8596cc7758-xp6rh                        0/1     Init:0/1    0          8s
scan-vulnerabilityreport-9fdbbb4d8-6fd9z                         0/1     Init:0/1    0          43s
scan-vulnerabilityreport-c66b9cf44-z8n2g                         0/1     Init:0/1    0          41s
scan-vulnerabilityreport-d48d95446-9vx9w                         0/1     Completed   0          47s
scan-vulnerabilityreport-db88687bd-mhrpz                         0/1     Init:0/1    0          9s
trivy-operator-57f76cd687-2nr78                                  1/1     Running     0          51m

Even after several hours, the same behaviour persisted.

  • there were no new deployments on that cluster
  • I've inspected the scan-vulnerabilityreport pods to check what's going on:
$ kubectl logs scan-vulnerabilityreport-78569dffcb-wrvpd

{
  "SchemaVersion": 2,
  "CreatedAt": "2024-01-25T08:24:15.355624798Z",
  "ArtifactName": "/sbom-celery/sbom-celery.json",
  "ArtifactType": "cyclonedx",
  "Metadata": {
    "OS": {
      "Family": "debian",
      "Name": "11.7"
    },
    "ImageConfig": {
      "architecture": "",
      "created": "0001-01-01T00:00:00Z",
      ...
}
  • all of those reports had an SBOM-related ArtifactName
  • After configuring sbomGenerationEnabled: false (see the snippet below), this behaviour stopped
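For reference, disabling it in our case amounted to roughly the following in the Helm chart values (a sketch; the operator.sbomGenerationEnabled key is what we set, adjust to your chart version):

operator:
  # assumption: this key maps to OPERATOR_SBOM_GENERATION_ENABLED in our chart version
  sbomGenerationEnabled: false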

Environment:

  • Trivy-Operator version
    • Chart version 0.20.2
    • App version 0.18.2
  • Kubernetes version v1.28.3
szEvEz added the kind/bug label on Jan 25, 2024
@chen-keinan
Collaborator

@szEvEz can you please provide more info/context on the log?

  • Are vulnerability reports getting generated?
  • Is it reproducible?


chen-keinan commented Jan 28, 2024

@szEvEz note that reconciliation does not happen only on a new deployment; it also happens when the report TTL has been exceeded.
The scan job (above) is triggered when the report TTL is exceeded, and it reuses the existing SBOM to make scanning faster, rather than downloading and inspecting the image again.
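For context, the rescan frequency is governed by the report TTL, which can be tuned via the Helm chart values; a minimal sketch, assuming the operator.vulnerabilityScannerReportTTL key (default "24h" in recent chart versions):

operator:
  # assumption: key name and default taken from recent chart versions
  vulnerabilityScannerReportTTL: "24h"   # raise this to rescan less frequently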

chen-keinan added the target/kubernetes and priority/backlog labels on Feb 19, 2024