
Load .trivyignore (or ignore-policy) from ConfigMaps in target namespaces #1857

Open
maltemorgenstern opened this issue Feb 16, 2024 Discussed in #1847 · 4 comments
Labels: kind/feature · priority/backlog · target/kubernetes

Comments

@maltemorgenstern (Contributor)

Discussed in #1847

Originally posted by maltemorgenstern February 8, 2024
Hey there,
I started to play around with the trivy-operator and wanted to get your thoughts on an issue/question.

Current situation

We have a Kubernetes cluster managed by a platform team. It contains various shared services (logging, metrics, GitOps, ...) and can be used by dev teams: they can request their own namespace and start deploying their applications (pods).

Adding the trivy-operator as a managed service would increase cluster security while reducing effort for our developers. Trivy would be managed by the platform team and would automatically scan new workloads deployed by the teams - so they would not need to worry about how to scan images.

The findings could be passed to each team using a grafana dashboard - and even alerting on new findings would be possible out of the box 🚀

The problem

But - as always - there will be false-positive findings that need to be suppressed (in order to actually spot critical vulnerabilities). As far as I can tell, this can be done in two ways:

  • Configure a .trivyignore file that contains CVE IDs - but it would apply to all workloads in the cluster
  • Configure (multiple) trivy.ignorePolicy rego rules - these can be scoped to namespaces or even specific workloads

In both scenarios the config would have to be placed in the trivy-operator-trivy-config ConfigMap - inside the operator namespace.
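For reference, a minimal sketch of what that central config looks like, assuming the `trivy.ignoreFile` and `trivy.ignorePolicy` keys described in the trivy-operator configuration docs (the namespace, CVE IDs, and policy body here are purely illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: trivy-operator-trivy-config
  namespace: trivy-system          # operator namespace (illustrative)
data:
  # cluster-wide ignore file: a plain list of CVE IDs, applied to every workload
  trivy.ignoreFile: |
    CVE-2022-0001
    CVE-2022-0002
  # rego ignore policy; trivy-operator also supports scoping policies
  # to namespaces/workloads via suffixed config keys
  trivy.ignorePolicy: |
    package trivy

    default ignore = false

    ignore {
      input.VulnerabilityID == "CVE-2022-0003"
    }
```

Both mechanisms live in this single ConfigMap, which is exactly why everything funnels through whoever owns the operator namespace.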

This would mean that the platform team would have to maintain all ignore configs - for each team and each of their workloads, which would be a lot of work.

I think it would be a great feature to allow teams to configure their own .trivyignore (or trivy.ignorePolicy) inside a ConfigMap deployed to their namespace. This way the teams could manage findings themselves and would not depend on the platform team to maintain a central config.

This would require the trivy-operator to read multiple ConfigMaps from other namespaces and merge the configs before applying them.
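To make the proposal concrete: a team-managed ConfigMap might look roughly like the sketch below. The name `trivy-ignore`, the data key, and any convention for how the operator would discover and merge it are all hypothetical - this is the requested feature, not something trivy-operator does today:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: trivy-ignore               # hypothetical, team-managed ConfigMap
  namespace: team-a                # a dev team's own namespace
data:
  # hypothetical key the operator would merge into its effective ignore config
  # when scanning workloads in this namespace
  trivy.ignoreFile: |
    # false positive in team-a's base image
    CVE-2021-9999
```

The operator would then combine this with the central `trivy-operator-trivy-config` before running a scan against workloads in `team-a`.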

What are your thoughts about this? 🙂

@chen-keinan (Collaborator)

@maltemorgenstern contributions are welcome.

@chen-keinan added the kind/feature, priority/backlog, and target/kubernetes labels on Feb 19, 2024
@teimyBr commented Feb 28, 2024

This feature would also be nice to have - it would help us a lot.

@chen-keinan (Collaborator) commented Mar 11, 2024

@maltemorgenstern maybe a solution similar to the ignore policy, but with an ignoreFile, would make sense for your use case?

@maltemorgenstern (Contributor, Author)

@chen-keinan there are for sure some similarities between these two features.

Having the ability to apply different ignoreFiles (like ignorePolicies) for different namespaces/workloads is one part of this feature request.

But the other (and probably more complex) part is the ability to load the ignoreFile from somewhere other than the values.yaml - preferably from ConfigMaps in the target namespaces.

There might be some similarities to #1223 here - maybe a generic solution could help to solve both these issues (but that is just an idea).
