
chore(deps): update dependency linuxkit/linuxkit to v1.2.0 #3558

Merged
merged 1 commit into main from renovate/linuxkit-linuxkit-1.x on Mar 11, 2024

Conversation

uniget-bot

This PR contains the following updates:

Package | Update | Change
linuxkit/linuxkit | minor | 1.0.1 -> 1.2.0

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

linuxkit/linuxkit (linuxkit/linuxkit)

v1.2.0

Compare Source

What's Changed

New Contributors

Full Changelog: linuxkit/linuxkit@v1.0.1...v1.2.0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.


@nicholasdille-bot nicholasdille-bot left a comment


Auto-approved because label type/renovate is present.


🔍 Vulnerabilities of ghcr.io/uniget-org/tools/linuxkit:1.2.0

📦 Image Reference: ghcr.io/uniget-org/tools/linuxkit:1.2.0
digest: sha256:5cf28fc5b59fc9001246252dcd36f9b035c4295d22c20ec74ef8483f9da55983
vulnerabilities: critical: 4 high: 22 medium: 16 low: 1 unspecified: 3
platform: linux/amd64
size: 23 MB
packages: 116
critical: 2 high: 14 medium: 8 low: 0 stdlib 1.19.2 (golang)

pkg:golang/stdlib@1.19.2

critical: CVE-2023-24540

Affected range: <1.19.9
Fixed version: 1.19.9
Description

Not all valid JavaScript whitespace characters are considered to be whitespace. Templates containing whitespace characters outside of the character set "\t\n\f\r\u0020\u2028\u2029" in JavaScript contexts that also contain actions may not be properly sanitized during execution.

critical: CVE-2023-24538

Affected range: <1.19.8
Fixed version: 1.19.8
Description

Templates do not properly consider backticks (`) as Javascript string delimiters, and do not escape them as expected.

Backticks are used, since ES6, for JS template literals. If a template contains a Go template action within a Javascript template literal, the contents of the action can be used to terminate the literal, injecting arbitrary Javascript code into the Go template.

As ES6 template literals are rather complex, and themselves can do string interpolation, the decision was made to simply disallow Go template actions from being used inside of them (e.g. "var a = {{.}}"), since there is no obviously safe way to allow this behavior. This takes the same approach as github.com/google/safehtml.

With fix, Template.Parse returns an Error when it encounters templates like this, with an ErrorCode of value 12. This ErrorCode is currently unexported, but will be exported in the release of Go 1.21.

Users who rely on the previous behavior can re-enable it using the GODEBUG flag jstmpllitinterp=1, with the caveat that backticks will now be escaped. This should be used with caution.

high: CVE-2023-29403

Affected range: <1.19.10
Fixed version: 1.19.10
Description

On Unix platforms, the Go runtime does not behave differently when a binary is run with the setuid/setgid bits. This can be dangerous in certain cases, such as when dumping memory state, or assuming the status of standard i/o file descriptors.

If a setuid/setgid binary is executed with standard I/O file descriptors closed, opening any files can result in unexpected content being read or written with elevated privileges. Similarly, if a setuid/setgid program is terminated, either via panic or signal, it may leak the contents of its registers.

high: CVE-2023-45287

Affected range: <1.20.0
Fixed version: 1.20.0
Description

Before Go 1.20, the RSA based TLS key exchanges used the math/big library, which is not constant time. RSA blinding was applied to prevent timing attacks, but analysis shows this may not have been fully effective. In particular it appears as if the removal of PKCS#1 padding may leak timing information, which in turn could be used to recover session key bits.

In Go 1.20, the crypto/tls library switched to a fully constant time RSA implementation, which we do not believe exhibits any timing side channels.

high: CVE-2023-39325

Affected range: <1.20.10
Fixed version: 1.20.10
Description

A malicious HTTP/2 client which rapidly creates requests and immediately resets them can cause excessive server resource consumption. While the total number of requests is bounded by the http2.Server.MaxConcurrentStreams setting, resetting an in-progress request allows the attacker to create a new request while the existing one is still executing.

With the fix applied, HTTP/2 servers now bound the number of simultaneously executing handler goroutines to the stream concurrency limit (MaxConcurrentStreams). New requests arriving when at the limit (which can only happen after the client has reset an existing, in-flight request) will be queued until a handler exits. If the request queue grows too large, the server will terminate the connection.

This issue is also fixed in golang.org/x/net/http2 for users manually configuring HTTP/2.

The default stream concurrency limit is 250 streams (requests) per HTTP/2 connection. This value may be adjusted using the golang.org/x/net/http2 package; see the Server.MaxConcurrentStreams setting and the ConfigureServer function.
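For illustration, a minimal sketch of lowering that per-connection limit with golang.org/x/net/http2 follows; the handler, addresses, certificate files, and the value 100 are placeholders, not guidance from the advisory.

package main

import (
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	srv := &http.Server{Addr: ":8443", Handler: mux}

	// Lower the default of 250 concurrent streams per HTTP/2 connection.
	if err := http2.ConfigureServer(srv, &http2.Server{MaxConcurrentStreams: 100}); err != nil {
		panic(err)
	}

	// cert.pem and key.pem are placeholders; HTTP/2 here requires TLS.
	if err := srv.ListenAndServeTLS("cert.pem", "key.pem"); err != nil {
		panic(err)
	}
}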

high: CVE-2023-24537

Affected range: <1.19.8
Fixed version: 1.19.8
Description

Calling any of the Parse functions on Go source code which contains //line directives with very large line numbers can cause an infinite loop due to integer overflow.

high: CVE-2023-24536

Affected range: <1.19.8
Fixed version: 1.19.8
Description

Multipart form parsing can consume large amounts of CPU and memory when processing form inputs containing very large numbers of parts.

This stems from several causes:

  1. mime/multipart.Reader.ReadForm limits the total memory a parsed multipart form can consume. ReadForm can undercount the amount of memory consumed, leading it to accept larger inputs than intended.
  2. Limiting total memory does not account for increased pressure on the garbage collector from large numbers of small allocations in forms with many parts.
  3. ReadForm can allocate a large number of short-lived buffers, further increasing pressure on the garbage collector.

The combination of these factors can permit an attacker to cause a program that parses multipart forms to consume large amounts of CPU and memory, potentially resulting in a denial of service. This affects programs that use mime/multipart.Reader.ReadForm, as well as form parsing in the net/http package with the Request methods FormFile, FormValue, ParseMultipartForm, and PostFormValue.

With fix, ReadForm now does a better job of estimating the memory consumption of parsed forms, and performs many fewer short-lived allocations.

In addition, the fixed mime/multipart.Reader imposes the following limits on the size of parsed forms:

  1. Forms parsed with ReadForm may contain no more than 1000 parts. This limit may be adjusted with the environment variable GODEBUG=multipartmaxparts=.
  2. Form parts parsed with NextPart and NextRawPart may contain no more than 10,000 header fields. In addition, forms parsed with ReadForm may contain no more than 10,000 header fields across all parts. This limit may be adjusted with the environment variable GODEBUG=multipartmaxheaders=.
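For example, both limits above could be raised at process start via GODEBUG; the binary name and the values below are purely illustrative.

GODEBUG=multipartmaxparts=5000,multipartmaxheaders=20000 ./server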

high: CVE-2023-24534

Affected range: <1.19.8
Fixed version: 1.19.8
Description

HTTP and MIME header parsing can allocate large amounts of memory, even when parsing small inputs, potentially leading to a denial of service.

Certain unusual patterns of input data can cause the common function used to parse HTTP and MIME headers to allocate substantially more memory than required to hold the parsed headers. An attacker can exploit this behavior to cause an HTTP server to allocate large amounts of memory from a small request, potentially leading to memory exhaustion and a denial of service.

With fix, header parsing now correctly allocates only the memory required to hold parsed headers.

high: CVE-2022-41725

Affected range: <1.19.6
Fixed version: 1.19.6
Description

A denial of service is possible from excessive resource consumption in net/http and mime/multipart.

Multipart form parsing with mime/multipart.Reader.ReadForm can consume largely unlimited amounts of memory and disk files. This also affects form parsing in the net/http package with the Request methods FormFile, FormValue, ParseMultipartForm, and PostFormValue.

ReadForm takes a maxMemory parameter, and is documented as storing "up to maxMemory bytes +10MB (reserved for non-file parts) in memory". File parts which cannot be stored in memory are stored on disk in temporary files. The unconfigurable 10MB reserved for non-file parts is excessively large and can potentially open a denial of service vector on its own. However, ReadForm did not properly account for all memory consumed by a parsed form, such as map entry overhead, part names, and MIME headers, permitting a maliciously crafted form to consume well over 10MB. In addition, ReadForm contained no limit on the number of disk files created, permitting a relatively small request body to create a large number of disk temporary files.

With fix, ReadForm now properly accounts for various forms of memory overhead, and should now stay within its documented limit of 10MB + maxMemory bytes of memory consumption. Users should still be aware that this limit is high and may still be hazardous.

In addition, ReadForm now creates at most one on-disk temporary file, combining multiple form parts into a single temporary file. The mime/multipart.File interface type's documentation states, "If stored on disk, the File's underlying concrete type will be an *os.File.". This is no longer the case when a form contains more than one file part, due to this coalescing of parts into a single file. The previous behavior of using distinct files for each form part may be reenabled with the environment variable GODEBUG=multipartfiles=distinct.

Users should be aware that multipart.ReadForm and the http.Request methods that call it do not limit the amount of disk consumed by temporary files. Callers can limit the size of form data with http.MaxBytesReader.
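A minimal sketch of that mitigation follows; the handler, route, and size limits are illustrative and not part of the advisory.

package main

import "net/http"

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// Cap the whole request body at 10 MiB before any form parsing happens,
	// which also bounds the amount of data that can spill to temporary files.
	r.Body = http.MaxBytesReader(w, r.Body, 10<<20)

	// Keep at most 1 MiB of form data in memory.
	if err := r.ParseMultipartForm(1 << 20); err != nil {
		http.Error(w, "form too large or malformed", http.StatusBadRequest)
		return
	}
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	http.ListenAndServe(":8080", nil)
}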

high: CVE-2022-41724

Affected range: <1.19.6
Fixed version: 1.19.6
Description

Large handshake records may cause panics in crypto/tls.

Both clients and servers may send large TLS handshake records which cause servers and clients, respectively, to panic when attempting to construct responses.

This affects all TLS 1.3 clients, TLS 1.2 clients which explicitly enable session resumption (by setting Config.ClientSessionCache to a non-nil value), and TLS 1.3 servers which request client certificates (by setting Config.ClientAuth >= RequestClientCert).

high: CVE-2022-41723

Affected range: <1.19.6
Fixed version: 1.19.6
Description

A maliciously crafted HTTP/2 stream could cause excessive CPU consumption in the HPACK decoder, sufficient to cause a denial of service from a small number of small requests.

high: CVE-2022-41722

Affected range: <1.19.6
Fixed version: 1.19.6
Description

A path traversal vulnerability exists in filepath.Clean on Windows.

On Windows, the filepath.Clean function could transform an invalid path such as "a/../c:/b" into the valid path "c:\b". This transformation of a relative (if invalid) path into an absolute path could enable a directory traversal attack.

After fix, the filepath.Clean function transforms this path into the relative (but still invalid) path ".\c:\b".

high: CVE-2022-41720

Affected range: >=1.19.0-0, <1.19.4
Fixed version: 1.19.4
Description

On Windows, restricted files can be accessed via os.DirFS and http.Dir.

The os.DirFS function and http.Dir type provide access to a tree of files rooted at a given directory. These functions permit access to Windows device files under that root. For example, os.DirFS("C:/tmp").Open("COM1") opens the COM1 device. Both os.DirFS and http.Dir only provide read-only filesystem access.

In addition, on Windows, an os.DirFS for the directory "\" (the root of the current drive) can permit a maliciously crafted path to escape from the drive and access any path on the system.

With fix applied, the behavior of os.DirFS("") has changed. Previously, an empty root was treated equivalently to "/", so os.DirFS("").Open("tmp") would open the path "/tmp". This now returns an error.

high: CVE-2022-41716

Affected range: >=1.19.0-0, <1.19.3
Fixed version: 1.19.3
Description

Due to unsanitized NUL values, attackers may be able to maliciously set environment variables on Windows.

In syscall.StartProcess and os/exec.Cmd, invalid environment variable values containing NUL values are not properly checked for. A malicious environment variable value can exploit this behavior to set a value for a different environment variable. For example, the environment variable string "A=B\x00C=D" sets the variables "A=B" and "C=D".

high: CVE-2023-29400

Affected range: <1.19.9
Fixed version: 1.19.9
Description

Templates containing actions in unquoted HTML attributes (e.g. "attr={{.}}") executed with empty input can result in output with unexpected results when parsed due to HTML normalization rules. This may allow injection of arbitrary attributes into tags.

high: CVE-2023-24539

Affected range: <1.19.9
Fixed version: 1.19.9
Description

Angle brackets (<>) are not considered dangerous characters when inserted into CSS contexts. Templates containing multiple actions separated by a '/' character can result in unexpectedly closing the CSS context and allowing for injection of unexpected HTML, if executed with untrusted input.

medium: CVE-2023-29406

Affected range: <1.19.11
Fixed version: 1.19.11
Description

The HTTP/1 client does not fully validate the contents of the Host header. A maliciously crafted Host header can inject additional headers or entire requests.

With fix, the HTTP/1 client now refuses to send requests containing an invalid Request.Host or Request.URL.Host value.

medium: CVE-2023-39319

Affected range: <1.20.8
Fixed version: 1.20.8
Description

The html/template package does not apply the proper rules for handling occurrences of "<script", "<!--", and "</script" within JS literals in <script> contexts. This may cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped. This could be leveraged to perform an XSS attack.

medium: CVE-2023-39318

Affected range: <1.20.8
Fixed version: 1.20.8
Description

The html/template package does not properly handle HTML-like "<!--" and "-->" comment tokens, nor hashbang "#!" comment tokens, in <script> contexts. This may cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped. This may be leveraged to perform an XSS attack.

medium: CVE-2023-45284

Affected range: <1.20.11
Fixed version: 1.20.11
Description

On Windows, the IsLocal function does not correctly detect reserved device names in some cases.

Reserved names followed by spaces, such as "COM1 ", and reserved names "COM" and "LPT" followed by superscript 1, 2, or 3, are incorrectly reported as local.

With fix, IsLocal now correctly reports these names as non-local.

medium: CVE-2023-39326

Affected range: <1.20.12
Fixed version: 1.20.12
Description

A malicious HTTP sender can use chunk extensions to cause a receiver reading from a request or response body to read many more bytes from the network than are in the body.

A malicious HTTP client can further exploit this to cause a server to automatically read a large amount of data (up to about 1GiB) when a handler fails to read the entire body of a request.

Chunk extensions are a little-used HTTP feature which permit including additional metadata in a request or response body sent using the chunked encoding. The net/http chunked encoding reader discards this metadata. A sender can exploit this by inserting a large metadata segment with each byte transferred. The chunk reader now produces an error if the ratio of real body to encoded bytes grows too small.

medium: CVE-2023-29409

Affected range: <1.19.12
Fixed version: 1.19.12
Description

Extremely large RSA keys in certificate chains can cause a client/server to expend significant CPU time verifying signatures.

With fix, the size of RSA keys transmitted during handshakes is restricted to <= 8192 bits.

Based on a survey of publicly trusted RSA keys, there are currently only three certificates in circulation with keys larger than this, and all three appear to be test certificates that are not actively deployed. It is possible there are larger keys in use in private PKIs, but we target the web PKI, so causing breakage here in the interests of increasing the default safety of users of crypto/tls seems reasonable.

medium: CVE-2023-24532

Affected range: <1.19.7
Fixed version: 1.19.7
Description

The ScalarMult and ScalarBaseMult methods of the P256 Curve may return an incorrect result if called with some specific unreduced scalars (a scalar larger than the order of the curve).

This does not impact usages of crypto/ecdsa or crypto/ecdh.

medium: CVE-2022-41717

Affected range: >=1.19.0-0, <1.19.4
Fixed version: 1.19.4
Description

An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests.

HTTP/2 server connections contain a cache of HTTP header keys sent by the client. While the total number of entries in this cache is capped, an attacker sending very large keys can cause the server to allocate approximately 64 MiB per open connection.

critical: 2 high: 1 medium: 2 low: 0 github.com/moby/buildkit 0.11.1 (golang)

pkg:golang/github.com/moby/buildkit@0.11.1

critical 10.0: CVE-2024-23652 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: <0.12.5
Fixed version: 0.12.5
CVSS Score: 10
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:H/A:H
Description

Impact

A malicious BuildKit frontend or Dockerfile using RUN --mount could trick the feature that removes empty files created for the mountpoints into removing a file outside the container, from the host system.

Patches

The issue has been fixed in v0.12.5

Workarounds

Avoid using a BuildKit frontend from an untrusted source or building an untrusted Dockerfile containing the RUN --mount feature.

References

critical 9.8: CVE-2024-23653 Incorrect Authorization

Affected range: <0.12.5
Fixed version: 0.12.5
CVSS Score: 9.8
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Description

Impact

In addition to running containers as build steps, BuildKit also provides APIs for running interactive containers based on built images. It was possible to use these APIs to ask BuildKit to run a container with elevated privileges. Normally, running such containers is only allowed if special security.insecure entitlement is enabled both by buildkitd configuration and allowed by the user initializing the build request.

Patches

The issue has been fixed in v0.12.5 .

Workarounds

Avoid using BuildKit frontends from untrusted sources. A frontend image is usually specified as the #syntax line in your Dockerfile, or with the --frontend flag when using the buildctl build command.

References

high 8.7: CVE-2024-23651 Concurrent Execution using Shared Resource with Improper Synchronization ('Race Condition')

Affected range: <0.12.5
Fixed version: 0.12.5
CVSS Score: 8.7
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:H/I:H/A:N
Description

Impact

Two malicious build steps running in parallel sharing the same cache mounts with subpaths could cause a race condition that can lead to files from the host system being accessible to the build container.

Patches

The issue has been fixed in v0.12.5

Workarounds

Avoid using a BuildKit frontend from an untrusted source or building an untrusted Dockerfile containing cache mounts with --mount=type=cache,source=... options.

References

https://www.openwall.com/lists/oss-security/2019/05/28/1

medium 6.5: CVE-2023-26054 Exposure of Sensitive Information to an Unauthorized Actor

Affected range: >=0.10.0, <0.11.4
Fixed version: 0.11.4
CVSS Score: 6.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N
Description

When the user sends a build request that contains a Git URL that contains credentials and the build creates a provenance attestation describing that build, these credentials could be visible from the provenance attestation.

Git URL can be passed in two ways:

  1. Invoking a build directly from a URL with credentials:

buildctl build --frontend dockerfile.v0 --context https://<credentials>@url/repo.git

The equivalent in docker buildx would be:

docker buildx build https://<credentials>@url/repo.git

  2. If the client sends additional VCS info hint parameters on builds from a local source. Usually, that would mean reading the origin URL from the .git/config file.

Thanks to Oscar Alberto Tovar for discovering the issue.

Impact

When a build is performed under specific conditions where credentials were passed to BuildKit they may be visible to everyone who has access to provenance attestation.

Provenance attestations and VCS info hints were added in version v0.11.0. Previous versions are not vulnerable.

In v0.10, when building directly from Git URL, the same URL could be visible in BuildInfo structure that is a predecessor of Provenance attestations. Previous versions are not vulnerable.

Note: Docker Build-push Github action builds from Git URLs by default but is not affected by this issue even when working with private repositories because the credentials are passed with build secrets and not with URLs.

Patches

Bug is fixed in v0.11.4 .

Workarounds

It is recommended to pass credentials with build secrets when building directly from Git URL as a more secure alternative than modifying the URL.

In Docker Buildx, VCS info hint can be disabled by setting BUILDX_GIT_INFO=0. buildctl does not set VCS hints based on .git directory, and values would need to be passed manually with --opt.
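As a rough illustration of the build-secret approach (exact flag syntax depends on the Buildx/BuildKit version, and the repository URL is a placeholder), the token can be supplied via the GIT_AUTH_TOKEN secret instead of being embedded in the URL:

export GIT_AUTH_TOKEN=<token>
docker buildx build --secret id=GIT_AUTH_TOKEN,env=GIT_AUTH_TOKEN https://url/repo.git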

References

medium 5.3: CVE-2024-23650 Improper Check for Unusual or Exceptional Conditions

Affected range: <0.12.5
Fixed version: 0.12.5
CVSS Score: 5.3
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L
Description

Impact

A malicious BuildKit client or frontend could craft a request that could lead to BuildKit daemon crashing with a panic.

Patches

The issue has been fixed in v0.12.5

Workarounds

Avoid using BuildKit frontends from untrusted sources. A frontend image is usually specified as the #syntax line in your Dockerfile, or with the --frontend flag when using the buildctl build command.

References

critical: 0 high: 2 medium: 2 low: 0 unspecified: 1 golang.org/x/net 0.4.0 (golang)

pkg:golang/golang.org/x/net@0.4.0

high 7.5: CVE-2023-39325 Uncontrolled Resource Consumption

Affected range: <0.17.0
Fixed version: 0.17.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

A malicious HTTP/2 client which rapidly creates requests and immediately resets them can cause excessive server resource consumption. While the total number of requests is bounded by the http2.Server.MaxConcurrentStreams setting, resetting an in-progress request allows the attacker to create a new request while the existing one is still executing.

With the fix applied, HTTP/2 servers now bound the number of simultaneously executing handler goroutines to the stream concurrency limit (MaxConcurrentStreams). New requests arriving when at the limit (which can only happen after the client has reset an existing, in-flight request) will be queued until a handler exits. If the request queue grows too large, the server will terminate the connection.

This issue is also fixed in golang.org/x/net/http2 for users manually configuring HTTP/2.

The default stream concurrency limit is 250 streams (requests) per HTTP/2 connection. This value may be adjusted using the golang.org/x/net/http2 package; see the Server.MaxConcurrentStreams setting and the ConfigureServer function.

high 7.5: CVE-2022-41723 Uncontrolled Resource Consumption

Affected range: <0.7.0
Fixed version: 0.7.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

A maliciously crafted HTTP/2 stream could cause excessive CPU consumption in the HPACK decoder, sufficient to cause a denial of service from a small number of small requests.

medium 6.1: CVE-2023-3978 Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

Affected range: <0.13.0
Fixed version: 0.13.0
CVSS Score: 6.1
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N
Description

Text nodes not in the HTML namespace are incorrectly literally rendered, causing text which should be escaped to not be. This could lead to an XSS attack.

medium 5.3: CVE-2023-44487 Uncontrolled Resource Consumption

Affected range: <0.17.0
Fixed version: 0.17.0
CVSS Score: 5.3
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L
Description

HTTP/2 Rapid reset attack

The HTTP/2 protocol allows clients to indicate to the server that a previous stream should be canceled by sending a RST_STREAM frame. The protocol does not require the client and server to coordinate the cancellation in any way, the client may do it unilaterally. The client may also assume that the cancellation will take effect immediately when the server receives the RST_STREAM frame, before any other data from that TCP connection is processed.

Abuse of this feature is called a Rapid Reset attack because it relies on the ability for an endpoint to send a RST_STREAM frame immediately after sending a request frame, which makes the other endpoint start working and then rapidly resets the request. The request is canceled, but leaves the HTTP/2 connection open.

The HTTP/2 Rapid Reset attack built on this capability is simple: The client opens a large number of streams at once as in the standard HTTP/2 attack, but rather than waiting for a response to each request stream from the server or proxy, the client cancels each request immediately.

The ability to reset streams immediately allows each connection to have an indefinite number of requests in flight. By explicitly canceling the requests, the attacker never exceeds the limit on the number of concurrent open streams. The number of in-flight requests is no longer dependent on the round-trip time (RTT), but only on the available network bandwidth.

In a typical HTTP/2 server implementation, the server will still have to do significant amounts of work for canceled requests, such as allocating new stream data structures, parsing the query and doing header decompression, and mapping the URL to a resource. For reverse proxy implementations, the request may be proxied to the backend server before the RST_STREAM frame is processed. The client on the other hand paid almost no costs for sending the requests. This creates an exploitable cost asymmetry between the server and the client.

Multiple software artifacts implementing HTTP/2 are affected. This advisory was originally ingested from the swift-nio-http2 repo advisory and their original content follows.

swift-nio-http2 specific advisory

swift-nio-http2 is vulnerable to a denial-of-service vulnerability in which a malicious client can create and then reset a large number of HTTP/2 streams in a short period of time. This causes swift-nio-http2 to commit to a large amount of expensive work which it then throws away, including creating entirely new Channels to serve the traffic. This can easily overwhelm an EventLoop and prevent it from making forward progress.

swift-nio-http2 1.28 contains a remediation for this issue that applies a reset counter using a sliding window. This constrains the number of stream resets that may occur in a given window of time. Clients violating this limit will have their connections torn down. This allows clients to continue to cancel streams for legitimate reasons, while constraining malicious actors.

unspecified: GMS-2023-418 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <0.7.0
Fixed version: 0.7.0
Description

A maliciously crafted HTTP/2 stream could cause excessive CPU consumption in the HPACK decoder, sufficient to cause a denial of service from a small number of small requests.

critical: 0 high: 1 medium: 1 low: 0 unspecified: 1 google.golang.org/grpc 1.50.1 (golang)

pkg:golang/google.golang.org/grpc@1.50.1

high 7.5: GHSA-m425-mq94-257g

Affected range: <1.56.3
Fixed version: 1.56.3
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Impact

In affected releases of gRPC-Go, it is possible for an attacker to send HTTP/2 requests, cancel them, and send subsequent requests, which is valid by the HTTP/2 protocol, but would cause the gRPC-Go server to launch more concurrent method handlers than the configured maximum stream limit.

Patches

This vulnerability was addressed by #6703 and has been included in patch releases: 1.56.3, 1.57.1, 1.58.3. It is also included in the latest release, 1.59.0.

Along with applying the patch, users should also ensure they are using the grpc.MaxConcurrentStreams server option to apply a limit to the server's resources used for any single connection.
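A minimal sketch of that server option follows; the listener address and the limit of 100 are illustrative.

package main

import (
	"net"

	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		panic(err)
	}

	// Bound the number of concurrent streams (and therefore handler
	// goroutines) the server will run per connection.
	srv := grpc.NewServer(grpc.MaxConcurrentStreams(100))

	// Register services here before serving.
	if err := srv.Serve(lis); err != nil {
		panic(err)
	}
}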

Workarounds

None.

References

#6703

medium 5.3: CVE-2023-44487 Uncontrolled Resource Consumption

Affected range: <1.56.3
Fixed version: 1.56.3
CVSS Score: 5.3
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L
Description

HTTP/2 Rapid reset attack

The HTTP/2 protocol allows clients to indicate to the server that a previous stream should be canceled by sending a RST_STREAM frame. The protocol does not require the client and server to coordinate the cancellation in any way, the client may do it unilaterally. The client may also assume that the cancellation will take effect immediately when the server receives the RST_STREAM frame, before any other data from that TCP connection is processed.

Abuse of this feature is called a Rapid Reset attack because it relies on the ability for an endpoint to send a RST_STREAM frame immediately after sending a request frame, which makes the other endpoint start working and then rapidly resets the request. The request is canceled, but leaves the HTTP/2 connection open.

The HTTP/2 Rapid Reset attack built on this capability is simple: The client opens a large number of streams at once as in the standard HTTP/2 attack, but rather than waiting for a response to each request stream from the server or proxy, the client cancels each request immediately.

The ability to reset streams immediately allows each connection to have an indefinite number of requests in flight. By explicitly canceling the requests, the attacker never exceeds the limit on the number of concurrent open streams. The number of in-flight requests is no longer dependent on the round-trip time (RTT), but only on the available network bandwidth.

In a typical HTTP/2 server implementation, the server will still have to do significant amounts of work for canceled requests, such as allocating new stream data structures, parsing the query and doing header decompression, and mapping the URL to a resource. For reverse proxy implementations, the request may be proxied to the backend server before the RST_STREAM frame is processed. The client on the other hand paid almost no costs for sending the requests. This creates an exploitable cost asymmetry between the server and the client.

Multiple software artifacts implementing HTTP/2 are affected. This advisory was originally ingested from the swift-nio-http2 repo advisory and their original content follows.

swift-nio-http2 specific advisory

swift-nio-http2 is vulnerable to a denial-of-service vulnerability in which a malicious client can create and then reset a large number of HTTP/2 streams in a short period of time. This causes swift-nio-http2 to commit to a large amount of expensive work which it then throws away, including creating entirely new Channels to serve the traffic. This can easily overwhelm an EventLoop and prevent it from making forward progress.

swift-nio-http2 1.28 contains a remediation for this issue that applies a reset counter using a sliding window. This constrains the number of stream resets that may occur in a given window of time. Clients violating this limit will have their connections torn down. This allows clients to continue to cancel streams for legitimate reasons, while constraining malicious actors.

unspecified: GMS-2023-3788 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <1.56.3
Fixed version: 1.56.3, 1.57.1, 1.58.3
Description

Impact

In affected releases of gRPC-Go, it is possible for an attacker to send HTTP/2 requests, cancel them, and send subsequent requests, which is valid by the HTTP/2 protocol, but would cause the gRPC-Go server to launch more concurrent method handlers than the configured maximum stream limit.

Patches

This vulnerability was addressed by #6703 and has been included in patch releases: 1.56.3, 1.57.1, 1.58.3. It is also included in the latest release, 1.59.0.

Along with applying the patch, users should also ensure they are using the grpc.MaxConcurrentStreams server option to apply a limit to the server's resources used for any single connection.

Workarounds

None.

References

#6703

critical: 0 high: 1 medium: 0 low: 0 github.com/docker/distribution 2.8.1+incompatible (golang)

pkg:golang/github.com/docker/distribution@2.8.1+incompatible

high 7.5: CVE-2023-2253 Undefined Behavior for Input to API

Affected range: <2.8.2-beta.1
Fixed version: 2.8.2-beta.1
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Impact

Systems running a distribution registry built after a specific commit in memory-restricted environments can suffer a denial of service from a crafted malicious request to the /v2/_catalog API endpoint.

Patches

Upgrade to at least 2.8.2-beta.1 if you are running v2.8.x release. If you use the code from the main branch, update at least to the commit after f55a6552b006a381d9167e328808565dd2bf77dc.

Workarounds

There is no way to work around this issue without patching. Restrict access to the affected API endpoint: see the recommendations section.

References

The /v2/_catalog endpoint accepts a parameter to control the maximum number of records returned (query string: n).

When not given, the default n=100 is used. The server trusts that n has an acceptable value; however, when given a
maliciously large value, it allocates an array/slice of n strings before filling the slice with data.

This behaviour was introduced ~7yrs ago [1].

Recommendation

The /v2/_catalog endpoint was designed specifically to do registry syncs with search or other API systems. Such an endpoint would create a lot of load on the backend system, due to overfetch required to serve a request in certain implementations.

Because of this, we strongly recommend keeping this API endpoint behind heightened privilege and avoiding leaving it exposed to the internet.

For more information

If you have any questions or comments about this advisory:

[1] faulty commit

critical: 0 high: 1 medium: 0 low: 0 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp 0.29.0 (golang)

pkg:golang/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@0.29.0

high 7.5: CVE-2023-45142 Allocation of Resources Without Limits or Throttling

Affected range: <0.44.0
Fixed version: 0.44.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Summary

This handler wrapper https://github.com/open-telemetry/opentelemetry-go-contrib/blob/5f7e6ad5a49b45df45f61a1deb29d7f1158032df/instrumentation/net/http/otelhttp/handler.go#L63-L65
out of the box adds labels

  • http.user_agent
  • http.method

that have unbound cardinality. It leads to the server's potential memory exhaustion when many malicious requests are sent to it.

Details

HTTP header User-Agent or HTTP method for requests can be easily set by an attacker to be random and long. The library internally uses httpconv.ServerRequest that records every value for HTTP method and User-Agent.

PoC

Send many requests with long randomly generated HTTP methods or/and User agents (e.g. a million) and observe how memory consumption increases during it.

Impact

In order to be affected, the program has to configure a metrics pipeline, use the otelhttp.NewHandler wrapper, and not filter any unknown HTTP methods or User agents at the level of a CDN, LB, previous middleware, etc.

Others

It is similar to already reported vulnerabilities

Workaround for affected versions

As a workaround to stop being affected, otelhttp.WithFilter() can be used, but it requires careful manual configuration to avoid dropping certain requests entirely from the logs.

For convenience and safe usage of this library, it should by default mark with the label unknown non-standard HTTP methods and User agents to show that such requests were made but do not increase cardinality. In case someone wants to stay with the current behavior, library API should allow to enable it.

The other possibility is to disable HTTP metrics instrumentation by passing otelhttp.WithMeterProvider option with noop.NewMeterProvider.
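A rough sketch of that second option follows. The import paths reflect current otel releases; for the contrib v0.29.0 pinned in this image the noop meter provider lived elsewhere, so treat the exact imports as assumptions.

package main

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	"go.opentelemetry.io/otel/metric/noop"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// Keep tracing, but route metrics to a no-op provider so unbounded
	// http.method / http.user_agent label values cannot grow memory.
	handler := otelhttp.NewHandler(mux, "server",
		otelhttp.WithMeterProvider(noop.NewMeterProvider()),
	)

	http.ListenAndServe(":8080", handler)
}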

Solution provided by upgrading

In PR open-telemetry/opentelemetry-go-contrib#4277, released with package version 0.44.0, the values collected for attribute http.request.method were changed to be restricted to a set of well-known values and other high cardinality attributes were removed.

References

critical: 0 high: 1 medium: 0 low: 0 go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace 0.29.0 (golang)

pkg:golang/go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace@0.29.0

high 7.5: CVE-2023-45142 Allocation of Resources Without Limits or Throttling

Affected range: <0.44.0
Fixed version: 0.44.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Summary

This handler wrapper https://github.com/open-telemetry/opentelemetry-go-contrib/blob/5f7e6ad5a49b45df45f61a1deb29d7f1158032df/instrumentation/net/http/otelhttp/handler.go#L63-L65
out of the box adds labels

  • http.user_agent
  • http.method

that have unbound cardinality. It leads to the server's potential memory exhaustion when many malicious requests are sent to it.

Details

HTTP header User-Agent or HTTP method for requests can be easily set by an attacker to be random and long. The library internally uses httpconv.ServerRequest that records every value for HTTP method and User-Agent.

PoC

Send many requests with long randomly generated HTTP methods or/and User agents (e.g. a million) and observe how memory consumption increases during it.

Impact

In order to be affected, the program has to configure a metrics pipeline, use the otelhttp.NewHandler wrapper, and not filter any unknown HTTP methods or User agents at the level of a CDN, LB, previous middleware, etc.

Others

It is similar to already reported vulnerabilities

Workaround for affected versions

As a workaround to stop being affected, otelhttp.WithFilter() can be used, but it requires careful manual configuration to avoid dropping certain requests entirely from the logs.

For convenience and safe usage of this library, it should by default mark with the label unknown non-standard HTTP methods and User agents to show that such requests were made but do not increase cardinality. In case someone wants to stay with the current behavior, library API should allow to enable it.

The other possibility is to disable HTTP metrics instrumentation by passing otelhttp.WithMeterProvider option with noop.NewMeterProvider.

Solution provided by upgrading

In PR open-telemetry/opentelemetry-go-contrib#4277, released with package version 0.44.0, the values collected for attribute http.request.method were changed to be restricted to a set of well-known values and other high cardinality attributes were removed.

References

critical: 0 high: 1 medium: 0 low: 0 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc 0.29.0 (golang)

pkg:golang/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc@0.29.0

high 7.5: CVE-2023-47108 Allocation of Resources Without Limits or Throttling

Affected range: <0.46.0
Fixed version: 0.46.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Summary

The grpc Unary Server Interceptor opentelemetry-go-contrib/instrumentation/google.golang.org/grpc/otelgrpc/interceptor.go

// UnaryServerInterceptor returns a grpc.UnaryServerInterceptor suitable
// for use in a grpc.NewServer call.
func UnaryServerInterceptor(opts ...Option) grpc.UnaryServerInterceptor {

out of the box adds labels

  • net.peer.sock.addr
  • net.peer.sock.port

that have unbound cardinality. It leads to the server's potential memory exhaustion when many malicious requests are sent.

Details

An attacker can easily flood the peer address and port for requests.

PoC

Apply the attached patch to the example and run the client multiple times. Observe how each request will create a unique histogram and how the memory consumption increases during it.

Impact

In order to be affected, the program has to configure a metrics pipeline, use UnaryServerInterceptor, and not filter any client IP addresses and ports via middleware, proxies, etc.

Others

It is similar to already reported vulnerabilities.

Workaround for affected versions

As a workaround to stop being affected, a view removing the attributes can be used.

The other possibility is to disable grpc metrics instrumentation by passing otelgrpc.WithMeterProvider option with noop.NewMeterProvider.
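A rough sketch of that second option follows. As with the otelhttp example above, the noop import path reflects newer otel releases and is an assumption for the version pinned here.

package main

import (
	"net"

	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"go.opentelemetry.io/otel/metric/noop"
	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		panic(err)
	}

	// Keep tracing, but send metrics to a no-op provider so per-peer
	// address/port attributes cannot create unbounded histograms.
	srv := grpc.NewServer(
		grpc.UnaryInterceptor(otelgrpc.UnaryServerInterceptor(
			otelgrpc.WithMeterProvider(noop.NewMeterProvider()),
		)),
	)

	if err := srv.Serve(lis); err != nil {
		panic(err)
	}
}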

Solution provided by upgrading

In PR #4322, to be released with v0.46.0, the attributes were removed.

References

critical: 0 high: 0 medium: 1 low: 1 github.com/aws/aws-sdk-go 1.44.82 (golang)

pkg:golang/github.com/aws/aws-sdk-go@1.44.82

medium: CVE-2020-8911

Affected range: >=0
Fixed version: Not Fixed
Description

The Go AWS S3 Crypto SDK contains vulnerabilities that can permit an attacker with write access to a bucket to decrypt files in that bucket.

Files encrypted by the V1 EncryptionClient using either the AES-CBC content cipher or the KMS key wrap algorithm are vulnerable. Users should migrate to the V1 EncryptionClientV2 API, which will not create vulnerable files. Old files will remain vulnerable until re-encrypted with the new client.

low: CVE-2020-8912

Affected range: >=0
Fixed version: Not Fixed
Description

The Go AWS S3 Crypto SDK contains vulnerabilities that can permit an attacker with write access to a bucket to decrypt files in that bucket.

Files encrypted by the V1 EncryptionClient using either the AES-CBC content cipher or the KMS key wrap algorithm are vulnerable. Users should migrate to the V1 EncryptionClientV2 API, which will not create vulnerable files. Old files will remain vulnerable until re-encrypted with the new client.

critical: 0 high: 0 medium: 1 low: 0 unspecified: 1 github.com/containerd/containerd 1.6.18 (golang)

pkg:golang/github.com/containerd/containerd@1.6.18

medium: GHSA-7ww5-4wqc-m92c

Affected range: <=1.6.25
Fixed version: 1.6.26
Description

/sys/devices/virtual/powercap accessible by default to containers

Intel's RAPL (Running Average Power Limit) feature, introduced by the Sandy Bridge microarchitecture, provides software insights into hardware energy consumption. To facilitate this, Intel introduced the powercap framework in Linux kernel 3.13, which reads values via relevant MSRs (model specific registers) and provides unprivileged userspace access via sysfs. As RAPL is an interface to access a hardware feature, it is only available when running on bare metal with the module compiled into the kernel.

By 2019, it was realized that in some cases unprivileged access to RAPL readings could be exploited as a power-based side-channel against security features including AES-NI (potentially inside a SGX enclave) and KASLR (kernel address space layout randomization). Also known as the PLATYPUS attack, Intel assigned CVE-2020-8694 and CVE-2020-8695, and AMD assigned CVE-2020-12912.

Several mitigations were applied; Intel reduced the sampling resolution via a microcode update, and the Linux kernel prevents access by non-root users since 5.10. However, this kernel-based mitigation does not apply to many container-based scenarios:

  • Unless using user namespaces, root inside a container has the same level of privilege as root outside the container, but with a slightly more narrow view of the system
  • sysfs is mounted inside containers read-only; however only read access is needed to carry out this attack on an unpatched CPU

While this is not a direct vulnerability in container runtimes, defense in depth and safe defaults are valuable and preferred, especially as this poses a risk to multi-tenant container environments. This is provided by masking /sys/devices/virtual/powercap in the default mount configuration, and adding an additional set of rules to deny it in the default AppArmor profile.

While sysfs is not the only way to read from the RAPL subsystem, other ways of accessing it require additional capabilities such as CAP_SYS_RAWIO which is not available to containers by default, or perf paranoia level less than 1, which is a non-default kernel tunable.
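For programmatic container creation, a similar effect can be sketched with containerd's oci.WithMaskedPaths spec option, assuming that option is available in the version in use; note that it replaces the masked-path list rather than appending to it, and the paths below are illustrative rather than the full default set.

package powercapmask

import "github.com/containerd/containerd/oci"

// MaskPowercap replaces the spec's masked-path list; keep the defaults that
// should stay masked and add the powercap directory on top.
var MaskPowercap oci.SpecOpts = oci.WithMaskedPaths([]string{
	"/proc/kcore",
	"/sys/firmware",
	"/sys/devices/virtual/powercap",
})

The option would then be passed alongside the other spec options (for example to containerd.WithNewSpec) when creating a container.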

References

unspecified: GMS-2023-6564 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <=1.6.25
Fixed version: 1.6.26, 1.7.11
Description

/sys/devices/virtual/powercap accessible by default to containers

Intel's RAPL (Running Average Power Limit) feature, introduced by the Sandy Bridge microarchitecture, provides software insights into hardware energy consumption. To facilitate this, Intel introduced the powercap framework in Linux kernel 3.13, which reads values via relevant MSRs (model specific registers) and provides unprivileged userspace access via sysfs. As RAPL is an interface to access a hardware feature, it is only available when running on bare metal with the module compiled into the kernel.

By 2019, it was realized that in some cases unprivileged access to RAPL readings could be exploited as a power-based side-channel against security features including AES-NI (potentially inside a SGX enclave) and KASLR (kernel address space layout randomization). Also known as the PLATYPUS attack, Intel assigned CVE-2020-8694 and CVE-2020-8695, and AMD assigned CVE-2020-12912.

Several mitigations were applied; Intel reduced the sampling resolution via a microcode update, and the Linux kernel prevents access by non-root users since 5.10. However, this kernel-based mitigation does not apply to many container-based scenarios:

  • Unless using user namespaces, root inside a container has the same level of privilege as root outside the container, but with a slightly more narrow view of the system
  • sysfs is mounted inside containers read-only; however only read access is needed to carry out this attack on an unpatched CPU

While this is not a direct vulnerability in container runtimes, defense in depth and safe defaults are valuable and preferred, especially as this poses a risk to multi-tenant container environments. This is provided by masking /sys/devices/virtual/powercap in the default mount configuration, and adding an additional set of rules to deny it in the default AppArmor profile.

While sysfs is not the only way to read from the RAPL subsystem, other ways of accessing it require additional capabilities such as CAP_SYS_RAWIO which is not available to containers by default, or perf paranoia level less than 1, which is a non-default kernel tunable.

References

critical: 0 high: 0 medium: 1 low: 0 golang.org/x/crypto 0.2.0 (golang)

pkg:golang/golang.org/x/crypto@0.2.0

medium 5.9: CVE-2023-48795 Insufficient Verification of Data Authenticity

Affected range: <0.17.0
Fixed version: 0.17.0
CVSS Score: 5.9
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N
Description

Summary

Terrapin is a prefix truncation attack targeting the SSH protocol. More precisely, Terrapin breaks the integrity of SSH's secure channel. By carefully adjusting the sequence numbers during the handshake, an attacker can remove an arbitrary amount of messages sent by the client or server at the beginning of the secure channel without the client or server noticing it.

Mitigations

To mitigate this protocol vulnerability, OpenSSH suggested a so-called "strict kex" which alters the SSH handshake to ensure a Man-in-the-Middle attacker cannot introduce unauthenticated messages as well as convey sequence number manipulation across handshakes.

Warning: To take effect, both the client and server must support this countermeasure.

As a stop-gap measure, peers may also (temporarily) disable the affected algorithms and use unaffected alternatives like AES-GCM instead until patches are available.
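As a sketch of that stop-gap for a golang.org/x/crypto/ssh client, where the cipher and MAC lists are illustrative and the host key callback is intentionally insecure only to keep the example short:

package sshconf

import "golang.org/x/crypto/ssh"

// ClientConfig returns a config that avoids the algorithms affected by
// Terrapin until both peers support strict key exchange.
func ClientConfig(user string, auth ssh.AuthMethod) *ssh.ClientConfig {
	return &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{auth},
		// Insecure; verify host keys properly in real deployments.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Config: ssh.Config{
			// Drop chacha20-poly1305@openssh.com and avoid CBC with EtM MACs.
			Ciphers: []string{"aes128-gcm@openssh.com", "aes128-ctr", "aes256-ctr"},
			MACs:    []string{"hmac-sha2-256"},
		},
	}
}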

Details

The SSH specifications of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com MACs) are vulnerable against an arbitrary prefix truncation attack (a.k.a. Terrapin attack). This allows for an extension negotiation downgrade by stripping the SSH_MSG_EXT_INFO sent after the first message after SSH_MSG_NEWKEYS, downgrading security, and disabling attack countermeasures in some versions of OpenSSH. When targeting Encrypt-then-MAC, this attack requires the use of a CBC cipher to be practically exploitable due to the internal workings of the cipher mode. Additionally, this novel attack technique can be used to exploit previously unexploitable implementation flaws in a Man-in-the-Middle scenario.

The attack works by an attacker injecting an arbitrary number of SSH_MSG_IGNORE messages during the initial key exchange and consequently removing the same number of messages just after the initial key exchange has concluded. This is possible due to missing authentication of the excess SSH_MSG_IGNORE messages and the fact that the implicit sequence numbers used within the SSH protocol are only checked after the initial key exchange.

In the case of ChaCha20-Poly1305, the attack is guaranteed to work on every connection as this cipher does not maintain an internal state other than the message's sequence number. In the case of Encrypt-Then-MAC, practical exploitation requires the use of a CBC cipher; while theoretical integrity is broken for all ciphers when using this mode, message processing will fail at the application layer for CTR and stream ciphers.

For more details see https://terrapin-attack.com.

Impact

This attack targets the specification of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com), which are widely adopted by well-known SSH implementations and can be considered de-facto standard. These algorithms can be practically exploited; however, in the case of Encrypt-Then-MAC, we additionally require the use of a CBC cipher. As a consequence, this attack works against all well-behaving SSH implementations supporting either of those algorithms and can be used to downgrade (but not fully strip) connection security in case SSH extension negotiation (RFC8308) is supported. The attack may also enable attackers to exploit certain implementation flaws in a man-in-the-middle (MitM) scenario.


Attempting automerge. See https://github.com/uniget-org/tools/actions/runs/8236382633.


PR is clean and can be merged. See https://github.com/uniget-org/tools/actions/runs/8236382633.

@github-actions github-actions bot merged commit d709474 into main Mar 11, 2024
9 of 10 checks passed
@github-actions github-actions bot deleted the renovate/linuxkit-linuxkit-1.x branch March 11, 2024 16:31