Total: 2128 CVEs
| CVE | Vendors | Products | Updated | CVSS v3.1 |
|---|---|---|---|---|
| CVE-2024-7017 | 4 Apple, Google, Linux and 1 more | 4 Macos, Chrome, Linux Kernel and 1 more | 2025-11-17 | 7.5 High |
| Inappropriate implementation in DevTools in Google Chrome prior to 126.0.6478.182 allowed a remote attacker to potentially perform a sandbox escape via a crafted HTML page. (Chromium security severity: High) | ||||
| CVE-2022-49939 | 1 Linux | 1 Linux Kernel | 2025-11-14 | 7.0 High |
| In the Linux kernel, the following vulnerability has been resolved: binder: fix UAF of ref->proc caused by race condition A transaction of type BINDER_TYPE_WEAK_HANDLE can fail to increment the reference for a node. In this case, the target proc normally releases the failed reference upon close as expected. However, if the target is dying in parallel, the call will race with binder_deferred_release(), so the target could have released all of its references by then, leaving the cleanup of the new failed reference unhandled. The transaction then ends and the target proc gets released, making ref->proc a dangling pointer. Later on, ref->node is closed and we attempt to take spin_lock(&ref->proc->inner_lock), which leads to the use-after-free bug reported below. Let's fix this by cleaning up the failed reference on the spot instead of relying on the target to do so (a hedged userspace sketch of this pattern appears after the table). ================================================================== BUG: KASAN: use-after-free in _raw_spin_lock+0xa8/0x150 Write of size 4 at addr ffff5ca207094238 by task kworker/1:0/590 CPU: 1 PID: 590 Comm: kworker/1:0 Not tainted 5.19.0-rc8 #10 Hardware name: linux,dummy-virt (DT) Workqueue: events binder_deferred_func Call trace: dump_backtrace.part.0+0x1d0/0x1e0 show_stack+0x18/0x70 dump_stack_lvl+0x68/0x84 print_report+0x2e4/0x61c kasan_report+0xa4/0x110 kasan_check_range+0xfc/0x1a4 __kasan_check_write+0x3c/0x50 _raw_spin_lock+0xa8/0x150 binder_deferred_func+0x5e0/0x9b0 process_one_work+0x38c/0x5f0 worker_thread+0x9c/0x694 kthread+0x188/0x190 ret_from_fork+0x10/0x20 | ||||
| CVE-2022-50014 | 1 Linux | 1 Linux Kernel | 2025-11-14 | 7.0 High |
| In the Linux kernel, the following vulnerability has been resolved: mm/gup: fix FOLL_FORCE COW security issue and remove FOLL_COW Ever since the Dirty COW (CVE-2016-5195) security issue happened, we know that FOLL_FORCE can be possibly dangerous, especially if there are races that can be exploited by user space. Right now, it would be sufficient to have some code that sets a PTE of a R/O-mapped shared page dirty, in order for it to erroneously become writable by FOLL_FORCE. The implications of setting a write-protected PTE dirty might not be immediately obvious to everyone. And in fact ever since commit 9ae0f87d009c ("mm/shmem: unconditionally set pte dirty in mfill_atomic_install_pte"), we can use UFFDIO_CONTINUE to map a shmem page R/O while marking the pte dirty. This can be used by unprivileged user space to modify tmpfs/shmem file content even if the user does not have write permissions to the file, and to bypass memfd write sealing -- Dirty COW restricted to tmpfs/shmem (CVE-2022-2590). To fix such security issues for good, the insight is that we really only need that fancy retry logic (FOLL_COW) for COW mappings that are not writable (!VM_WRITE). And in a COW mapping, we really only broke COW if we have an exclusive anonymous page mapped. If we have something else mapped, or the mapped anonymous page might be shared (!PageAnonExclusive), we have to trigger a write fault to break COW. If we don't find an exclusive anonymous page when we retry, we have to trigger COW breaking once again because something intervened. Let's move away from this mandatory-retry + dirty handling and rely on our PageAnonExclusive() flag for making a similar decision, to use the same COW logic as in other kernel parts here as well. In case we stumble over a PTE in a COW mapping that does not map an exclusive anonymous page, COW was not properly broken and we have to trigger a fake write-fault to break COW. Just like we do in can_change_pte_writable() added via commit 64fe24a3e05e ("mm/mprotect: try avoiding write faults for exclusive anonymous pages when changing protection") and commit 76aefad628aa ("mm/mprotect: fix soft-dirty check in can_change_pte_writable()"), take care of softdirty and uffd-wp manually. For example, a write() via /proc/self/mem to a uffd-wp-protected range has to fail instead of silently granting write access and bypassing the userspace fault handler. Note that FOLL_FORCE is not only used for debug access, but also triggered by applications without debug intentions, for example, when pinning pages via RDMA. This fixes CVE-2022-2590. Note that only x86_64 and aarch64 are affected, because only those support CONFIG_HAVE_ARCH_USERFAULTFD_MINOR. Fortunately, FOLL_COW is no longer required to handle FOLL_FORCE. So let's just get rid of it. Thanks to Nadav Amit for pointing out that the pte_dirty() check in FOLL_FORCE code is problematic and might be exploitable. Note 1: We don't check for the PTE being dirty because it doesn't matter for making a "was COWed" decision anymore, and whoever modifies the page has to set the page dirty either way. Note 2: Kernels before extended uffd-wp support and before PageAnonExclusive (< 5.19) can simply revert the problematic commit instead and be safe regarding UFFDIO_CONTINUE. A backport to v5.19 requires minor adjustments due to lack of vma_soft_dirty_enabled(). | ||||
| CVE-2022-49967 | 1 Linux | 1 Linux Kernel | 2025-11-14 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: bpf: Fix a data-race around bpf_jit_limit. While reading bpf_jit_limit, it can be changed concurrently via sysctl (WRITE_ONCE() in __do_proc_doulongvec_minmax()). The size of bpf_jit_limit is long, so we need to add a paired READ_ONCE() to avoid load-tearing. A hedged userspace sketch of this READ_ONCE()/WRITE_ONCE() pairing appears after the table. | ||||
| CVE-2025-38028 | 1 Linux | 1 Linux Kernel | 2025-11-14 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: NFS/localio: Fix a race in nfs_local_open_fh() Once the clp->cl_uuid.lock has been dropped, another CPU could come in and free the struct nfsd_file that was just added. To prevent that from happening, take the RCU read lock before dropping the spin lock. A hedged userspace sketch of this take-RCU-before-unlock ordering appears after the table. | ||||
| CVE-2025-38073 | 1 Linux | 1 Linux Kernel | 2025-11-14 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: block: fix race between set_blocksize and read paths With the new large sector size support, it's now the case that set_blocksize can change i_blksize and the folio order in a manner that conflicts with a concurrent reader and causes a kernel crash. Specifically, let's say that udev-worker calls libblkid to detect the labels on a block device. The read call can create an order-0 folio to read the first 4096 bytes from the disk. But then udev is preempted. Next, someone tries to mount an 8k-sectorsize filesystem from the same block device. The filesystem calls set_blksize, which sets i_blksize to 8192 and the minimum folio order to 1. Now udev resumes, still holding the order-0 folio it allocated. It then tries to schedule a read bio and do_mpage_readahead tries to create bufferheads for the folio. Unfortunately, blocks_per_folio == 0 because the page size is 4096 but the blocksize is 8192 so no bufferheads are attached and the bh walk never sets bdev. We then submit the bio with a NULL block device and crash. Therefore, truncate the page cache after flushing but before updating i_blksize. However, that's not enough -- we also need to lock out file IO and page faults during the update. Take both the i_rwsem and the invalidate_lock in exclusive mode for invalidations, and in shared mode for read/write operations. I don't know if this is the correct fix, but xfs/259 found it. | ||||
| CVE-2025-37988 | 1 Linux | 1 Linux Kernel | 2025-11-14 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: fix a couple of races in MNT_TREE_BENEATH handling by do_move_mount() Normally do_lock_mount(path, _) is locking a mountpoint pinned by *path and at the time when matching unlock_mount() unlocks that location it is still pinned by the same thing. Unfortunately, for 'beneath' case it's no longer that simple - the object being locked is not the one *path points to. It's the mountpoint of path->mnt. The thing is, without sufficient locking ->mnt_parent may change under us and none of the locks are held at that point. The rules are * mount_lock stabilizes m->mnt_parent for any mount m. * namespace_sem stabilizes m->mnt_parent, provided that m is mounted. * if either of the above holds and refcount of m is positive, we are guaranteed the same for refcount of m->mnt_parent. namespace_sem nests inside inode_lock(), so do_lock_mount() has to take inode_lock() before grabbing namespace_sem. It does recheck that path->mnt is still mounted in the same place after getting namespace_sem, and it does take care to pin the dentry. It is needed, since otherwise we might end up with racing mount --move (or umount) happening while we were getting locks; in that case dentry would no longer be a mountpoint and could've been evicted on memory pressure along with its inode - not something you want when grabbing lock on that inode. However, pinning a dentry is not enough - the matching mount is also pinned only by the fact that path->mnt is mounted on top of it, and at that point we are not holding any locks whatsoever, so the same kind of races could end up with all references to that mount gone just as we are about to enter inode_lock(). If that happens, we are left with filesystem being shut down while we are holding a dentry reference on it; results are not pretty. What we need to do is grab both dentry and mount at the same time; that makes inode_lock() safe *and* avoids the problem with fs getting shut down under us. After taking namespace_sem we verify that path->mnt is still mounted (which stabilizes its ->mnt_parent) and check that it's still mounted at the same place. From that point on to the matching namespace_unlock() we are guaranteed that mount/dentry pair we'd grabbed are also pinned by being the mountpoint of path->mnt, so we can quietly drop both the dentry reference (as the current code does) and mnt one - it's OK to do under namespace_sem, since we are not dropping the final refs. That solves the problem on do_lock_mount() side; unlock_mount() also has one, since dentry is guaranteed to stay pinned only until the namespace_unlock(). That's easy to fix - just have inode_unlock() done earlier, while it's still pinned by mp->m_dentry. | ||||
| CVE-2025-64345 | 1 Bytecodealliance | 1 Wasmtime | 2025-11-14 | 1.8 Low |
| Wasmtime is a runtime for WebAssembly. Prior to versions 38.0.4, 37.0.3, 36.0.3, and 24.0.5, Wasmtime's Rust embedder API contains an unsound interaction where a WebAssembly shared linear memory could be viewed as a type which provides safe access to the host (Rust) to the contents of the linear memory. This is not sound for shared linear memories, which could be modified in parallel, and this could lead to a data race in the host. Patch releases have been issued for all supported versions of Wasmtime, notably: 24.0.5, 36.0.3, 37.0.3, and 38.0.4. These releases reject creation of shared memories via `Memory::new`, and shared memories are now excluded from core dumps. As a workaround, embeddings affected by this issue should use `SharedMemory::new` instead of `Memory::new` to create shared memories. Affected embeddings should also disable core dumps if they are unable to upgrade. Note that core dumps are disabled by default but the wasm threads proposal (and shared memory) is enabled by default. | ||||
| CVE-2022-49968 | 1 Linux | 1 Linux Kernel | 2025-11-13 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: ieee802154/adf7242: defer destroy_workqueue call There is a possible race condition (use-after-free) between the free side (adf7242_remove -> cancel_delayed_work_sync -> destroy_workqueue (1)) and the use side (adf7242_channel -> adf7242_cmd_rx -> mod_delayed_work (2)), where (2) can run after (1) has destroyed the workqueue. The root cause of this race is that the upper layer (ieee802154) is unaware of the detaching event, so adf7242_channel can be called without any checks. To fix this, we can add a flag write at the beginning of adf7242_remove and a flag check in adf7242_channel. Or we can just defer the destructive operation, as commit 3e0588c291d6 ("hamradio: defer ax25 kfree after unregister_netdev") did, letting ieee802154_unregister_hw() handle the synchronization. This patch takes the second option. A hedged userspace sketch of this teardown ordering appears after the table. | ||||
| CVE-2022-50044 | 2 Linux, Redhat | 2 Linux Kernel, Enterprise Linux | 2025-11-13 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: net: qrtr: start MHI channel after endpoint creation An MHI channel may generate an event/interrupt right after being enabled, which can lead to two race condition issues. 1) Such an event may be dropped by qcom_mhi_qrtr_dl_callback() at the check: if (!qdev \|\| mhi_res->transaction_status) return; because dev_set_drvdata(&mhi_dev->dev, qdev) may not have been performed at that moment. In this situation qrtr-ns will be unable to enumerate services in the device. 2) Such an event may come after dev_set_drvdata() and before qrtr_endpoint_register(). In this case the kernel will panic, accessing a wrong pointer in qcom_mhi_qrtr_dl_callback(): rc = qrtr_endpoint_post(&qdev->ep, mhi_res->buf_addr, mhi_res->bytes_xferd); because the endpoint is not created yet. So move mhi_prepare_for_transfer_autoqueue after endpoint creation to fix it. | ||||
| CVE-2025-12432 | 4 Apple, Google, Linux and 1 more | 4 Macos, Chrome, Linux Kernel and 1 more | 2025-11-13 | 8.8 High |
| Race in V8 in Google Chrome prior to 142.0.7444.59 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page. (Chromium security severity: High) | ||||
| CVE-2025-12434 | 2 Google, Microsoft | 2 Chrome, Windows | 2025-11-13 | 4.2 Medium |
| Race in Storage in Google Chrome on Windows prior to 142.0.7444.59 allowed a remote attacker who convinced a user to engage in specific UI gestures to perform UI spoofing via a crafted HTML page. (Chromium security severity: Medium) | ||||
| CVE-2023-53094 | 1 Linux | 1 Linux Kernel | 2025-11-12 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: tty: serial: fsl_lpuart: fix race on RX DMA shutdown From time to time DMA completion can come in the middle of DMA shutdown: in process context, lpuart32_shutdown() -> lpuart_dma_shutdown() calls del_timer_sync(); the IRQ side then runs lpuart_dma_rx_complete() -> lpuart_copy_rx_to_tty() -> mod_timer(), re-arming the timer; process context finally calls lpuart_dma_rx_free(). When the timer fires a bit later, sport->dma_rx_desc is NULL: Unable to handle kernel NULL pointer dereference at virtual address 0000000000000004 pc : lpuart_copy_rx_to_tty+0xcc/0x5bc lr : lpuart_timer_func+0x1c/0x2c Call trace: lpuart_copy_rx_to_tty lpuart_timer_func call_timer_fn __run_timers.part.0 run_timer_softirq __do_softirq __irq_exit_rcu irq_exit handle_domain_irq gic_handle_irq call_on_irq_stack do_interrupt_handler ... To fix this, fold del_timer_sync() into lpuart_dma_rx_free() after dmaengine_terminate_sync() to make sure the timer will not be re-started in lpuart_copy_rx_to_tty() <= lpuart_dma_rx_complete(). | ||||
| CVE-2023-53047 | 2 Linux, Redhat | 2 Linux Kernel, Enterprise Linux | 2025-11-12 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: tee: amdtee: fix race condition in amdtee_open_session There is a potential race condition in amdtee_open_session that may lead to a use-after-free. For instance, in amdtee_open_session(), after sess->sess_mask is set and before sess->session_info[i] = session_info is assigned, if amdtee_close_session() closes this same session, the 'sess' data structure will be released, causing a kernel panic when 'sess' is accessed within amdtee_open_session(). The solution is to set the bit in sess->sess_mask as the last step in amdtee_open_session(). A hedged userspace sketch of this publish-last ordering appears after the table. | ||||
| CVE-2023-53046 | 2 Linux, Redhat | 2 Linux Kernel, Enterprise Linux | 2025-11-12 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: Bluetooth: Fix race condition in hci_cmd_sync_clear There is a potential race condition between hci_cmd_sync_work and hci_cmd_sync_clear, which could lead to a use-after-free. For instance, hci_cmd_sync_work is added to the 'req_workqueue' after cancel_work_sync. The entry of 'cmd_sync_work_list' may be freed in hci_cmd_sync_clear, causing a kernel panic when it is used in 'hci_cmd_sync_work'. Here's the call trace: dump_stack_lvl+0x49/0x63 print_report.cold+0x5e/0x5d3 ? hci_cmd_sync_work+0x282/0x320 kasan_report+0xaa/0x120 ? hci_cmd_sync_work+0x282/0x320 __asan_report_load8_noabort+0x14/0x20 hci_cmd_sync_work+0x282/0x320 process_one_work+0x77b/0x11c0 ? _raw_spin_lock_irq+0x8e/0xf0 worker_thread+0x544/0x1180 ? poll_idle+0x1e0/0x1e0 kthread+0x285/0x320 ? process_one_work+0x11c0/0x11c0 ? kthread_complete_and_exit+0x30/0x30 ret_from_fork+0x22/0x30 `</TASK>` Allocated by task 266: kasan_save_stack+0x26/0x50 __kasan_kmalloc+0xae/0xe0 kmem_cache_alloc_trace+0x191/0x350 hci_cmd_sync_queue+0x97/0x2b0 hci_update_passive_scan+0x176/0x1d0 le_conn_complete_evt+0x1b5/0x1a00 hci_le_conn_complete_evt+0x234/0x340 hci_le_meta_evt+0x231/0x4e0 hci_event_packet+0x4c5/0xf00 hci_rx_work+0x37d/0x880 process_one_work+0x77b/0x11c0 worker_thread+0x544/0x1180 kthread+0x285/0x320 ret_from_fork+0x22/0x30 Freed by task 269: kasan_save_stack+0x26/0x50 kasan_set_track+0x25/0x40 kasan_set_free_info+0x24/0x40 ____kasan_slab_free+0x176/0x1c0 __kasan_slab_free+0x12/0x20 slab_free_freelist_hook+0x95/0x1a0 kfree+0xba/0x2f0 hci_cmd_sync_clear+0x14c/0x210 hci_unregister_dev+0xff/0x440 vhci_release+0x7b/0xf0 __fput+0x1f3/0x970 ____fput+0xe/0x20 task_work_run+0xd4/0x160 do_exit+0x8b0/0x22a0 do_group_exit+0xba/0x2a0 get_signal+0x1e4a/0x25b0 arch_do_signal_or_restart+0x93/0x1f80 exit_to_user_mode_prepare+0xf5/0x1a0 syscall_exit_to_user_mode+0x26/0x50 ret_from_fork+0x15/0x30 | ||||
| CVE-2024-6387 | 13 Almalinux, Amazon, Apple and 10 more | 85 Almalinux, Amazon Linux, Macos and 82 more | 2025-11-11 | 8.1 High |
| A security regression (CVE-2006-5051) was discovered in OpenSSH's server (sshd). There is a race condition which can lead sshd to handle some signals in an unsafe manner. An unauthenticated, remote attacker may be able to trigger it by failing to authenticate within a set time period. | ||||
| CVE-2022-49884 | 1 Linux | 1 Linux Kernel | 2025-11-10 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: KVM: Initialize gfn_to_pfn_cache locks in dedicated helper Move the gfn_to_pfn_cache lock initialization to another helper and call the new helper during VM/vCPU creation. There are race conditions possible due to kvm_gfn_to_pfn_cache_init()'s ability to re-initialize the cache's locks. For example: a race between ioctl(KVM_XEN_HVM_EVTCHN_SEND) and kvm_gfn_to_pfn_cache_init() leads to a corrupted shinfo gpc lock: thread 2 runs kvm_xen_set_evtchn_fast() and takes read_lock_irqsave(&gpc->lock, ...); thread 1 then runs kvm_gfn_to_pfn_cache_init(), whose rwlock_init(&gpc->lock) re-initializes the lock while it is held; thread 2 finally calls read_unlock_irqrestore(&gpc->lock, ...) on the re-initialized lock. Rename "cache_init" and "cache_destroy" to activate+deactivate to avoid implying that the cache really is destroyed/freed. Note, there are more races in the newly named kvm_gpc_activate() that will be addressed separately. [sean: call out that this is a bug fix] A hedged userspace sketch of this init-once pattern appears after the table. | ||||
| CVE-2025-37920 | 1 Linux | 1 Linux Kernel | 2025-11-10 | 4.7 Medium |
| In the Linux kernel, the following vulnerability has been resolved: xsk: Fix race condition in AF_XDP generic RX path Move rx_lock from xsk_socket to xsk_buff_pool. Fix synchronization for shared umem mode in the generic RX path, where multiple sockets share a single xsk_buff_pool. The RX queue is exclusive to an xsk_socket, while the FILL queue can be shared between multiple sockets. This could result in a race condition where two CPU cores access the RX path of two different sockets sharing the same umem. Protect both queues by acquiring the spinlock in the shared xsk_buff_pool. Lock contention may be minimized in the future by some per-thread FQ buffering. It's safe and necessary to move spin_lock_bh(rx_lock) after xsk_rcv_check(): * xs->pool and spinlock_init is synchronized by xsk_bind() -> xsk_is_bound() memory barriers. * xsk_rcv_check() may return true at the moment of xsk_release() or xsk_unbind_dev(); however, this will not cause any data races or race conditions. xsk_unbind_dev() removes the xdp socket from all maps and waits for completion of all outstanding rx operations. Packets in the RX path will either complete safely or drop. A hedged userspace sketch of this shared-pool locking appears after the table. | ||||
| CVE-2025-55231 | 1 Microsoft | 9 Server, Windows, Windows Server and 6 more | 2025-11-10 | 7.5 High |
| Concurrent execution using shared resource with improper synchronization ('race condition') in Windows Storage allows an unauthorized attacker to execute code over a network. | ||||
| CVE-2025-53135 | 1 Microsoft | 20 Directx, Windows, Windows 10 and 17 more | 2025-11-10 | 7 High |
| Concurrent execution using shared resource with improper synchronization ('race condition') in Windows DirectX allows an authorized attacker to elevate privileges locally. | ||||
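Several of the kernel races above follow reusable concurrency patterns. The sketches below illustrate them in plain userspace C; each is a minimal analogy under assumed, illustrative names, not the kernel code itself.

First, the binder fix (CVE-2022-49939) has the creator of a failed reference clean it up on the spot rather than deferring to a target process that may be dying concurrently. A minimal sketch of that error-path ownership rule, with all structures simplified:

```c
/* Userspace analogy of the CVE-2022-49939 fix: when installing a
 * reference fails, the CREATOR frees it on the spot. The buggy pattern
 * deferred cleanup to the target process, which may be dying in
 * parallel and may already have run its final cleanup pass, leaving
 * the failed reference dangling. All names here are illustrative. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct proc { bool dying; };                /* stands in for binder_proc */
struct ref  { struct proc *proc; };         /* stands in for binder_ref  */

/* Simulate the BINDER_TYPE_WEAK_HANDLE case that fails to take a
 * reference on the node. */
static bool node_inc_ref(void) { return false; }

static void transaction(struct proc *target)
{
    struct ref *r = malloc(sizeof(*r));
    if (!r)
        return;
    r->proc = target;
    if (!node_inc_ref()) {
        /* Fixed pattern: clean up the failed reference immediately,
         * instead of relying on a possibly-dying target to do it. */
        free(r);
        return;
    }
    /* ... on success, ownership of r passes to the target ... */
}

int main(void)
{
    struct proc target = { .dying = true }; /* racing teardown elsewhere */
    transaction(&target);
    puts("failed ref freed by its creator; no dangling ref->proc");
    return 0;
}
```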
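The bpf_jit_limit fix (CVE-2022-49967) pairs WRITE_ONCE() with READ_ONCE() so a long-sized value is never load-torn. A userspace analogy using C11 relaxed atomics in place of the kernel macros (the jit_limit variable here is a stand-in, not the kernel symbol):

```c
/* Userspace analogy of the CVE-2022-49967 fix: pair every lockless
 * write of a long-sized limit with an equally atomic read, so the
 * reader can never observe a torn value. C11 relaxed atomics stand in
 * for the kernel's WRITE_ONCE()/READ_ONCE(). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic long jit_limit = 1L << 30;   /* stands in for bpf_jit_limit */

/* Models the sysctl writer (WRITE_ONCE in __do_proc_doulongvec_minmax). */
static void *sysctl_writer(void *arg)
{
    (void)arg;
    for (long v = 1; v <= 1000000; v++)
        atomic_store_explicit(&jit_limit, v, memory_order_relaxed);
    return NULL;
}

/* Models the charging side that reads the limit locklessly. */
static void *charge_reader(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        /* READ_ONCE() analogue: one single, untorn load. A plain read
         * here could be torn or re-fetched by the compiler. */
        long limit = atomic_load_explicit(&jit_limit, memory_order_relaxed);
        if (limit <= 0)
            fprintf(stderr, "torn read observed: %ld\n", limit);
    }
    return NULL;
}

int main(void)
{
    pthread_t w, r;
    pthread_create(&w, NULL, sysctl_writer, NULL);
    pthread_create(&r, NULL, charge_reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}
```

Relaxed ordering suffices here: the goal is only single-copy atomicity of each access, which is what READ_ONCE()/WRITE_ONCE() provide for aligned word-sized values.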
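The NFS/localio fix (CVE-2025-38028) enters the RCU read-side critical section before dropping the spinlock, closing the window in which the just-found object could be freed. A sketch using userspace RCU (assuming liburcu's default flavor; build with -lurcu). The nfsd_file and uuid_lock names are illustrative:

```c
/* Sketch of the CVE-2025-38028 ordering using userspace RCU (liburcu;
 * compile with -lurcu). The point: enter the RCU read-side critical
 * section BEFORE dropping the spinlock under which the object was
 * found, so a concurrent freer (synchronize_rcu() + free) cannot
 * reclaim it in between. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>

struct nfsd_file { int fd; };

static pthread_spinlock_t uuid_lock;      /* models clp->cl_uuid.lock */
static struct nfsd_file *cached;          /* RCU-protected pointer    */

static void local_open_fh(void)           /* models nfs_local_open_fh() */
{
    pthread_spin_lock(&uuid_lock);
    rcu_read_lock();                      /* taken BEFORE the unlock... */
    struct nfsd_file *nf = rcu_dereference(cached);
    pthread_spin_unlock(&uuid_lock);
    /* ...so nf cannot be reclaimed here, even if another thread
     * unpublishes it right now. */
    if (nf)
        printf("using nfsd_file, fd=%d\n", nf->fd);
    rcu_read_unlock();
}

int main(void)
{
    rcu_register_thread();
    pthread_spin_init(&uuid_lock, PTHREAD_PROCESS_PRIVATE);
    struct nfsd_file *nf = malloc(sizeof(*nf));
    nf->fd = 3;
    rcu_assign_pointer(cached, nf);
    local_open_fh();
    rcu_assign_pointer(cached, NULL);     /* unpublish... */
    synchronize_rcu();                    /* ...wait for readers... */
    free(nf);                             /* ...then reclaim safely */
    rcu_unregister_thread();
    return 0;
}
```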
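The adf7242 fix (CVE-2022-49968) defers destroy_workqueue() until the upper layer has been detached, so no late callback can queue work into a dead queue. A sketch of that teardown ordering, with the layers reduced to a flag and a heap object (names are illustrative):

```c
/* Sketch of the CVE-2022-49968 teardown ordering: detach from the layer
 * that can submit work BEFORE destroying the workqueue, mirroring how
 * deferring destroy_workqueue() lets ieee802154_unregister_hw() do the
 * synchronization. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct workqueue { int alive; };          /* stands in for a real queue */

static struct workqueue *wq;
static _Atomic int registered = 1;        /* "upper layer knows about us" */

static void adf7242_channel_cb(void)      /* models adf7242_channel()   */
{
    /* The upper layer only reaches the driver while it is registered. */
    if (!atomic_load(&registered))
        return;                           /* safely ignored after detach */
    if (wq->alive)
        puts("mod_delayed_work: queued");
    else
        puts("BUG: queueing into a destroyed workqueue");
}

static void adf7242_remove_fixed(void)    /* models the fixed remove path */
{
    /* 1. Unregister first: no NEW callbacks can start after this. */
    atomic_store(&registered, 0);
    /* 2. Only now tear down and free the queue. */
    wq->alive = 0;
    free(wq);
}

int main(void)
{
    wq = malloc(sizeof(*wq));
    wq->alive = 1;
    adf7242_channel_cb();                 /* ok: queue still alive      */
    adf7242_remove_fixed();
    adf7242_channel_cb();                 /* ok: rejected, queue is gone */
    return 0;
}
```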
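The amdtee fix (CVE-2023-53047) makes publication the last step: the session becomes findable only after every field is initialized. A sketch using a C11 release/acquire pair (struct and field names are illustrative):

```c
/* Sketch of the CVE-2023-53047 ordering: publish the session as the
 * LAST step, after every field it guards is initialized, so a racing
 * close can never see a half-built session. The atomic flag stands in
 * for the bit in sess->sess_mask. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

struct session {
    char info[32];             /* models sess->session_info[i]      */
    _Atomic int published;     /* models the bit in sess->sess_mask */
};

static struct session slot;

static void open_session(const char *info)
{
    /* 1. Initialize everything first... */
    strncpy(slot.info, info, sizeof(slot.info) - 1);
    /* 2. ...and only then publish, with release ordering so the info
     *    write is visible before the flag is. */
    atomic_store_explicit(&slot.published, 1, memory_order_release);
}

static void *close_session(void *arg)
{
    (void)arg;
    /* Acquire pairs with the release above: once we see published, we
     * are guaranteed to see a fully initialized session. */
    while (!atomic_load_explicit(&slot.published, memory_order_acquire))
        ;                      /* not visible yet: nothing to close */
    printf("closing fully built session: %s\n", slot.info);
    return NULL;
}

int main(void)
{
    pthread_t closer;
    pthread_create(&closer, NULL, close_session, NULL);
    open_session("session-info");
    pthread_join(closer, NULL);
    return 0;
}
```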
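The KVM fix (CVE-2022-49884) moves lock initialization into a create-time helper so an activate path can never re-run rwlock_init() on a lock a reader already holds. A pthread sketch of that create/activate split (gpc_create()/gpc_activate() are illustrative names echoing the renamed kvm_gpc_activate()):

```c
/* Sketch of the CVE-2022-49884 split: the lock is initialized exactly
 * once, in the create path, so a later activate can never re-run the
 * lock's init underneath a reader that already holds it. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct gpc {                       /* models gfn_to_pfn_cache */
    pthread_rwlock_t lock;
    bool active;
};

static void gpc_create(struct gpc *g)    /* runs once, at creation time */
{
    pthread_rwlock_init(&g->lock, NULL); /* the ONLY init site */
    g->active = false;
}

static void gpc_activate(struct gpc *g)  /* may run repeatedly, racily  */
{
    /* Takes the lock; never re-initializes it. */
    pthread_rwlock_wrlock(&g->lock);
    g->active = true;                    /* (re)configure the cache here */
    pthread_rwlock_unlock(&g->lock);
}

static void gpc_read(struct gpc *g)      /* models the evtchn reader */
{
    pthread_rwlock_rdlock(&g->lock);
    /* Safe: no concurrent activate can corrupt the lock we hold. */
    printf("active=%d\n", g->active);
    pthread_rwlock_unlock(&g->lock);
}

int main(void)
{
    struct gpc g;
    gpc_create(&g);
    gpc_activate(&g);
    gpc_read(&g);
    pthread_rwlock_destroy(&g.lock);
    return 0;
}
```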
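Finally, the AF_XDP fix (CVE-2025-37920) relocates rx_lock from the per-socket struct into the shared xsk_buff_pool, so all sockets sharing one umem serialize on one lock. A sketch of that ownership move (structs heavily simplified, names illustrative):

```c
/* Sketch of the CVE-2025-37920 move: the RX lock lives in the SHARED
 * buffer pool, not in each socket, so every socket sharing one umem
 * serializes on the same mutex. */
#include <pthread.h>
#include <stdio.h>

struct buff_pool {                 /* models xsk_buff_pool (shared)   */
    pthread_mutex_t rx_lock;       /* moved here from the socket      */
    int fill_entries;              /* shared FILL queue state         */
};

struct xsk_sock {                  /* models xsk_socket: no lock here */
    struct buff_pool *pool;
    int rx_entries;                /* RX queue, exclusive per socket  */
};

static void generic_rx(struct xsk_sock *xs)
{
    /* Every socket sharing xs->pool contends on this one mutex; with a
     * per-socket lock, two CPUs could touch the FILL queue at once. */
    pthread_mutex_lock(&xs->pool->rx_lock);
    xs->pool->fill_entries--;      /* consume from shared FILL queue  */
    xs->rx_entries++;              /* produce into exclusive RX queue */
    pthread_mutex_unlock(&xs->pool->rx_lock);
}

int main(void)
{
    struct buff_pool pool = { PTHREAD_MUTEX_INITIALIZER, 64 };
    struct xsk_sock a = { &pool, 0 }, b = { &pool, 0 };
    generic_rx(&a);
    generic_rx(&b);
    printf("fill=%d a.rx=%d b.rx=%d\n",
           pool.fill_entries, a.rx_entries, b.rx_entries);
    return 0;
}
```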