llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
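
Below is a minimal, hypothetical C++ sketch of the bug class this describes: an unchecked, attacker-controlled length field from an untrusted GGUF file overflowing a size computation, so the allocation ends up smaller than the following copy. The function names and layout are illustrative assumptions, not the actual ggml/src/gguf.cpp code; the real fix is in the commit referenced above.

    // Hypothetical sketch only -- NOT the actual gguf.cpp code.
    #include <cstdint>
    #include <cstdlib>
    #include <cstring>

    // Vulnerable pattern: `len + 1` wraps to 0 when len == UINT64_MAX, so a
    // tiny allocation is followed by a huge copy -> heap out-of-bounds write.
    char * read_string_unsafe(const uint8_t * data, size_t data_size) {
        if (data_size < sizeof(uint64_t)) return nullptr;

        uint64_t len;
        std::memcpy(&len, data, sizeof(len));          // untrusted length field

        char * buf = (char *) std::malloc(len + 1);    // may wrap to malloc(0)
        if (!buf) return nullptr;

        std::memcpy(buf, data + sizeof(len), len);     // out-of-bounds write
        buf[len] = '\0';
        return buf;
    }

    // Hardened pattern: reject lengths that overflow the size computation or
    // exceed the bytes actually present in the input buffer.
    char * read_string_checked(const uint8_t * data, size_t data_size) {
        if (data_size < sizeof(uint64_t)) return nullptr;

        uint64_t len;
        std::memcpy(&len, data, sizeof(len));

        if (len > data_size - sizeof(uint64_t)) return nullptr;  // bounds check
        if (len + 1 < len) return nullptr;                       // overflow check

        char * buf = (char *) std::malloc((size_t) len + 1);
        if (!buf) return nullptr;

        std::memcpy(buf, data + sizeof(len), (size_t) len);
        buf[len] = '\0';
        return buf;
    }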
History

Wed, 16 Jul 2025 13:45:00 +0000

  Type     Values Removed            Values Added
  Metrics  epss {'score': 0.00042}   epss {'score': 0.00055}

Fri, 11 Jul 2025 13:45:00 +0000

  Type     Values Removed   Values Added
  Metrics                   epss {'score': 0.00042}


Thu, 10 Jul 2025 21:15:00 +0000

  Type     Values Removed   Values Added
  Metrics                   ssvc {'options': {'Automatable': 'no', 'Exploitation': 'poc', 'Technical Impact': 'partial'}, 'version': '2.0.3'}


Thu, 10 Jul 2025 19:45:00 +0000

  Type         Values Removed   Values Added
  Description                   llama.cpp is an inference of several LLM models in C/C++. Integer Overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to Heap Out-of-Bounds Read/Write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
  Title                         Integer Overflow in GGUF Parser can lead to Heap Out-of-Bounds Read/Write in gguf
  Weaknesses                    CWE-122, CWE-680
  References
  Metrics                       cvssV4_0 {'score': 8.9, 'vector': 'CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/E:P'}


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published: 2025-07-10T19:32:45.296Z

Updated: 2025-07-10T20:31:07.240Z

Reserved: 2025-07-07T14:20:38.389Z

Link: CVE-2025-53630

Vulnrichment

Updated: 2025-07-10T20:31:00.986Z

NVD

Status: Awaiting Analysis

Published: 2025-07-10T20:15:27.523

Modified: 2025-07-15T13:14:49.980

Link: CVE-2025-53630

Redhat

No data.