Eight Vulnerabilities Disclosed in the AI Development Supply Chain
The vulnerabilities were disclosed in Protect AI’s February Vulnerability Report. They are:

CVE-2023-6975: arbitrary file write in MLflow, CVSS 9.8

CVE-2023-6753: arbitrary file write on Windows in MLflow, CVSS 9.6

CVE-2023-6730: RCE in Hugging Face Transformers via RagRetriever.from_pretrained(), CVSS 9.0

CVE-2023-6940: server-side template injection bypass in MLflow, CVSS 9.0

CVE-2023-6976: arbitrary file upload patch bypass in MLflow, CVSS 8.8

CVE-2023-31036: RCE via arbitrary file overwrite in NVIDIA Triton Inference Server, CVSS 7.5

CVE-2023-6909: local file inclusion in MLflow, CVSS 7.5

CVE-2024-0964: LFI in Gradio, CVSS 7.5

In a separate blog post, Protect AI has called for the development of an AI/ML BOM to supplement SBOMs (software bills of materials) and PBOMs (product bills of materials): “The AI/ML BOM specifically targets the elements of AI and machine learning systems. It addresses risks unique to AI, such as data poisoning and model bias, and requires continuous updates due to the evolving nature of AI models.”

Absent this AI/ML BOM, in-house developers must rely on either their own expertise or that of third parties (such as Protect AI) to discover how vulnerabilities in the often opaque machine learning pipeline can be exploited to introduce flaws into the final model before deployment.
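Several of the flaws listed above (notably the Transformers RCE) fall into the broad class of unsafe deserialization, where simply loading an attacker-supplied artifact executes code. The sketch below illustrates that class in general terms using Python's pickle format, which many ML model artifacts are built on; the `Payload` class is hypothetical, and the harmless `eval` stands in for what would be an arbitrary command in a real attack. It is not a proof of concept for any specific CVE.

```python
import pickle

class Payload:
    """Hypothetical malicious object embedded in a "model" artifact."""

    def __reduce__(self):
        # pickle records this callable-plus-arguments pair at serialization
        # time; deserialization then CALLS it. A real exploit would use
        # something like os.system; a benign eval is used here.
        return (eval, ("40 + 2",))

# Attacker side: craft the artifact.
malicious_blob = pickle.dumps(Payload())

# Victim side: a pipeline that blindly unpickles a downloaded model
# runs the attacker's code as a side effect of loading it.
result = pickle.loads(malicious_blob)
print(result)  # → 42, proving the embedded call executed during loading
```

This is why model scanners and formats that cannot embed code (such as safetensors) are commonly recommended: the risk sits in the act of loading itself, before the model is ever run.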

Protect AI has two primary methods for AI/ML model vulnerability detection: scanning and bounty hunters. Its Guardian product, introduced in January 2024, can use the output of its AI/ML scanner (ModelScan) to provide a secure gateway. But it is the firm’s community of independent bounty hunters that is particularly effective at discovering new vulnerabilities.
