UPDATED 12:41 EDT / SEPTEMBER 26 2023

SECURITY

Security threats of AI large language models are mounting, spurring efforts to fix them

A new report on the security of artificial intelligence large language models, including OpenAI LP’s ChatGPT, shows a series of poor application development decisions that carry weaknesses in protecting enterprise data privacy and security.

The report is just one of many examples of mounting evidence of security problems with LLMs that have appeared recently, demonstrating the difficulty in mitigating these threats. The latest report is by Rezilion, a company that sells software supply chain security products. Entitled, “Explaining the Risk: Exploring LLMs Open-Source Security Landscape,” it contains analysis of two research topics. 

The first half explores the Open Worldwide Applications Security Project’s “Top 10 threats for LLMs” that was published earlier this year and recently revised. “We aim to provide practical, actionable and concise security guidance to help these professionals navigate the complex and evolving terrain of LLM security,” the OWASP authors wrote in the report’s introduction, justifying the creation of a new “top 10” list.

OWASP is a longstanding community of developers dedicated to improving their app security. The nonprofit has tens of thousands of members spread across more than 250 local chapters, and runs numerous training conferences as well as distribute regular top 10 threat lists on various topics.

The No. 1 threat has to do with attackers manipulating prompts, causing the model to do something malicious even if an LLM has certain guardrails. These could vary “from solicitation of sensitive information to influencing critical decision-making processes under the guise of normal operation,” according to OWASP. A detailed example of how this type of threat can be constructed can be found in Legit Security’s Nadav Noy blog post.

Rezilion’s report takes the OWASP Top 10 list several steps further, grouping them into four broad risk categories on trust boundaries, data management, model structure and basic security best practices. For each category, their report takes a deeper dive into how each risk appears in the LLM context and steps that developers and enterprise security managers can take to mitigate these risks.

Application security analysis of LLMs

The second half of Rezilion’s report does some original static application software analysis of a series of open-source projects, both involving LLMs and other unrelated tools for comparison purposes. They employ the OpenSSF Scorecard in this evaluation, which points out numerous vulnerabilities in the code, and how the open-source repositories are being managed — or not, as is often the case. The scorecard provides an overall grade of a specific project’s security level, ranging from 0 to 10, with higher grades indicating better overall application security.

Rezilion used this scorecard to examine 50 of the most popular LLM open source projects on GitHub that were created most recently and generated 3,000 “stars” or likes from developers. Not surprisingly, the LLM-related projects didn’t fare well. One ChatGPT version — not the original, by the way — received a 6.1 grade. The average score was 4.6. Many of the poorer scores were because the projects were new and not very well-maintained.

The researchers found that “the more popular the LLM project is, the lower its scorecard score. While we would expect to see the more popular projects maintaining better security standards and complying with best practices, in actuality, we see the exact opposite trend.” One of the AutoGPT projects had a significant code base that had never undergone any validation, meaning that a single developer had published major portions of its code. 

Many of the LLM projects contained dangerous workflow habits, such as untrusted code checkouts, logging of GitHub secrets, or using potentially untrusted inputs in various scripts. All of these techniques could result in potential compromises of the code by bad actors, for example. 

What about malicious code developers?

The OWASP and Rezilion efforts document one side of the LLM security situation: what happens when a developer inadvertently introduces coding errors. And though that is certainly important, there is another class of LLM insecurities that have to do with malicious developers who purposefully introduce malware or design their models to do bad things. There are plenty of examples of that, including AI threats to software supply chains, adversarial chatbot attack weapons and compromised GitHub code repositories

All that means it’s important to take security-by-design more seriously, especially as more developers get comfortable with using chatbots and LLMs.

Potential remedies

Perhaps the best advice is given by Orion Cassetto, director of marketing for Radiant Security. He advocates for greater model transparencySecurity leaders should look for AI solutions that expose how conclusions were reached, what activity has been performed, what source data was used, and where it used external data sources like threat intelligence,” he says.

One way to help this is to select a proprietary LLM that was created by a “security-conscious developer.” There are several examples of these products, highlighting AI-enhanced products from Palo Alto Networks Inc., Darktrace Holdings Inc. and Sentra Inc., just to name a few.

One interesting effort to watch is the U.S. government’s Bengal project run by the Intelligence Advanced Research Projects Agency. It’s examining the biases of LLMs and quantifying threat modalities and ways to mitigate threats. The first meeting with external participants will be in October.

“The U.S. Government is also interested in identifying and mitigating hazardous use of LLMs by potential adversaries,” IARPA program manager Tim McKinnon wrote in a post on DefenseScoop, a military news website. “Recent generative AI/LLM models are complex and powerful, and researchers are only beginning to understand the ramifications of their adoption.” 

And OpenAI — the company that brought ChatGPT into the world — isn’t exactly standing by either. Last week it launched the OpenAI Red Teaming Network to help benchmark and vet its models by expert researchers in more than a dozen fields who have been invited to apply. How quickly they can staff this operation up, and how these experts will work together to craft procedures, playbooks, and other security measures, remain to be seen.

Certainly, there are numerous app security tools that can examine code pipelines. But one glaring omission spotted by the Rezilion analysis is that LLMs need to adopt a security-by-design approach and incorporate existing security frameworks, such as those developed expressly for AI and LLM contexts. They include Google’s Secure AI Framework, Nvidia’s NeMo Guardrails and MITRE’s Adversarial Threat Landscape for Artificial-Intelligence Systems.

Until that happens, the burden is on each individual user and developer who wants to employ an LLM to vet them properly for security purposes.

Image: geralt/Pixabay

A message from John Furrier, co-founder of SiliconANGLE:

Your vote of support is important to us and it helps us keep the content FREE.

One-click below supports our mission to provide free, deep and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU