How To Mitigate The Enterprise Security Risks Of LLMs


Christopher Savoie, PhD, is the CEO & founder of Zapata AI. He is a published scholar in medicine, biochemistry and computer science.

Since ChatGPT came out last year, large language models (LLMs) have been on the tip of every enterprise leader’s tongue. These AI-powered tools have promised to dramatically increase productivity by automating or assisting with the creation of marketing content, sales materials, regulatory documents, legal contracts and more, all while transforming customer service with more responsive, human-like chatbots.

However, as these LLMs become increasingly integrated into business operations, enterprises should be aware of several potential security risks.

The security risks of LLMs fall into three layers:

1. Sharing sensitive data with an external LLM provider.

2. The security of the model itself.

3. Unauthorized access to sensitive data that LLMs are trained on.

Sharing Sensitive Data With External LLM Services

Back in May, Samsung was in the news for banning the use of ChatGPT and other AI chatbots after sensitive internal source code was shared with the service. Samsung feared the code could be stored on the servers of OpenAI, Microsoft, Google or other LLM service providers and potentially be used to train their models.

By default, ChatGPT saves users’ chat history and repurposes it to further train OpenAI’s models. It’s possible this data could then be exposed to other users of the tool. If you use an external model provider, be sure to find out how prompts and replies can be used, whether they are used for training, and how and where they are stored.
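
Whatever a provider’s stated policy, one practical safeguard is to scrub obvious secrets from prompts before they ever leave your network. Below is a minimal sketch of the idea in Python. The redaction patterns are illustrative and the send_to_llm() helper is a hypothetical stand-in for whichever provider client you use; a production system would pair pattern matching with richer controls such as named-entity recognition and allow-lists.

    import re

    # Illustrative patterns only; a real deployment needs a far richer policy.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    }

    def scrub(prompt: str) -> str:
        """Replace anything matching a known sensitive pattern with a tag."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    # The prompt is sanitized before it crosses the network boundary.
    safe_prompt = scrub("Contact jane.doe@acme.com, key sk-abc123def456ghi7")
    print(safe_prompt)  # Contact [EMAIL REDACTED], key [API_KEY REDACTED]
    # send_to_llm(safe_prompt)  # hypothetical external provider call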

Many enterprises, particularly in regulated industries like healthcare or finance, have strict policies about sharing sensitive data with external services, and externally hosted LLM providers are no exception. Even if data isn’t inadvertently exposed to other users of these tools, customers have no recourse if the data they share with external LLM providers is breached.

To avoid these risks entirely, enterprises should consider training and running their AI chatbot tools within their own secure environment: private cloud, on-premises or whatever infrastructure the enterprise considers sufficiently secure.
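
For illustration, here is a minimal sketch of what self-hosting can look like, using the open-source Hugging Face transformers library. The model name "gpt2" is a stand-in for whichever enterprise-approved open model is chosen; the point is that the weights, prompts and completions all stay on infrastructure the enterprise controls.

    # A minimal self-hosting sketch using Hugging Face transformers.
    # "gpt2" is an illustrative stand-in for an enterprise-approved model
    # whose weights are mirrored inside the private network.
    from transformers import pipeline

    # Inference runs locally, so prompts and completions never leave
    # the enterprise's own environment.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Draft a summary of our internal data-retention policy:"
    result = generator(prompt, max_new_tokens=80, do_sample=False)
    print(result[0]["generated_text"])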
