
ChatGPT Security Risks - How to Protect Your Enterprise

By Leah Phipps, Product Marketing | January 31, 2024

“Although CIOs see ChatGPT and other LLMs as extremely promising, they are struggling with the potential loss of PII or corporate secrets from employees sharing this information in their interactions,” said Pat Calhoun, founder and CEO of Espressive. “Enterprises, such as JPMorgan Chase, have gone as far as denying access to ChatGPT services from corporate resources. With systemic controls and support for company-specific policies, customers can unlock the full power of LLM services in a secure manner, without having to worry about the disclosure of sensitive information.”

ChatGPT is a powerful tool, and enterprise employees want to use it at work to make their lives easier. The problem is that conversations held on ChatGPT may be used as training data to improve the underlying language model, so there is an understandable organizational concern that employees will share sensitive information in these conversations and that information will then end up in public LLM training data.

Our goal should be to enable employees to use this powerful tool while implementing safeguards for the organization so that security risks are minimized.

Security risks for the enterprise using ChatGPT

  • PII (personally identifiable information) - Data privacy regulations are increasingly at the forefront of how we handle customer data. Not to mention, customers expect the ability to control their own data. For that reason, it’s important for organizations to limit the use of PII within ChatGPT conversations.
  • Corporate / trade secrets - Some information within your organization constitutes a specific competitive advantage for the brand that needs to be protected. We should strive to make it harder for employees to accidentally input trade secrets or sensitive corporate information into ChatGPT.
  • Source code - If programmers or engineers within your organization paste source code into a ChatGPT conversation, that code may become available to public LLMs. Making an organization’s proprietary source code available to the public is a clear competitive disadvantage.

How to solve these risks

Implement an LLM gateway

So how can an organization enforce security policies on a tool as pervasive as ChatGPT? Espressive’s industry-leading Barista technology acts as a gateway to LLM access, enabling enterprises to enforce policies for safe and responsible use.

LLM gateway safeguards

  • Verify policy compliance - ensure that no source code, PII, or trade secrets are shared in a ChatGPT interaction (a simplified sketch of this kind of check follows the list)
  • Disable access for specific content areas - IT teams can turn off LLM access to areas of an organization’s internal content and data that contain more sensitive information
  • Restrict questions to only those that are work-related
  • Ensure employees understand the limitations of LLMs (for example, possible inaccuracies in LLM responses)
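To make the first safeguard concrete, here is a minimal, purely illustrative sketch in Python of the kind of outbound policy check an LLM gateway might perform before a prompt ever reaches a public LLM. The patterns, function names, and blocking behavior below are hypothetical simplifications for this post, not Espressive’s Barista implementation; a production gateway would rely on far more robust detection and on company-specific policies.

```python
import re

# Hypothetical, simplified patterns for content a gateway might screen for.
# A real system would use stronger detection (named-entity recognition,
# secret scanners, code classifiers) rather than a few regular expressions.
POLICY_PATTERNS = {
    "email address (PII)": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US Social Security number (PII)": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible source code": re.compile(r"\b(def|class|import|function|public static)\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the labels of any policy violations found in an outbound prompt."""
    return [label for label, pattern in POLICY_PATTERNS.items() if pattern.search(prompt)]


def send_to_llm(prompt: str) -> str:
    """Placeholder for the actual call to an external LLM provider."""
    return f"(forwarded to LLM) {prompt}"


def gateway(prompt: str) -> str:
    """Block or forward an employee prompt based on the policy check."""
    violations = check_prompt(prompt)
    if violations:
        # Block (or redact) instead of silently sending the data to a public LLM,
        # and tell the employee which policy was triggered.
        return "Blocked by policy: " + ", ".join(violations)
    return send_to_llm(prompt)


if __name__ == "__main__":
    print(gateway("Summarize our PTO policy"))                # forwarded
    print(gateway("Draft a reply to jane.doe@example.com"))   # blocked: PII
    print(gateway("Review this: def transfer(amount): ..."))  # blocked: source code
```

Even this toy version illustrates the design point: the policy check sits between the employee and the LLM, so sensitive content is caught before it leaves the organization rather than discovered after the fact.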

Talk to an experienced professional today

At Espressive, our team has helped countless organizations navigate the implementation and management of workplace automation tools. We speak the language, too: from our ML Operations team to our Customer Success team, you’re working with professionals experienced in workplace automation and ChatGPT security risk management. Book a call with our team today to learn more, or download our eBook about ChatGPT security risks to further explore the topic of managing ChatGPT use at work.
