ChatGPT Security Risks - How to Protect Your Enterprise
“Although CIOs see ChatGPT and other LLMs as extremely promising, they are struggling with the potential loss of PII or corporate secrets from employees sharing this information in their interactions,” said Pat Calhoun, founder and CEO of Espressive. “Enterprises, such as JPMorgan Chase, have gone as far as denying access to ChatGPT services from corporate resources. With systemic controls and support for company-specific policies, customers can unlock the full power of LLM services in a secure manner, without having to worry about the disclosure of sensitive information.”
ChatGPT is a powerful tool, and enterprise employees want to use it at work to make their lives easier. The problem is that conversations held on ChatGPT may be used as training data to improve the language model—so naturally, there is an organizational concern that employees might share sensitive information in these conversations that then gets used for public LLM training.
Our goal should be to enable employees to use this powerful tool while implementing safeguards for the organization so that security risks are minimized or eliminated.
Security risks for the enterprise using ChatGPT
- PII (personally identifiable information) - Data privacy regulations are increasingly at the forefront of how we handle customer data. Not to mention, customers expect the ability to control their own data. For that reason, it’s important for organizations to limit the use of PII within ChatGPT conversations.
- Corporate / trade secrets - Some information within your organization constitutes a specific competitive advantage for the brand that needs to be protected. We should strive to make it harder for employees to accidentally input trade secrets or sensitive corporate information into ChatGPT.
- Source code - If programmers or engineers within your organization enter source code into a ChatGPT conversation, that code can end up in public LLM training data, exposing the organization to risk. There is a clear competitive disadvantage to making an organization's proprietary source code available to the public.
How to solve these risks
Implement an LLM gateway
So how can an organization implement security policies on such a pervasive tool as ChatGPT? Espressive’s industry-leading Barista technology acts as a gateway to LLM access, enabling enterprises to enforce policies for safe and responsible access.
LLM gateway safeguards
- Verify policy compliance - ensuring that source code, PII, and trade secrets are not shared in ChatGPT interactions
- Disable access for specific content areas - IT teams have the ability to turn off LLM access to areas of an organization’s internal content and data that contain more sensitive information
- Restrict questions to only those that are work related
- Ensure employees understand issues with LLMs (for example, possible inaccuracies in LLM responses)
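To make the gateway idea concrete, here is a minimal sketch of how a policy-checking layer might sit between employees and an LLM. This is not Espressive's Barista implementation—the `POLICY_PATTERNS`, `check_prompt`, and `gateway_send` names are hypothetical, and a production gateway would use far more robust detection (named-entity recognition, secret scanners, DLP services) than simple regular expressions:

```python
import re

# Hypothetical policy patterns for illustration only. Real gateways rely on
# dedicated DLP tooling rather than a handful of regexes.
POLICY_PATTERNS = {
    "PII": [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-style number
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    ],
    "source code": [
        re.compile(r"\bdef \w+\s*\("),               # Python function header
        re.compile(r"\bclass \w+\s*[:({]"),          # class definition
    ],
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violated_categories) for a prompt before it leaves the org."""
    violations = [
        category
        for category, patterns in POLICY_PATTERNS.items()
        if any(p.search(prompt) for p in patterns)
    ]
    return (not violations, violations)

def gateway_send(prompt: str, send) -> str:
    """Forward the prompt to the LLM (via the `send` callable) only if it
    passes policy checks; otherwise block it and tell the employee why."""
    allowed, violations = check_prompt(prompt)
    if not allowed:
        return f"Blocked: prompt appears to contain {', '.join(violations)}."
    return send(prompt)
```

A blocked prompt never reaches the external service, which is the key design point: enforcement happens inside the enterprise boundary, before any sensitive text is transmitted.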
Talk to an experienced professional today
At Espressive, our team has helped countless organizations navigate the implementation and management of workplace automation tools. We speak the language too: from our ML Operations Team to our Customer Success team, you’re speaking with experienced professionals in the realm of workplace automation and ChatGPT security risk management. Book a call with our team today to learn more, or download our eBook about ChatGPT security risks to further explore the topic of managing ChatGPT use at work.