The AI Dream Team: Virtual Agent Technology + ChatGPT
ChatGPT has set records for the fastest-growing user base of all time. According to a recent survey, 43% of professional workers are using ChatGPT for work-related tasks without safeguards, and 68% of those workers are doing so without their manager's knowledge. With employee adoption this high, simply banning or limiting ChatGPT is not going to solve the issue. How can enterprises responsibly enable employee access to ChatGPT at work?
Let’s examine how existing technology, like virtual agents, can provide guardrails and help unlock the full power of large language model (LLM) services responsibly while enabling employee productivity.
ChatGPT and Virtual Agent Technology: Better Together
Our experience at Espressive shows that employees call the service desk because they want someone to solve their issue for them. Despite the remarkable experiences that LLMs are providing, ChatGPT and other LLMs are focused on generating content, not launching an automation to resolve an issue.
LLMs have been trained on publicly available content, not on an enterprise's own knowledge. Many organizations are now looking at how to fine-tune a private LLM with their content. This is a fairly complex, resource-intensive, and expensive process that, if not done right, can result in hallucinations, as well as employees gaining access to content they should not have.
Further, in our experience automating employee self-help, we have found that employees prefer to have actions completed on their behalf rather than being handed instructions to follow. This is generally why they call the service desk for help, and why organizations are deploying self-service virtual agents.
Many organizations have taken the approach of blocking direct access to public LLMs, such as ChatGPT and Google Bard, and instead delivering the experience through an integration with their virtual agent. This is, after all, where employees go when they have a question or issue.
Virtual agents excel at orchestrating among several possible outcomes for a given interaction and choosing the one that will deliver the best experience. They know how to automate the resolution of a task (e.g., reset my MFA, create a distribution list). If automation is not possible, virtual agents can look to the organization's knowledge. When an answer is found in a knowledge article, a virtual agent first ensures the employee has access to the content and, if so, leverages its own private LLMs and the power of generative AI to create responses that are easy to consume.
If knowledge is not available, a virtual agent can then determine whether the question is appropriate to ask an LLM, or open a ticket on behalf of the employee when needed.
By choosing the best outcome to ensure an employee’s question or issue can be resolved with no human intervention, virtual agents meet employee expectations with the highest automated resolution rate.
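The decision flow described above, try automation first, then permission-checked enterprise knowledge, then an appropriateness-gated LLM call, and finally a ticket, can be sketched in a few lines of code. This is a minimal illustration only: every name and data structure here is hypothetical and does not represent Espressive's actual API.

```python
def handle_request(question, automations, knowledge_base, user_groups, is_safe_for_llm):
    """Pick the best outcome for an employee request, in priority order.

    All parameters are illustrative: `automations` maps known requests to
    automated resolutions, `knowledge_base` maps questions to articles with
    access-control groups, and `is_safe_for_llm` decides whether a question
    is appropriate to send to a public LLM.
    """
    # 1. Automate resolution when a matching automation exists.
    if question in automations:
        return ("automation", automations[question])

    # 2. Otherwise, look for a knowledge article, but only serve it if the
    #    employee's groups satisfy the article's access requirements.
    article = knowledge_base.get(question)
    if article and article["required_groups"] <= user_groups:
        return ("knowledge", article["body"])

    # 3. Only appropriate questions fall through to a public LLM.
    if is_safe_for_llm(question):
        return ("llm", "answer generated by LLM")

    # 4. Everything else becomes a ticket opened on the employee's behalf.
    return ("ticket", "ticket opened for the service desk")
```

The priority ordering is the point: automation resolves the issue outright, knowledge is gated by access control before any generative step, and the public LLM is a last resort before human escalation.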
How to Safely Bring LLMs to the Enterprise
To safely and responsibly bring LLMs into the enterprise, organizations must help employees transition to this new technology. To solve for adoption, the key ingredient to success, enterprises must take the following actions:
- Make sure the technology you are bringing on (in this case, LLMs) is easy to use, easy to access, and meets employees where they are.
- Automate the resolution of common employee questions and problems to improve resolution rates and the overall efficiency of your organization.
- Communicate often with your employees through a change management program and make sure feedback is considered so that your new technology is not abandoned before it has time to catch on.
When an LLM is integrated with a virtual agent solution, employees are far more likely to seek information from one place that has the answers to all their questions (both work-related and not) rather than piecing together answers from multiple sources. However, organizations must put employee adoption first and foremost if they want to ensure the success of any new technology.
Unlock a New World of Workplace Assistance
If your organization is looking at automating your service desk, or ways to leverage LLMs, reach out and request a demo. We are “hired” to help organizations automate the resolution of service tickets with our advanced automation framework. Our philosophy is to automate first, and when not possible, leverage enterprise content combined with the generative capabilities of LLMs. This gives our customers the best of both worlds, with minimal effort.