Can LLMs Automate Resolution of Employee Questions? Not on Their Own, But They Can Help.
The pace of innovation in the generative AI space has been super exciting, with what seems like yet another foundational model announced every week. This gives organizations an opportunity to start evaluating these models against their core use cases.
I’ve talked with many CIOs at several events in the past couple of months, and almost all of them are interested in leveraging LLMs to automate the resolution of service desk tickets. It makes sense given these foundational models seem to have answers to everything under the sun. But delivering answers is different from automating resolution – and that is not a core competency of LLMs.
Why Automating Should be a Priority
We founded Espressive with the belief that employees needed a better self-help platform. From our heritage on the service desk and at ServiceNow, we saw that employees continued to contact the service desk directly rather than searching for knowledge in a portal, because they wanted someone to “fix the problem” for them. Very few employees had the patience, let alone the wherewithal, to read through complex solutions to their issues.
It is for this reason that we focused a lot of our early efforts on delivering an advanced automation framework with the broadest set of pre-built integrations and automations. This ensured that our customers could get up and running quickly with relatively low effort, while maximizing the number of tickets resolved automatically.
In fact, for most of our customers, over 63% of their “deflected tickets” are the result of automations versus knowledge search.
Can LLMs Help You Automate?
With the introduction of tools like ChatGPT, which seems to have answers to everything, it was natural for organizations to look at them as a possible way to help with their service desk challenges. So, I asked ChatGPT whether it was capable of automating the resolution of employee questions and issues. The response? “As an AI language model, I don’t have direct access to enterprise systems or workflows, so I cannot trigger automations directly.” That response didn’t surprise me, since the core competency of LLMs is generative AI, and they do not have access to internal corporate systems.
This means that LLMs should be leveraged for their generative capabilities to provide employees with instructions on how to solve issues. That is easier said than done, given that answers differ greatly based on the applications and tools in use. As a consequence, many organizations are looking at “fine tuning” these LLMs with their enterprise knowledge. But doing this is actually quite challenging, which I explain in this blog.
But, again, while possible to leverage LLMs for enterprise search, it is important to remember that employees generally do not want to read instructions and instead want their problems solved for them.
Automate, Automate, Automate
It is for this reason that we always prioritize automating the resolution of a problem. When I say automating, I mean resolving the issue end-to-end by invoking APIs in a third-party system vs. requiring someone to follow instructions. In other words, don’t tell someone how to reset their password – just do it.
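To make the distinction concrete, here is a minimal sketch of the “just do it” approach: rather than returning knowledge-base instructions, the automation calls an identity provider’s API to reset the password end-to-end. The `IdentityClient` below is a hypothetical stand-in for a real integration (e.g. Okta or Azure AD); the class, method, and endpoint names are illustrative, not Espressive’s actual implementation.

```python
from dataclasses import dataclass


@dataclass
class TicketResolution:
    resolved: bool
    message: str


class IdentityClient:
    """Hypothetical stand-in for a real identity-provider SDK or REST integration."""

    def expire_password(self, user_id: str) -> str:
        # A real client would call the provider's password-lifecycle API
        # and receive a one-time temporary password in response.
        # Here we simulate that response locally.
        return "temp-" + user_id[:4]


def resolve_password_reset(client: IdentityClient, user_id: str) -> TicketResolution:
    """Resolve the ticket by performing the reset, not by explaining how to do it."""
    temp_password = client.expire_password(user_id)
    return TicketResolution(
        resolved=True,
        message=f"Your password was reset. Temporary password: {temp_password}",
    )


result = resolve_password_reset(IdentityClient(), "jdoe1234")
```

The employee never sees instructions; the virtual agent invokes the third-party API and reports the outcome, which is the difference between answering a ticket and resolving it.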
After 7 years of automating employee issues, we have found that, despite every organization being different, the top call drivers are nearly universal – though automating their resolution differs based on the tools used.
As a result, we have built a library of automations that target these top call drivers across the majority of technologies. For instance, software provisioning is in the top 5 for most organizations, but granting access differs based on the technology they use. This has resulted in a large pre-built library of integrations and automations which we can quickly deploy for a customer, ensuring we maximize the value delivered on day one.
Let Us Help You Help Your Employees
We are “hired” to help organizations automate the resolution of service tickets with our advanced automation framework. Our philosophy is to automate first, and when not possible, leverage enterprise content combined with the generative capabilities of LLMs. This gives our customers the best of both worlds, with minimal effort.
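The automate-first philosophy described above can be sketched as a simple dispatch: try to match the request to a pre-built automation, and only fall back to an LLM-generated answer grounded in enterprise content when no automation exists. All names here (`AUTOMATIONS`, `generate_answer`, the intent keys) are illustrative assumptions, not a real product API.

```python
# Hypothetical registry of pre-built automations, keyed by intent.
AUTOMATIONS = {
    "password_reset": lambda user: f"Password reset completed for {user}.",
    "software_access": lambda user: f"Access granted for {user}.",
}


def generate_answer(intent: str) -> str:
    # Placeholder for an LLM call grounded in enterprise knowledge.
    return f"Here are the steps to handle '{intent}': ..."


def handle_request(intent: str, user: str) -> str:
    automation = AUTOMATIONS.get(intent)
    if automation:
        return automation(user)  # resolve end-to-end via automation
    return generate_answer(intent)  # fall back to a generative answer
```

The ordering is the point: automation delivers the resolved outcome when available, and the generative fallback ensures the employee still gets a useful answer when it is not.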
It is for this reason that Espressive customers commonly achieve deflection rates between 55% and 67% within the first couple of months. In fact, we are so confident in our automation-first approach that we are willing to contractually commit to a minimum deflection rate.
If your organization is looking to automate your service desk, or exploring ways to leverage LLMs, give us a shout. What we recommend to most customers is to run an actual proof-of-value (POV) so you can interact with our solution, see how it would work in your environment, and understand the level of investment you would need to make on your end to be successful.