
Key Challenges LLMs Face for Enterprise

By Pat Calhoun, Chief Executive Officer
September 21, 2023

I have had the opportunity to meet with a number of CIOs who are excited about bringing in large language models (LLMs) through services such as Microsoft’s Azure OpenAI and Google’s Vertex AI. One of the obvious use cases they have been looking at is leveraging LLMs to assist with enterprise search for employee self-help.

This makes sense since finding enterprise content has historically been a challenge. Why? First, content is everywhere, from SharePoint to Confluence to ServiceNow. Second, content is constantly changing. As a result, most intranets and ITSM self-service portals have been unable to effectively serve up the right content to employees, so employees continue to overwhelm the service desk.

This article outlines the challenges organizations will face on their journey to leveraging LLMs for enterprise search.

Challenge #1: Importing Enterprise Content

Most organizations believe they can “fine-tune” a private large language model by importing enterprise content into it. The idea is that if they can “convince” a private LLM to prioritize enterprise content, the enterprise search problem will be solved.

Unfortunately, it doesn’t work that way. LLM foundation models have been trained on an enormous amount of content, which presents two primary issues:

  • Because an LLM contains a wealth of information, it is very difficult to make it “forget” trained content in favor of enterprise content. This means answers can blend the enterprise’s content with other content the LLM was previously trained on.
  • In the absence of enterprise content for a specific topic, the LLM will “hallucinate” an answer, producing highly unpredictable, yet often believable, responses.

Does this mean LLMs should not be used for enterprise search? Not at all. But they should be leveraged for what they are really good at – generative AI.

Challenge #2: Searching for Enterprise Content

Finding answers to questions in enterprise content requires a search platform rather than an LLM (which we just identified as the right tool for generative AI). There are a number of newer search technologies, such as vector databases, that can retrieve specific answers to the questions employees ask. However, that content lives across a multitude of data repositories and is constantly changing, so there needs to be a scalable way of identifying new and updated content dynamically.
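To make that concrete, here is a minimal sketch of an incremental indexing loop. The fetch_changed_docs() connector is a hypothetical placeholder; a real system would call each repository’s own change or audit API.

```python
from datetime import datetime, timezone

def fetch_changed_docs(repo_name: str, since: datetime) -> list[dict]:
    """Placeholder connector: a real implementation would call the
    repository's change API (SharePoint, Confluence, ServiceNow, ...)
    and return documents modified after `since`."""
    return []

def sync(repo_names: list[str], index: dict, last_sync: datetime) -> datetime:
    """Re-index only new or updated documents so the search index
    keeps pace with constantly changing enterprise content."""
    cycle_start = datetime.now(timezone.utc)
    for name in repo_names:
        for doc in fetch_changed_docs(name, since=last_sync):
            index[doc["id"]] = doc["text"]  # replaces any stale copy
    return cycle_start  # becomes `last_sync` for the next polling cycle
```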

Further, employees express the same need in many different ways, and the words they use are not always the same as those found in enterprise content. The search platform must therefore bridge that vocabulary gap, understanding how employees actually phrase their requests and matching that phrasing against enterprise content.
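As a rough illustration of how semantic search bridges that gap, the sketch below ranks articles by vector similarity to the employee’s question. The embed() function here is a toy placeholder; a real system would use an embedding model and a vector database.

```python
from math import sqrt

def embed(text: str) -> list[float]:
    """Toy placeholder embedding (character frequencies). A real
    system would call an embedding model to capture meaning."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Each article is embedded when it is indexed.
articles = {
    "KB0101": "How to reset your VPN password",
    "KB0102": "Requesting a new laptop through the service catalog",
}
index = {doc_id: embed(text) for doc_id, text in articles.items()}

# The employee's phrasing need not match the article's wording;
# similarity in embedding space is what bridges the vocabulary gap.
query = embed("I can't log in to the VPN anymore")
ranked = sorted(index, key=lambda d: cosine(query, index[d]), reverse=True)
print(ranked)  # article ids, most relevant first
```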

Challenge #3: Prioritizing Search Results

Once you have identified an effective search platform, the next issue is that, by default, all indexed content becomes available to every employee, which violates the basic premise of enterprise security. Because most search platforms do not understand the concept of access control, an additional layer must be built that checks search results against access control rules, ensuring employees only see what they should.

A search also typically returns a number of matches, and no employee wants to wade through countless results. A strong platform therefore ranks the content against the original question or issue in order to ensure a precise answer is provided.
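A minimal sketch of that filter-then-rank layer, using made-up document ids and group names:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    doc_id: str
    score: float          # relevance to the original question
    allowed_groups: set   # groups permitted to read this document

def authorize_and_rank(hits: list[Hit], user_groups: set, top_k: int = 3) -> list[Hit]:
    """Drop results the employee may not see, then rank what is left
    by relevance so only the most precise answers are returned."""
    visible = [h for h in hits if h.allowed_groups & user_groups]
    return sorted(visible, key=lambda h: h.score, reverse=True)[:top_k]

hits = [
    Hit("KB0101", 0.91, {"all-employees"}),
    Hit("HR-0042", 0.88, {"hr-only"}),       # filtered out for most users
    Hit("KB0099", 0.40, {"all-employees"}),
]
for h in authorize_and_rank(hits, user_groups={"all-employees"}):
    print(h.doc_id, h.score)
```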

Challenge #4: Delivering Consumable Answers

After a precise answer has been identified across the multitude of available content, the generative capabilities of LLMs can then be used to make responses consumable. This is particularly important for technical questions, since many knowledge articles were written by a technical writer for a technical audience. Generative AI can rephrase that content so every employee can understand it.

Providing the precise content directly to the LLM as the prompt for generating an answer keeps hallucinations to a minimum. Further, because the LLM is used only for its generative capabilities, the overall cost of the service is minimized.
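As a sketch of what that grounded prompt might look like (the llm.complete() call at the end is hypothetical):

```python
def build_prompt(question: str, passage: str) -> str:
    """Constrain the LLM to the retrieved enterprise content so it
    rewrites a known-good answer instead of inventing one."""
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{passage}\n\n"
        f"Question: {question}\nAnswer:"
    )

passage = "To reset your VPN password, open the self-service portal ..."
prompt = build_prompt("How do I reset my VPN password?", passage)
# response = llm.complete(prompt)  # hypothetical LLM client call
```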

Summary


The promise of using LLMs for enterprise search can be realized as long as organizations overcome the challenges outlined in this article and use the right tool for each challenge.  


To deliver on employee expectations, enterprises can either build the following capabilities or find a platform that delivers them out of the box with little to no effort:

  • Uses LLMs for what they are good at – generative AI
  • Uses a vector database or similar technology to identify specific answers to questions being asked by employees
  • Has a scalable way of identifying new or updated content dynamically across a range of content repositories
  • Ranks the content against the original question or issue in order to ensure precise answers are provided
  • Bridges the gap between how employees phrase their requests and the language found in enterprise content
  • Has a built-in layer that checks search results against access controls, ensuring employees only see what they should

In looking for the right solution, I hope you will consider Espressive Barista, our generative AI-based virtual agent. Barista incorporates all of these capabilities to enable enterprises to leverage LLMs for enterprise search. And Barista takes it one step further by incorporating an advanced automation framework. The truth is that employees don’t want to read knowledge articles if they don’t have to; they just want their problems resolved. That is why Barista prioritizes automations over answers.

Learn more about Barista here.
