Why RAG Has a Place in Your LLMOps

April 15, 2024

The Challenge with LLMs

With the explosion of generative AI tools available for providing information, making recommendations, or creating images, LLMs have captured the public imagination. Although we cannot expect an LLM to have all the information we want, and it may sometimes even return inaccurate information, consumer enthusiasm for using generative AI tools continues to build.

When applied to a business scenario, however, the tolerance for models that provide incorrect or missing answers rapidly approaches 0%. We are quickly learning that broad, generic LLMs are not suitable for domain-specific or company-specific information retrieval. The large datasets that go into training an LLM often result in generic or confused responses, especially around concepts and terms that are loosely defined or have industry-specific meanings. Imagine an over-enthusiastic new employee with minimal previous work experience who confidently answers every question while lacking context and the latest information – this is akin to an LLM. This is why Retrieval Augmented Generation (RAG) has a pivotal role in the AI tech stack for LLMOps.

What Is RAG?

RAG architecture is essentially a question-and-answer system grounded in an authoritative external source (which in most cases is your company data), providing the missing context for the LLM. Integrating RAG into the LLM workflow produces more predictable, accurate responses by augmenting the LLM with external knowledge.

The data ingestion step enables the RAG system to build a database of contexts that it can reference when queried later, retrieving the correct information for the LLM to format and include in its response.

By including RAG in your AI tech stack, you are providing the domain-specific, contextually-correct external data that the LLM needs. Without retraining the model, RAG enables the LLM to access relevant and updated information for a more accurate response.
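To make the retrieve-then-generate loop concrete, here is a minimal, self-contained sketch. The hashed bag-of-words embedding and the canned call_llm stub are toy stand-ins, not part of any real RAG product, for whatever embedding model and LLM endpoint your stack actually uses.

```python
import numpy as np

def embed_text(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def call_llm(prompt: str) -> str:
    """Stand-in for your LLM endpoint; replace with a real client call."""
    return f"[LLM response conditioned on a prompt of {len(prompt)} characters]"

def answer_with_rag(question: str, chunks: list[str], top_k: int = 3) -> str:
    """Retrieve the most relevant chunks, then let the LLM answer from them."""
    chunk_vecs = np.stack([embed_text(c) for c in chunks])
    q_vec = embed_text(question)
    # Cosine similarity between the question and every stored chunk.
    scores = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9
    )
    context = "\n\n".join(chunks[i] for i in scores.argsort()[::-1][:top_k])
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The key point of the sketch: the LLM is never modified. All of the domain knowledge lives in the retrieved context that is prepended to the prompt.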

Benefits of RAG

There are many advantages to using RAG:

Greater control over data

Control over data access is critical for enterprises. For organizations managing their own LLM infrastructure, RAG enables them to use their internal data alongside the LLM. The RAG database can be maintained like any other internal database – in a secured location requiring access credentials.

Updating the LLM in real time

Retraining a billion-parameter LLM is a significant and expensive proposition requiring time and resources. Use RAG to make critical data available for generating responses with real-time or batch updates, ensuring your stakeholders are always receiving relevant responses from the LLM.
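As a sketch of what "updating in real time" means in practice, the snippet below (reusing the toy embed_text helper from the earlier sketch, and not tied to any particular vector store) appends newly arrived documents to the retrieval index without touching the LLM's weights.

```python
import numpy as np

class LiveIndex:
    """Minimal in-memory index that can be updated as new data arrives."""
    def __init__(self) -> None:
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def upsert(self, new_chunks: list[str]) -> None:
        # Only the index changes -- the LLM itself is never retrained.
        for chunk in new_chunks:
            self.chunks.append(chunk)
            self.vectors.append(embed_text(chunk))

# Stream today's updated policy document straight into the index.
index = LiveIndex()
index.upsert(["Return policy updated on 2024-04-15: refunds within 60 days."])
```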

Build user trust by mitigating hallucinations

In the absence of data, an LLM will simply generate it. By using RAG with guardrails, organizations can reduce or even eliminate the possibility of hallucinations and fabricated data. It takes a lot of effort to build a working generative AI tool for a company, but only one bad experience to have it go unused.
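One lightweight guardrail is simply to refuse when retrieval comes back empty and to instruct the model to answer only from the supplied context. The threshold and prompt wording below are illustrative assumptions, not a complete guardrail framework.

```python
REFUSAL = "I don't have enough information in the provided documents to answer that."

def guarded_prompt(question: str, context_chunks: list[str], min_chunks: int = 1) -> str | None:
    # If retrieval found nothing usable, refuse instead of letting the LLM guess.
    if len(context_chunks) < min_chunks:
        return None  # caller returns REFUSAL to the user
    context = "\n\n".join(context_chunks)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f'context, reply exactly: "{REFUSAL}"\n\n'
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```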

Limitations of RAG

Data and privacy

Privacy is critical for enterprises. For organizations utilizing external LLM services, RAG enables them to use their private data alongside the LLM. That said, it is important to note that this private information will be sent as context with every query to the external LLM service provider. The RAG database can be maintained like any other internal database, but using it with an external LLM service does expose the data to a third party.

Scale and RAG Models

For correct context retrieval, RAG systems use a smaller LLM to connect the query with the correct context, and many off-the-shelf RAG models work well on small collections of documents. But when the database scales and more documents with nuanced information are ingested, off-the-shelf RAG models often fail to retrieve the correct context. It is therefore important to understand that RAG models need to be updated and refined for your specific use cases in order to perform well at scale. We will address how to collect feedback and retrain RAG models in a future blog post.

Building and Deploying RAG

Data ingestion and preprocessing

To ensure accurate information retrieval, it is important to feed only relevant, domain-specific data into the RAG system.

After the documents have been loaded into the system, each document is split into smaller, more easily referenced segments called chunks. Chunk size depends in part on the capabilities of the embedding model and is an important part of the process. Another critical part of data preprocessing is deduplication, which eliminates the potential for duplicate references within the RAG system.
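Here is a rough sketch of the chunking and deduplication step, assuming a simple fixed-size splitter with overlap and exact-match dedup on normalized text; production pipelines typically use smarter, model-aware splitters.

```python
import hashlib

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping fixed-size character chunks."""
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size].strip()
        if chunk:
            chunks.append(chunk)
    return chunks

def deduplicate(chunks: list[str]) -> list[str]:
    """Drop chunks whose normalized text has already been seen."""
    seen, unique = set(), []
    for chunk in chunks:
        key = hashlib.sha256(" ".join(chunk.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(chunk)
    return unique

docs = ["...load your domain documents here..."]
chunks = deduplicate([c for doc in docs for c in chunk_text(doc)])
```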

Generating embeddings and building a vector database

As the data is ingested and processed, the embedding model converts each chunk into a vector, cataloging it in the database with a reference back to its source. Vector databases are designed for fast search and retrieval, which supports the low-latency inference needed to provide real-time results.
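As one concrete (assumed) setup, the sketch below uses the sentence-transformers library for the embedding model and a FAISS index as the vector database, building on the deduplicated chunks from the previous step; any embedding model and vector store with similar semantics would work.

```python
import faiss
from sentence_transformers import SentenceTransformer

# Assumed choices: a small open embedding model and an exact inner-product index.
model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(chunks, normalize_embeddings=True).astype("float32")

# Normalized vectors + inner product = cosine similarity.
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(vectors)  # row i of the index maps back to chunks[i]
```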

Inference

Once the RAG system is ready to be used as a reference and integrated with an LLM, the backbone of a queryable generative AI tool has been built. The system will be able to return contextually complete responses with referenceable sources.
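Putting the pieces together at inference time: a query is embedded, the vector database returns the closest chunks, and the LLM answers from them while the retrieved chunks double as referenceable sources. The call_llm stub from the earlier sketch stands in for your actual model endpoint.

```python
def query_rag(question: str, top_k: int = 3) -> dict:
    """Embed the query, retrieve context from the index, and generate a cited answer."""
    q_vec = model.encode([question], normalize_embeddings=True).astype("float32")
    _, idx = index.search(q_vec, top_k)  # nearest chunks by cosine similarity
    sources = [chunks[i] for i in idx[0]]
    context = "\n\n".join(sources)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return {"answer": call_llm(prompt), "sources": sources}

result = query_rag("What is our refund window?")
```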

Conclusion

Retrieval-Augmented Generation is becoming a best practice for improving the accuracy of LLM responses while reducing hallucinations or out-of-context answers. With a RAG setup, maintenance is also a bit easier – AI teams only need to retrain the embedding model and update the vector database. There is no need to manage and track prompts or switch LLM foundation models.

To optimize performance across the LLMOps tech stack, enterprises using NVIDIA AI Enterprise can take advantage of tools such as NVIDIA cuDF and NVIDIA NeMo Data Curator to reduce the time required to preprocess a large data corpus by performing parallel operations on the GPU. To fully preserve private information, teams can also use TensorRT-LLM for on-prem or VPC model deployment, coupled with NVIDIA NeMo Retriever, for maximum privacy without sacrificing inference performance. Learn more about NVIDIA’s NeMo Framework for Generative AI, deployed in a single click with ClearML’s full LLMOps solution.

Get started

ClearML’s open source foundational AI platform is available on GitHub. For more advanced features such as managing large unstructured datasets, role-based access control, and compute resource management by user group, please request a demo.
