
Can A RAG Chatbot Really Improve Content?

Tags: Vector Database, RAG, Usability, ApertureDB

The hunt for missing ApertureDB documentation, as learned from RAG-based Q&A on ApertureDB

September 10, 2024
Vishakha Gupta

Good documentation is not for the faint of heart! That may sound dramatic, but it's true. In fact, I used to almost ignore it, given how complex it could be to maintain and the fact that there was no clear return on investment. Only when our life started depending on it, errr… by that I mean, users' happiness, did we take on the arduous task of putting the right information in the right place. And are we there yet? Nope! So says our chatbot! Let's look at what it takes to keep documentation for a database product up to date, what we did before our chatbot experiment, and how we are evolving and improving it now.


Query language and API

ApertureDB is a database purpose-built for multimodal AI, so it manages multimodal data like text, images, videos, embeddings, annotations, and application metadata. Our JSON-based query language makes it easy to unify management of these data types behind a simple and consistent interface, so users no longer have to deal with multiple data backends or query languages. However, the uniqueness of our product comes with a need to thoroughly document the principles and specifications of our query language.
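
To make that concrete, here is a minimal sketch of a query through our Python SDK. The host, credentials, and the "category" property below are illustrative assumptions for the sketch, not a fixed schema:

    # A minimal sketch of a JSON query via the Python SDK; host,
    # credentials, and the "category" property are illustrative.
    from aperturedb import Connector

    client = Connector.Connector(host="localhost", user="admin", password="admin")

    # One JSON array can mix commands across modalities; here we find
    # up to 5 images whose "category" property equals "cat".
    query = [{
        "FindImage": {
            "constraints": {"category": ["==", "cat"]},
            "results": {"limit": 5},
            "blobs": True,
        }
    }]

    response, blobs = client.query(query)
    print(response)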


Evolution in ApertureDB documentation tools

ApertureDB originally grew out of a project at Intel, where we started our documentation with GitHub Pages. But as our code grew, the documentation was always out of sync, since the two were written in different places. Something like Doxygen strings could have helped, but it was complicated to attach much context or many examples unless we wrote pages of structured comments in code files. Once we started engaging with our users at ApertureData, we couldn't continue with GitHub documentation anymore, prompting a move to writing RST files, with examples, for every command in our query interface. That was our documentation for a long time, until we realized we wanted richer documentation capabilities. Finally, we landed on Docusaurus as our documentation tool.

We also had separate Python SDK documentation, since that was generated out of Python Docstrings. Even though much of our Python SDK is a simple wrapper for JSON queries, it was very inconvenient for anyone to look up how to do something because they had to go to two different repositories with little tying them together. As part of our migration to Docusaurus, we combined both sources of documentation into a unified documentation website, and also started introducing notebook examples to provide code samples to our users. This also made it easier for us to add cross-links between the different types of documentation.


Automatic evaluation of code snippets

As anyone maintaining API documentation can tell you, while it is very important to include many examples of the code required to perform various tasks, it's not easy to keep those examples in sync with changes to the API. The only way to ensure the sample code always works is to execute it as part of continuous integration and deployment. At one of our quarterly team meetings, we decided to automate the creation of our documentation code snippets from a test suite. It was one of our smarter ideas. Now we knew that any code we put in our docs would run and produce the responses we documented. We no longer feared copy/paste errors!
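
We won't detail the whole pipeline here, but the idea can be sketched roughly as follows; the file layout and helper names are hypothetical, not our actual test suite:

    # Hypothetical sketch: every doc snippet is a test. CI runs the
    # snippet against a live system, captures its output, and writes
    # it where the docs build can include it, so published examples
    # can never silently break. Paths and names are invented.
    import io
    import json
    from contextlib import redirect_stdout
    from pathlib import Path

    SNIPPET_DIR = Path("docs/snippets")  # assumed location in the docs tree

    def run_and_capture(name, snippet):
        """Execute a snippet, capture stdout, and save output for the docs."""
        buffer = io.StringIO()
        with redirect_stdout(buffer):
            snippet()  # raises (and fails CI) if the example is broken
        SNIPPET_DIR.mkdir(parents=True, exist_ok=True)
        (SNIPPET_DIR / f"{name}.out").write_text(buffer.getvalue())

    def test_find_image_snippet():
        def snippet():
            print(json.dumps([{"FindImage": {"results": {"limit": 5}}}]))
        run_and_capture("find_image", snippet)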


Answering fundamental product questions

We have ventured into a very complex space where many different forms of AI are served by a bewildering array of databases and related infrastructure. From the early days, we had to answer questions about the need for our product, architectural details, design choices, deployment options, and cost/benefit analysis. The user journey starts with why they need a specific tool and what problems it can help them solve. Our next level of improvement was to introduce some context around the product we have built. It certainly helped when we could respond to a question by sharing a well-thought-out link instead of providing ad hoc responses on the fly. However, once we had answered the why and the what, we needed to help our users find the "how-to" without expecting them to know specific terminology. The ability to semantically search for a concept, summarize a response, and point to relevant links is exactly why large language models (LLMs) and retrieval-augmented generation (RAG) have become so popular. That became our solution to the discoverability problem, as described next.


Here comes our RAG bot

We are always on the lookout for ways to make our users' lives easier by integrating with popular AI support tools. We recently integrated ApertureDB into LangChain as a vector store and retriever (graph store coming soon). As part of this, we built an example demo on our website (select the semantic search example) showing off a RAG (Retrieval-Augmented Generation) chain that answers questions from Wikipedia, using an embeddings dataset provided by Cohere. This LangChain-based implementation uses ApertureDB under the covers as the vector store / retriever for high-performance lookup of documents that are semantically similar to the user's query.
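
For a flavor of the integration, here is a minimal sketch of embedding some text into ApertureDB through LangChain and getting a retriever back. The embedding model and the descriptor_set parameter name are assumptions for illustration:

    # Minimal sketch: embed some text into ApertureDB via LangChain
    # and get a retriever back. The embedding model and the
    # descriptor_set parameter name are assumptions for illustration.
    from langchain_community.vectorstores import ApertureDB
    from langchain_openai import OpenAIEmbeddings

    vectorstore = ApertureDB.from_texts(
        texts=["ApertureDB manages images, videos, embeddings, and metadata."],
        embedding=OpenAIEmbeddings(),
        descriptor_set="docs_demo",  # assumed name for the vector collection
    )

    # Standard LangChain retriever over the vector store.
    retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
    docs = retriever.invoke("Is ApertureDB a vector database?")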

To test this RAG chain, we asked it questions about ApertureDB. Of course, the answers were disappointing: we don't yet have a Wikipedia article, and the LLM told us it had insufficient data for a meaningful answer. Out of curiosity, we built another RAG chain that used a crawl of our marketing website and product documentation. Et voilà! We started getting the expected answers to questions like "Can you store audio in ApertureDB?" or "Is ApertureDB a vector database?". A sketch of the code behind this chatbot appears below. You can try it out yourself by instantiating a demo as described earlier.
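
This is a rough reconstruction rather than the exact code: it reuses the retriever from the previous snippet, and the prompt wording and model choice are assumptions:

    # Rough sketch of the documentation RAG chain; it reuses the
    # `retriever` built in the previous snippet. The prompt wording
    # and the model name are illustrative choices.
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_template(
        "Answer the question using only this context:\n{context}\n\n"
        "Question: {question}"
    )

    def format_docs(docs):
        # Join retrieved document chunks into one context string.
        return "\n\n".join(doc.page_content for doc in docs)

    chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | ChatOpenAI(model="gpt-4o-mini")
        | StrOutputParser()
    )

    print(chain.invoke("Can you store audio in ApertureDB?"))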

This seemed like great functionality to push to our documentation page, and why wouldn't we? We asked our chatbot questions like "What command do I use for adding embeddings?", and the answer gave a good description of how to do it, along with a reference to our AddDescriptor command. This is immensely more powerful than mere keyword search: if people don't know our API or its specific concepts, they can ask questions about what they want to do, and the chatbot directs them to the right examples. It's been about a month since we started sharing this functionality, and we are now seeing 10-50 queries a week on this alpha version of the chatbot, a number that seems likely to keep growing as we share it with the community so they can learn more about ApertureDB.


The hunt for missing documentation

As more users have started trying out our documentation bot, we have been getting a broader set of queries, and that has made us realize that some of the chatbot's responses lack detail, claim the answer isn't available, or are just plain wrong. We know the right answers ourselves, so this behavior naturally sent us on a hunt through our documentation to find out why we weren't getting them. It turns out we had solved some of those problems for our customers or in our benchmark repositories but had never included the answers in either our marketing website or our documentation. Now we can look at the questions that produced insufficient or incorrect responses and introduce helpful, accurate information where it belongs. Ultimately, if we can help our users find guidance easily, it's a win for everyone.


How do we plan to improve this chatbot?

Our RAG chain chatbot is already a valuable addition to our documentation website. Not only is it a good demonstration of our software, but it also makes life easier for both users and our own developers. We don’t plan to rest on our laurels, however, and we have many enhancements planned for the future.

To streamline the process of adding missing documentation when responses are insufficient, we plan to include a way for users to give immediate feedback on answers, for example with thumbs up and down buttons. Another thing we plan to do, taking a leaf from the methodology of modern AI research, is to use an LLM to assess each answer along multiple dimensions. There are a lot of parameters to tune in an AI system, and a RAG chain is no exception. There is a lot we can do to improve the prompt, tweak the segmentation, and provide better context for question answering.
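
As a rough sketch of what that LLM-based assessment could look like (the dimensions, prompt, and model here are assumptions, not our final design):

    # Hypothetical sketch: grade each chatbot answer along a few
    # dimensions with an LLM judge; dimensions and prompt are assumed.
    import json

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    judge_prompt = ChatPromptTemplate.from_template(
        "Rate the answer from 1 (poor) to 5 (excellent) on faithfulness "
        "to the context, relevance to the question, and completeness. "
        "Reply with JSON keys: faithfulness, relevance, completeness.\n"
        "Question: {question}\nContext: {context}\nAnswer: {answer}"
    )

    judge = judge_prompt | ChatOpenAI(model="gpt-4o-mini")

    def grade(question, context, answer):
        # Low scores flag answers whose source pages need better docs.
        reply = judge.invoke(
            {"question": question, "context": context, "answer": answer}
        )
        return json.loads(reply.content)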

There is a lot of exciting research taking place that builds on the basic RAG algorithm. Much of it hybridizes RAG with knowledge graphs, whether generated automatically from text, assembled from structured data, or manually curated. Because ApertureDB combines high-performance vector search with a flexible graph database, it is ideally suited for such applications. We are also not yet taking advantage of ApertureDB's multimodal capabilities: it should be possible not only to index multimodal embeddings from our documentation but also to respond with text, image, and video documents as sources of information. The only way is up!

Last but not least, we will be documenting our journey and explaining all the components listed above on our blog; subscribe here.

I want to acknowledge the insights and valuable edits from Gavin Matthews, Drew Ogle, Mederic Hunter (MLOps Community), Pushkar Garg (Clari), and Sonam Gupta (aiXplain).

