How to Augment ChatGPT with AWS OpenSearch

Discover the newest integration of ChatGPT with AWS OpenSearch. Learn how this combination enhances search capabilities and AI-driven insights.


The release of ChatGPT was one of the most exciting technical events in recent history. After years of incremental technological progress, ChatGPT appeared seemingly out of nowhere and reportedly hit 100M monthly active users within two months, dwarfing the adoption rates of TikTok, Instagram, and other popular internet tools.

ChatGPT opens up many opportunities and use cases because it can synthesize terabytes of textual data into coherent responses to user queries. This capability is beyond human reach because, as individuals, we simply cannot draw inferences across the vast amount of data ChatGPT was trained on.

Overcoming Data Access Limitations with GPT Embeddings and AWS OpenSearch

We have all seen how ChatGPT is adept at responding to human queries, but it has some limitations. One of the primary limitations that prevents it from helping with our daily work is that it has no access to our proprietary data sources. To work around this issue, a popular pattern is to integrate the GPT Embeddings API into the prompt chain.

Here is a diagram that explains how this works:

[Diagram: Augmenting ChatGPT with AWS OpenSearch]

Instead of sending a prompt directly to GPT, you intercept the prompt entered by the user. Then you perform a kNN vector similarity search against your proprietary data. The vector similarity search returns facts related to the original prompt. The results from the vector store are appended to the prompt.
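
To make the flow concrete, here is a minimal sketch. The vector_store object and its knn_search method are hypothetical stand-ins for whatever kNN store you use, and the snippet assumes the pre-1.0 openai Python client:

import openai

def augmented_answer(user_prompt, vector_store, k=3):
    # 1. Vectorize the intercepted prompt.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=user_prompt)
    query_vector = resp["data"][0]["embedding"]

    # 2. kNN similarity search against the proprietary data.
    # vector_store.knn_search is a hypothetical method returning text chunks.
    facts = vector_store.knn_search(query_vector, k=k)

    # 3. Append the retrieved facts to the prompt and send it to the model.
    prompt = (
        "Answer the question using the context below.\n\n"
        "Context:\n" + "\n".join(facts) + "\n\n"
        "Question: " + user_prompt
    )
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion["choices"][0]["message"]["content"]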

See Question Answering Using Embeddings for a Python notebook that walks through the process with a real-life example.

LlamaIndex: Enhancing ChatGPT's Data Integration with AWS OpenSearch

LlamaIndex (previously GPT Index) is an excellent Python module that makes the process of working with embeddings easier. Many walkthroughs exist for supplementing GPT with custom data, but they often use a simple vector store for data storage. When you are ready to build a production system, you will want to store the embeddings in a managed service.

Multiple data stores offer kNN lookup for vectors, but a good choice for those on AWS is Amazon's OpenSearch Service.

Getting Started with AWS OpenSearch: A Guide for Developers

To begin, create an Amazon OpenSearch Service domain in the AWS console. For testing purposes, you can choose the default options and create a master user and password.

[Screenshot: creating an Amazon OpenSearch Service domain in the AWS console]

Once created, check the access policy in the Security configuration tab. See Identity and Access Management in Amazon OpenSearch Service and Fine-grained access control in Amazon OpenSearch Service for instructions on configuring access.
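
For a quick test domain that uses a master user with fine-grained access control, the domain access policy is often left open, with authentication deferred to the internal user database. A representative policy looks like the following; the region, account ID, and domain name are placeholders you must replace:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/<DOMAIN_NAME>/*"
    }
  ]
}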

Copy the Domain endpoint URL from the OpenSearch console and test it as follows:

curl -XGET -u '<USERNAME>:<PASSWORD>' 'https://<DOMAIN_ENDPOINT>'

Efficient Data Management in AWS OpenSearch for Custom Data Sets

For this example, I'm using the Automatic Ticket Classification Dataset, which is available from https://kaggle.com, to simulate my own proprietary data. I'm parsing out the customer complaints from the downloaded JSON file and storing them in a file called cdata.txt in a subdirectory named data/.
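
As a sketch, that preprocessing might look like the following. The key names assume the CFPB-style JSON export this Kaggle dataset is derived from, and the input filename is hypothetical; adjust both to match your download.

import json
import os

os.makedirs("data", exist_ok=True)
with open("complaints.json") as f:  # hypothetical name for the downloaded file
    records = json.load(f)

# Write one complaint per line to data/cdata.txt.
with open("data/cdata.txt", "w") as out:
    for record in records:
        text = record.get("_source", {}).get("complaint_what_happened", "")
        if text:
            out.write(text.strip() + "\n")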

Here is the Python code that will seed the OpenSearch domain with the vector data:

from os import getenv
from llama_index import SimpleDirectoryReader
from llama_index.indices.vector_store import GPTOpensearchIndex
from llama_index.vector_stores import OpensearchVectorClient

def query(index, question):
    return index.query(question)

# Read the OpenSearch master user credentials and endpoint from the environment.
muser = getenv("OS_MASTER_USERNAME")
mpass = getenv("OS_MASTER_PASSWORD")
osendpoint = getenv("OS_ENDPOINT")
endpoint = f"https://{muser}:{mpass}@{osendpoint}"
idx = "gpt-index-demo"

# Load the text file(s) in data/ into llama_index Documents.
documents = SimpleDirectoryReader('data').load_data()

# Field names for the raw text and its vector in the OpenSearch index.
text_field = "content"
embedding_field = "embedding"

# 1536 matches the dimensionality of OpenAI's embedding model.
client = OpensearchVectorClient(endpoint, idx, 1536, embedding_field=embedding_field, text_field=text_field)
index = GPTOpensearchIndex(documents=documents, client=client)
print(query(index, 'How could my roommate steal my Chase card?'))

Let's walk through the above code. documents = SimpleDirectoryReader('data').load_data() loads the text file(s) in our data/ subdirectory into a list of llama_index Documents. When the index is built, llama_index splits that raw text into chunks, calls the OpenAI Embeddings API to vectorize them, and keeps the raw text chunks paired with their vectors.

Once we have the vectorized contents of our source material, OpensearchVectorClient establishes a client connection to our OpenSearch domain. The dimensions are set to 1536, which corresponds with the vector size returned by the OpenAI Embedding model.
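
If you want to confirm that dimensionality yourself, you can embed a test string directly. This sketch assumes the pre-1.0 openai Python client and text-embedding-ada-002, the default embedding model llama_index used at the time:

import openai

resp = openai.Embedding.create(model="text-embedding-ada-002", input="hello world")
print(len(resp["data"][0]["embedding"]))  # prints 1536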

Next, GPTOpensearchIndex seeds OpenSearch with our data. GPTOpensearchIndex will store both the vectors and the original raw text in OpenSearch. This is convenient because when we run kNN lookups to compare strings, we can easily reference the original text.
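
You can verify this by querying the index directly with the standard OpenSearch search API; each hit should carry both the content field (the raw text chunk) and the embedding field (its 1536-dimension vector):

curl -XGET -u '<USERNAME>:<PASSWORD>' 'https://<DOMAIN_ENDPOINT>/gpt-index-demo/_search?size=1&pretty'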

Finally, print(query(index, 'How could my roommate steal my Chase card?')) uses the index we just created to perform a query. I chose this prompt because within the customer complaint data, there is an example of someone whose roommate stole their card by guessing the PIN, which was the same code used to unlock their phone!

python main.py

> Adding chunk: Good morning my name is XXXX XXXX and I apprec...
> Adding chunk: realize it until today, i checked my email and ...
> [build_index_from_documents] Total LLM token usage: 0 tokens
> [build_index_from_documents] Total embedding token usage: 4530 tokens
> [query] Total LLM token usage: 1286 tokens
> [query] Total embedding token usage: 9 tokens

Your roommate could have stolen your Chase card by taking it from your wallet or purse while you were sleeping or when you were not paying attention. He could have also taken it from your room or any other place where you keep it. He could have also used your phone password to guess your PIN and use the card to withdraw money from your account.

Direct Querying Techniques in AWS OpenSearch for Enhanced Data Retrieval

Now that we've demonstrated how to load data into OpenSearch, how do we perform subsequent queries? Below is an example:

from llama_index import LLMPredictor
from llama_index.indices.vector_store.base import GPTVectorStoreIndex
from llama_index.readers import ElasticsearchReader
from langchain.llms import OpenAI

def query_from_opensearch():
    # Read the documents (text + embeddings) back from OpenSearch and rebuild an index.
    llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.9, model_name="text-davinci-003"))
    rdr = ElasticsearchReader(endpoint, idx)
    documents = rdr.load_data(text_field, embedding_field=embedding_field)
    index = GPTVectorStoreIndex(documents=documents, llm_predictor=llm_predictor, include_extra_info=False)
    print(query(index, 'How could my roommate steal my Chase card?'))

Here I demonstrate how to use LLMPredictor to configure the OpenAI options; in this case, I increased the temperature to encourage greater variety in the model's output. I then use the ElasticsearchReader class, which is compatible with OpenSearch, to establish a connection, and create an index from the OpenSearch data using GPTVectorStoreIndex. Note that include_extra_info=False is required to avoid chunk size limitations.

Here is the result of our new function:

python main.py
INFO:root:> [build_index_from_documents] Total LLM token usage: 0 tokens
INFO:root:> [build_index_from_documents] Total embedding token usage: 0 tokens
<llama_index.indices.vector_store.base.GPTVectorStoreIndex object at 0x12edf0eb0>
INFO:root:> [query] Total LLM token usage: 1268 tokens
INFO:root:> [query] Total embedding token usage: 9 tokens

Your roommate could have stolen your Chase card by taking it from your room while you were sleeping. He could have then used the card to withdraw money from your account by using your PIN, which he figured out because it was the same as your phone password.

Harnessing AWS OpenSearch to Elevate ChatGPT's Data Handling Capabilities

The benefits of integrating AWS OpenSearch with ChatGPT extend beyond just improved data access and management. It paves the way for creating more intelligent, responsive, and personalized user experiences across various applications. From enhancing customer support bots to developing more engaging educational tools, the possibilities are boundless. As we continue to explore these integrations, AWS OpenSearch is set to play a crucial role in the future advancements of natural language processing technology.

Contact StratusGrid for help and a detailed discussion on how we can assist in optimizing your applications with AWS OpenSearch and ChatGPT.


FAQ on ChatGPT and AWS OpenSearch:

What is AWS OpenSearch and how does it integrate with ChatGPT?

Amazon OpenSearch Service is AWS's managed offering of OpenSearch, a powerful, open-source search and analytics engine designed for a wide range of applications such as log analysis, real-time application monitoring, and search. Integrating AWS OpenSearch with ChatGPT means you can leverage this robust platform to store, search, and analyze large datasets efficiently. This integration enhances ChatGPT's ability to process and generate responses based on your proprietary data, making it more effective and insightful.

How can AWS OpenSearch overcome ChatGPT's limitations with proprietary data?

AWS OpenSearch helps ChatGPT overcome its proprietary data limitations by offering a secure and scalable environment to store and query large volumes of custom data. This capability allows ChatGPT to conduct vector similarity searches within your unique datasets, expanding its functionality to include specialized, domain-specific information, which is crucial for tailored responses and insights.

What are the benefits of using AWS OpenSearch for data management in AI applications?

AWS OpenSearch provides scalable storage, rapid search capabilities, and the capacity to execute complex queries on extensive datasets. In AI contexts, this translates to faster data retrieval, more accurate AI responses, and the ability to utilize bespoke data sources, enhancing the personalization and relevance of AI-generated outcomes.

How does the GPT Embeddings API work with AWS OpenSearch to enhance ChatGPT's capabilities?

The GPT Embeddings API, when used with AWS OpenSearch, allows you to convert text data into vector embeddings, which can then be stored and searched within OpenSearch. This integration empowers ChatGPT to deliver responses based on a deeper, more nuanced understanding of the data, significantly enhancing user interactions through more detailed and context-aware insights.

Can AWS OpenSearch improve the response accuracy of ChatGPT in real-world applications?

Absolutely. By integrating AWS OpenSearch, ChatGPT gains access to a wider and more specialized dataset, enabling responses that are both more accurate and contextually relevant to the user's specific needs and inquiries. This enhances the practicality and reliability of ChatGPT in real-world applications.

What are the steps to set up an AWS OpenSearch domain for ChatGPT data integration?

To integrate ChatGPT with AWS OpenSearch, you need to establish a new OpenSearch domain via the AWS console, configure the necessary access and security settings, and populate the domain with data. Utilizing tools like LlamaIndex can simplify the data integration process, allowing ChatGPT to access and interact with this data, thereby boosting its processing and response generation capabilities.
