

Flow for using an LLM to ask questions over a large quantity of data
Post Body

I'm digging into LLMs and how to use them over specific data sources, and I'm wondering whether my understanding of the flow makes sense or whether other methods might be more efficient and scalable.

Let's assume I'll be using an OpenAI model and that I want to ask questions about my data, which, to keep it simple, is a set of PDFs.

My understanding of the flow looks like this; I've put a rough code sketch under each step.

1. Process the data sources to make them searchable
- Parse and chunk all my PDFs
- Transform these chunks into embeddings (using an OpenAI embedding model)
- Store these embeddings in a vector database
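
A minimal sketch of step 1, assuming the v1 openai SDK and pypdf. The PDF file names, the 1,000-character chunk size, and `text-embedding-3-small` are all placeholder choices, and a plain NumPy array stands in for the vector database:

```python
# Step 1 sketch: parse/chunk PDFs, embed the chunks, keep the vectors around.
from pypdf import PdfReader
from openai import OpenAI
import numpy as np

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chunk_pdf(path, chunk_size=1000):
    """Naive fixed-size chunking; real pipelines often split on paragraphs."""
    text = "".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

chunks = [c for pdf in ("doc1.pdf", "doc2.pdf") for c in chunk_pdf(pdf)]

resp = client.embeddings.create(model="text-embedding-3-small", input=chunks)
embeddings = np.array([d.embedding for d in resp.data])
# A plain array stands in for the vector database here; in practice the
# rows would go into something like pgvector, Chroma, or Pinecone.
```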

2. Query my vector database based on the question I want an answer to
- Transform the question into an embedding (using the same OpenAI embedding model)
- Query the vector database for embeddings that are "close" to the question embedding
- Treat the X records whose distance is below a threshold as matches
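
Step 2, continuing the sketch above: embed the question with the same model, then do a brute-force cosine search (a real vector DB would do this search for you; the 0.3 distance threshold is made up):

```python
# Step 2 sketch: embed the question and find the closest chunks.
question = "What do my PDFs say about X?"
q = client.embeddings.create(model="text-embedding-3-small", input=[question])
q_vec = np.array(q.data[0].embedding)

# Cosine distance = 1 - cosine similarity.
sims = embeddings @ q_vec / (
    np.linalg.norm(embeddings, axis=1) * np.linalg.norm(q_vec)
)
distances = 1.0 - sims
order = np.argsort(distances)                      # closest first
matches = [chunks[i] for i in order if distances[i] < 0.3]
```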

3. Query OpenAI LLM for an answer to the question
- Fill a prompt template with the context retrieved above; the context consists of the concatenated matches found in (2)
- Add the question (the same one used in 2) to the prompt template
- Submit that prompt and return the response from OpenAI
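
Step 3 then just stuffs the matches into a template and asks the model; the template wording and `gpt-4o-mini` are arbitrary placeholders:

```python
# Step 3 sketch: build the prompt from the matches and get an answer.
context = "\n\n---\n\n".join(matches)
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(answer.choices[0].message.content)
```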

I tried this flow and it works fine.
My concern is the number of tokens this method uses. What if the "matches" consist of 10,000 paragraphs? It could end up costing a lot, or even hit the model's context limit.
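
One obvious mitigation would be to cap the context at the top-k matches plus a hard token budget, something like the sketch below, but that feels like it just trades recall for cost (k=20 and the 3,000-token budget are arbitrary; tiktoken just gives a rough count):

```python
# Keep only the top-k matches and stop adding chunks once a budget is hit.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def trim_to_budget(matches, max_tokens=3000, k=20):
    kept, used = [], 0
    for chunk in matches[:k]:       # matches are already sorted by distance
        n = len(enc.encode(chunk))
        if used + n > max_tokens:
            break
        kept.append(chunk)
        used += n
    return kept
```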

Is there another approach that would scale better?
Thanks!

Comments

A few ML engineers have brought this exact problem up with me, and I've been thinking about it quite a lot. I'm actually working on a service at the moment to efficiently compress tokens in your RAG pipeline with a minimal decrease in output quality. If you're interested, feel free to DM me or check it out at thepi.pe :)


Post Details

Posted: 7 months ago