I'm starting work on a local model chat client wrapped around offline Wikipedia. I'd like to open the project up to anyone who is interested in working on it, especially people looking to get experience building apps around an LLM. It's a fairly straightforward implementation that will give good insight into the end-to-end requirements of setting up a RAG pipeline, a LangChain agent, and a chat interface.
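To give a feel for the core of the project, here is a minimal sketch of the retrieval step in a RAG pipeline. It uses a toy bag-of-words similarity instead of a real embedding model, and a hardcoded list of chunks instead of offline Wikipedia; all names and data are illustrative, and a real build would swap in an embedding model, a vector store, and a local LLM for generation.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def embed(text):
    """Toy 'embedding': a sparse term-frequency vector."""
    return Counter(tokenize(text))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks, k=2):
    """Stuff the retrieved context into a prompt for the local model."""
    context = "\n\n".join(retrieve(query, chunks, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Placeholder chunks standing in for pre-chunked offline Wikipedia articles.
chunks = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    "Python is a high-level programming language.",
    "Paris is the capital and most populous city of France.",
]
prompt = build_prompt("Where is the Eiffel Tower?", chunks, k=1)
print(prompt)
```

In the actual project, the prompt built here would be sent to the local model, and the LangChain agent would decide when to invoke this retrieval tool during a conversation.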
I'm in a couple of research groups that meet twice a month; each meeting, 2-3 members prepare a 5-minute presentation describing their findings, followed by feedback and a roundtable discussion. I think this format would work well for a beginner-friendly open source project too.
I tried to post this on r/LocalLLaMA but my post was never approved. To avoid spamming this sub, just leave a comment here or message me directly and we will arrange a meeting.
Post Details
- Posted: 1 year ago
- URL: reddit.com/r/LLMDevs/com...