Hey LocalLLaMA,
[EDIT] - thanks for all the awesome additions and feedback everyone! The guide has been updated to include textgen-webui, koboldcpp, and ollama-webui. I still want to try out some other cool ones that need an Nvidia GPU; I'm getting that set up now.
I reviewed 12 different ways to run LLMs locally and compared them. Many of these tools had been shared right here on this sub. Here are the ones I tried:
- Ollama
- 🤗 Transformers
- Langchain
- llama.cpp
- GPT4All
- LM Studio
- jan.ai
- llm (https://llm.datasette.io/en/stable/ - linking it since the name is hard to google)
- h2oGPT
- localllm
My quick conclusions:
- If you are looking to develop an AI application and you have a Mac or Linux machine, Ollama is great: it's very easy to set up, easy to work with, and fast (see the API sketch after this list)
- If you are looking to chat locally with documents, GPT4All is the best out-of-the-box solution, and it's also easy to set up (sketch below)
- If you are looking for advanced control and insight into neural networks and machine learning, as well as the widest range of model support, you should try 🤗 Transformers (sketch below)
- In terms of speed, I think Ollama and llama.cpp are both very fast
- If you prefer working with a CLI tool, llm is clean and easy to set up (sketch below)
- If you want to use Google Cloud, you should look into localllm
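To give a feel for why Ollama is so easy to work with, here's a minimal sketch of hitting its local REST API from Python. It assumes `ollama serve` is running on the default port and that you've already pulled a model; the model name below is just an example:

```python
import requests

# Ollama serves a REST API on localhost:11434 by default.
# Assumes you've already run e.g. `ollama pull llama2`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",   # any model you've pulled locally
        "prompt": "Why is the sky blue?",
        "stream": False,     # return one JSON object instead of a stream
    },
)
print(resp.json()["response"])
```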
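For GPT4All, the desktop app is where the document-chat (LocalDocs) feature lives, but the project also ships a Python package that shows how little setup basic generation needs. A rough sketch; the model filename here is just an example and gets downloaded on first use:

```python
from gpt4all import GPT4All

# The model file is fetched automatically on first use if not cached.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model, swap in your own
with model.chat_session():
    print(model.generate("Give me one tip for running LLMs locally.", max_tokens=60))
```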
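With 🤗 Transformers, the `pipeline` helper is the usual entry point. A minimal sketch using a tiny model so it runs on CPU; swap in whichever checkpoint you actually want:

```python
from transformers import pipeline

# "gpt2" is just a small stand-in so this runs anywhere; any
# text-generation checkpoint from the Hub works the same way.
generator = pipeline("text-generation", model="gpt2")
out = generator("Running LLMs locally is great because", max_new_tokens=40)
print(out[0]["generated_text"])
```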
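Finally, llm is mostly used from the shell (`pip install llm`, then `llm "your prompt"`), but it also exposes a small Python API. A sketch under the assumption that you've installed a plugin for local models (llm-gpt4all here); the model ID is whatever that plugin registers and is only an example:

```python
import llm

# Assumes `llm install llm-gpt4all` has been run first.
model = llm.get_model("orca-mini-3b-gguf2-q4_0")  # example plugin-registered model
response = model.prompt("Summarize the appeal of local LLMs in one sentence.")
print(response.text())
```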
I found that the different tools are intended for different purposes, so I summarized how they differ in a table:
I'd love to hear what the community thinks. How many of these have you tried, and which ones do you like? Are there more I should add?
Thanks!