Hi all, been getting back into LLMs lately. I've been working with them for about two years, and running them locally off and on for the past year. My local server is humble: a Xeon with 64 GB of RAM and an RTX 3060 with 12 GB of VRAM. And, as we all know, what was SOTA three months ago might not be SOTA today. So I'd like your opinions: for science-oriented text generation (maybe code too, though tiny models aren't the best at that, imo), what's the best-performing model, or model-and-quant combo, for my little LLM server? Hugging Face links would be much appreciated too 🤗
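For anyone recommending a specific quant: here's a minimal sketch of how I'd load it with llama-cpp-python and GPU offload on this box. The model path, context size, and sampling settings below are placeholders, not an endorsement of any particular model:

```python
from llama_cpp import Llama

# Placeholder GGUF -- swap in whatever model/quant gets recommended.
# A roughly Q4_K_M quant of a 7B-14B model is the usual fit for 12 GB of VRAM.
llm = Llama(
    model_path="models/your-chosen-model-Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to the 3060; lower this if VRAM overflows
    n_ctx=8192,       # context window; larger windows cost more VRAM
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Summarize the Krebs cycle in three sentences."}],
    max_tokens=256,
    temperature=0.3,  # keep it low for factual, science-oriented output
)
print(out["choices"][0]["message"]["content"])
```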