This post has been de-listed
It is no longer included in search results and normal feeds (front page, hot posts, subreddit posts, etc). It remains visible only via the author's post history.
Score: 132
This paper seems very exciting
Post Body
https://arxiv.org/pdf/2405.16528
GitHub/code (pre-release): https://github.com/sebulo/LoQT
It looks like it's possible to combine quantization with LoRAs well enough to allow full model training. The upshot: you could train a modern 7B-size model from start to finish on a 4090. The same approach would also work for fine-tuning, retaining all the memory benefits.
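To make the idea concrete, here is a minimal NumPy sketch of the general pattern the post describes: a frozen base weight stored quantized (int8 here, for simplicity), with only small low-rank factors kept in full precision as trainable state. This is an illustration of the quantized-base-plus-low-rank-adapter idea, not the actual LoQT implementation; all dimensions and names (`W_q`, `A`, `B`, `forward`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer dimensions for illustration (not from the paper).
d_out, d_in, r = 64, 64, 4

# Frozen base weight, stored quantized to int8 with a per-tensor scale.
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)   # this is what actually sits in memory

# Trainable low-rank factors: the only full-precision trainable state.
A = rng.standard_normal((d_out, r)).astype(np.float32) * 0.01
B = np.zeros((r, d_in), dtype=np.float32)   # zero init so the adapter starts as a no-op

def forward(x):
    # Dequantize on the fly; effective weight = dequant(W_q) + A @ B.
    W_deq = W_q.astype(np.float32) * scale
    return x @ (W_deq + A @ B).T

x = rng.standard_normal((2, d_in)).astype(np.float32)
y = forward(x)

# Trainable parameter count vs. the full weight matrix:
trainable, full = A.size + B.size, W.size
print(trainable, full)  # prints: 512 4096
```

Only `A` and `B` need optimizer state and gradients, which is where the memory savings come from; the paper's method additionally handles how the factors are initialized and periodically folded back into the quantized base, which this sketch omits.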
Comments
[not loaded or deleted]
Author
- Account Strength: 50%
- Account Age: 5 months
- Verified Email: Yes
- Verified Flair: No
- Total Karma: 2,272
- Link Karma: 453
- Comment Karma: 1,819
- Profile updated: 1 week ago
Post Details
We try to extract some basic information from the post title. This is not always successful or accurate; please use your best judgement and compare these values to the post title and body for confirmation.
- Posted: 2 months ago
- Reddit URL: View post on reddit.com
- External URL: reddit.com/r/LocalLLaMA/...
Eventually LLMs will be good enough to look through papers and implement various methods into a single project.