This post has been de-listed
It is no longer included in search results and normal feeds (front page, hot posts, subreddit posts, etc). It remains visible only via the author's post history.
Introducing QA-LoRA - Quantization-aware low-rank adaptation of LLMs
Post Body
Fascinating new method from Qi Tian et al. that combines parameter-efficient fine-tuning (as in LoRA) with parameter quantization in a new way, with major efficiency gains: lower computation, faster inference, and lower memory requirements.
Of particular interest, it performs very well with low-parameter models and at low bit widths like 2-bit, where it appears to be SOTA. Potentially useful for running Falcon 180B on home hardware.
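To make the general idea concrete, here is a minimal sketch (not the paper's exact method) of what "quantized base weights plus a trainable low-rank adapter" looks like: the frozen weight matrix is stored as low-bit group-wise codes, and the forward pass adds a small `B @ A` correction. All names, dimensions, and the quantization scheme below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_groupwise(W, bits=2, group_size=32):
    """Asymmetric group-wise quantization along each row of W.
    Returns integer codes plus a per-group scale and offset.
    (Illustrative scheme, not the QA-LoRA paper's exact operator.)"""
    out_dim, in_dim = W.shape
    assert in_dim % group_size == 0
    G = W.reshape(out_dim, in_dim // group_size, group_size)
    lo = G.min(axis=2, keepdims=True)
    hi = G.max(axis=2, keepdims=True)
    qmax = 2**bits - 1
    scale = (hi - lo) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero
    codes = np.clip(np.round((G - lo) / scale), 0, qmax)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Reconstruct an approximate float weight matrix from the codes."""
    return (codes * scale + lo).reshape(codes.shape[0], -1)

# Hypothetical dimensions, for illustration only.
in_dim, out_dim, rank = 64, 16, 4
W = rng.normal(size=(out_dim, in_dim))

# Frozen base weights live in 2-bit group-wise form.
codes, scale, lo = quantize_groupwise(W, bits=2, group_size=32)
W_q = dequantize(codes, scale, lo)

# LoRA adapter: only A and B would be trained; the base stays quantized.
A = rng.normal(size=(rank, in_dim)) * 0.01
B = np.zeros((out_dim, rank))  # standard LoRA init: B starts at zero

x = rng.normal(size=(in_dim,))
y = W_q @ x + B @ (A @ x)  # quantized base path + low-rank update
print(y.shape)  # (16,)
```

The memory win comes from storing `codes` at 2 bits per weight (plus small per-group scales) while the trainable state is only the tiny `A` and `B` matrices; QA-LoRA's specific contribution, per the post, is structuring these two parts so they compose efficiently.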
Duplicate Posts
2 posts with the exact same title by 1 other author
Author
Account Strength: 90%
Account Age: 4 years
Verified Email: Yes
Verified Flair: No
Total Karma: 1,489
Link Karma: 666
Comment Karma: 809
Profile updated: 3 days ago
Subreddit: r/LocalLLaMA
Post Details
We try to extract some basic information from the post title. This is not always successful or accurate; please use your best judgement and compare these values to the post title and body for confirmation.
- Posted: 1 year ago
- Reddit URL: View post on reddit.com
- External URL: reddit.com/r/LocalLLaMA/...