Introducing QA-LoRA - Quantization-aware low-rank adaptation of LLMs

Fascinating new method from Qi Tian et al. that combines parameter-efficient fine-tuning (as used in LoRA) with parameter quantization in a new way, yielding major efficiency gains: lower computation, faster inference, and lower memory requirements.

Paper page - QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models (huggingface.co)

Of particular interest: it performs very well with low-parameter models and at low bit widths such as 2-bit, where it appears to be SOTA. That makes it potentially useful for running Falcon 180B on home hardware.
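
The core trick, as I understand the paper, is that the LoRA adapter's input is average-pooled over each quantization group of the frozen weight, so the learned update stays compatible with the group-wise quantized format. Here is a minimal PyTorch sketch of that idea; the class name, dimensions, and the fake min-max quantization are my own illustrative assumptions, not the authors' reference implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QALoRALinear(nn.Module):
    """Illustrative QA-LoRA-style linear layer (a sketch, not the paper's code).

    The frozen base weight is group-wise fake-quantized; the trainable LoRA
    branch sees only the input averaged within each quantization group, so its
    update can stay aligned with the group-wise quantized format.
    """

    def __init__(self, in_features, out_features, rank=8, bits=4, groups=32, alpha=16.0):
        super().__init__()
        assert in_features % groups == 0
        self.groups = groups
        self.group_size = in_features // groups
        self.scaling = alpha / rank

        # Frozen base weight, asymmetric min-max quantized per output row and group.
        w = torch.randn(out_features, in_features)
        qmax = 2 ** bits - 1
        wg = w.reshape(out_features, groups, self.group_size)
        wmin = wg.amin(dim=-1, keepdim=True)
        scale = (wg.amax(dim=-1, keepdim=True) - wmin).clamp(min=1e-8) / qmax
        q = ((wg - wmin) / scale).round().clamp(0, qmax)
        self.register_buffer("w_q", (q * scale + wmin).reshape(out_features, in_features))

        # Trainable low-rank adapters; A maps the `groups` pooled inputs to `rank`.
        self.lora_A = nn.Parameter(0.01 * torch.randn(rank, groups))
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x):
        base = F.linear(x, self.w_q)  # frozen, quantized path
        # Average-pool the input within each quantization group before the adapter.
        pooled = x.reshape(*x.shape[:-1], self.groups, self.group_size).mean(dim=-1)
        update = F.linear(F.linear(pooled, self.lora_A), self.lora_B)
        return base + self.scaling * update

# Quick smoke test with assumed dimensions.
layer = QALoRALinear(in_features=4096, out_features=4096)
print(layer(torch.randn(2, 4096)).shape)  # torch.Size([2, 4096])
```

If I'm reading the paper right, pooling by quantization group is the whole trick: after fine-tuning, the B·A update adds the same value to every column within a group, so it can be folded into that group's quantization zero point and the merged model stays in the low-bit format, no de-quantization needed.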

https://preview.redd.it/bbgprv5xyrqb1.png?width=1677&format=png&auto=webp&s=b64d6a10cdf29a50548e13000c3ca5cd657142a5
