Rho-1: Not All Tokens Are What You Need. (A very efficient way to train SOTA models)

I'll give the highlights here, but the paper (https://arxiv.org/pdf/2404.07965.pdf) is very detailed and includes a lot of interesting exploration. The weights are also freely available on Hugging Face: https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1

Rho-1 matches DeepSeekMath with only 3% of the pretraining tokens. They achieve this by using a reference model (in this case, a small model they trained on 1.9B tokens from open-source datasets) to score the training corpus at the level of individual tokens: each token is ranked by its excess loss (how much higher the training model's loss on that token is than the reference model's), and the language-modeling loss is only applied to the top-ranked tokens. This means they can very aggressively filter training data while increasing model performance. They call this Selective Language Modeling (SLM).

[Figure: diagram showing SLM in action]
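
To make the token-selection step concrete, here is a minimal sketch of what an SLM-style loss could look like. This is my own paraphrase, not the authors' code: it assumes Hugging Face-style causal LMs that return `.logits`, and the `top_pct` keep-ratio and function names are made up for illustration.

```python
import torch
import torch.nn.functional as F

def slm_loss(train_model, ref_model, input_ids, top_pct=0.6):
    """Cross-entropy applied only to the top `top_pct` of tokens, ranked by
    'excess loss' (training-model loss minus reference-model loss).
    Hypothetical sketch, not the Rho-1 implementation."""
    labels = input_ids[:, 1:]  # next-token targets

    # Per-token loss under the frozen reference model (no gradients needed).
    with torch.no_grad():
        ref_logits = ref_model(input_ids).logits[:, :-1]
        ref_loss = F.cross_entropy(
            ref_logits.reshape(-1, ref_logits.size(-1)),
            labels.reshape(-1),
            reduction="none",
        ).view(labels.shape)

    # Per-token loss under the model being trained.
    logits = train_model(input_ids).logits[:, :-1]
    tok_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        reduction="none",
    ).view(labels.shape)

    # Score each token by excess loss and keep only the top fraction.
    excess = tok_loss.detach() - ref_loss
    k = max(1, int(top_pct * excess.numel()))
    threshold = excess.flatten().topk(k).values.min()
    mask = (excess >= threshold).float()

    # Backpropagate only through the selected tokens.
    return (tok_loss * mask).sum() / mask.sum()
```

As I read the paper, this selection happens batch by batch during pre-training, so which tokens count as "useful" shifts as the model improves, while the reference model stays frozen throughout.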

[Figure: benchmarks comparing their models]

They also used this method to continue pre-training Mistral, seeing a 3-point bump on the above benchmarks from training on their dataset and a further 3-point bump from training on their filtered dataset. They also fine-tune these models for tool use following ToRA; the benchmarks speak for themselves here.

[Figure: benchmarks for tool use]

Finally, there is an interesting graph of how much data they can cut from a 5B-token sample of training data when training a 1B model: it suggests that, for their dataset, they can drop almost half of the pre-training data and still get 2-3x the performance.

[Figure: token select ratio vs. GSM8K and MATH benchmarks]

They mention in the discussion that although the benchmark gains are significant for smaller models, larger models may not benefit as much, since they may already have inductive biases for compressing useful data. However, from what I can tell, their method would still massively cut the time needed to train a large model.
