Jamba 1.5 is out!
Post Body
Hi all! Who is ready for another model release?
Let's welcome AI21 Labs' Jamba 1.5 release. Here is some information:
- Mixture of Experts (MoE) hybrid SSM-Transformer model
- Two sizes: 52B (with 12B activated params) and 398B (with 94B activated params)
- Only instruct versions released
- Multilingual: English, Spanish, French, Portuguese, Italian, Dutch, German, Arabic and Hebrew
- Context length: 256k, with some optimization for long context RAG
- Support for tool usage, JSON mode, and grounded generation
- Thanks to the hybrid architecture, inference at long contexts is up to 2.5x faster
- Mini can fit up to 140K tokens of context on a single A100
- Overall permissive license, with limitations at >$50M revenue
- Supported in transformers and vLLM (a minimal loading sketch follows the links below)
- New quantization technique: ExpertsInt8
- Very solid quality: strong Arena Hard scores, and on RULER (long context) they appear to outperform many other models
Blog post: https://www.ai21.com/blog/announcing-jamba-model-family
Models: https://huggingface.co/collections/ai21labs/jamba-15-66c44befa474a917fcf55251
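For the transformers path, here is a minimal sketch of loading the instruct Mini model and running one chat turn. The repo id `ai21labs/AI21-Jamba-1.5-Mini`, the bf16 setting, and the prompt are assumptions inferred from the collection link above, not confirmed from the post; check the model card for the exact name and recommended settings.

```python
# Minimal sketch, assuming the repo id below and enough GPU memory for the
# 52B-total / 12B-active Mini model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/AI21-Jamba-1.5-Mini"  # assumed repo id; see the collection above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to keep memory in check
    device_map="auto",           # shard across whatever GPUs are available
)

messages = [{"role": "user", "content": "Summarize the Jamba 1.5 release in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that the Mamba layers have optional fused kernels (mamba-ssm, causal-conv1d); without them transformers falls back to a slower pure-PyTorch path.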
Comments
By the time you finish testing an LLM, a better one is already out. This is like the '90s, when computers were made obsolete months after production.