

Local 13B Model on Apple Silicon
Post Body

Hi, I just tried to run a 13B model from Hugging Face locally with llama.cpp on a MacBook Pro (M3 Pro, 18 GB RAM), but I'm getting: ggml_metal_graph_compute: command buffer 0 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory).

I thought 18 GB of RAM would be sufficient for a 13B model.
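For context on why 18 GB can be tight: a rough back-of-envelope sketch. The bits-per-weight figures below are approximations for common GGUF quantization types, and the estimate ignores the KV cache and runtime overhead; macOS also caps the GPU-visible share of unified memory below total RAM, so the usable Metal budget on an 18 GB machine is well under 18 GB.

```python
# Back-of-envelope weight-memory estimate for a 13B-parameter model.
# Bits-per-weight values are approximate (Q8_0 ~8.5 bpw due to per-block
# scales, Q4_K_M roughly ~4.8 bpw); KV cache and overhead are excluded.

def model_weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate in-memory size of the weights in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

for name, bpw in [("F16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.8)]:
    print(f"13B @ {name}: ~{model_weight_gib(13, bpw):.1f} GiB weights")
```

An unquantized (F16) 13B model alone is ~24 GiB of weights, already past 18 GB; even a Q8_0 file (~13 GiB) plus KV cache can exceed the Metal working-set limit, while a 4-bit quant leaves comfortable headroom.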

BTW I am running Llama 3.1 8B with no issues.
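For what it's worth, a common workaround in llama.cpp when Metal runs out of memory is to offload fewer layers to the GPU and/or use a smaller context window. A hedged sketch (the model path and layer count are placeholders, not values from this post; in older builds the binary is `./main` rather than `./llama-cli`):

```shell
# Offload only 20 layers to the GPU (-ngl) and limit context to 2048 (-c)
# so the Metal working set fits; remaining layers run on the CPU.
./llama-cli -m ./models/llama-13b.Q4_K_M.gguf -ngl 20 -c 2048 -p "Hello"
```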


Posted
3 months ago