GPT's understanding of its tokenization

I want to know why LLMs can sometimes recognize that they are wrong but other times can't. There doesn't seem to be any pattern to it; it just seems random.
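Part of the answer may lie in the tokenization the title refers to: a model never sees raw characters, only multi-character tokens, so facts about spelling or letter counts are not directly visible to it. Here is a toy sketch of greedy longest-match tokenization over a tiny hand-picked vocabulary (this is not GPT's actual BPE tokenizer or vocabulary, just an illustration of the idea):

```python
# Toy illustration, NOT GPT's real tokenizer: a greedy longest-match
# tokenizer over a small invented vocabulary. It shows that the model's
# input is a sequence of multi-character tokens, not individual letters.
VOCAB = {"str", "aw", "berry", "s", "t", "r", "a", "w", "b", "e", "y"}

def tokenize(text, vocab=VOCAB, max_len=5):
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for size in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + size]
            if piece in vocab:
                tokens.append(piece)
                i += size
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("strawberry"))  # -> ['str', 'aw', 'berry']
```

From the model's point of view, "strawberry" is three opaque units, so a question like "how many r's are in this word?" requires recalling spelling knowledge rather than simply reading it off the input, which is one plausible source of the inconsistent self-correction described above.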


Posted 2 months ago