How many songs about kissing a cat have you heard? https://suno.com/song/f56aef8e-3b7c-48ca-8b93-cd300c06eb4b
Ok sure a lot because people love cats. How about a cat that wants to be a cow? https://suno.com/song/4b33b24f-5bb9-4566-b222-739f42ff8665
Ok I just love cats and wanted you to listen to these. 😸
"Honey, Let's Bone!" is currently my favorite song on Udio. It's a Halloween love glam rock song.
Some off lyrics ("Call me your Wolfman, 'cause I need them bones"), but fairly cromulent! The creator likely pasted the lyrics in from a different LLM, so we can blame them for not catching it.
It's been shown that modern AI models are capable of producing output that's better than their training data some of the time. That "some of the time" data can be captured and used to train the next version of the model, increasing its ability to produce good output. Each successive version ends up better than the previous one because of better training data, among other optimizations and discoveries.
If you've used an older image generator, you might have seen this: you get lots of garbage output, but every once in a while you get something really good. If you were to sift the good output out of the bad, you could train a new model on just the good output and end up with a better model.
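The generate → filter → retrain loop above can be sketched as a toy simulation. This is not any real training pipeline; the functions (`generate`, `filter_good`, `retrain`) and the numeric "quality" scores are invented for illustration, standing in for sampling a model, curating its best outputs, and fine-tuning on them:

```python
import random

def generate(skill, n):
    # Toy "model": output quality is normally distributed
    # around the model's current skill level.
    return [random.gauss(skill, 1.0) for _ in range(n)]

def filter_good(outputs, threshold):
    # Keep only the above-threshold outputs -- the
    # "every once in a while" good results.
    return [o for o in outputs if o > threshold]

def retrain(good_outputs):
    # Toy "training": the next model's skill becomes the
    # average quality of the curated data it saw.
    return sum(good_outputs) / len(good_outputs)

random.seed(0)
skill = 0.0
for generation in range(3):
    outputs = generate(skill, 1000)
    good = filter_good(outputs, threshold=skill + 1.0)  # keep the top tail
    skill = retrain(good)
    print(f"generation {generation}: skill is now {skill:.2f}")
```

Because each round trains only on outputs better than the current average, the simulated skill climbs every generation, which is the bootstrapping effect described above (in reality the gains depend on how reliably you can tell good output from bad).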