I've been using Stable Diffusion for just under a month now and I'm totally in love with it. I'm also in love with all the fine-tuned checkpoints, LoRAs, Hypernetworks, and Textual Inversions the community is cranking out. This is art bliss for me personally, and the only thing I hate is that my GPU hardware is a bit dated and slow to crank through art or do my own fine-tuning.
I've been following the Andersen et al. lawsuit against Stability.ai, and also reading some of the counterarguments against AI art.
It really seems like their primary argument is that the LAION-5B dataset contains copyrighted material that was used as training data without explicit permission first. The other arguments, whining that "AI gonna steal artists jerbs," really seem unrealistic and fear-mongering. I get that artists are concerned; AI is coming for my job too (programmer), but I'm choosing to embrace it and get good with the tool rather than be scared of it and watch the world pass me by. Also, the models as they exist now are like a huge advertisement for the artists whose names get used to produce awesome works... sorry about the free advertising, fame, and notoriety?
From my perspective, the whole argument in the lawsuit brought by Andersen could be short-circuited by training a new base model on a dataset of images that artists and photographers have either given explicit permission for or opted in to. Given that almost everyone has a digital camera and is connected to the internet, it seems like a big task, but we could collectively attack it like a school of piranhas eating a cow carcass; we just need an organizing framework to guide us.
Wouldn't it be possible for us to get together, pool our resources, and build a 100% open-source base model with a complete paper trail of opt-in art? I know hardware is a limiting factor in getting that done, but if the lawsuits end up dismantling the existing companies that have provided the base models (Stability.ai, OpenAI, and Midjourney), we're going to end up doing this anyway if we want to push AI art forward.
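To make the "complete paper trail" idea a bit more concrete, here's a minimal sketch of what a per-image opt-in record could look like. The schema, field names, and the `make_record` helper are all my own assumptions for illustration, not anything from an existing project:

```python
# A hypothetical sketch of a "paper trail" record for one opt-in image.
# All field names and the schema itself are assumptions, not an existing standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class OptInImageRecord:
    image_sha256: str      # hash of the image file, ties the record to the exact pixels
    contributor: str       # artist/photographer handle or name
    license: str           # e.g. "CC0-1.0" or a custom opt-in license identifier
    consent_statement: str  # the exact wording the contributor agreed to
    consented_at: str      # ISO-8601 UTC timestamp of the opt-in
    tags: list[str] = field(default_factory=list)  # optional captions/keywords for training


def make_record(image_bytes: bytes, contributor: str, license: str,
                consent_statement: str, tags: list[str] | None = None) -> OptInImageRecord:
    """Build a provenance record for a single contributed image."""
    return OptInImageRecord(
        image_sha256=hashlib.sha256(image_bytes).hexdigest(),
        contributor=contributor,
        license=license,
        consent_statement=consent_statement,
        consented_at=datetime.now(timezone.utc).isoformat(),
        tags=tags or [],
    )


if __name__ == "__main__":
    # Example: register a contributed image and dump the record as JSON,
    # which could be stored alongside the dataset for auditability.
    record = make_record(
        image_bytes=b"...raw image bytes here...",
        contributor="example_artist",
        license="CC0-1.0",
        consent_statement="I opt in to having this image used for AI model training.",
        tags=["landscape", "oil painting"],
    )
    print(json.dumps(asdict(record), indent=2))
```

The point is just that each contributed image would carry its own auditable consent record, so the resulting dataset can prove every sample was opted in.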
What are your thoughts?