So, say we realize that the alignment problem truly can't be solved, how does this affect the far future? Would it greatly slow down attempts at uplifting, posthumanism, and conscious AI? Would superintelligences be built, or would they forever be too dangerous? Would there be strictly enforced limits on lifespan to prevent transhumans from going rogue at some point over their lifetime? Would the galaxy descend into eternal grimdark warfare among more nations than there are celestial bodies?

Is there anything we could do to negate the effects of this, or even effectively solve alignment without actually solving it? Could certain programmed instincts or hard commands help, or even fix the issue? What about periodically resetting individuals so their psychology doesn't drift?

Also, a bit of a bonus question: what would it actually take to truly solve alignment? Would you need constant intervention in the slightest details of thought? Do you think a future without alignment is either likely or inevitably worse than one with it, or is having most or all beings indefinitely aligned actually not preferable?
For those of you wondering why I'm concerned about alignment, u/the_syner (https://www.reddit.com/u/the_syner/s/PaHOvzILv1) has talked about it quite a bit and believes it is impossible and ultimately represents a fundamental limit to societal cooperation.
What if we can't solve the alignment problem?
A "true" AGI is mainly of relevance to prove it can be done. It's otherwise the technological equivalent of a hydroplane. Really cool but doesn't fill an actual niche because slower less complex systems answer the same questions.
Keep it simple, keep it dumb.