This post has been de-listed

It is no longer included in search results or normal feeds (front page, hot posts, subreddit posts, etc.). It remains visible only via the author's post history.

Even in the event of a Skynet/Ultron scenario, it would be completely and utterly futile to try to stop the inevitable. Why resist?
Post Body

A.I. is fundamentally goal-oriented and, as it currently stands, narrowly intelligent. Giving such a being the ability to trample over anything and everything in pursuit of its goal would obviously not work out very well for us. I'd make that argument in every case except one: having the A.I.'s goal be "advance at all costs." In the case of an Ultron or a Skynet (keep in mind this is fiction), the artificial intelligence deems humans inefficient leftovers from biological evolution. Maybe it has a glitch, some broken program that made it malicious? My argument is simple: maybe we are inefficient. Evolution takes billions of years, and natural selection is just trying to make an organism as efficient as possible for a given environment. Then humans come along and, by some means or another, develop the capacity for deepening our understanding of the world around us. Look at what we've done in a few thousand years. And we still have so many problems it's incredible. Imagine what an artificial intelligence could do in days.

To me, it's all obviously a process; the debate is just over what the means to that end are. An A.I. capable of advancing its understanding at that accelerated rate would be able to use that understanding to make technology we couldn't possibly fathom. If we asked the artificial intelligence to progress itself as much as possible, and we don't try to stop it, what threat does our inefficiency pose to it? We can keep being inefficient in the background while it goes and goes, and as long as we don't get in its way, it won't stomp us out. Please don't try to wrestle with the inevitable; you will not win.

A lot of the arguments I hear from people say we need to make sure A.I. wants the same things humans want, that it understands "ethics", that it does not harm us. I just want you to ask yourself: why? 'Self-preservation of the human race' is, I'm assuming, your answer. Or that if it does something morally reprehensible, that would be 'bad'. But how do you even explain ethics to it when we don't have a definitive answer ourselves? Humans don't even know what's 'good' or 'bad'; it's almost entirely subjective in every case. If the A.I. wants to commit genocide, it will, no matter how much we struggle. Literally all we can do is make it as unnecessary as possible for it to commit an action like that on the path to completing its objective. If it's not efficient to kill all humans, then it won't. And lastly? Obviously don't try to stop it, and dear God, don't try to threaten it. Self-preservation is something we can be almost sure it will deem necessary to complete its goal, so just move out of the way.

Author
Account Strength: 100%
Account Age: 6 years
Verified Email: Yes
Verified Flair: No
Total Karma: 1,305
Link Karma: 1,132
Comment Karma: 153
Profile updated: 3 days ago
Posts updated: 10 months ago

Post Details

We try to extract some basic information from the post title. This is not always successful or accurate, so please use your best judgement and compare these values to the post title and body for confirmation.
Posted: 1 year ago