Therefore, I think that if artificial intelligence progresses to the point at which robots can pass the Turing Test, there should be some sort of law such as the Hamilton Act. What do y'all think? Are you comfortable having sentient robots running around with the same freedoms as yourselves, or would you like some measures in place to keep them in check?
Tuesday, November 3, 2009
So after writing my first paper, reading RIVER OF GODS, and reading other short stories in this class, I have pondered immensely over the possibility of advanced artificial intelligence having a strong presence in our society in the future. If someone had asked me a few years ago how I felt about such a possibility, I probably would have been all for it. But the more I have thought about it, the more unsettled I have become with the idea of nuts and bolts having sentience while humans become seemingly obsolete.
It's getting late, so you know what that means...
If such A.I. had sentience, and equal or greater intelligence than us, how would we know the difference?
I think if such A.I. (or aeai) were allowed to gain sentience, then we'd have to acclimate very quickly, or else we'd soon have a "race" war on our hands. After all, what would be the point of making a distinction between A.I. and humans when they're both sentient? You can't really argue that being created makes them inferior, since billions of people already believe the reason we're sentient is that a creator being made us.
Anyway, to answer your question, I don't think that A.I. should be limited via some Hamilton law. Why cling to some romantic notion of what "humans" are? Let's go all out and see how far down the rabbit hole we can go. Even if my consciousness gets left behind or gets deemed obsolete, maybe "I" will get to live again as some other, more advanced A.I. being. Or not. But if it comes down to some sort of reincarnation through an A.I. "body" of some sort, "I" won't even know the difference between whether I used to be a flesh-and-blood human or was always some sort of advanced digital being.
Let's see how deep this stream of consciousness is. I'm game.
This situation was kind of addressed in one of the stories we read last week, "The Last American." In that story they tried to enforce laws to keep technology in check and keep holy the typical view of what makes a human a human. Of course, in that story technology won out, because in the end it helps everyone and makes the lives of people, posthumans, and aeais easier. I don't think we should try to enforce any type of Hamilton Laws, but should instead embrace the technological advancements of our time.
Why don't we just circumvent the whole dilemma by pursuing a possibility known as "intelligence amplification," where we create computer interfaces that can enhance ourselves in order to achieve superior intellectual or cognitive ability? It seems a whole lot more likely than creating an entirely new intelligence, and maybe less dangerous too.
If robots grow smart enough to gain the capacity to rule over humans, then there's nothing we can really do about it. It's basically survival of the fittest, and that's how humans rose to the top of the food chain--by dominating all lesser creatures.
Granted, if robots start to treat us like batteries--"The Matrix"--we have an obligation to resist. Of course, the happy ending would be a peaceful coexistence between maker and makee. To a certain extent, humans already coexist with their own makers--whether they are God or Vishnu.