#18
Coda
Developer
@Claire Bear: Computers have been passing the Turing test for several years now. It's rather a poor test. It's really not hard for a chatbot to pretend to be a human, at least at a passing glance. Meanwhile, the stricter version of the test doesn't actually work either, because real live humans have been known to FAIL Turing tests.
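For the curious: here's roughly the kind of trick a passing-glance chatbot runs on. It's a toy sketch in the ELIZA tradition, and every pattern and canned reply in it is something I made up for illustration:

```python
# Toy ELIZA-style responder: a handful of regex reflections can keep a
# casual conversation going with zero understanding behind it.
import re

# Hypothetical rule table: pattern -> reply template.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that really the reason?"),
    (r".*\?$", "What do you think?"),
]

def reply(message: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # default deflection keeps the illusion going

print(reply("I feel like nobody listens to me"))
# -> Why do you feel like nobody listens to me?
```

That's the whole magic act. At a passing glance it reads as attentive; probe it for five minutes and it falls apart.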
Teaching logic to an AI is... really kinda strange. They're BUILT on logic. They understand nothing BUT logic. So if your sister's boyfriend is applying for a job that actually exists, I'd be curious to learn more about what it is he's actually doing.
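To show what I mean about them being BUILT on logic, here's a toy sketch: a complete forward-chaining rule engine in about a dozen lines. The facts and rules here are hypothetical examples, but deriving conclusions from if-then rules is this mechanical:

```python
# Toy forward-chaining inference: apply if-then rules to a set of facts
# until no new conclusions can be derived.

def forward_chain(facts: set, rules: list) -> set:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule if all its premises are known and the
            # conclusion is new.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical knowledge base.
rules = [
    ({"raining"}, "ground_wet"),
    ({"ground_wet", "freezing"}, "ice"),
]
print(forward_chain({"raining", "freezing"}, rules))
# -> derives ground_wet, then ice
```

Logic is the one thing you get for free. It's everything ELSE that's hard.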
Keep in mind that we're nowhere CLOSE to general intelligence or seed AI. We don't even really know where to START, so that's all entirely science fiction at the moment.
Like Suze said, you can't teach morality to a computer the way you teach it to a human child. It's a similar process, yes, in that you have to train the AI on repeated examples of good and bad behavior, with punishments and rewards, but humans evolved with social behaviors and a certain set of base instincts that an AI simply won't have. So instead, what you have to do is figure out how to express morality in terms of logic. Asimov was attempting to do that when he formulated the Three Laws.
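If you want to see the shape of that reward-and-punishment training, here's a toy sketch. The actions and the reward signal are stand-ins I invented; a real system is vastly more complicated, but the feedback loop looks like this:

```python
# Toy reward-based learning: the agent tries actions, gets rewarded or
# punished, and drifts toward the rewarded behavior.
import random

actions = ["share", "grab"]           # hypothetical good vs. bad behavior
values = {a: 0.0 for a in actions}    # learned estimate of each action's worth
ALPHA, EPSILON = 0.1, 0.2             # learning rate, exploration rate

def reward(action: str) -> float:
    # The trainer's judgment: punishments are just negative rewards.
    return 1.0 if action == "share" else -1.0

random.seed(0)
for _ in range(500):
    # Mostly pick the best-known action, occasionally explore.
    if random.random() < EPSILON:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    # Nudge the estimate toward the observed reward.
    values[action] += ALPHA * (reward(action) - values[action])

print(values)  # "share" ends up valued near +1, "grab" near -1
```

Notice what the agent learns: not morality, just "which button gives treats." The hard part is writing a reward signal that actually encodes what you mean by GOOD.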
AI doesn't need to be as scary as science fiction makes it out to be. The fact that we HAVE those science-fiction stories as a warning means the people working on those projects will build in suitable safeguards from the beginning.
Games by Coda (updated 4/8/2025 - New game: Marianas Miner)
Art by Coda (updated 8/25/2022 - beatBitten and All-Nighter Simulator)
Mega Man: The Light of Will (Mega Man / Green Lantern crossover: In the lead-up to the events of Mega Man 2, Dr. Wily has discovered emotional light technology. How will his creations change how humankind thinks about artificial intelligence? Sadly abandoned. Sufficient Velocity x-post)
Posted 12-01-2017, 09:33 PM