Espy
Wanderer
#17
-dies laughing- Oh man, if you think this is talking heavily about one subject...
STONEWALL WAS A RIOT

Posted 12-01-2017, 02:44 PM
#18 |
Coda
Developer
@Claire Bear: Computers have been passing the Turing test for several years now. It's rather a poor test. It's really not hard for a chatbot to pretend to be a human, at least at a passing glance. Meanwhile, the stricter Turing test doesn't actually work, because real live humans have been known to FAIL Turing tests.
Teaching logic to an AI is... really kinda strange. They're BUILT on logic. They understand nothing BUT logic. So if your sister's boyfriend is applying for a job that actually exists, I'd be curious to learn more about what he's actually doing. Keep in mind that we're nowhere CLOSE to general intelligence or seed AI. We don't even really know where to START, so that's all entirely science fiction at the moment.

Like Suze said, you can't teach morality to a computer the way you teach it to a human child. It's a similar process, yes, in that you have to train the AI on repeated examples of good and bad behavior, with punishments and rewards, but humans evolved with social behaviors and a certain set of base instincts that an AI simply won't have. So instead, what you have to do is figure out how to express morality in terms of logic. Asimov was attempting to do that when he formulated the Three Laws.

AI doesn't need to be as scary as science fiction makes it out to be. The fact that we HAVE those science-fiction stories as a warning is going to guide the people working on those projects toward having suitable safeguards from the beginning.

Games by Coda (updated 4/8/2025 - New game: Marianas Miner)
Art by Coda (updated 8/25/2022 - beatBitten and All-Nighter Simulator)
Mega Man: The Light of Will (Mega Man / Green Lantern crossover: In the lead-up to the events of Mega Man 2, Dr. Wily has discovered emotional light technology. How will his creations change how humankind thinks about artificial intelligence? Sadly abandoned. Sufficient Velocity x-post)
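That reward-and-punishment training can be pictured as a tiny reinforcement-learning loop. This is purely an illustrative toy -- the actions, reward numbers, and names below are all made up, not any real moral-reasoning system:

```python
import random

random.seed(0)  # deterministic for the example

# Illustrative "good" and "bad" actions with reward/punishment signals.
REWARDS = {"share": 1.0, "help": 1.0, "steal": -1.0, "lie": -1.0}

def train(episodes=1000, lr=0.1, epsilon=0.2):
    """Learn a numeric value per action from repeated reward signals."""
    values = {action: 0.0 for action in REWARDS}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(list(values))   # explore
        else:
            action = max(values, key=values.get)   # exploit
        # Nudge the estimate toward the observed reward or punishment.
        values[action] += lr * (REWARDS[action] - values[action])
    return values

values = train()
# "Good" actions end up valued above zero, "bad" ones below.
```

The point of the sketch is Coda's: the agent ends up with numbers attached to behaviors, not instincts or empathy -- everything it "knows" about good and bad is whatever the reward signal encoded.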
Posted 12-01-2017, 09:33 PM
Espy
Wanderer
#19
Speaking of the Three Laws, got any thoughts on those?
EDIT: Sorry, Quiet, hijacking yer thread.
Last edited by Espy; 12-01-2017 at 09:40 PM.
Posted 12-01-2017, 09:35 PM
Coda
Developer
#21
It's a good start, especially considering that they were written very early in the genre's history. However, they're shortsighted on their own, and if you bring the Zeroth Law into the picture they open up the risk of scenarios like robots enslaving humanity for its own good. There are certainly better laws that could be written.
Posted 12-01-2017, 09:42 PM
#22 |
Espy
Wanderer
Er... what's the zero-th law? I'm actually not really well-versed in, uh, "classical" sci-fi.
Posted 12-01-2017, 09:45 PM
Claire Bear
Magic
#23
the job is to teach something my sister said is called 'ontology' to ai so robots can "learn to reason with common sense". maybe logic was the wrong word, i'm not as smart as my sister so i simplified.
i dont want no robots with common sense. dont trust it. y'all should be more scared of scifi scenarios, this shit happens faster than you think. twenty years ago you wouldn't have believed me if i told you everyone would carry a computer in their pocket so quickly. science moves fast. robots will end up smarter than us someday. some robots probably are smarter than us already, beating us at chess and whatnot. i do not trust the kinda people who can afford to advance science. we're gonna end up in a world like Snow Crash one of these days.
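For what it's worth, the "ontology" work Claire's sister describes is less mystical than it sounds: at its simplest it's a structured knowledge base of concepts and "is-a" relations that a program can chain through to infer facts it was never told directly. A minimal sketch, with every fact and name invented purely for illustration:

```python
# Toy ontology: "is-a" links plus property inheritance.
# A program that was never told "penguins breathe" can still
# derive it by walking the chain: penguin -> bird -> animal.
IS_A = {
    "penguin": "bird",
    "bird": "animal",
    "animal": "living_thing",
}
PROPERTIES = {
    "bird": {"has_feathers"},
    "animal": {"breathes"},
}

def ancestors(concept):
    """Walk the is-a chain upward from a concept."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def inferred_properties(concept):
    """A concept inherits the properties of everything it 'is-a'."""
    props = set(PROPERTIES.get(concept, set()))
    for parent in ancestors(concept):
        props |= PROPERTIES.get(parent, set())
    return props
```

Real common-sense ontologies are vastly larger and messier than this, but the basic move -- encode relations, then chain through them -- is the same.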
Posted 12-01-2017, 09:46 PM
#24 |
Quiet Man Cometh
We're all mad here.
Quote:
Are those three laws the things they referenced in I, Robot?

And Suze, I suspect the meaning search is part of the reason I put up with my school grievances. I remember first encountering the 'humans need hardship' thing in Appleseed, some time ago.

The paperclip thing falls into the "what if" when it comes to self-improving AI. That was another detail from somewhere. (I should just find where the teacher got that clip and post it.) They didn't go as far as taking over the planet with a factory, but the hypothesis is that if you had a self-improving AI and gave it a job with the purpose that it find the most efficient way to do it, it could come up with a response that puts it into competition with people for resources.

I'm tempted to skip the debate format and just write a mini essay on why debating a particular idea is hard! That way I can blather about a pile of things, and hopefully still get credit. ;)
Posted 12-01-2017, 09:51 PM
Suzerain of Sheol
Desolation Denizen
#25
To paraphrase Sam Harris (and he might be quoting someone else like Bostrom for this, but), "The only thing scarier than the potential threat of AI is the potential loss of not developing it." I know some people are okay with writing off things like Alzheimer's and cancer as facts of life, or the inevitability of a supervolcano or meteor impact annihilating civilization as we know it as something beyond our control, but that's pretty much exactly the point. Our best chance of making it to the next stage of civilization is with AI as a ladder. Check out the concept of the Great Filter as something related to this point, as well. (Though it's also possible that AI itself is a filter.)
@Claire, it's not that we're saying there's no reason to be afraid or worried -- far from it. It's just that the things to be concerned over aren't as basic as evil conquering robots with guns and red eyes taking over the world. They're a lot more subtle and insidious, working on the level of societal and economic shifts that could unbalance civilization, and the fact that it's not something most people worry about is actually part of what makes it so scary. It's a hurdle that we, as a species and global community, have to make it over, more difficult than arguably any that have come before, and the consequences could be catastrophic. But it's going to happen, to one degree or another depending on the limits of what's actually possible, and we need to figure out how to handle it. (Not us specifically here on Trisphee, obviously :P)

Cold silence has a tendency to atrophy any sense of compassion between supposed lovers. Between supposed brothers.
Posted 12-01-2017, 11:21 PM
#26 |
Espy
Wanderer
...I don't think the rest of the world would want anything to do with any of our potential ideas on how to handle such a scenario.
Posted 12-01-2017, 11:25 PM
Quiet Man Cometh
We're all mad here.
#27
Quote:
Unless an AI can tell us how it did something, verifying its answer could take as long as finding one in the first place. We do do things without knowing exactly what we're doing, because sometimes the result is more valuable than knowing how it works.
Posted 12-01-2017, 11:35 PM
#28 |
Quiet Man Cometh
We're all mad here.
Quote:
Posted 12-01-2017, 11:38 PM
Quiet Man Cometh
We're all mad here.
#29
I found the thing! It's just under 10 minutes. It's the definition that we've been using for class. I'm sure I can wrangle something now, but it's fun to keep going. ;)
https://www.youtube.com/watch?v=kWmX3pd1f10
Last edited by Quiet Man Cometh; 12-02-2017 at 12:00 AM.
Posted 12-01-2017, 11:47 PM
#30 |
Suzerain of Sheol
Desolation Denizen
Quote:
I'd be wary of any reactions based on an aversion to "unnatural" methods, though. Natural methods have had a long time to find cures and have failed thus far. That's a very close step to just admitting defeat and saying "best not to meddle in God's domain" or the like (an "argument" my father is annoyingly fond of). It might be that problems like incredibly deadly diseases and disorders are beyond unaided human capacity to cure, sort of like how a lot of the sciences have to use advanced math to express their ideas because human intuition isn't built to process information beyond a certain level of complexity.
Posted 12-01-2017, 11:53 PM
Quiet Man Cometh
We're all mad here.
#31
I'm not really talking natural, just organic. I trust flesh more than metal and plastic at this point. I have a kidney that is not mine already. I'd be perfectly okay if somehow we managed to create some manner of flesh thread and organ recipe and Dr. Frankenstein a new kidney.
Of course, new ways to open and close skin holes would be a benefit, too. Not sure if this requires an AI, but it might help?
Posted 12-02-2017, 12:08 AM
#32 |
Coda
Developer
The Zeroth Law, in Asimov's writing, is an advanced form of the First Law that sufficiently advanced robots can formulate on their own. It states that a robot may not harm humanity, or, through inaction, allow humanity to come to harm. It was an outgrowth of how his AIs had to reason about how a small harm to a human could prevent a larger one (for example, punishing a child upsets the child, which is harm, but it teaches the child how to avoid bigger injuries).
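Purely as a toy, the Laws (with the Zeroth on top) can be written down as an ordered list of checks, which makes the priority ordering explicit. Everything here -- the predicate names, the way an "action" is encoded -- is invented for illustration, not a real implementation:

```python
# Each law is a predicate an action must satisfy. Earlier laws take
# priority, so the first violated law is the one that vetoes the action.
LAWS = [
    ("Zeroth", lambda a: not a.get("harms_humanity", False)),
    ("First",  lambda a: not a.get("harms_human", False)),
    ("Second", lambda a: a.get("obeys_order", True)),
    ("Third",  lambda a: not a.get("destroys_self", False)),
]

def permitted(action):
    """Return (allowed, violated_law_or_None), checking laws in order."""
    for name, satisfied in LAWS:
        if not satisfied(action):
            return False, name
    return True, None
```

Note that this naive veto chain can't express the interesting conflicts -- say, harming one human to protect humanity -- and modeling those trade-offs is exactly where the scary Zeroth Law scenarios come from.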
Regarding AI and diseases: any treatment an AI comes up with would have to go through the same evaluation procedure that human-created treatments do. If a test population improves without side effects, you move on to further tests and then to mass production. If a test population suffers horrendous side effects or gets worse, you abandon the drug and go back to the drawing board. So in that regard, AI is really just a tool for researchers to use in pursuit of what they're already doing.

Nanotechnology is a separate argument from AI. In particular, the problem with nanomachines is that they CAN'T have robust AI, because they're tiny little things that act more like chemistry than like robots. There's no room for advanced computation. So while it's a worthwhile avenue of discussion, it's a DIFFERENT avenue of discussion.

Re: whacking metal with a crowbar -- you'd be surprised how much you can learn from whacking metal with a crowbar. It's a crude analogy, but the same techniques human engineers use to evaluate stresses in human-designed structures will work for evaluating stresses in AI-designed structures. The AI isn't going to invent brand-new laws of physics (unless it does! that would be super interesting!) -- the more likely surprise is going to come from a novel way of using things that can subsequently be analyzed to see what can be learned from it.

That's how medicine has ALWAYS worked -- we discover molecules with medicinal properties first, then we start researching the pharmacology behind how they do their thing. Going the other way around was a flop: pharmacologists spent a couple of decades doing "rational drug design," trying to engineer molecules specifically to pursue a hypothesized mechanism of action... it didn't really work any better than the typical method of "let's take this active molecule, swap out parts of it, and see what happens," and it cost a lot more.
So an AI doing rational drug design would essentially be creating new base molecules for pharmacologists to study.
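The "swap out parts of it and see what happens" method Coda describes is essentially hill-climbing search, which fits in a few lines. The "molecule" string and the carbon-counting fitness function below are stand-ins invented for the sketch -- real screening is lab work, not arithmetic:

```python
import random

random.seed(1)  # deterministic for the example

ATOMS = "CNOS"  # toy "parts" a molecule position can hold

def mutate(molecule):
    """Swap out one randomly chosen part of the molecule."""
    i = random.randrange(len(molecule))
    return molecule[:i] + random.choice(ATOMS) + molecule[i + 1:]

def fitness(molecule):
    """Stand-in for lab screening; here, just count carbons."""
    return molecule.count("C")

def screen(start, rounds=200):
    """Keep a mutation only if it screens better -- hill climbing."""
    best = start
    for _ in range(rounds):
        candidate = mutate(best)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

best = screen("NOSN")
```

The search never needs to know WHY carbons are "good" -- which mirrors Coda's point that the mechanism of action gets studied after the fact.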
Posted 12-02-2017, 12:08 AM