#17 · Espy (Wanderer)
Quote:
Originally Posted by Crystalkitsune85:
Boy this subject has sure opened a big can of worms. I've never seen people talk so heavily about one subject on a forum site like this lol. I hope you get what you need for your project QMC.
-dies laughing- Oh man, if you think this is talking heavily about one subject...
STONEWALL WAS A RIOT

Posted 12-01-2017, 02:44 PM
#18 · Coda (Developer)
@Claire Bear: Computers have been passing the Turing test for several years now. It's rather a poor test. It's really not hard for a chatbot to pretend to be a human, at least at a passing glance. Meanwhile, the stricter Turing test doesn't actually work, because real live humans have been known to FAIL Turing tests.

Teaching logic to an AI is... really kinda strange. They're BUILT on logic. They understand nothing BUT logic. So if your sister's boyfriend is applying for a job that actually exists, I'd be curious to learn more about what it is he's actually doing.

Keep in mind that we're nowhere CLOSE to general intelligence or seed AI. We don't even really know where to START, so that's all entirely science fiction at the moment.

Like Suze said, you can't teach morality to a computer the way you teach it to a human child. It's a similar process, yes, in that you have to train the AI on repeated examples of good and bad behavior, with punishments and rewards, but humans are evolved with social behaviors and a certain set of base instincts that an AI simply won't have. So instead, what you have to do is figure out how to express morality in terms of logic. Asimov was attempting to do that when he formulated the Three Laws.
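Coda's point about expressing morality in terms of logic can be made concrete with a toy sketch (Python, with entirely made-up action names and effect flags): treat the Three Laws as a lexicographic priority, so that violating a higher law outweighs any number of lower-law violations.

```python
# Toy sketch: Asimov-style laws as a lexicographic priority ordering.
# Actions and their predicted effects are made-up illustrative data.

LAWS = ["harms_human", "disobeys_order", "harms_self"]  # highest priority first

def law_cost(action):
    # A tuple of 0/1 flags, one per law; Python's tuple comparison
    # makes a higher-law violation outweigh any lower-law violation.
    return tuple(int(law in action["violations"]) for law in LAWS)

def choose(actions):
    # Pick the action with the smallest violation tuple, breaking
    # ties by raw utility.
    return min(actions, key=lambda a: (law_cost(a), -a["utility"]))

actions = [
    {"name": "save_human", "violations": {"harms_self"}, "utility": 5},
    {"name": "stand_by", "violations": {"harms_human"}, "utility": 9},
]
print(choose(actions)["name"])  # save_human: self-harm beats allowing harm
```

The interesting (and Asimov's favorite) failure modes live in the part this sketch waves away: deciding which flags an action actually carries.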

AI doesn't need to be as scary as science fiction makes it out to be. The fact that we HAVE those science-fiction stories as a warning is going to guide the people who are working on those projects to having suitable safeguards from the beginning.
Games by Coda (updated 4/15/2024 - New game: Call of Aether)
Art by Coda (updated 8/25/2022 - beatBitten and All-Nighter Simulator)

Mega Man: The Light of Will (Mega Man / Green Lantern crossover: In the lead-up to the events of Mega Man 2, Dr. Wily has discovered emotional light technology. How will his creations change how humankind thinks about artificial intelligence? Sadly abandoned. Sufficient Velocity x-post)
Posted 12-01-2017, 09:33 PM
#19 · Espy (Wanderer)
Speaking of the Three Laws, got any thoughts on those?

EDIT: Sorry, Quiet, hijacking yer thread.
Last edited by Espy; 12-01-2017 at 09:40 PM.
Posted 12-01-2017, 09:35 PM
#20 · Worm (Two Fish)
Lurking here.
I love this conversation so much, especially as I would not have thought about a lot of your points.
I am currently reading the Foundation series by Asimov and he mentions the Three Laws there as well as in I, Robot. Stoked to see where this goes. *u*

~worms away~
Posted 12-01-2017, 09:40 PM
#21 · Coda (Developer)
It's a good start, especially considering that the Laws were written very early in the genre's history. On their own, though, they're shortsighted, and once you bring the Zeroth Law into the picture you risk scenarios like robots enslaving humanity for its own good. There are certainly better laws that could be written.
Posted 12-01-2017, 09:42 PM
#22 · Espy (Wanderer)
Er... what's the zero-th law? I'm actually not really well-versed in, uh, "classical" sci-fi.
Posted 12-01-2017, 09:45 PM
#23 · Claire Bear (Magic)
the job is to teach something my sister said is called 'ontology' to ai so robots can "learn to reason with common sense". maybe logic was the wrong word, i'm not as smart as my sister so i simplified.

i don't want no robots with common sense. don't trust it. y'all should be more scared of scifi scenarios, this shit happens faster than you think. twenty years ago you wouldn't have believed me if i told you everyone would carry a computer in their pocket so quickly. science moves fast. robots will end up smarter than us someday. some robots probably are smarter than us, beating us at chess and whatnot.

i do not trust the kinda people who can afford to advance science. we're gonna end up in a world like Snow Crash one of these days.
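For what it's worth, the "ontology" work Claire's sister describes usually means encoding categories and facts a machine can chain together. A minimal sketch in Python (every fact here is invented for illustration):

```python
# Toy ontology: "is-a" links plus property inheritance, the kind of
# common-sense scaffolding ontology work tries to give an AI.
# All facts are illustrative.

IS_A = {"dog": "mammal", "mammal": "animal", "robot": "machine"}

PROPERTIES = {
    "animal": {"breathes"},
    "mammal": {"has_fur"},
    "machine": {"needs_power"},
}

def ancestors(thing):
    # Walk the is-a chain upward (dog -> mammal -> animal).
    while thing in IS_A:
        thing = IS_A[thing]
        yield thing

def infer_properties(thing):
    # A thing inherits the properties of every category it belongs to.
    props = set(PROPERTIES.get(thing, set()))
    for parent in ancestors(thing):
        props |= PROPERTIES.get(parent, set())
    return props

print(infer_properties("dog"))  # set containing 'breathes' and 'has_fur'
```

Nothing here was ever told "a dog breathes"; the system infers it by chaining facts, which is the sense in which ontologies give machines a sliver of common sense.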
Posted 12-01-2017, 09:46 PM
#24 · Quiet Man Cometh (We're all mad here.)
Quote:
Originally Posted by Espy:
Speaking of the Three Laws, got any thoughts on those?

EDIT: Sorry, Quiet, hijacking yer thread.
Oh, hijack away, this is pretty cool.

Are those three laws the ones they referenced in I, Robot?

And Suze, I suspect the search for meaning is part of the reason I put up with my school grievances. I remember first encountering the "humans need hardship" idea in Appleseed some time ago.

The paperclip thing falls into the "what if" territory when it comes to self-improving AI. That was another detail from somewhere. (I should just find where the teacher got that clip and post it.) They didn't go as far as taking over the planet with a factory, but the hypothesis is that if you had a self-improving AI and gave it a job with instructions to find the most efficient way to do it, it could come up with a solution that puts it into competition with people for resources.
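That hypothesis can be shown with a deliberately dumb sketch (Python, invented numbers): an optimizer told only to maximize output has no reason to leave a shared resource pool alone.

```python
# Toy single-objective optimizer: maximize output, full stop.
# Nothing in the objective mentions the shared pool, so it drains it.

def run_agent(shared_resources, steps, cost_per_unit=1):
    output = 0
    for _ in range(steps):
        if shared_resources >= cost_per_unit:
            shared_resources -= cost_per_unit  # taken from the commons
            output += 1
        else:
            break  # pool exhausted; everyone else gets nothing
    return output, shared_resources

print(run_agent(10, 100))  # (10, 0): all ten units consumed
```

The agent isn't malicious; the competition with people falls straight out of an objective that simply never mentions them.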

I'm tempted to skip the debate format and just write a mini essay on why debating a particular idea is hard! That way I can blather about a pile of things and hopefully still get credit. ;)
Posted 12-01-2017, 09:51 PM
#25 · Suzerain of Sheol (Desolation Denizen)
To paraphrase Sam Harris (and he might be quoting someone else like Bostrom for this, but), "The only thing scarier than the potential threat of AI is the potential loss of not developing it." I know some people are okay with writing off things like Alzheimer's and cancer as facts of life, or the inevitability of a supervolcano or meteor impact annihilating civilization as we know it as something beyond our control, but that's pretty much exactly the point. Our best chance of making it to the next stage of civilization is with AI as a ladder. Check out the concept of the Great Filter as something related to this point, as well. (Though it's also possible that AI itself is a filter.)

@Claire, it's not that we're saying there's no reason to be afraid or worry -- far from it, it's just that the things to be concerned over aren't as basic as evil conquering robots with guns and red eyes taking over the world. They're a lot more subtle and insidious, working on the level of societal and economic shifts that could unbalance civilization, and the fact that it's not something most people worry about is actually part of what makes it so scary. It's a hurdle that we, as a species and global community, have to make it over, more difficult than arguably any that have come before, and the consequences could be catastrophic. But it's going to happen, to one degree or another depending on the limits of what's actually possible, and we need to figure out how to handle it.

(Not us specifically here on Trisphee, obviously :P)
Cold silence has a tendency
to atrophy any sense of compassion
between supposed lovers.
Between supposed brothers.
Posted 12-01-2017, 11:21 PM
#26 · Espy (Wanderer)
Quote:
Originally Posted by Suzerain of Sheol:
But it's going to happen, to one degree or another depending on the limits of what's actually possible, and we need to figure out how to handle it.

(Not us specifically here on Trisphee, obviously :P)
...I don't think the rest of the world would want anything to do with any of our potential ideas on how to handle such a scenario.
Posted 12-01-2017, 11:25 PM
#27 · Quiet Man Cometh (We're all mad here.)
Quote:
Originally Posted by Coda:
There are many, many, MANY problems in this world that are difficult to FIND an answer for, but much more feasible to VERIFY the answer once you have it. You set the computer up to figure out what mankind could never figure out on its own, and then you have humans check it.

To take AI out of the picture for an example: It requires an engineer to figure out how to make a car frame stronger, but any old schmoe can smash a car to see if it worked.
Whacking metal with a crowbar isn't going to tell anyone why the new method works, just that it does.

Unless an AI can tell us how it did something, verifying an answer could take as long as finding one in the first place. We do sometimes use things without knowing exactly how they work, because sometimes the result is more valuable than understanding the mechanism.
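Coda's car-frame analogy is the classic find-versus-verify asymmetry. A small subset-sum sketch (Python; simplified, and it ignores repeated elements): finding an answer takes exponential search, while checking a proposed one takes a single pass.

```python
from itertools import combinations

def find_subset(nums, target):
    # The hard direction: search all 2^n subsets for one hitting target.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def verify_subset(nums, target, candidate):
    # The easy direction: one sum plus a membership check.
    # (Simplified: doesn't handle repeated elements carefully.)
    return sum(candidate) == target and all(c in nums for c in candidate)

answer = find_subset([3, 7, 12, 5], 15)
print(answer, verify_subset([3, 7, 12, 5], 15, answer))  # [3, 12] True
```

QMC's objection survives the sketch, though: the verifier confirms that the answer works, not why, and nothing in it explains how the search arrived there.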
Posted 12-01-2017, 11:35 PM
#28 · Quiet Man Cometh (We're all mad here.)
Quote:
Originally Posted by Suzerain of Sheol:
To paraphrase Sam Harris (and he might be quoting someone else like Bostrom for this, but), "The only thing scarier than the potential threat of AI is the potential loss of not developing it." I know some people are okay with writing off things like Alzheimer's and cancer as facts of life, or the inevitability of a supervolcano or meteor impact annihilating civilization as we know it as something beyond our control, but that's pretty much exactly the point. Our best chance of making it to the next stage of civilization is with AI as a ladder. Check out the concept of the Great Filter as something related to this point, as well. (Though it's also possible that AI itself is a filter.)
I'm never sure what my thoughts on AI and cancer would be, unless it's information processing and pattern recognition and stuff. I get the image of nanobots or something that attack cancer cells, but then I imagine those bots going haywire for whatever reason and everything kinda disintegrating from the inside out. Not a big fan of foreign objects, but I have no real issue with new and curious organic things (would be rrreeaaaaally awkward if I did.)
Posted 12-01-2017, 11:38 PM
#29 · Quiet Man Cometh (We're all mad here.)
I found the thing! It's just under 10 minutes. It's the definition that we've been using for class. I'm sure I can wrangle something now, but it's fun to keep going. ;)

https://www.youtube.com/watch?v=kWmX3pd1f10
Last edited by Quiet Man Cometh; 12-02-2017 at 12:00 AM.
Posted 12-01-2017, 11:47 PM
#30 · Suzerain of Sheol (Desolation Denizen)
Quote:
Originally Posted by Quiet Man Cometh:
I'm never sure what my thoughts on AI and Cancer would be, unless it's information processing and pattern recognition and stuff. I get the image of nanobots or something that attack cancer cells, but then I imagine those bots going haywire for whatever reason and then kinda disintegrating from the inside out. Not a big fan of foreign objects but I have no real issue with new and curious organic things (would be rrreeaaaaally awkward if I did.)
I unfortunately don't have any statistics to argue about the hypothetical safety of a hypothetical cure for cancer created by a hypothetical super-intelligence. :P

I'd be wary of any reactions based on an aversion to "unnatural" methods, though. Natural methods have had a long time to find cures and have failed thus far. That's only a step away from admitting defeat and saying "best not to meddle in God's domain" or the like (an "argument" my father is annoyingly fond of). It might be that problems like incredibly deadly diseases and disorders are beyond unaided human capacity to cure, sort of like how a lot of the sciences have to use advanced math to express their ideas because human intuition isn't built to process information beyond a certain level of complexity.
Posted 12-01-2017, 11:53 PM
#31 · Quiet Man Cometh (We're all mad here.)
I'm not really talking natural, just organic. I trust flesh more than metal and plastic at this point. I already have a kidney that isn't mine. I'd be perfectly okay if somehow we managed to create some manner of flesh thread and organ recipe and Dr. Frankenstein a new kidney.

Of course, new ways to open and close skin holes would be of benefit, too. Not sure if that requires an AI, but it might help?
Posted 12-02-2017, 12:08 AM
#32 · Coda (Developer)
The Zeroth Law, in Asimov's writing, is an advanced form of the First Law that sufficiently advanced robots can formulate on their own. It states that a robot may not allow harm to come to humanity as a whole, or through inaction allow humanity to come to harm. This was an outgrowth of how AI had to reason about how a small harm to a human could prevent a larger harm (for example, punishing a child caused the child to be upset, which is harm, but it taught the child how to prevent bigger injuries).

Regarding AI and diseases: Any treatment that an AI comes up with would have to go through the same evaluation procedure that human-created treatments do. If a test population improves without side effects, you move on into further tests and then on into mass production. If a test population suffers horrendous side effects or gets worse, you abandon the drug and go back to the drawing board. So in that regard, AI is really just a tool for researchers to use in pursuit of what they're already doing.

Nanotechnology is a separate argument from AI. In particular, the problem with nanotechnology is the fact that they CAN'T have robust AI because they're tiny little things that act more like chemistry than like robots. There's no room for advanced computation. So while it's a worthwhile avenue of discussion, it's a DIFFERENT avenue of discussion.

Re: whacking metal with a crowbar -- You'd be surprised how much you can learn from whacking metal with a crowbar. It's a crude analogy, but the same techniques human engineers use to evaluate stresses in human-designed structures will work for evaluating stresses in AI-designed structures. The AI isn't going to invent brand-new laws of physics (unless they do! that would be super interesting!) -- the more likely surprise is going to come by developing a novel way of using things that can subsequently be analyzed to see what you can learn from it. That's how medicine has ALWAYS worked -- we discover molecules with medicinal properties first, then we start researching the pharmacology behind how it does its thing. Going the other way around was a flop -- pharmacologists spent a couple decades doing "rational drug design" and trying to engineer molecules specifically to pursue a hypothesized mechanism of action... it didn't really work any better than the typical method of "let's take this active molecule, swap out parts of it, and see what happens" and it cost a lot more. So an AI doing rational drug design would essentially be creating new base molecules for pharmacologists to study.
Posted 12-02-2017, 12:08 AM
All content is copyright © 2010 - 2024 Trisphee.com