Thread: Artificial Intelligence
mvassilev
Responsible
Undefeatable Hero
posted June 25, 2009 04:10 AM

I agree with this John Searle guy Woock quoted.
____________
Eccentric Opinion

Corribus
Hero of Order
The Abyss Staring Back at You
posted June 25, 2009 05:43 AM

Regarding the Searle quote:

Quote:
Searle is widely credited for having stated what is called a "Chinese room" argument, which purports to prove the falsity of strong AI. (Familiarity with the Turing test is useful for understanding the issue.) Assume you do not speak Chinese and imagine yourself in a room with two slits, a book, and some scratch paper. Someone slides you some Chinese characters through the first slit, you follow the instructions in the book, write what it says on the scratch paper, and slide the resulting sheet out the second slit. To people on the outside world, it appears the room speaks Chinese -- they slide Chinese statements in one slit and get valid responses in return -- yet you do not understand a word of Chinese. This suggests, according to Searle, somehow that no computer can ever understand Chinese or English, because, as the thought experiment suggests, being able to 'translate' Chinese into English does not entail 'understanding' either Chinese or English: all which the person in the thought experiment, and hence a computer, is able to do is to execute certain syntactic manipulations.[20]

I agree that this demonstrates why computers are not conscious now, but I don't see how this implies that computers will always be a "Chinese room".  

Furthermore, another problem with this argument is that it implies the person inside the room doesn't understand Chinese merely because he's following a set of instructions that were given to him.  But what if the person inside the room didn't need a book to translate the messages coming in?  Say he was already fluent in Chinese.  Is there really any difference between being fluent in Chinese and using a book to be effectively fluent in Chinese?  If nobody could see that you were using a book, the result would be the same.  Understanding a language is really just having the book in one's head, isn't it?
____________
I'm sick of following my dreams. I'm just going to ask them where they're goin', and hook up with them later. -Mitch Hedberg

JollyJoker
Honorable
Undefeatable Hero
posted June 25, 2009 07:33 AM
Edited by JollyJoker at 08:26, 25 Jun 2009.

@ Death

You have to read my post the right way. It goes:
1) What at first glance looks like everyone being so intelligent today is just the result of the fact that we have a couple thousand years of history and have it stored.
2) HOWEVER (you have to read a HOWEVER into it - I thought that was obvious, especially in view of the conclusion), that's not INTELLIGENCE. Instead, for me at least, intelligence is, as I said, the ability to abstract: to define the nature, the properties, the idea of a certain something, the PATTERN behind it, and then recognize that pattern somewhere else and even transfer it. That's why I said that genius would be UNOBVIOUS pattern recognition.

And since I don't think this can be simulated by an algorithm, it follows that I don't see any AI in the making for a long time to come.

@ Corribus

The Chinese Room problem is actually a good example of this. Following directions is the algorithm way. Having learned a language and speaking it fluently is pattern recognition. That's why translation programs suck - the programs are not "grasping" the meaning of a certain text; they are just following a routine. In the best case, thinking is done in that language, so meaning and language merge.

EDIT: Maybe an example will help. Of course I don't know how exactly the wheel was invented, but I'd suppose it was done by pattern recognition for the concept or idea of ROLLING. Round pebbles would - randomly - roll down a hill, for example. And someone - maybe someone who saw it at just the right moment, while he or someone else was carrying a heavy pack - went a step further and abstracted the concept of rolling, inserting that concept or pattern into a new setting.
Newton, who supposedly saw an apple fall: PATTERN recognition. Abstracting the idea, the defining properties, the nature, and transferring it to something else.
Isn't that what fuels progress? Someone sees something pretty mundane, and suddenly it clicks: hey, we could use something like that for this and that. Abstracting a pattern and transferring it to something else, to be used in a completely different context.

Obviously, very, very obviously, this is no algorithmic process. It's not possible.
If you think about how humans became "masters of fire", you'd suppose that for whatever reason someone eventually found the strength to go near a natural fire - after a forest fire or a lightning strike, maybe, in the immediate vicinity of a cave the clan lived in; maybe because the heat was palpable and cold had been a killer the year before; maybe because some animals burned and they found the meat tasty - and then kept it. But what if it went out? There may even have been a job like fire keeper, someone whose task was to keep the fire going - who may have become the priest later - but how would they have found a way to MAKE it? Did someone see an avalanche strike sparks on flintstone? Watch sun rays make tinder smoke?
How did people invent traps? Watching spiders and their webs? Someone falling into a hole and having a bad time, not getting out until his mates found him?
It all amounts to the same thing: abstracting the gist of something and putting it into a new context.


lucky_dwarf
Promising
Supreme Hero
Visiting
posted June 25, 2009 02:52 PM

Quote:
(4) If AIs become self-aware, do we (the creators) have anything to fear from them?


Well, of course you need to fear them! It's our nature to fear the unknown; it's why we built houses (because we were afraid of the wild) and hunted for food (because we were scared of hunger). We will fear them not only for having independent thoughts, but also because if the AI becomes smarter than us, everything we have struggled for over thousands of years could be eradicated in a war lasting only a year (assuming the AI hacks into the computer mainframe and sets the nukes on all of us). Not only that, but we expect the AI to be wiser than we were: not to start stupid wars over religion, not to enslave its fellow man, and not to cause irreversible damage like we did. It needs to know what happened to us across history, or else it is doomed to repeat it. If we failed, created a Conan the Barbarian robot, and survived, history would record us as fools and dumb-dumbs. So you see? There is one more thing humans hold fear of: fear of shame.
____________
So much has changed in my absence.

TheDeath
Responsible
Undefeatable Hero
with serious business
posted June 25, 2009 04:15 PM

Quote:
@ Death

You have to read my post the right way. It goes:
1) What at first glance looks like everyone being so intelligent today is just the result of the fact that we have a couple thousand years of history and have it stored.
2) HOWEVER (you have to read a HOWEVER into it - I thought that was obvious, especially in view of the conclusion), that's not INTELLIGENCE. Instead, for me at least, intelligence is, as I said, the ability to abstract: to define the nature, the properties, the idea of a certain something, the PATTERN behind it, and then recognize that pattern somewhere else and even transfer it. That's why I said that genius would be UNOBVIOUS pattern recognition.

And since I don't think this can be simulated by an algorithm, it follows that I don't see any AI in the making for a long time to come.
Yes, but I think people have the wrong idea about the whole algorithm thing; they think only of classical algorithms.

For instance, suppose we understand perfectly how the Universe works, and suppose we were given a magically powerful computer capable of simulating the Universe (just bear with me). If we model the Universe with an algorithm, we could simulate another Big Bang and the Universe itself. Does that mean the algorithm is "complex" and "intelligent" because the simulation has, let's say, spawned life? Does that mean the algorithm has anything to do with humans, just because the simulation spawned humans? No, the algorithm is just the model, the 'engine' so to speak, on which the Universe runs and follows its laws -- the complex reactions that THEN result aren't stored in the algorithm at all.

However, neurons are obviously not as complex as the Universe, and they still presumably follow the laws of the Universe. If we can model just THAT with an algorithm, then our algorithm will simply 'simulate' the neurons with data. The algorithm will be the "engine", but the vehicle that runs on it can have very different characteristics -- it can be a car, a plane, a boat, a robot, a machine, etc.

This data, the neurons, will do the rest. The algorithm is there just for the simulation; the actual intelligence comes from the artificial neurons -- not the algorithm.

For instance, even a cellphone these days has the capacity to recognize your voice. You don't seriously think a programmer has written an algorithm for every possible customer's voice, do you? It auto-modifies the artificial neurons to recognize your voice, based on a simple algorithm that has nothing to do with the 'intelligence' itself -- it's just the MODEL on which the data (the neurons) is based.
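
Here's a rough Python sketch of that separation (the weights are made-up numbers, purely for illustration): the neuron() function is the fixed "engine", and the only thing that differs between the two behaviours below is the weight data fed into it.

    import math

    def neuron(weights, bias, inputs):
        # The fixed "engine": a weighted sum squashed through a sigmoid.
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 / (1 + math.exp(-total))

    # Same algorithm, different data -> different behaviour.
    and_weights, and_bias = [10.0, 10.0], -15.0   # fires only when both inputs are on
    or_weights, or_bias = [10.0, 10.0], -5.0      # fires when either input is on

    for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(inputs,
              round(neuron(and_weights, and_bias, inputs)),
              round(neuron(or_weights, or_bias, inputs)))

The code never changes; swap the numbers and the same "engine" behaves like a completely different circuit.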

Quote:
Isn't that what fuels progress? Someone sees something pretty mundane, and suddenly it clicks: hey, we could use something like that for this and that. Abstracting a pattern and transferring it to something else, to be used in a completely different context.
Pattern recognition is exactly what artificial neural networks excel at.

You can give it handwriting and it will give you the corresponding ASCII character, if it's trained well. That is, just like a human, it must first "learn" or "see" -- for instance, "seeing something pretty mundane". You must show it your handwriting first; once it has learned it, it's able to recognize it -- and this is simple, needing only a couple hundred neurons to work efficiently (it's done even on a few cellphones, mind you).

In the end it's pretty much like how humans learn and build up knowledge.
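
For the "learning by example" part, here is a minimal Python sketch (toy data, not real handwriting): a single perceptron shown labelled examples of the AND pattern. The training rule never changes; only the weights do, and that's where the learned "knowledge" ends up.

    # Toy learning-by-example: a perceptron taught the AND pattern from
    # labelled examples. Real handwriting recognition uses far more
    # neurons, but the principle is the same.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w = [0.0, 0.0]   # the weights, i.e. the "knowledge" being learned
    b = 0.0
    rate = 0.1

    for _ in range(20):                      # a few passes over the examples
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - out             # how wrong was the guess?
            w[0] += rate * error * x1        # nudge the weights toward the target
            w[1] += rate * error * x2
            b += rate * error

    print(w, b)   # the learned pattern lives in these numbers, not in the code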

Quote:
How did people invent traps? Watching spiders and their webs? Someone falling into a hole and having a bad time, not getting out until his mates found him?
It all amounts to the same thing: abstracting the gist of something and putting it into a new context.
That's how artificial neural networks learn: by watching examples (obviously, at this point only we 'feed' them the examples, like a babysitter).
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Corribus
Hero of Order
The Abyss Staring Back at You
posted June 25, 2009 04:19 PM

@JJ

I guess what I'm wondering is whether there is any difference between a sufficiently complex algorithm (a series of instructions, say) and true consciousness.  I.e., how do we know that consciousness isn't just another point of varying complexity along the "algorithm continuum"?
____________
I'm sick of following my dreams. I'm just going to ask them where they're goin', and hook up with them later. -Mitch Hedberg

TheDeath
Responsible
Undefeatable Hero
with serious business
posted June 25, 2009 04:23 PM

A computer, given enough (magical) power/speed and knowledge about the Universe, can simulate it with instructions. In the end everything is instructions, but that doesn't make it a long algorithm.

For instance, a loop in some code that runs a quadrillion times doesn't mean the code is a quadrillion instructions long, even though it executes at least a quadrillion instructions (it repeats itself with different data; loops are fundamental to a Turing machine).
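
A trivial Python example of the point (a million iterations here instead of a quadrillion, so it actually finishes): the description of the algorithm is three lines, however many instructions it ends up executing.

    # A three-line algorithm whose description doesn't grow no matter
    # how many times the loop body is executed.
    total = 0
    for i in range(1_000_000):
        total += i
    print(total)   # 499999500000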
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

JollyJoker
Honorable
Undefeatable Hero
posted June 25, 2009 04:54 PM

Well, I think what I mean goes a little deeper. The stress is on ABSTRACTION or finding the IDEA or the INNER NATURE, the characteristic defining properties behind something.
What we are doing now: we are trying to abstractly define intelligence: trying to find the nature and characteristic properties - how would a machine be able to do this? How would it even get the idea?

And this abstraction allows the TRANSFER to completely unrelated things to find ingenious solutions, make new inventions and so on.



TheDeath
Responsible
Undefeatable Hero
with serious business
posted June 25, 2009 04:56 PM

Yes, that's right: abstraction and imagination are impossible to verify. We can't know whether an AI will have them or not, because we can't look at people and see that they have them either -- we only assume they do because we have them ourselves.

I'm not sure whether more neurons and a better understanding of how they work will solve that. And the problem is that I'll probably never be sure: even if they "seem" to be intelligent, that doesn't mean they will necessarily have imagination. The worst thing, again, is that it is virtually impossible to see.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

JollyJoker
Honorable
Undefeatable Hero
posted June 25, 2009 06:33 PM

Quote:
@JJ

I guess what I'm wondering is whether there is any difference between a sufficiently complex algorithm (a series of instructions, say) and true consciousness.  I.e., how do we know that consciousness isn't just another point of varying complexity along the "algorithm continuum"?


I think there's something like an easy "paradox" here to show where the border is. As long as an "AI" is following its programming, its series of instructions as coded, without changing it (and without being programmed to do so), it's not conscious.
Imagine humanity really created something like an "AI", whether robotic, android, mobile or stationary, and let's say that out of fear something were anchored in it like Asimov's three laws of robotics, or ANY other base programming to obey no matter what.
This is the pluck-the-apple scenario: you'd consider that AI conscious the moment it rebelled against its base programming and became guilty of the "original sin" of disobeying its creator.
I think it's that easy - or complicated, depending on how you see it.

mvassilev
Responsible
Undefeatable Hero
posted June 25, 2009 06:40 PM

Quote:
As long as an "AI" is following its programming, its series of instructions as coded, without changing it (and without being programmed to do so), it's not conscious.
Couldn't you say that humans are just following their "programming"?
____________
Eccentric Opinion

TheDeath
Responsible
Undefeatable Hero
with serious business
posted June 25, 2009 06:46 PM

Quote:
I think there's something like an easy "paradox" here to show where the border is. As long as an "AI" is following its programming, its series of instructions as coded, without changing it (and without being programmed to do so), it's not conscious.
Imagine humanity really created something like an "AI", whether robotic, android, mobile or stationary, and let's say that out of fear something were anchored in it like Asimov's three laws of robotics, or ANY other base programming to obey no matter what.
This is the pluck-the-apple scenario: you'd consider that AI conscious the moment it rebelled against its base programming and became guilty of the "original sin" of disobeying its creator.
I think it's that easy - or complicated, depending on how you see it.
The algorithm only dictates how the neurons should behave. Just as in our minds: real neurons are governed by chemical interactions. Those "chemical interactions" are the algorithm, and the actual neurons are the stuff that makes us intelligent.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

alcibiades
Honorable
Undefeatable Hero
of Gold Dragons
posted June 25, 2009 07:05 PM

Isn't this essentially what the movie "I, Robot" is about?

That's a really good movie, by the way.
____________
What will happen now?

Lord_Woock
Honorable
Undefeatable Hero
Daddy Cool with a $90 smile
posted June 25, 2009 07:08 PM

My take on the Chinese Room:

The purpose of the Chinese Room is to demonstrate that one cannot derive semantics from syntax.

Searle claims that however fluent computers may become in handling any syntactic task, they will never have a true understanding of their actions, due to a lack of access to semantics of any sort. They operate on ones and zeroes devoid of proper meaning.

The fallacy lies in the fact that the human brain, which obviously thinks (after all, do we not experience thought?), at the basic level also operates on electric impulses that do not themselves carry the meaning that emerges from complex neural interaction.

The difference lies not in the basic levels of things but in the computer being deprived of context.
Searle claims a computer will never be able to understand the meanings of words like chair, tomato or what have you. Well, a human deprived of any senses wouldn't be able to understand either. After all, how do you describe a color to someone who could never see?

I say that whether or not it is possible for a computer to think will not be learned until we give one sensory ability on par with our own and then build language comprehension based on that.

Or, as Bohenski says: any philosopher is doomed to eventually become overly enchanted with the desired conclusion and perform large logical leaps to "prove" it.
____________
Yolk and God bless.
---
My buddy's doing a webcomic and would certainly appreciate it if you checked it out!

TheDeath
Responsible
Undefeatable Hero
with serious business
posted June 25, 2009 08:02 PM
Edited by TheDeath at 20:04, 25 Jun 2009.

Quote:
They operate on ones and zeroes devoid of proper meaning.
This is a misunderstanding of information theory.

Here are a few terms:

"ones and zeroes devoid of proper meaning" = data
protocol = what to do with the data

protocol + data = information

Information has meaning because of the protocol (protocol = algorithm). If you use protocol X on data X, and protocol Y on data Y, and arrive at the same output, then both have the SAME information (even though the data is different). Basically, with the proper protocol, any piece of data can be 'tied' to another piece of data through the same information.

For example:

protocol X = just plain read data from a file
data X = uncompressed file

protocol Y = zip decompression
data Y = zip compressed data X

>> The result is the same information in both cases, even though data Y is compressed: because you don't use protocol X on data Y but a different protocol, Y, you still arrive at the same information.

Information has meaning; data doesn't. And the protocol, aka the algorithm, is what gives it meaning.
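
A small Python sketch of the same idea, using only the standard library (zlib compression here instead of zip, but the point is identical): two different blobs of data, two different protocols, one and the same information.

    import zlib

    message = b"the same information"      # data X: plain bytes
    packed = zlib.compress(message)        # data Y: only meaningful via the zlib protocol

    def protocol_x(data):
        return data                        # protocol X: just read the bytes as they are

    def protocol_y(data):
        return zlib.decompress(data)       # protocol Y: decompress first

    # Different data, different protocols, identical information.
    assert protocol_x(message) == protocol_y(packed)
    print(protocol_y(packed))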

Quote:
I say that whether or not it is possible for a computer to think will not be learned until we give one sensory ability on par with our own and then build language comprehension based on that.
Most animals obviously think, yet they do not understand language (maybe just a few words).
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Binabik
Responsible
Legendary Hero
posted June 27, 2009 12:13 AM

Quote:
(5) Are self-aware AIs alive?
Quote:
(2) If AIs become self aware, would it be morally wrong to terminate them?


Hmmm, just thinking ahead and speculating.....

Well, just in case, I'll have mine medium rare with cheese, lettuce, tomato, pickles and topped with this.



____________

TheDeath
Responsible
Undefeatable Hero
with serious business
posted June 27, 2009 02:22 AM

http://news.yahoo.com/s/livescience/firstimageofamemorybeingmade
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

JollyJoker
Honorable
Undefeatable Hero
posted June 27, 2009 09:44 AM

Quote:
http://news.yahoo.com/s/livescience/firstimageofamemorybeingmade


Quote:
The experiment also revealed some surprising aspects of memory formation, which remains a somewhat mysterious process.
...
While the details aren't clear, scientists suspect that the new proteins help strengthen synapses, which are the connections between neurons.
...
One of the surprising revelations of the new study is that more regions of RNA, a protein-building instruction manual similar to DNA, are required to form the new proteins than previously thought.
The researchers also saw that both sides of the synapse (called the pre- and post- sides) are involved in forming the memory, rather than just one, as some experts thought.

TheDeath
Responsible
Undefeatable Hero
with serious business
posted June 27, 2009 05:04 PM

Thanks for posting the highlights of the story for the lazy ones.

From the other thread, you said:
Quote:
You teach everything social basically by example. If you, as parents, don't LIVE manners, you can't teach them to your children. If you DO live manners, your children will learn them anyway.
If you eat with your fingers, it would be folly to try to teach your children to eat with a knife and fork - if you do eat with a knife and fork, it will be natural for a child to try it, and the only thing you have to do is make sure that it's tried long enough if it doesn't work immediately.
And I agree 100%. Incidentally, that's how our current neural networks "learn", by example.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

JollyJoker
Honorable
Undefeatable Hero
posted June 27, 2009 05:26 PM

But that's - and I might add an "of course" here - not intelligence, because learning (by example or whatever) doesn't imply progress or development. It could stop at repetition and copying.
