Heroes of Might and Magic Community
Thread: Artificial Intelligence (page 4 of 4)
ihor
Supreme Hero (Accidental Hero)
posted August 14, 2009 10:53 AM

As for me, for a system to be intelligent means having the ability to improve itself. "To determine what is important, and what is not, in a given position" is not an ability that makes a machine more intelligent. The machine cannot determine such rules by itself. The problem is that a machine today can only do what a human has told it to do. A machine cannot decide that the human's order was wrong unless another human foresaw that possibility.
The only progress we have made with AI is that earlier we had:
A human declares exact instructions for what to do; the machine executes them. (Make the next move of the knight; if there are no moves, backtrack to a situation where moves are still available, trying all the situations.)
Now we have:
A human thinks and gives some generalized, abstract rules; the machine executes those rules like exact instructions. The THINKING of the machine is bounded by these rules; it cannot invent anything beyond them.
(The human invents the logic of where the next move should go; the machine "thinks" and chooses what to do.)
This is a very simplified example, and I am not well versed in neural networks, but the truth is that the level of AI today is too low. The greatest progress in AI will come when humans only give the AI standard tools to receive and process information, to learn new things and remember them, just as a human has eyes, ears and a brain. For now it is sad to watch on TV how Japanese scientists proudly create new robots every day, because those robots are nothing; they cannot improve themselves.
I hope that after some time (maybe 30 years) there will be completely new, progressive technologies and we will indeed see real success in AI research.
____________
Your advertisement could be
here only for 100$ per day.

Corribus
Hero of Order (The Abyss Staring Back at You)
posted August 14, 2009 03:24 PM

Dimis, I have to say before anything else - you're doing a great job taking the lead and guiding the discussion.  Most threads could use a moderator like you (I don't mean moderator of the forum, but someone who actually moderates the direction of the thread).  I started the thread because I'm interested in the topic, but I don't know enough about it to guide it along.  So, thanks!

I'll come back and maybe reply to some of the recent posts in a bit.  I just wanted to say that. QP well earned.

dimis
Responsible Supreme Hero (Digitally signed by FoG)
posted August 14, 2009 09:14 PM
Edited by dimis at 20:07, 08 Nov 2009.

Arguments

Quote:
Now nothing.
That honest reaction, JJ, is the one that shocked me, because it was the first unexpected thing (for me) in the entire thread. It was unexpected because, even as you kept justifying your position, I thought the same thing as Corribus; namely, that you were about to connect emotions or morals with intelligence, or to introduce some sort of authority / authenticity that has the right answers. So, in all honesty, I am still trying to understand your last posts and their line of reasoning. Is it because I don't have a child? I really do not know.

In the rest of your post it seems to me that you are backtracking, speculating about the details, and struggling for something, because something inside you seems to have collapsed. That is perfectly fine, but I will not address it now, since others still do not admit that something has collapsed. In fact, read below to see where we stand. (I also want further comments on generalization, but I think it is not the right time yet.)



Quote:
It seems that y'all have missed a significant part of intelligence: working on a problem you've never seen before. ...
Quote:
Intelligence here is rather a capability of solving NEW problems, which were not described or input before.
And now I ask you this:

I just created a game named supercalifragilisticexpialidocious. Moreover, I am also polite enough to let you have the first move. So, it is your turn.

What is your move, gentlemen?

If you think I am exaggerating, go back and read.



And ihor, NO, in general.
Quote:
As for me, for a system to be intelligent means having the ability to improve itself. ... The machine cannot determine such rules by itself.
Well, you did not actually read what I wrote earlier in the post titled Self-Training. The machine came up with the moves it had to play in certain positions without any prior injection of rules that were supposed to be good. And that development was the result of playing against itself (not even against a human expert). In fact, it has altered the way of thinking in the game. Just go back to the reference, scroll down to Figure 2 and Figure 3, and read the captions.
As for the narrative of where AI stood, where it is now, where it is going, and what the truth is (btw, what truth? whose truth?), well, you are basically supporting your arguments with what I believe I disproved above.



Further comments ?
____________
The empty set

TheDeath
Responsible Undefeatable Hero (with serious business)
posted August 14, 2009 09:18 PM

Quote:
Intelligence here is rather a capability of solving NEW problems, which were not described or input before. Of course ability to reduce very complex issue to basic operations already known is vital. However as computer uses only maths and logic, some real life problems seem unlikely to solve in this way.
A neural simulation does use "only maths", but if we perfect it, it may become like real neurons.

Self-training is a property of neural networks, and I have outlined that several times. The important factor in training is that the network must know what is "good" and what is "bad"; i.e., like in a game where you get a score for doing something good and a penalty for something bad. Good scores should also lead to motivation.

This isn't unlike how humans learn, mind you.
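That score-and-penalty loop can be sketched in a few lines of Python. This is only a toy illustration of the idea (all names are invented here, not taken from any real library): the learner tries actions, is told nothing except "good" (+1) or "bad" (-1), and gradually comes to prefer the action that earns reward.

```python
import random

def train_by_score(reward_prob, steps=5000, eps=0.1, seed=0):
    """Learn action values from nothing but good/bad feedback."""
    rng = random.Random(seed)
    value = [0.0] * len(reward_prob)   # current estimate per action
    count = [0] * len(reward_prob)
    for _ in range(steps):
        if rng.random() < eps:                       # explore occasionally
            action = rng.randrange(len(reward_prob))
        else:                                        # otherwise take the best so far
            action = max(range(len(reward_prob)), key=lambda i: value[i])
        score = 1.0 if rng.random() < reward_prob[action] else -1.0
        count[action] += 1
        value[action] += (score - value[action]) / count[action]  # running average
    return value

# Action 2 is rewarded most often, so its learned value ends up highest.
values = train_by_score([0.2, 0.5, 0.8])
best = max(range(3), key=lambda i: values[i])
```

Nothing in the loop says which action is good; the preference emerges from the scores alone, which is exactly the point about training on "good" and "bad".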
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Rarensu
Known Hero (Formerly known as RTI)
posted August 15, 2009 09:41 AM
Edited by Rarensu at 09:56, 15 Aug 2009.

Quote:
Self-training is a property of neural networks

This is not strictly accurate. While a neural network simulation is being created, there is another program training it. After this initial training is complete, the neural network simulation runs perfectly well without retraining itself. You can, of course, make a continuously self-retraining neural network, but that is not required to fit the definition.
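The two phases described here can be made concrete with a plain-Python perceptron (a deliberately tiny stand-in for a neural network; the function names are invented for the sketch): a separate training loop writes the weights, and afterwards the frozen network is only ever read, never retrained.

```python
def train(samples, epochs=20):
    """Training phase: a separate loop that writes the weights."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += err * x1
            w[1] += err * x2
            b += err
    return w, b

def run_frozen(w, b, x1, x2):
    """Inference phase: the weights are only read, never updated."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)                                   # train once, then stop
outputs = [run_frozen(w, b, x1, x2) for (x1, x2), _ in AND]
```

Once `train` returns, `run_frozen` can be called forever with no further learning, which is the distinction being drawn.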

~~~~~~~~~~~~~~~~~~~~~~~~

A neural network is just a different way to do calculations. In this, it is like any other form of computation: in order for it to have any value, you first need a human-like intelligence to look at the problem and decide what the important values are that need to be considered.

For example, if you have a neural network that is trying to find the best race car design, you need a human to decide that the shape of the car, the size of the engine, the texture of the wheels, etc. are important, and to make those values the inputs to the neural network. He will ignore things such as the color of the car and the astrological sign under which the car was built. He will separate values in ways that make sense: the front two tires will be considered together; the steering wheel is not part of the exhaust system.
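That division of labour looks like this in code (a sketch only; the car fields and the scoring formula are invented for illustration). The programmer's judgement is the hand-written feature extractor; the "network" only ever sees what it selects.

```python
def extract_features(car):
    """Human judgement frozen into code: decide what the model may see."""
    return [car["drag_coefficient"], car["engine_litres"]]
    # deliberately ignored: car["colour"], car["star_sign"]

def speed_score(features, weights=(-100.0, 20.0)):
    """Stand-in for the trained network: scores only the chosen features."""
    return sum(w * f for w, f in zip(weights, features))

cars = [
    {"drag_coefficient": 0.40, "engine_litres": 2.0,
     "colour": "red", "star_sign": "Leo"},
    {"drag_coefficient": 0.25, "engine_litres": 3.0,
     "colour": "grey", "star_sign": "Virgo"},
]
best_car = max(cars, key=lambda c: speed_score(extract_features(c)))
```

Change `extract_features` and the "intelligence" of the whole system changes, without touching the scorer at all.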

This means that the final solution will be partially a product of the "intelligence" of the neural network, but mostly a product of the "intelligence" of the programmer. It is also important to draw this distinction when looking at other forms of AI.

In another example: if you have a program that plays a game by comparing the relative positions of pieces on a game board, instead of by analysing the flavour of the cookie that the opponent happens to be eating, then one should note that this decision was made by the programmer. It was his intelligence that understood what was important about the game, not the program's. He decided that each piece inhabits exactly one square of a chessboard, and that rotations are irrelevant.

You might be thinking, "but those other things are totally stupid... they have nothing to do with the problem at hand...". Then I offer this challenge: describe for me a program that would replicate the human intuition of throwing away all these "totally stupid" things that have nothing to do with the problem at hand. You will discover that it is a surprisingly difficult problem. Simply making a list of stupid things that should be thrown away and things that are usually interesting is no good, because then your program is once again dependent on the intelligence of its programmer and not able to think for itself.

If you go in for creationism, you have an automatic counter-argument: since the human brain was designed by God, we ourselves fail this independence test I have mentioned and my definition of artificial intelligence is therefore meaningless.
____________
Sincerely,
A Proponent of Spelling, Grammar, Punctuation, and Courtesy.

JollyJoker
Honorable Undefeatable Hero
posted August 15, 2009 11:56 AM

Quote:
Quote:
Now nothing.
That honest reaction, JJ, is the one that shocked me, because it was the first unexpected thing (for me) in the entire thread. It was unexpected because, even as you kept justifying your position, I thought the same thing as Corribus; namely, that you were about to connect emotions or morals with intelligence, or to introduce some sort of authority / authenticity that has the right answers. So, in all honesty, I am still trying to understand your last posts and their line of reasoning. Is it because I don't have a child? I really do not know.

In the rest of your post it seems to me that you are backtracking, speculating about the details, and struggling for something, because something inside you seems to have collapsed. That is perfectly fine, but I will not address it now, since others still do not admit that something has collapsed. In fact, read below to see where we stand. (I also want further comments on generalization, but I think it is not the right time yet.)



I have to admit that I don't have the slightest idea what you are trying to say. Furthermore, I can't imagine what is so difficult to understand about my last posts.

dimis
Responsible Supreme Hero (Digitally signed by FoG)
posted August 15, 2009 09:55 PM
Edited by dimis at 01:54, 16 Aug 2009.

Optimal Brain Damage

First of all, just for the fun of it: to the question
"What is your move, gentlemen?"
nobody resigned. But of course, although this wins the argument, it propagates the problem to another level.



Rarensu, you base intelligence on the amount of input signals and, from what I understand, on some biological procedure. What about a man who at some point in his lifetime has an accident and is blind from that point on for the rest of his life? Is his intelligence affected because he can no longer see? That is one thing.

The other thing is that if you allow more signals and these are irrelevant, then, after training, they will all have weights very close to zero, or ideally zero; in other words, it will be discovered that these factors are irrelevant to the real problem at hand. What changes is the length of time for which the NN requires training. But this (more time) is not what you are saying, and it actually addresses something different.
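This claim is easy to check numerically. In the sketch below (plain gradient descent, no libraries; the setup is invented for illustration), the target depends only on x1 while x2 is pure noise; after training, the weight on x2 sits near zero while the weight on x1 finds the true coefficient.

```python
import random

rng = random.Random(0)
# Target is exactly 2*x1; x2 is an irrelevant input the model also receives.
samples = [((x1, x2), 2.0 * x1)
           for x1, x2 in ((rng.uniform(-1, 1), rng.uniform(-1, 1))
                          for _ in range(200))]

w1 = w2 = 0.0
lr = 0.1
for _ in range(200):                         # a couple hundred passes
    for (x1, x2), y in samples:
        err = (w1 * x1 + w2 * x2) - y        # squared-error gradient step
        w1 -= lr * err * x1
        w2 -= lr * err * x2
# Now w1 is close to 2.0 and w2 is close to 0.0: the irrelevant
# input has, in effect, been discovered to be irrelevant.
```

The extra input costs training time, as said above, but it does not poison the result; the training itself drives its weight toward zero.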

Here I can probably make another comment. Is playing chess against a computer the same as playing against a human at the same table? Or, to go one step further, is playing chess online against a friend of yours the same as playing when your friend sits at the same table as you?
My experience is that the answer is no, because there are other elements involved, like psychology, emotions...

But anyway, in order to address the challenge in an easy way, and to back up the claim that irrelevant inputs will be recognized as irrelevant, for now I have a reference with the funny title Optimal Brain Damage (read at least the abstract), which gives this post its title.

Finally, let's try not to restrict the discussion to NNs, because primarily they are not the subject; we want to talk about intelligence.



JJ I will make another post to clarify.
____________
The empty set

dimis
Responsible Supreme Hero (Digitally signed by FoG)
posted August 16, 2009 12:23 AM

Generalization revisited

Well, I was a little bit confused by the gambit example, so I will start from there. I do understand the example that you give from experience, the justification, and the generalization that you make, but I am not sure whether this is intelligence, or at least whether it is necessarily of good form, and I will clarify immediately.

Keeping to the gaming language: we have another rule which says that the best way to refute a gambit is to accept it. Isn't this a rule that conflicts with the abstract lesson the human derived after losing the game because he accepted the gambit? And what if I generalize the rule "the best way to refute a gambit is to accept it" to other manifestations of life? In chess in particular, I think which rule will be applied in the end also depends on your emotions on the day of the game (at least for the non-standard gambits that you play as part of your opening repertoire). Your emotions, at least partially, will convince you that you are doing the right thing when you grab that extra pawn during the game. This is one thing, and I am trying to say that my experience has taught me that every now and then there are exceptions to the rule, in the sense of over-generalizing: it is not always good to try to find a pattern between two matters/problems that are unrelated (at first sight, and perhaps eventually), because it leads you to wrong conclusions. Perhaps this is intelligence, but it is not necessarily good. On the other hand, so that you don't get me wrong, I strongly support generalizing a specific problem, because by generalizing you arrive at conclusions faster, by speculating about what seem to be the important ingredients of the problems.

Then there is the argument about the baby, for which my intuition is clearly more limited. Well, when I consider babies learning languages, what I feel is also some sort of (silly? maybe...) admiration for "the mystery of life", whatever the phrase "the mystery of life" is supposed to mean. And that admiration is an emotion, and it might interfere with my judgement no matter how hard I try to avoid it. This is one thing.
Another thing is that a child also tries to imitate its surroundings, and in particular its parents, in the ability to talk. So initially you have all those incomprehensible sounds, and at some point the teachers (the parents) spend time face-to-face with the child and try to teach the language by excessive repetition of the same words. This brings us to the concept of learning (which I have tried to avoid). But so far I don't see a generalization technique. In fact, I see a very "low level" interaction which depends heavily on the child's ability to hear and see, on its memory, and to some extent on its emotions; in other words, it depends on some biological process, on the training examples given by the parents, and on the amount (and quality) of "inputs" (vision, hearing, and so on). So one question is: where is the generalization? Another thing is that this example also incorporates (to my understanding) some biological process; in other words, it should be a good example of human intelligence. But essentially we want the gist, because in the end we want to talk about the title of the thread, which is artificial intelligence.
And this is why I don't get what urged you to stop there, since others are still attacking with that thing in mind, and there is an influence of some biological process in their arguments.


So, what am I trying to say? That we are looking for a definition. But we have that definition already. It is just that nobody has attacked the definition yet. So, I believe, we should all go back and read again the answers given by McCarthy on the previous page.

I hope this is a satisfactory answer to your question, JJ.
____________
The empty set

Rarensu
Known Hero (Formerly known as RTI)
posted August 16, 2009 12:23 AM
Edited by Rarensu at 00:29, 16 Aug 2009.

Quote:
Rarensu, you base intelligence on the amount of input signals and, from what I understand, on some biological procedure. What about a man who at some point in his lifetime has an accident and is blind from that point on for the rest of his life? Is his intelligence affected because he can no longer see? That is one thing.

The other thing is that if you allow more signals and these are irrelevant, then, after training, they will all have weights very close to zero, or ideally zero.

For the first thing, I have no idea what you think I was thinking; at least one of us has failed to communicate. What does blindness have to do with the input neurons of an NN?

For the second thing, you have successfully shown how to throw away an input. But this does not show how to choose a list of inputs that might be meaningful. I want you to talk about how a program could decide what is likely to be meaningful before it begins working on the problem. Otherwise the program starts with a theoretically infinite number of inputs and then it doesn't matter how good your algorithms are, no progress will be made.

PS. I deeply and truly believe that McCarthy's definition is worthless.
____________
Sincerely,
A Proponent of Spelling, Grammar, Punctuation, and Courtesy.

TheDeath
Responsible Undefeatable Hero (with serious business)
posted August 16, 2009 03:01 AM

Quote:
This is not strictly accurate. While a neural network simulation is being created, there is another program training it. After this initial training is complete, the neural network simulation runs perfectly well without retraining itself. You can, of course, make a continuously self-retraining neural network, but that is not required to fit the definition.
But the machinery that runs the neural network is like the "laws" of chemical bonds in a real brain. If you think about it, yes, it is an algorithm, but the universe may have an algorithm itself: what we call the laws of physics. So the algorithm in the neural nets tries to approximate the laws that operate in a brain.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Rarensu
Known Hero (Formerly known as RTI)
posted August 16, 2009 06:22 AM

Quote:
But the machinery that runs the neural network is like the "laws" of chemical bonds in a real brain. If you think about it, yes, it is an algorithm, but the universe may have an algorithm itself: what we call the laws of physics. So the algorithm in the neural nets tries to approximate the laws that operate in a brain.

? I am confused. What does this have to do with self-training NN sims?
____________
Sincerely,
A Proponent of Spelling, Grammar, Punctuation, and Courtesy.

JollyJoker
Honorable Undefeatable Hero
posted August 16, 2009 10:10 AM

@ dimis
Yes, the answer is satisfactory, although I disagree.

1) The gambit. That was a specific example I mentioned just as an example of a "learning" chess machine. What I simply said is that the machine will just store away the sequence of turns for further reference, either to play a winning strategy itself or to play a different sequence of moves next time.
That's not learning; that's "storing".
Learning is a bit more complex than storing, at least intelligent learning. It is understanding the underlying principle, with the aim of being able to use what was learned in every situation where it might be used (and with the best effect in those situations that are not obvious).

2) Finding that "underlying principle" IS intelligence. That's what I mean by abstraction and you by generalization, I suppose. I think that is happening as well when a child learns a language. Learning a language is no linear process. It takes a pretty long time until something within the brain makes the connection that sounds are more than just sounds when uttered by that nice warm thing that sounds like "mama", but once that connection is made, it becomes a torrent, because it becomes a general principle: if one specific sound stands for this, then another specific sound may stand for that - and suddenly everything falls into place.

I think that ability to generalize, to reduce things to their abstract nature, THAT is intelligence.
Which is the reason why it is possible to program a very decent chess AI, but not a Heroes AI. Heroes needs a lot of "reduction", because you have to identify priorities depending on map specifics, opponents and starting position; you need to grasp the underlying principles of the game to be able to apply them to any combination of map, opponents, starting position and victory condition.
In other words, you need real intelligence instead of specific algorithms that can be executed.
That's a fundamentally different thing.

TheDeath
Responsible Undefeatable Hero (with serious business)
posted August 16, 2009 05:12 PM

Quote:
? I am confused. What does this have to do with self-training NN sims?
Ah, I misunderstood it then. Most neural nets, however, exploit the fact that they can be easily reprogrammed (i.e. they learn), so most are re-trainable. Like recognizing your voice (most speech recognition improves with time as it learns your voice patterns; however, you can of course "reset" it if you want to).
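A toy version of that adapt-and-reset behaviour (everything here is invented for illustration; real speech recognizers are far more elaborate): the recognizer nudges its stored pattern toward each new sample it hears, so it tracks the speaker over time, and reset() forgets the adaptation.

```python
class AdaptiveRecognizer:
    def __init__(self, templates):
        self.initial = {w: list(t) for w, t in templates.items()}
        self.templates = {w: list(t) for w, t in templates.items()}

    def recognize(self, sample, learn=True):
        # Nearest stored template wins (squared distance).
        word = min(self.templates,
                   key=lambda w: sum((a - b) ** 2 for a, b in
                                     zip(self.templates[w], sample)))
        if learn:  # drift the winning template toward this speaker
            t = self.templates[word]
            for i, v in enumerate(sample):
                t[i] += 0.2 * (v - t[i])
        return word

    def reset(self):
        """Throw away everything learned since construction."""
        self.templates = {w: list(t) for w, t in self.initial.items()}

r = AdaptiveRecognizer({"yes": [1.0, 0.0], "no": [0.0, 1.0]})
first = r.recognize([0.9, 0.3])   # matches "yes"; template drifts toward it
```

Each call with `learn=True` retrains a little; passing `learn=False` gives the frozen behaviour discussed earlier in the thread.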
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

dimis
Responsible Supreme Hero (Digitally signed by FoG)
posted August 17, 2009 12:18 AM

Tackling infinity

Quote:
For the first thing, I have no idea what you think I was thinking; at least one of us has failed to communicate. What does blindness have to do with the input neurons of an NN?
Because, Rarensu, your senses generate most of the input for your brain. And I am asking: let me remove one of them. (Actually, technically, it seems that I stabilize that input at a pre-specified blind (black?) value.) And the question is: can we say anything about this man's intelligence from now on, and if so, what?

Quote:
For the second thing, you have successfully shown how to throw away an input. But this does not show how to choose a list of inputs that might be meaningful. I want you to talk about how a program could decide what is likely to be meaningful before it begins working on the problem. Otherwise the program starts with a theoretically infinite number of inputs and then it doesn't matter how good your algorithms are, no progress will be made.
First of all, everything in this world seems to be finite, and actually discrete. So, in particular, our brains have a finite (integer) number of inputs. Part of those inputs is filled in with values (signals) that our senses send, and part from internal mechanisms of the brain. Regardless of how this is split, when you face a problem, e.g. reading and understanding the meaning of a sentence like this one, you do not really have to eliminate an infinity of inputs. Therefore, I don't understand why you should have such a strong requirement for artificial approaches. Moreover, by the same reasoning (finite objects), we have an easy answer (at least for this question) to how we humans decide what is meaningful; in other words, to the elimination of irrelevant input.

However, I think your argument primarily addresses the mechanics of NNs, and it is not a real argument about what you consider to be intelligence. So, if you want to talk about that part, be my guest, but then it is as if we have agreed on intelligence and are trying to figure out how to implement artificial intelligence efficiently.

Finally, I will just state here, without a reference yet, that there are also results on how to select the best candidate algorithm for a specific task among an infinite pool of possible algorithms (countably infinite). So, in particular, I wouldn't be surprised if somebody had worked on your question above and come up with a solution for tackling even an infinity of inputs in order to determine which ones are important and which ones are not. The idea that I have in mind has to do with the fact that not every infinite series diverges; many of them converge, as you already know. Moreover, I don't think that you really have to throw away all the irrelevant inputs. It should be enough if the relevant inputs have more impact than the irrelevant ones (even though the latter can again be infinite).

But as I said above, I believe that this kind of discussion is about the mechanisms and how you actually implement artificial intelligence. Is this what we want at the moment?

Quote:
PS. I deeply and truly believe that McCarthy's definition is worthless.
Attack the definition then, or give us something better.
____________
The empty set
