Heroes of Might and Magic Community
Thread: Artificial Intelligence
TheDeath


Responsible
Undefeatable Hero
with serious business
posted June 27, 2009 05:36 PM

Of course not -- I'm sorry if that's what my post implied. Learning, however, is an essential part of intelligence. We have to start with the basics, don't we?
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

JollyJoker


Honorable
Undefeatable Hero
posted June 27, 2009 07:23 PM

Agreed without any buts.

dimis


Responsible
Supreme Hero
Digitally signed by FoG
posted August 12, 2009 11:05 AM bonus applied by angelito on 13 Aug 2009.
Edited by dimis at 11:17, 12 Aug 2009.

Trying to get my thoughts together ...

I really wanted to participate in this thread for a long time now, but there are so many other things that I feel should be mentioned, that I kept postponing it. In any case I apologize because it is a long post. These are just some thoughts of a guy who has worked on AI, and is working on AI at the moment.



Regarding the first question
Quote:
(1) Can AIs become self-aware?
first, let's define what "self-aware" means. Unfortunately, this is not as easy as it may seem, because in the end we will argue about the very nature of "self-awareness". But I will leave this aside for the moment. As for the rest of the questions, they depend heavily on this one, so I will keep my mouth shut; at least until we agree on what "self-awareness" is.

At the moment, let me make a step back.



What is AI?

In all honesty, let's go and ask the guy who coined the term in this world.

So, Professor John McCarthy, What is AI?

Now, let me quote a significant portion of the link above, since it will be good to have some fundamental questions and answers right here. I will also write in italics some key (in my opinion) phrases in the answers.
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.

Q. Isn’t there a solid definition of intelligence that doesn’t depend on relating it to human intelligence?
A. Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.

Q. Is intelligence a single thing so that one can ask a yes or no question “Is this machine intelligent or not?”?
A. No. Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered “somewhat intelligent”.

Q. Isn’t AI about simulating human intelligence?
A. Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.

Q. What about IQ? Do computer programs have IQs?
A. No. (Further information at the link above.)

Q. What about other comparisons between human and computer intelligence?
A. (Further information at the link above.)

Q. When did AI research start?
A. After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines.

Q. Does AI aim to put the human mind into the computer?
A. Some researchers say they have that objective, but maybe they are using the phrase metaphorically. The human mind has a lot of peculiarities, and I’m not sure anyone is serious about imitating all of them.

Q. What is the Turing test?
A. Alan Turing’s 1950 article Computing Machinery and Intelligence [Tur50] discussed conditions for considering a machine to be intelligent.
(Further information at the link above, but also later on in this post.)

Q. Does AI aim at human-level intelligence?
A. Yes. The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans. However, many people involved in particular research areas are much less ambitious.

Q. How far is AI from reaching human-level intelligence? When will it happen?
A. A few people think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are now writing and assembling vast knowledge bases of facts in the languages now used for expressing knowledge.
However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human-level intelligence will be achieved.

Q. Are computers the right kind of machine to be made intelligent?
A. Computers can be programmed to simulate any kind of machine.
(Further information at the link above.)

Q. Are computers fast enough to be intelligent?
A. Some people think much faster computers are required as well as new ideas. My own opinion is that the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster.

Q. What about parallel machines?
A. Machines with many processors are much faster than single processors can be. Parallelism itself presents no advantages, and parallel machines are somewhat awkward to program. When extreme speed is required, it is necessary to face this awkwardness.

Q. What about making a “child machine” that could improve by reading and by learning from experience?
A. This idea has been proposed many times, starting in the 1940s. Eventually, it will be made to work. However, AI programs haven’t yet reached the level of being able to learn much of what a child learns from physical experience. Nor do present programs understand language well enough to learn much by reading.

Q. Might an AI system be able to bootstrap itself to higher and higher level intelligence by thinking about AI?
A. I think yes, but we aren’t yet at a level of AI at which this process can begin.

Q. What about chess?
Q. What about Go ?
A. (Further information at the link above, but I will come back to games below.)

Q. Don’t some people say that AI is a bad idea?
A. The philosopher John Searle says that the idea of a non-biological machine being intelligent is incoherent. He proposes the Chinese room argument www-formal.stanford.edu/jmc/chinese.html The philosopher Hubert Dreyfus says that AI is impossible. The computer scientist Joseph Weizenbaum says the idea is obscene, anti-human and immoral. Various people have said that since artificial intelligence hasn’t reached human level by now, it must be impossible. Still other people are disappointed that companies they invested in went bankrupt.

Q. Aren’t computability theory and computational complexity the keys to AI? [Note to the layman and beginners in computer science: These are quite technical branches of mathematical logic and computer science, and the answer to the question has to be somewhat technical.]
A. No. These theories are relevant but don’t address the fundamental problems of AI.
(Further information at the link above, but unfortunately, at some point we might have to come back to this one and be more technical.)




The Essence of Modern AI - Part I

Perhaps the shortest route to the essence of modern (current) AI is to look at some of the examples where people have applied it; and there are plenty of them. Some examples are given in the link of McCarthy above.

However, the main application of current AI is found in the notion of (efficient) Searching (which is camouflaged, primarily, as "game playing" in McCarthy's link above). Examples that fall under this concept are all sorts of games like checkers, tic-tac-toe, chess, Go, solving Rubik's cube ... Other applications of the same notion are solving/generating crosswords, solving/generating Sudokus (sic), all sorts of constraint satisfaction problems (e.g. the crew assignment problem: how to distribute and consistently assign pilots and air hostesses to various flights), and the list goes on and on ...

Now, I could stop right here and say that basically this is it: searching. It wouldn't be a lie, since this is the area that has the major impact on modern industry, and that is how people usually perceive the "advances" in the sciences. However, as you probably already know, this kind of reasoning is almost b/s as a way to classify how "deep" or "fundamental" a theorem or a method is. And let's not forget the existence of negative results (e.g. the famous Gödel's Incompleteness Theorem) which, by their very nature, cannot be applied; instead they prohibit applications. Once again though, the thoughts are getting out of hand ...

So, allow me another step back.



Computer Science in a Nutshell

So, what is Computer Science about? Besides, AI is considered primarily just a branch of Computer Science; so, there has to be some common background between the two.

In order to be as brief and simple as possible, Computer Science has to do mainly with

* Algorithms and their Complexity.

"Algorithms" basically means (unambiguous) procedures that perform certain tasks and "Complexity" basically classifies these procedures according to their efficiency (i.e. how fast is the job done). A very interesting read here (at least the first seven pages are recommended ...) is Computer Science and Its Relation to Mathematics, by Donald E. Knuth.
Another ingredient that is hidden here is the notion of Data Representation, which is what algorithms use as cornerstone silently.

Anyway, let's stop the backtracking, and make a step forward this time for a change.



The Essence of Modern AI - Part II

So, what do we mean by (efficient) "Searching"?
The easy part is clearly the word "efficient"; it has to do with the "Complexity" of "Searching".
The tough part is the word "Searching". It basically means a method that finds a solution to some sort of problem (go figure ...). Vague as this may seem, this is it. Usually, the process that is used to accomplish the goal (find a/the solution) is well defined. This is directly translated into some algorithm, since the whole process is mechanical and, in principle, relies on some sort of computation.

But the purpose of algorithms, in the sciences and in general, is essentially to provide solutions to problems, and typically one has to perform some sort of "search" among candidate solutions. However, this would imply that all sorts of "search problems" fall under the umbrella of AI; in other words, almost everything that the heart of Computer Science touches is part of AI. Now, on one hand, this is again not far from the truth, because there are general techniques in AI that can be applied to almost any problem that comes to my mind at the moment (with the appropriate representation). However, many such problems are traditionally thought of as not belonging to AI. Let me give you an example. Compare the following two problems:

(1) Sort a list of 5 integers in ascending order.
(2) Given a boolean formula (e.g. (x1 or (not x5)) and x2), is it satisfiable, or not? Can we assign truth values to the variables x1, x2, x5, so that the entire "thing" is True? Think about bigger formulas; can you think of an easy way out?

These problems have a different flavor. The second can be classified as an AI problem, but the first one? Almost nobody will tell you that it is an AI problem, although we can solve it with general (AI) search techniques. And the reason is the computational efficiency with which we can solve each problem in its generality. And right at this point we have touched the word "efficiency", or in other words, complexity.
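
To make the contrast concrete, here is a minimal Python sketch (my own illustration, not taken from any of the papers above) of the brute-force way to attack problem (2): try every assignment. With k variables there are 2^k assignments, which is exactly why satisfiability feels so different from sorting five integers.

from itertools import product

def satisfiable(formula, variables):
    # Try every one of the 2^k truth assignments; return a satisfying one, or None.
    # `formula` is any function mapping an assignment dict to True/False.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# The formula from problem (2): (x1 or (not x5)) and x2
f = lambda a: (a["x1"] or not a["x5"]) and a["x2"]
print(satisfiable(f, ["x1", "x2", "x5"]))   # finds e.g. {'x1': False, 'x2': True, 'x5': False}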

Perhaps it is time to make once more a step back and observe.



Computational Complexity and P vs. NP

As we said earlier, Complexity tries to capture the effectiveness of methods (primarily in terms of time) used to solve problems. To put it differently,
Complexity tries to classify problems according to their (seeming or inherent) difficulty. The two best-known classes of problems are P and NP. Problems whose solutions we can "efficiently" verify are said to belong to NP, while if we additionally have the ability to generate a solution efficiently, the problem belongs to P. So, for example, *both* problems (1) and (2) above belong to NP, because we can easily verify solutions if someone presents them to us. Moreover, (1) is in P because we can also compute a solution fast. But what about (2)? The truth is that we don't know. Of course I haven't defined "efficient/fast", but this can probably wait for another post.
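
A quick sketch of the "easy to verify" half of that definition (illustrative Python; the clause representation is my own choice): checking a proposed answer to either problem takes a single pass, even though *finding* an answer to (2) may not.

def verify_sorted(xs):
    # Problem (1): checking that a proposed ordering is correct is one linear pass.
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def verify_assignment(clauses, assignment):
    # Problem (2): checking a proposed truth assignment against a formula in CNF
    # (a list of clauses, each a list of (variable, is_negated) pairs) is also fast.
    return all(any(assignment[var] != neg for var, neg in clause) for clause in clauses)

# (x1 or (not x5)) and x2, written as two clauses:
cnf = [[("x1", False), ("x5", True)], [("x2", False)]]
print(verify_sorted([1, 2, 3, 5]))                                     # True
print(verify_assignment(cnf, {"x1": False, "x2": True, "x5": False}))  # True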

Let's make another step forward again.



What is all this fuss about game playing and searching?

Is it a coincidence that the most famous applications of AI so far are related to game playing and what we broadly call "search problems"? In my opinion, no. And probably the easiest justification is the heritage that we have of well-understood problems, as well as the (mathematical) principles that we apply in game playing. From this perspective, it is reasonable that most attempts are made in this direction, in some cases with the guidance of experts in various fields. The role that experts play here (e.g. grandmasters when programming a chess program) is that they try to quantify their intuition and describe the key ingredients of certain positions that lead them to prefer some positions over others. This intuition is what people try to encode in heuristics. But again, this becomes overly specific and we are prone to miss the big picture.



What about the other applications of AI?

Again, at the risk of oversimplification, other applications are in a sense a way of searching, plus mainly applications of Logic. It is just that we apply searching to some not-so-well-understood concepts; e.g. create a robot that learns by reading some human language. In general, people who work on AI actually work on problems that are conjectured to be hard, or sometimes very hard. And in most cases they just build up ingenious solutions for some simple cases, so they have guarantees in some cases, but not in general.



What is learnable?

Anyway, another key word that appeared above is "learning", which brings us to a seminal paper in the Theory of Machine Learning, by Leslie Valiant, A Theory of the Learnable, also known as PAC-Learning. PAC comes from the initials of Probably Approximately Correct. The justification for the title is (modulo again some technical details) that once the "training" phase of the learner is over, the learner will be able, with high probability (hence 'probably'), to predict a correct or almost correct answer (hence 'approximately correct') during a subsequent "testing" phase (there are very simple examples if you feel like going through some). Of course, under the general term Machine Learning there are other concepts too; e.g. artificial neural networks, support vector machines, boosting, ...
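
For a flavor of what "probably approximately correct" means, here is a toy Python sketch of the textbook threshold-learning example (my own illustration, not Valiant's notation; the sample-size formula is the standard one for this particular toy concept only).

import math
import random

def pac_learn_threshold(theta, epsilon, delta):
    # Unknown target concept: c(x) = 1 exactly when x >= theta, for x in [0, 1].
    # Draw enough uniform labelled samples, then output the smallest positive one.
    m = math.ceil((1 / epsilon) * math.log(1 / delta))   # samples sufficient for this toy concept
    samples = [random.random() for _ in range(m)]
    positives = [x for x in samples if x >= theta]
    return min(positives) if positives else 1.0

random.seed(0)
h = pac_learn_threshold(theta=0.37, epsilon=0.05, delta=0.01)
# With probability at least 1 - delta, the region where h disagrees with the
# target (the interval [theta, h)) has probability mass at most epsilon.
print(h)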

So far, I have focused on well examined territories and I have probably been kind of unfair by neglecting to refer to some branches of AI clearly because of my interests. However, I think that I should also mention at least one of the most recent trends in AI.

So, let's make a step beyond; finally!



Evolvability

In 2007, Valiant struck again with his paper Evolvability. Basically he sets up a learning framework for evolution, or if you prefer, he tries to make precise statements of the general observations that Darwin made in his theory of evolution. In a nutshell, the whole process is, at first sight, a restricted version of what are called genetic algorithms; another technique used in AI but not mentioned so far (which is close to, but different from, genetic programming in McCarthy's paper above). The idea is that mutations arise naturally in an organism according to its "fitness" in the environment. If anybody feels like it, we can talk about this one too. There are already positive and negative results here.
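
For readers who have not met genetic algorithms before, here is a bare-bones mutation-plus-selection loop in Python (a generic illustration of the idea, not Valiant's evolvability model, which is far more restricted).

import random

def evolve(fitness, genome_len=20, pop=50, generations=200, mut_rate=0.05):
    # Keep the fitter half of a population of bit-strings, refill with mutated copies.
    population = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]
        children = [[bit ^ (random.random() < mut_rate) for bit in random.choice(survivors)]
                    for _ in range(pop - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

# Toy fitness: the number of 1-bits ("OneMax").
random.seed(1)
best = evolve(fitness=sum)
print(sum(best))   # typically 20, i.e. the all-ones genome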



Epilogue

I hope that you found the journey interesting. In closing, it would have been an omission on my part not to mention what is probably the first paper ever written on Artificial Intelligence, and perhaps the most appropriate one for this discussion. That is Computing Machinery and Intelligence, by the greatest guy Computer Science ever had, Alan Turing. If nothing else, this is something that you should read (the Theological Objection is also addressed there).
Finally, I will close with a seemingly innocent quote from Turing's paper (p. 15 / p. 446) which I think addresses the heart of the problem of "self-awareness":
Quote:
... Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ' A thinks but B does not ' whilst B believes ' B thinks but A does not '. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.

____________
The empty set

Warmonger


Promising
Legendary Hero
fallen artist
posted August 12, 2009 11:37 AM
Edited by Warmonger at 11:37, 12 Aug 2009.

As a beginning programmer I'm very interested in AI and want to develop it efficiently. Thank you for the great reference.

The primary problem here is to capture the way humans properly think, and the secondary one is to translate those rules into numbers, as numbers are all computers can process. Formulas, algorithms, logic, probability: they all operate on very basic rules, and it is not clear how to reproduce complex behaviour from them. I do believe the first step toward creating a thing we can truly call an intelligence is to understand the way of our own thinking.

Corribus

Hero of Order
The Abyss Staring Back at You
posted August 12, 2009 03:44 PM
Edited by Corribus at 19:50, 12 Aug 2009.

That was a nice post, dimis.  I didn't have time to fully digest it, but I wanted to respond briefly to this:

Quote:
However, most AI researchers believe that new fundamental ideas are required [to obtain AI intelligence], and therefore it cannot be predicted when human-level intelligence will be achieved.

As I understand it, currently all we can basically do is feed into a computer what is essentially an instruction, and then the computer does the computation and spits out the result.  After all, we still call it a "computer" - something that computes.  To my mind, this isn't intelligence, or anywhere near it.  This is... following a recipe.  Certainly, technology has given us the ability to make the input instructions more and more complicated, and to increase the computation speed, but I'm not sure I would qualify any of this as real intelligence.  Certainly human brains are more than simple computation devices, aren't they?

I guess what I'm saying is that as it stands, artificial intelligence relies upon human intelligence.  Without human intelligence to input the instructions, there is no computation.  In that sense, computers (now) do not think.  They are merely tools, extensions of our own minds, whose purpose is to solve our problems quicker than we could on our own.  They have no independence.

So, I feel it is true that for real "AI" to come along, new fundamental ideas are required.  In other words, merely increasing the computation speed more and more will not lead to some magical point where the computer becomes "intelligent".  I don't think the line is arbitrary or based upon some artificial standard of "complexity", at least as it pertains to the complexity of instructions the device can handle.

But then again, I'm no expert.


____________
I'm sick of following my dreams. I'm just going to ask them where they're goin', and hook up with them later. -Mitch Hedberg

JollyJoker


Honorable
Undefeatable Hero
posted August 12, 2009 04:07 PM

Fully agreed, Corribus, and, in my humble opinion, a QP-worthy post, dimis.
Reminded me of the tragic fate of Alan Turing - a shame, really. A fine service his country did him after everything he achieved in the war.

TheDeath


Responsible
Undefeatable Hero
with serious business
posted August 12, 2009 07:27 PM

Quote:
As I understand it, currently all we can basically do is feed into a computer what is essentially an instruction, and then the computer does the computation and spits out the result.  After all, we still call it a "computer" - something that computes.  To my mind, this isn't intelligence, or anywhere near it.  This is... following a recipe.  Certainly, technology has given us the ability to make the input instructions more and more complicated, and to increase the computation speed, but I'm not sure I would qualify any of this as real intelligence.  Certainly human brains are more than simple computation devices, aren't they?

I guess what I'm saying is that as it stands, artificial intelligence relies upon human intelligence.  Without human intelligence to input the instructions, there is no computation.  In that sense, computers (now) do not think.  They are merely tools, extensions of our own minds, whose purpose is to solve our problems quicker than we could on our own.  They have no independence.

So, I feel it is true that for real "AI" to come along, new fundamental ideas are required.  In other words, merely increasing the computation speed more and more will not lead to some magical point where the computer becomes "intelligent".  I don't think the line is arbitrary or based upon some artificial standard of "complexity", at least as it pertains to the complexity of instructions the device can handle.

But then again, I'm no expert.
That is correct, Corribus, because dimis's article and his post used a broader definition of intelligence. However, to make human-like intelligence we'll need to make machines think. Thinking does require a fundamentally different approach.

For example, it is well known that even simple Neural Networks are very, very good at detecting patterns. I mean, even cellphones or iPods or other portable stuff like that, with their weak CPUs, have them! (remember when it learns your voice, or your handwriting, etc... it learns to recognize patterns). Recognizing patterns is by itself difficult to do in an algorithm, but making the algorithm learn and adapt is remarkable.
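
For a concrete taste of that "pattern learning" point, here is about the simplest possible sketch in Python: a single artificial neuron learning the AND pattern from labelled examples (a toy illustration only, nowhere near the networks in phones).

def train_perceptron(samples, epochs=20, lr=0.1):
    # One artificial neuron: weighted sum, threshold, and the classic error-correction rule.
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - output
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy pattern to learn: the logical AND of two inputs.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(and_samples))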

However, this still isn't human-like intelligence. To get the real thing, we need to make them THINK.

I don't know if more neurons (and implicitly more processing power) can solve this, but I think that we'll need some kind of self-adaptive feedback on the neurons themselves, rather than just exterior data. That is, the neurons shouldn't be fed ONLY external data (e.g. the five senses) but also a HUGE amount of feedback (possibly infinite!) on themselves... and this is feedback over long periods of time, not just the short ones usually used (to recognize patterns).

Well for both we'll certainly need more processing power, that's for sure.

However, it is difficult to predict how AIs will behave. I mean, they may not have OUR emotions (but other emotions or none at all), so it's natural for humans to say "burn the 'witch'"; after all, humans always discriminate. Gah, I'm ranting again, so I'll stop.

Also, I disagree about genetic algorithms. They are totally not intelligence: they may be useful for evolutionary simulations but have little significance for real AI -- on the other hand, they can be a good compromise for underpowered computers, but if we finally get a computer with enough complexity we could just use "the real thing" instead. In short, genetic algorithms are just a "workaround", not "the real thing". Neural Nets are probably as close as one can get to "the real thing".
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

dimis


Responsible
Supreme Hero
Digitally signed by FoG
posted August 13, 2009 07:43 AM
Edited by dimis at 08:14, 13 Aug 2009.

Knight's Tour + random thoughts

True, a computer is something that computes, follows instructions blindly, is a tool and in general a blind servant. But what do we really mean by these two words combined
1) artificial
2) intelligence
?
Let me give an example of how complex and fuzzy the answer can be. For the moment, let us just leave computers on the side; let us stay at the human level. But you really have to try what I am about to propose, and in the end a smile will shine on your face (I hope).




Knight's Tour Problem
Let's look at a problem with a long history in chess. It is called the Knight's Tour Problem, and the purpose is to traverse the entire 8x8 chessboard with a knight so that the knight lands precisely once on each square of the board. Of course the chessboard is 8x8 since this is how chess is played, but let's drop that restriction and try to attack this problem on an nxn chessboard. For those who do not know, we need to explain how the knight "moves" on the chessboard. So,
knight's manoeuvre: A knight's move is decomposed into two phases. During the first phase we count 2 squares horizontally or vertically (in some direction), and then during the second phase we count one square in a direction perpendicular to the one we followed during the first phase. Of course the move is legitimate only if we end up on a square of the board; i.e. we are not allowed to cross the boundaries of the board, and the board does not "wrap around".
Ok, example. Let's place a knight N in the middle of a 5x5 board and mark with s1, s2, ..., s8 the squares where the knight can move in his next step:

|----|----|----|----|----|
|    | s8 |    | s1 |    |
|----|----|----|----|----|
| s7 |    |    |    | s2 |
|----|----|----|----|----|
|    |    |  N |    |    |
|----|----|----|----|----|
| s6 |    |    |    | s3 |
|----|----|----|----|----|
|    | s5 |    | s4 |    |
|----|----|----|----|----|

So, for example, we can move to s1 by counting 2 squares to the north and then counting one square towards the east. Of course, during that second phase we could instead count 1 square towards the west and the knight would end up in s8. Similarly for all the other squares. Hence, s1, s2, s3, ..., s8 are all "candidate" squares where the knight is allowed to be placed in the next step; going to one of them constitutes a move by the knight. Of course, if the knight is closer to the edge of the board the options are limited. For example, in the following position (the knight is N), we have only 3 available options for our next move:

|----|----|----|----|----|
|    |    |    |    |    |
|----|----|----|----|----|
|    |    |    |    |    |
|----|----|----|----|----|
| s3 |    | s1 |    |    |
|----|----|----|----|----|
|    |    |    | s2 |    |
|----|----|----|----|----|
|    |  N |    |    |    |
|----|----|----|----|----|

I think this is enough with examples. Let's see the problem.

Problem: Given an nxn board (grid) and a starting position for the knight, find a sequence of n^2-1 moves that visits a different square each time (so every square is visited exactly once).

Let's start with some easy cases and examples of what the problem is.
5x5 board
Let's have a look at the solution given below for a 5x5 board, starting from the middle square:

|----|----|----|----|----|
| 25 |  4 | 15 | 10 | 23 |
|----|----|----|----|----|
| 14 |  9 | 24 |  5 | 16 |
|----|----|----|----|----|
|  3 | 18 |  1 | 22 | 11 |
|----|----|----|----|----|
|  8 | 13 | 20 | 17 |  6 |
|----|----|----|----|----|
| 19 |  2 |  7 | 12 | 21 |
|----|----|----|----|----|

Let's see another solution starting from a different square:

|----|----|----|----|----|
| 25 |  2 | 13 |  8 | 19 |
|----|----|----|----|----|
| 12 |  7 | 18 |  1 | 14 |
|----|----|----|----|----|
| 17 | 24 |  3 | 20 |  9 |
|----|----|----|----|----|
|  6 | 11 | 22 | 15 |  4 |
|----|----|----|----|----|
| 23 | 16 |  5 | 10 | 21 |
|----|----|----|----|----|


I believe these two examples are enough to make the problem precise. Now try on your own to solve a 6x6 board; say the knight is placed on the north-western-most square.
How long did it take you to solve the problem? Do you feel confident enough to move to another "level"? What about a 7x7 board? 8x8, which was originally the real thing? What about 10x10? What about 20x20? What about 50x50? And I can keep on asking ...

Please try to spend at least 20-30 minutes on the 6x6 or a higher-dimension problem. If your wife or a friend of yours is nearby, even better. Try together! It really is mandatory, because I am about to give you a killing machine once you do that. Now try to solve the 6x6 case, and only after at least 20 minutes have passed keep on reading.





























Hint: Now it is time for the hint. I hope you have been honest and tried the problem, because you will appreciate the hint now. Next time you try to find a solution, follow this strategy. At each step, place the knight on the square that gives the knight the minimum number of available choices at the next step (if there is more than one square achieving the minimum, pick one at random). Sounds complicated, so let's see how my first example above started:

|----|----|----|----|----|
|    | s1 |    |    |    |
|----|----|----|----|----|
|    |    | s2 |    |    |
|----|----|----|----|----|
|  3 |    |  1 |    |    |
|----|----|----|----|----|
|    |    | s3 |    |    |
|----|----|----|----|----|
|    |  2 |    |    |    |
|----|----|----|----|----|

After two moves with the knight we reach the square marked with 3, where we have 3 available options s1, s2, and s3.
Let's look at s1:

|----|----|----|----|----|
|    | s1 |    |    |    |
|----|----|----|----|----|
|    |    |    |  x |    |
|----|----|----|----|----|
|  3 |    |  1 |    |    |
|----|----|----|----|----|
|    |    |    |    |    |
|----|----|----|----|----|
|    |  2 |    |    |    |
|----|----|----|----|----|

If we move to s1, we have only 1 available option for the subsequent step (marked with an x).
What about s2 and s3?
If we follow s2, we have 5 available options (so as not to re-visit a previous square).
If we follow s3, we again have 5 available options. So the strategy implies that we should follow the square named 's1'. And indeed this is what I did in my example above.
(However, in my second example, I didn't do that from square 2 to square 3, because I didn't want the hint to be there in *both* examples.)

Now go back to a piece of paper and try the 6x6 case. What if you try the standard 8x8 chessboard? Try something fuzzy like 6x7. Check this out - I just did it on a piece of paper (and I wasn't counting squares most of the time; just intuition guided by the above heuristic; I backtracked twice, and at some point thought for about 30-60 seconds in total):

|----|----|----|----|----|----|----|
|  1 | 12 | 25 | 38 |  9 | 14 | 23 |
|----|----|----|----|----|----|----|
| 26 | 39 | 10 | 13 | 24 | 31 |  8 |
|----|----|----|----|----|----|----|
| 11 |  2 | 37 | 30 | 41 | 22 | 15 |
|----|----|----|----|----|----|----|
| 36 | 27 | 40 | 19 | 32 |  7 | 42 |
|----|----|----|----|----|----|----|
|  3 | 18 | 29 | 34 |  5 | 16 | 21 |
|----|----|----|----|----|----|----|
| 28 | 35 |  4 | 17 | 20 | 33 |  6 |
|----|----|----|----|----|----|----|


Nice feeling huh?
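
If you would rather let a machine follow the recipe, here is a rough Python sketch of the same "minimum available choices" heuristic (my own illustration; a purely greedy version with no backtracking, so on some boards and starting squares it can get stuck and return nothing).

def knights_tour(n, start):
    # Greedy tour: always jump to the unvisited square with the fewest onward moves.
    jumps = [(2, 1), (1, 2), (-1, 2), (-2, 1), (-2, -1), (-1, -2), (1, -2), (2, -1)]
    board = [[0] * n for _ in range(n)]

    def moves(r, c):
        # Unvisited squares reachable by a knight's move from (r, c).
        return [(r + dr, c + dc) for dr, dc in jumps
                if 0 <= r + dr < n and 0 <= c + dc < n and board[r + dr][c + dc] == 0]

    r, c = start
    board[r][c] = 1
    for step in range(2, n * n + 1):
        candidates = moves(r, c)
        if not candidates:
            return None        # the greedy choice got stuck (no backtracking in this sketch)
        r, c = min(candidates, key=lambda sq: len(moves(*sq)))
        board[r][c] = step
    return board

tour = knights_tour(6, (0, 0))   # the 6x6 case suggested above
if tour:
    for row in tour:
        print(row)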









And now let me ask you guys this. Are you more "intelligent" now that you can exhibit quite remarkable skills on this particular problem? For me at least, it is really hard to say whether the answer is yes or no. Because at first sight nothing really changed in you. On the other hand, the above heuristic might come in handy in other (same or similar) problems in the future. Of course, it might also *not* come in handy, because you may never again face a problem of this nature. However, this is only one part of the argument. The other part is the following scenario. Now that you know the "trick", pick a friend of yours who doesn't - or even better, an international master in chess. How would an external observer classify your (plural) intellectual abilities based on your performance on this problem? That's the second part of the argument. Because clearly you are not cheating. However, you are following a recipe, and the end result is quite remarkable.



So far I have only addressed your first paragraph, Corribus, and I know that I owe you some answers. I will come to them soon. It is just going to happen at a slow pace.



Let's give it some rest for a while; that is my suggestion. Try to read McCarthy's manuscript "What is AI?". Then spend some time on Turing and his paper; besides, this is how it all started; these are the roots of AI. Then if you guys feel like it, read the first 7 pages of Knuth's article.
____________
The empty set

Binabik


Responsible
Legendary Hero
posted August 13, 2009 09:54 AM
Edited by Binabik at 09:56, 13 Aug 2009.

Quote:
And now let me ask you guys this. Are you more "intelligent" since you can exhibit quite remarkable skills on this particular problem?


It comes down to definition. I would say no, sort of. I think I would look at it as applied intelligence as opposed to raw intelligence. I think raw intelligence would be the ability to recognize patterns or forms and apply previous knowledge or experience to new applications. What you've learned from your examples is more like a tool. When wielded, that tool can be more or less useful depending on the intelligence of the wielder. (understanding that we're dealing with a narrow facet of intelligence to start with)

I cheated and didn't really spend 20 minutes like I was supposed to. I didn't spend ANY time trying to do it. But when I saw where you were heading with it, I'm fairly sure I would have taken the same approach...not immediately, but I would have seen the sense in it fairly quickly.

So how would I (or someone else) arrive at an approach to the problem, compared to a computer arriving at a solution, or approach to a solution?

For me, I recognize it as being in the form of "I don't want to paint myself into a corner" type of problem. There are many examples of spatial or topological problems which are similar but sufficiently different.

Consider the type of puzzle where you must draw a figure without lifting your pencil and without retracing already existing lines.

Good thing for MS Paint, if I had to break out AutoCAD I would never have done a graphic.

In the simple examples below, I know at a glance that the top one can be drawn and the bottom one can't. I know this from experience and from applying a simple rule. The bottom one has more than two vertices with an odd number of edges, so it can't be done. This is one way a computer can determine *IF* it can be done. I won't get into HOW it can be done now, because that's not the point.
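
That rule is mechanical enough to write down directly; here is a small Python sketch of it (illustrative only; it assumes the figure is connected and represents it simply as a list of edges).

from collections import Counter

def can_draw_without_lifting(edges):
    # A connected figure can be traced in one stroke iff it has at most two
    # vertices of odd degree (an Euler path exists). Connectivity is assumed.
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd <= 2

# A square with one diagonal: two vertices of degree 3, two of degree 2 -> drawable.
square_with_diagonal = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]
print(can_draw_without_lifting(square_with_diagonal))   # True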



Is applying a rule like that intelligence? Hmmm, I don't know, but if it is, it's certainly not a high level of intelligence.

Is seeing the similarity between this and the knight problem intelligence? I would say that's a definite yes. Can a computer do that without using a set of rules? Can a computer do that without first querying "is it this type of problem?" or "is it that type of problem?" Can a computer, simply by messing with it a little (trial and error) suddenly have a "leap of faith" and say "aha, this is a paint myself into the corner problem"?

Then the questions become, how do *I* do it, and do I do it differently than a computer? When I do it, it "feels" like a leap of faith. There is a sudden recognition with no "apparent" steps in between. But are there steps in between that I'm not aware of? I can look at a rectangle and instantly recognize it as a rectangle, without applying any rules such as counting the number of sides or corners. Can a computer instantly recognize something?

This reminds me of a conversation I had with an engineer who was a camera expert and had worked on vision recognition systems. I had worked with vision systems in food processing. How can you tell the difference between a nut and a rock? The engineer had worked on vision systems for cars. How can you tell the difference between a tree lying across the road and the shadow of a tree lying across the road...under any lighting condition, any angle of approach, any size or shape of tree, etc? And can this be done without querying a set of rules? And when we (usually) instantly recognize a tree vs a shadow of a tree, are we doing it any differently? It appears that the answer is yes, we do it differently, even though I can't identify the mechanism.

As a side note, I've been thinking a lot lately about a thread on memory and how it works. In the process of following the various threads of thought, I've noticed that it seems like we don't remember so much as we pull up a suggestion of a memory...if that makes any sense. There's still a similar "leap of faith" in there like the recognition of the form of a problem. The "suggestion" of the memory appears to be a process or a function, one which can be observed. But from the suggestion, the memory itself is fleeting and without substance...and seems to come from nowhere observable.

JollyJoker


Honorable
Undefeatable Hero
posted August 13, 2009 11:30 AM

I've written something before about pattern recognition, or "reducing" real things and events into abstract patterns - their "essence" - and comparing things on that pattern level.

Think about the most basic human demonstration of intelligence and what that is: a baby learning a language. A child does that by simply LISTENING. Pattern recognition seems to be at work here, and since there is no language at that point, it seems to be a HARD-WIRED ability of the brain: quite obviously the brain is able to discern a "pattern" in language, and to make the connection between uttered sounds and implied meaning. With understanding comes the second step of reproducing the recognized pattern.

That is obviously the most basic expression of what intelligence actually is. A "virgin" human brain is able to "decode" a language and reproduce it. This needs an intrinsic ability of imagination - the ability of the brain to reproduce or "map" reality - and now we are back to abstraction, or the reduction of something to its very basic and defining nature or properties.

Transfer that to AI and you are there.
If you talk with your PC each day, without installing a language tool, if it answers you one day, it's intelligent!

Corribus

Hero of Order
The Abyss Staring Back at You
posted August 13, 2009 04:08 PM

Nice example, dimis, though the detail wasn't necessary, I don't think, to get your point across.
Quote:
Are you more "intelligent" since you can exhibit quite remarkable skills on this particular problem?

No.  If I may echo Bin's answer - knowing a strategy for solving a type of problem does not make you more intelligent.  After all, a strategy is just a tool; it makes solving the problem more efficient.  I would say intelligence is measured by one's ability to come up with the tool in the first place. A computer can only use tools; it cannot create them.  Something that can only use tools is a tool itself.  Therefore, a computer is itself not intelligent.

As an analogy, consider a small construction business consisting of a sort of foreman and a sort of helper.  The problem is the building of a house.  The foreman devises the strategy for how the house will be built.  I.e., he decides that first the foundation will be poured, second the frame will be built, then so on until the house is done.  He's the source of intelligence because he's devising the strategy for building the house using knowledge, experience and, importantly, creativity.  He issues orders to the helper, who actually does the work.  The helper pours the cement, the helper hammers the nails, etc.  But really, the helper's just a tool and is not using any intelligence to do the job (at least, in this idealized analogy).  He uses tools, but he's also a tool himself.  The instructions come from the foreman.  The helper just follows them to the letter.  Certainly, the helper could be very skilled and have a vast pool of knowledge about how to do things, and in that sense he would be capable of handling very complex instructions, and so to the casual observer he might appear to be very intelligent.  But the capability to follow complex instructions and having a great deal of tools at one's disposal does not make one intelligent.  As JJ has indicated, intelligence is a matter of creativity, the ability to devise strategies for oneself, often based upon knowledge and experience.

Of course, in the real world, even a helper is intelligent because he's a human being.  Though he's following orders and is in some sense a tool, he can still use creativity and intelligence to accomplish the tasks given to him by his boss.  The boss might give him a task (pour the concrete), but no boss can be completely responsible for all the creativity that goes into a project.  There will be little problems along the way that the helper will have to solve by himself, using no instructions from the foreman.  A computer can't really do this, because a computer isn't intelligent.  A computer can only do exactly what the foreman (programmer) tells it to do, so the programmer has to think of everything beforehand.
____________
I'm sick of following my dreams. I'm just going to ask them where they're goin', and hook up with them later. -Mitch Hedberg

dimis


Responsible
Supreme Hero
Digitally signed by FoG
posted August 13, 2009 06:35 PM
Edited by dimis at 03:47, 14 Aug 2009.

Self-Training (Samuel - Checkers, Tesauro - Backgammon)

As I said, I am not sure if it is intelligence or not. But based on your answers, I have no choice but to stay on the other side for a while.

So, let's see.

First, none of you really answered the question of how an external observer would classify you (once you know the trick for the knight's tour problem) versus another guy (or your earlier self) who does not know the trick, based on your performance on the problem. I understand this is a superficial classification of two people, but essentially something similar is done with IQ tests, and we use them for humans. And humans have been around on this planet for a long time now ... Anyway, of course the IQ test is not the real argument here; the real argument is the first question, the classification based on performance on a certain task.

Second, the criterion I think you propose for intelligence is actually the ability to come up with the tool that solves a problem efficiently (whatever that is supposed to mean), and you do not consider the ability to use the tool to be intelligence (which is again not quite true; many times in your life you have said "this guy is not so bright", or something similar, based on exactly that). However, in the previous particular example the tool was, let's call it, a strategy for traversing the board. Let's dive back into history again.



Samuel and the game of checkers

In 1959 Arthur Samuel wrote Some studies in machine learning using the game of checkers; unfortunately I couldn't find this article for free online, so I will give you a link to a subsequent paper of his (1967) named Some studies in machine learning using the game of checkers. II--Recent progress.

Anyway, in a nutshell, what the guy proposed, and actually implemented on a computer with success, was a program that learned by self-training. The level of expertise reached back in the 60's is of course beside the point*, since this is (to the best of my knowledge; I don't want to look it up in the bibliography right now) the very first attempt at such a method. However, the critical part is that we have a program that learns by itself. How about that for intelligence? Are we there? Getting closer? And please don't tell me that the absence of a human-like robot performing the moves is critical, because that's the easiest thing to implement.



Tesauro and TD-Gammon

I think it would have been an omission on my part not to mention another similar, celebrated attempt. I am talking about Gerald Tesauro and his program named TD-Gammon. The logic is similar here, but with impeccable performance. We have a program that plays backgammon at the highest human level. For more information, have a look at Temporal Difference Learning and TD-Gammon.
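
To give a flavor of the "temporal difference" idea without the backgammon machinery, here is a toy Python sketch (my own illustration on a made-up one-player game; a plain value table stands in for TD-Gammon's neural network).

import random

def td0_learn(episodes=5000, alpha=0.1, gamma=1.0):
    # Toy game: walk from state 0 towards state 5 by jumps of 1 or 2;
    # reaching state 5 ends the episode and is worth a reward of 1.
    values = [0.0] * 6                     # V(s) estimates; state 5 is terminal
    for _ in range(episodes):
        state = 0
        while state != 5:
            next_state = min(state + random.choice([1, 2]), 5)
            reward = 1.0 if next_state == 5 else 0.0
            # The TD(0) update: nudge V(state) towards reward + gamma * V(next_state).
            values[state] += alpha * (reward + gamma * values[next_state] - values[state])
            state = next_state
    return values

random.seed(0)
print(td0_learn())   # the non-terminal values all approach 1.0 for this toy game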



Brief Overviews
Perhaps this link will be more appreciated since it has brief overviews for both of the above.



EDIT: * Of course if you read the biography of A. Samuel in the link above - actually the third paragraph right here - you will realize that already in the 60's the program performed very well.
____________
The empty set

JollyJoker


Honorable
Undefeatable Hero
posted August 13, 2009 09:10 PM

Dimis, I think that you are tackling the problem from the wrong side, so to speak. You are trying to dissolve "intelligence" into "elements", then implement the elements to get intelligence.

I think it's the other way round: there is this hard-wired "ability" that allows all those "elements", which are manifestations of it.
Learning is one, but the thing with learning is that intelligence allows you to transfer what was learned. Since we are talking about chess, think of a gambit. Let's say an intelligent human loses a game due to a gambit. What DID he learn?
What did the machine learn? The machine learns that a specific series of moves will give a bad result, and that the gambit either should not be taken at all, or at least not immediately. Additionally, depending on the program, it will file the gambit away for use if the roles are reversed.
Back to the human. The human learns that gifts may be poisoned and that you may overstretch in taking them, making yourself vulnerable. Material gain may result in loss of initiative and divert forces. You see the difference? You learn an abstract lesson as well, and you may make use of it, for example in your professional life. THAT is intelligence and learning.

Corribus

Hero of Order
The Abyss Staring Back at You
posted August 13, 2009 09:29 PM

It seems, JJ, and correct me if I'm wrong, that your definition of intelligence is coming close to incorporating emotion and morals.

JollyJoker


Honorable
Undefeatable Hero
posted August 13, 2009 10:39 PM

Not at all. Sorry if my choice of words leads to that conclusion.

I mean, Corribus, you have a little child (mine is older, but I can remember quite well); the interesting thing seems to be that the (virgin) brain of a human baby is obviously able to decode language, that is: the brain (intelligence) can conclude that certain sounds have a certain "meaning". That a sound is a "placeholder" for something else: mom, for example.
That is astonishing.
I think you need an intrinsic ability of abstraction for this, of imagination. Of abstract mapping.
And no matter what you look at, I think that ability underlies everything. Pattern recognition - which is a big part of genius - is reducing something to its abstract properties and finding unusual similarities.
In this case learning is not just the saving of a very specific thing for further reference in the given context; it's learning the abstract principle behind it: in the case of a gambit this would be that a seemingly harmless sacrifice may divert important forces and cost you initiative that the sacrificing party can use: ANYWHERE - and free of morals. Gaming language.

Rarensu


Known Hero
Formerly known as RTI
posted August 14, 2009 07:40 AM
Edited by Rarensu at 08:33, 14 Aug 2009.

Intelligence Does Not Equal Learning.

It seems that y'all have missed a significant part of intelligence: working on a problem you've never seen before. Your "learning" programs are able to improve a skill that they already have. But they cannot learn a completely new skill. Suppose you ask them to play Reversi. Not only can they not play, they can't even begin to learn how to play.

Human intelligence is different. You can show a human ANY PROBLEM and they will be able to turn their intelligence on that problem. It may be that they will never master the skill of solving the problem, but they can at least attempt to learn the skill. That is to say, intelligence is not ability to learn a set of skills, but an ability which can be applied to the learning of any skill.

The hard-wiring of skills (such as language) does not get around this challenge. In the specific case of language, it is not even certain that intelligence is involved at all. For example, certain Williams syndrome (see "chatterbox syndrome") patients learn any language fluently and can use complex grammar and a large vocabulary, and yet have an almost non-existent ability to problem-solve or even to cope with ordinary day-to-day situations. They are just like your learning machines: able to learn language, unable to learn anything else.

Even if you hard-wired in the ability to learn every single skill that a human could ever possibly need, you still wouldn't have true human intelligence, because a human doesn't come hard-wired to learn chess. We come hard-wired to learn how to learn. What does that mean? I don't know. But it means that I agree with those other fellows who were mentioned: we need a whole new fundamental theory of intelligence before we can begin to replicate it.
____________
Sincerely,
A Proponent of Spelling, Grammar, Punctuation, and Courtesy.

JollyJoker


Honorable
Undefeatable Hero
posted August 14, 2009 08:47 AM

I don't think that the ability to learn a language is hard-wired - it's the ability of abstraction, of mapping reality into the mind or something like that, that allows it. Probably the ability of the brain to "fill in the blanks" of what the eyes actually perceive and "produce" a full picture plays a role.

In any case, once you've learned a language, learning another one works differently, it seems.

Intelligence, for me, still seems to amount to the ability to reduce or simplify a given issue to its defining properties or "nature". It's like those fast portrait drawers, who can draw a face with a minimum of strokes, so that everyone will know it immediately for what it is: Reduction of a given issue to its defining properties.

This needs the ability of abstraction or imagination, an ability to see "potential", which is something completely abstract or immaterial.

dimis


Responsible
Supreme Hero
Digitally signed by FoG
posted August 14, 2009 09:09 AM
Edited by dimis at 09:34, 14 Aug 2009.

Generalization

* read slowly *

Ok, I know, I still owe answers to questions from many posts, because there are many ideas in the air; I will come to them and I promise I will do my best to answer all of them in their entirety. But for the moment, in this post, I will focus on the last proposed deviation and keep on playing my game.

Well, first of all, I keep a somewhat slow posting rate because I really hope this helps the whole process, and it might also motivate you to go through (perhaps one more time) some of the initial references above (angelito, thanks for the qp by the way; I hope you enjoy reading while moderating). But in any case, my real purpose is to clarify our goals and our needs in this discussion.

So, we started talking about intelligence, and the first example was an efficient tool and its use. Then we agreed - roughly - that this is more or less b/s, because it is not the ability to use the tool that we mean by intelligence; rather, it should be closer to the ability of first creating the tool and then, at a subsequent level, using it; i.e. both phases have to do with intelligence, and it is better if they are performed by the same entity. From this perspective, another example (actually two of them) followed, where we acknowledged some methods that efficiently create tools and also efficiently use them in order to solve problems. But once again, this was not enough.

If I understand correctly, what JollyJoker and Binabik are trying to say with the (technical, for AI) term Pattern Recognition is that we are missing a generalization technique from the picture (yes, I went through the thread and saw that you mentioned it earlier, JJ, but I still think that we had to go through this process). Now, again going back through the thread, TheDeath has already answered this by giving an example - neural networks. And this is (again, in my opinion) a rather critical part of all those deviations that we are making during this process. In particular, I will stick with Tesauro's TD-Gammon.

[parenthesis here]
By the way, we haven't had a description of neural networks so far; I plan to do that at some point (because of all the reasons in the world, but probably, primarily, because technology should not be feared and instead should be uncovered). However, the purpose of the post is different this time, so I will sweep another thing under the carpet.
[/parenthesis here]

So, what do we have here (TD-Gammon) ?
A machine composed mainly of modules called artificial neural networks (ANNs, or NNs for short) plays against itself (self-training), and during this training process it is able to create a general-purpose tool (a strategy) and play at the level of human competition (the same entity does all of this, and with good results). Well, you might be wondering, how is that supposed to be a generalization technique? Besides - hello - it is only playing backgammon; just one (1) game! Well, this might be true (one game); however, not all units in this universe carry the same weight. The justification is somewhat technical, but you will all get it. The reason is that there are far too many combinations of positions and dice rolls to be kept in the memory of a single computer (actually, of all the computers on this planet together). So when the program plays against a human, with high probability it enters a position it has never encountered before, and yet - with the aid of neural nets - it has the ability
(1) to determine what is important, and what is not in the given position,
(2) adjust instantly its strategy, and
(3) perform the move.
This is generalization.

Now what?



I really have an avalanche of comments now, but I will stop this post here. As David Hilbert once said:
Wir müssen wissen, wir werden wissen! (We must know, we will know!)
____________
The empty set

JollyJoker


Honorable
Undefeatable Hero
posted August 14, 2009 09:57 AM

Now nothing.
Let's go back to the tools a minute (you'll see immediately why). Let's say you have stored a finite, but extremely high number of tools in your memory (the sum of what either an AI or a human learned).
An issue presents itself that has to be solved; the intelligent thing is now finding the right tool.
For the game, read "move" instead of tool.
The AI way now is to trial-and-error every possible move or tool and compare the results.
The HI way is to immediately rule out the overwhelming majority of tools (moves) as not applicable or inferior - say, by the general idea that all moves that will leave one stone alone within reach of the opponent are bad.
You see the problem? The actual issue has to be "reduced" to the properties that allow a pre-choice, ruling out all useless tools (or moves).
This gets more pronounced with games with pretty unlimited moves (games like Backgammon are fairly limited) like Heroes, where you have to make something like an ACTIVE strategy (as opposed to working through a number of options).

Warmonger


Promising
Legendary Hero
fallen artist
posted August 14, 2009 10:15 AM
Edited by Warmonger at 10:16, 14 Aug 2009.

Intelligence here is rather the capability of solving NEW problems, ones which were not described or input before. Of course, the ability to reduce a very complex issue to basic operations already known is vital. However, as a computer uses only maths and logic, some real-life problems seem unlikely to be solved this way.
