Heroes of Might and Magic Community


Thread: Singularity
Galev


Famous Hero
Galiv :D
posted June 18, 2009 11:38 AM

Never mind that I'm still Galev.

By the way, I offered TheDeath the option of opening another thread for this topic, but he said it isn't off-topic. I believed him.

If it disturbs you, I'm really sorry. I don't much like monstrous posts like these myself, but I always address them to TheDeath, since he is probably the only one interested; anyone who just wants to follow the rest of the discussion doesn't need to read my posts, as I reply only to him. It might not be the best way, I accept that.

JollyJoker


Honorable
Undefeatable Hero
posted June 18, 2009 11:47 AM

LOOOOOOL:

Quote:


Total oops. I have no idea how I saw JJ instead of Galiv TWICE.
...
"@ TheDeath and Galiv


TheDeath


Responsible
Undefeatable Hero
with serious business
posted June 18, 2009 09:28 PM

Quote:
(Sentience = Emergent Property: Yes or No?)
I don't understand the question fully.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Rarensu


Known Hero
Formerly known as RTI
posted June 19, 2009 02:41 AM

I'm going to quote myself here so you don't have to go digging.
Quote:
Emergent property. Noun. A property resulting from the complex interactions of simple elements. // "The whole is greater than the sum of its parts." // For example, an ant can move a small object. 1000 ants can not only move 1000 objects, they can also build a nest which allows them to store food and reproduce. Therefore, an ant colony has the emergent property of being able to store food and reproduce.

More examples.

When you mix water and ethanol, the combined volume adds up to less than the two did individually. The mixture has the emergent property of being more compact.

A few humans don't have formal rules; they simply discuss things.
Lots of humans make governments. Governments are an emergent property of humans.

-----------------

Basically, any time a group of things has properties different from the sum of its members' individual properties, the new properties are called emergent.

In the case of Sentience and Neurons, the key test is to make a computer large enough to simulate a complete human brain in real-time. If sentience is an emergent property of neural networks, then this computer simulation will be sentient also.

Same question as before, rephrased:
Will an accurate computer simulation of a human neural network be sentient?
____________
Sincerely,
A Proponent of Spelling, Grammar, Punctuation, and Courtesy.

TheDeath


Responsible
Undefeatable Hero
with serious business
posted June 19, 2009 03:05 AM

Quote:
Will an accurate computer simulation of a human neural network be sentient?
I don't know because I'm not sure what sentience is precisely supposed to mean.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Rarensu


Known Hero
Formerly known as RTI
posted June 19, 2009 03:41 AM

Quote:
I don't know because I'm not sure what sentience is precisely supposed to mean.

We also sometimes use the term "self-awareness". It's what (supposedly) sets humans (and possibly dolphins) apart from lower animals. This is different from Intelligence. We all agree that a human neural network simulation will be able to efficiently solve a variety of interesting problems. I'm just asking about whether or not it can actually think for itself.
____________
Sincerely,
A Proponent of Spelling, Grammar, Punctuation, and Courtesy.

TheDeath


Responsible
Undefeatable Hero
with serious business
posted June 19, 2009 03:54 AM

"Thinking for itself" is a vague concept that personally, I can't give out an answer if I don't even know what it is -- even less if AIs will be capable of it.

Self-awareness though I think comes from the principle that the brain is writeable. (a concept in software). On the other hand, most of the body is read-only -- hardcoded in the DNA. (sure you can damage it, but then so you can damage a ROM chip too and corrupt it). The brain is designed to self-write itself and modify based on whatever factors (that's how we actually learn and think).

So yes an AI would be capable of writing data to itself or "learning", in fact that's what we already have as mentioned before (I think?). However, whether that leads to thinking I don't know, but I think it will lead to self-awareness if it can actually write data about observations on itself (so far we haven't because commercially, they have to solve human problems, not observe themselves... that would bring no benefit to humans right now -- yeah selfishness)
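
To make that "writeable vs. read-only" picture concrete, here is a minimal, purely illustrative sketch in Python (the class and names are invented for this example): the rule that performs the updating stays fixed, like ROM or DNA, while the weights it writes to are the part that changes with experience.

# Illustrative only: behaviour lives in writeable state (the weights),
# while the rule that updates that state is fixed ("read-only").
class Learner:
    def __init__(self):
        self.weights = {}                    # writeable "memory", modified by experience

    def predict(self, feature):
        return self.weights.get(feature, 0.0)

    def observe(self, feature, outcome, rate=0.1):
        # The update rule itself is hardcoded and never changes;
        # only the stored weights (what has been learned) do.
        error = outcome - self.predict(feature)
        self.weights[feature] = self.predict(feature) + rate * error

learner = Learner()
for _ in range(100):
    learner.observe("stimulus_A", 1.0)       # repeated experience rewrites the memory
print(learner.weights["stimulus_A"])         # approaches 1.0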
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Galev


Famous Hero
Galiv :D
posted June 20, 2009 01:26 PM

Quote:
@ Gavin and The Death

If an engineer builds a machine that produces Cheetos, and then leaves it in the care of an underpaid worker, and the underpaid worker pushes a button labeled "make Cheetos now", and the machine makes a batch of Cheetos, is it really fair to say that the underpaid worker created the Cheetos? I say, NO. The worker had control of when the Cheetos were created, but he could not control what the machine made or how. He does not understand how the machine does its work. The engineer is responsible for the creation of Cheetos, he is the one that designed the system, and, appropriately, he gets a bigger paycheck.

Making children is the same thing. Having sex is like pushing a button. Women have control of when children are made, but they cannot decide to make kittens instead. They do not decide how their body will perform this miracle and they do not understand how the system works. The creation of babies is the fault of the one who designed the system; be it God or Random Natural Selection, the principle is the same. We are not in control. Artificial insemination is not really that much different from having sex. It's the same effect once the fertilized egg gets inside a working womb.


@Rarensu
I beg your pardon...
I made a rather conceited post about people not paying attention. I have now deleted it, with regret, because I hadn't noticed your post addressed to TheDeath and me (if you meant me by Gavin :-). I did not behave as I should have. Though I really did not intend to kill this thread.

On your post: I would ask what you meant by "The creation of babies is the fault of...", but I think it is already "out of date".

@TheDeath
I don't think I will continue our discussion here, for the others' sake. But I might open a new thread some time soon.
____________
Incidence? I think it's cumulative!

Rarensu


Known Hero
Formerly known as RTI
posted June 21, 2009 10:20 AM

@ Galev - we're cool

^^ looky I spelt it right this time

I don't like the concept of fault. I think if fault were to just be erased from everyone's vocabulary the world would improve.

Responsibility? Sure. It's fine to have a system where certain people agree to insure certain things. That keeps life running smoothly.

Guilt? Sure. Guilt helps us learn to be better people.

Punishment? Sure. Some people need help learning to be better people.

But fault? Fault has no purpose. All it does is make people angry and vindictive. If there's a problem, fix it. Otherwise, the past is the past and let snow go already.

When I said that "The creation of babies is the fault of...", I meant only that the causal link is strongest in that direction. I didn't mean to imply that people who have sex aren't responsible for the child they create, or that we get to blame god when things go wrong.

@ TheDeath

Learning and self-awareness are not mutually inclusive. We can already make "learning" self-rewriting software. But it can only learn what we tell it to learn. It can't change its fundamental nature. It has no idea what it's doing; it just does it. It's not self-aware.
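
A small, hedged illustration of that point (the objective function below is an arbitrary stand-in): the program rewrites its own parameter, but only toward a goal the programmer hardcoded; nothing in the loop lets it replace that goal.

# Sketch: a self-tuning program whose "fundamental nature" (the objective) is fixed by us.
def objective(x):
    # Chosen by the programmer; the program has no mechanism to rewrite this.
    return (x - 3.0) ** 2

x, step = 0.0, 0.01
for _ in range(10_000):
    gradient = 2 * (x - 3.0)     # derivative of the hardcoded objective
    x -= step * gradient         # the program adjusts itself ("learns")...

print(round(x, 3))               # ...but only toward 3.0, exactly what it was told to learn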

Similarly, 10-second Tom doesn't really learn. He can problem-solve and remember data (for ten seconds), but his skill set is fixed. However, we still assume that he's self-aware.

You are right, though. We don't really have a good definition for self-awareness. We know what it does, but we don't know what it is. That's why I'm asking for your opinion. If we already knew the answer, I would just tell it to you.
____________
Sincerely,
A Proponent of Spelling, Grammar, Punctuation, and Courtesy.

TheDeath


Responsible
Undefeatable Hero
with serious business
posted June 22, 2009 01:44 AM

Quote:
Learning and self-awareness are not mutually inclusive. We can already make "learning" self-rewriting software. But it can only learn what we tell it to learn.
Why? Of course it can only learn what it is fed or given, but aren't we the same? I mean, we can only learn from what we see around us -- be it the internet, books, etc...
If it learns in a way that lets it look for information on its own, then I would say it is self-aware.

Quote:
It can't change its fundamental nature. It has no idea what it's doing; it just does it. It's not self-aware.
Tbh I have no idea about this. I have no idea how to quantify "whether it has any idea what it is doing", and in computers you have to quantify everything. So I'm not going to lie about this one: I'm clueless as to whether it's going to be self-aware or not.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

JollyJoker


Honorable
Undefeatable Hero
posted June 22, 2009 08:04 AM

Quote:
Quote:
Learning and self-awareness are not mutually inclusive. We can already make "learning" self-rewriting software. But it can only learn what we tell it to learn.
Why? Of course it can only learn what it is fed or given, but aren't we the same? I mean, we can only learn from what we see around us -- be it the internet, books, etc...
If it learns in a way that lets it look for information on its own, then I would say it is self-aware.

Nah. For one thing, the base programming of a human mind involves a basic ability to learn by "listening" and "watching" (the first language you learn). Secondly, humans are mobile entities with certain abilities; self-awareness means that a human will recognize that without someone telling them.
Thirdly, learning is something other than storing things in some memory and being able to call on that memory. Intelligence means that entities are not only able to conclude things by combining learned things, but are able to "fill in" missing information as well, making intuitive leaps.
Lastly, humans can express the things they have on their minds by various means -- writing, painting, music, poems -- and on top of that are able to find NEW ways there as well, which is all part of being self-aware.
It is not possible to program anything like that, because they don't HAVE to do that.

In any case, Death, you may see artificial intelligence looming around the next corner, but I don't. I think we are at least one dimension short.

friendofgunnar


Honorable
Legendary Hero
able to speed up time
posted July 07, 2009 01:20 PM

Man, I'm really late to this party...

Anyway, I've always thought that the technological singularity is a ridiculous idea. In one of the articles in the original post(s), the author expanded this to include human advancement, but for right now I'm just going to write about the idea of computers growing smart enough to outstrip humans. In a word... ha!

People like to compare computer processing power to human intelligence, but they're just not the same. No matter how many petas a computer has, it will never be capable of creative thought.

Imagine, if you will, twelve 1-meter sticks attached together in the shape of a cube. Imagine that on each stick there are hooks spaced every 1 cm. Now imagine the following sequence of events:
- tie a string between any two hooks
- tie a second string between any other hook and any point on the first string
- at the point where you just tied the second string to the first string, attach a third string and tie it to any other hook on the frame
- tie a fourth string to any point on the third string
- remember that second string? Remove it and put in a rope instead
- now clip the first string at any point and tighten up the remaining strings

What you are imagining is the way that the brain stores and processes information. There is an infinite number of ways that the strings on this "brain" can be configured. The information is always changing too, depending on what stimuli are received by the brain (strings being attached to hooks on the frame), what information is judged important (strings being converted into ropes), and what information the brain allows to lapse (strings being cut). This is the essence of creativity, and you can't achieve it by processing bits, no matter how many you have.
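
For what it's worth, the structure being described can at least be written down as a mutable graph; here is a rough, simplified sketch in Python (hook-to-hook connections only, with made-up names), purely to make the thought experiment concrete rather than to settle whether bits can capture it.

# Edges are "strings" between hooks; a higher strength is a "rope";
# cutting an edge removes it. The structure never stops changing.
connections = {}                            # (hook_a, hook_b) -> strength

def tie(a, b, strength=1.0):                # new stimulus: attach a string
    connections[(a, b)] = strength

def reinforce(a, b):                        # judged important: string becomes rope
    connections[(a, b)] = 5.0

def cut(a, b):                              # allowed to lapse: string is removed
    connections.pop((a, b), None)

tie("hook_1", "hook_17")
tie("hook_17", "hook_42")
reinforce("hook_1", "hook_17")
cut("hook_17", "hook_42")
print(connections)                          # whatever structure remains at this moment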

This, in a nutshell, is why artificial intelligences will never take over the world. The universe is constantly changing, and any organism, without exception, needs to be able to evolve to new circumstances. The processing model that is based on logic gates is incompatible with this type of evolution. Its finite resources will always come up short against the infinitely variable requirements of changing environments. Sure, you can have a computer sit at a chess board and think 32 moves ahead, but you can't stop the human from reaching out and flipping the chess board over.

So that raises the question: "What if somebody decides to create something with this organic type of processor?" The problem here is that this type of structure is fundamentally unreliable. It forgets things easily, it's easily distracted, and it has its own priorities, which may or may not line up with those of the people that created it. Since the entirety of the computer industry is geared towards trying to make tools that behave in predictable ways, it becomes even more of a stretch to think that anybody would invest serious resources in developing this type of processor.

Even 20 years from now, when computers are frictastically fast and circuits are quantumly small you still have another fundamental problem.  And that problem is that all the hardware is still being programmed by humans.  Right now is probably a good time to introduce

The principle of mediocrity.
This is a far-reaching principle but for right now I'm going to focus on one aspect.  It can be summarized like this:   If excess capacity is created, people will find a way to waste it.  I'll give you some examples:
Example 1: On the NY Times website they have a section called "Bloggingheads videos" where people set up a camera and shoot themselves expounding on their own opinions.  If you were to make a transcript of what they said you'd find that the ratio of useless information (in the video file) to useful information (in the transcript) is about a million to one.
Example 2: High-definition TV. It contributes nothing to the happiness and well-being of humanity.
Example 3: By historical standards, energy is astonishingly cheap.  So what do people do?  They build huge houses tens of miles away from their jobs, get big cars, and then complain mercilessly about the price of gas.
Final example:  Computers have gotten faster and faster over the last three decades but have you noticed that it takes roughly the same amount of time to boot a computer as it did 30 years ago?

Probably everybody reading this can think of a few examples to add to the list. This principle of mediocrity is like the anti-Moore's law. It is such a relentless and ever-present principle that when something is really done efficiently, for example using a tiny computer to calculate orbits for the moon program, it is disbelieved and used as evidence that the moonshot never happened.

My point here is that you can have a fantastic computer revved up as fast as you want but bozos are still going to be writing the software for it.      

So you can see, the robot revolution already has two things against it. The first is a fundamental set of limitations, and the second is that it needs humans to get it started in the first place.

del_diablo


Legendary Hero
Manifest
posted July 07, 2009 02:08 PM

Quote:
Final example:  Computers have gotten faster and faster over the last three decades but have you noticed that it takes roughly the same amount of time to boot a computer as it did 30 years ago?


Depends on what you're loading with it.
Hardware gets twice as powerful roughly every 18 months; it improves by something like a power of 2.
Software improves in quality/power by something more like a power of 1.3, was it?

PS: I get a boot time of about 9 seconds on my laptop, and it has a SATA HD, not an SSD.
____________



TheDeath


Responsible
Undefeatable Hero
with serious business
posted July 07, 2009 04:21 PM
Edited by TheDeath at 16:45, 07 Jul 2009.

Quote:
In any case, Death, you may see artificial intelligence looming around the next corner, but I don't. I think we are at least one dimension short.
Well, in any case, let me make clear that I am in no way saying that they "will be similar to us". Especially regarding emotions. At all.

Quote:
What you are imagining is the way that the brain stores and processes information. There is an infinite number of ways that the strings on this "brain" can be configured. The information is always changing too, depending on what stimuli are received by the brain (strings being attached to hooks on the frame), what information is judged important (strings being converted into ropes), and what information the brain allows to lapse (strings being cut). This is the essence of creativity, and you can't achieve it by processing bits, no matter how many you have.
Uh, actually that's exactly what Artificial Neural Networks do.

The "infinite possibilities" is just a list or an array.
The PROBLEM here is that it requires tremendous amount of memory, which we don't have. Also processing power if you want it to be fast of course (and not uber slow).

With billions of neurons in a human head, that means if a neuron took just 1 byte, we would need Gigabytes of RAM (which is ridiculous, as a list has more elements, and an element would take probably 4 bytes at minumum, even more if we make more advanced neural networks). So looking at Terabyte or Petabyte range sounds more realistic. That is RAM, not harddisk, because that would be too slow.
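
As a rough back-of-envelope check on those numbers (the neuron and synapse counts below are commonly quoted approximations, and 4 bytes per stored value is just an assumption):

# Rough estimate only; real figures vary, and a serious model would store far more per unit.
neurons  = 86e9              # ~86 billion neurons (commonly cited approximation)
synapses = 1e14              # ~100 trillion synaptic connections (approximation)
bytes_per_value = 4          # e.g. one 32-bit float per stored value

neuron_state  = neurons  * bytes_per_value
synapse_state = synapses * bytes_per_value

print(f"neuron state : {neuron_state  / 2**30:,.0f} GiB")    # ~320 GiB
print(f"synapse state: {synapse_state / 2**40:,.1f} TiB")    # ~364 TiB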

Quote:
Example 2: High-definition TV. It contributes nothing to the happiness and well-being of humanity.
I don't see what High Definition or AIs are supposed to do for the happiness and well-being of humanity.

If you mean that they'll take over most of our work, that is indeed correct, as they already have (e.g. complex calculations, algebra equations, etc...)

How do nukes contribute to our selfish well-being? Right, they don't exist either.

Here's something interesting: if you can make a computer simulate an organism that has the same structure as a human, but with one millionth of the processing ability, there's no reason you can't "scale it up" in a computer too, as long as you have enough processing power and memory. Which is, after all, the obstacle here.

Quote:
Final example:  Computers have gotten faster and faster over the last three decades but have you noticed that it takes roughly the same amount of time to boot a computer as it did 30 years ago?
Why don't you use an old operating system and see how blazingly fast it boots?

Don't assume that bloated newer Microsoft software and flashy, unoptimized Linux distros (as opposed to the light, optimized ones) will be what runs in AIs.

However, here's the thing. These are things that were previously available only on supercomputers or big computer farms:

1) Computer Algebra Systems: in short, any person these days can (for FREE) solve complex equations with a computer rather than manually. Also complex numerical computations, approximations, and arbitrary precision (that means as much precision as you want -- of course you might not have enough processing power for it).

2) 3D rendering
3) Movie editing (this was like... only in studios)
4) Music sequencing & mixing & editing (previously only on "super workstation computers" in big studios)
5) Facial Recognition
6) Watching a video live on the internet (streaming... previously unheard of)
7) +++ all the things you can today connect to a computer (digital camera for instance) which previously were too underpowered for that. You had to record to an analog tape and watch it on a VHS player.

Quote:
The first is a fundamental set of limitations, and the second is that it needs humans to get it started in the first place.
Yes, it needs humans, and as you can see from the first post, many are interested and working on it.

For instance, you may say that we can't create something smarter than us. But we already have extremely complex CPUs, and if you were to "know" every single transistor out of a billion (Intel hit that mark in 2008), you would fall asleep or go crazy. In short, there are some things that "just work", created by us without our knowing exactly how each component will turn out. It's called automated scale engineering.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Seraphim


Supreme Hero
Knowledge Reaper
posted July 07, 2009 04:32 PM

Don't worry about the future, guys; it will end in a nuclear fest.

2050 or 2100, the world will not sustain itself. Nowadays the eco crisis has somewhat stalled the development of CPUs.


Don't think about fantastic things, people. People in the '30s thought that in the year 2000 we would be living in floating cities. Are we?
____________
"Science is not fun without cyanide"

TheDeath


Responsible
Undefeatable Hero
with serious business
posted July 07, 2009 04:42 PM

Quote:
Don't think about fantastic things, people. People in the '30s thought that in the year 2000 we would be living in floating cities. Are we?
Well that kinda defies current physics.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

DagothGares


Responsible
Undefeatable Hero
No gods or kings
posted July 07, 2009 04:50 PM

Quote:
Don't worry about the future, guys; it will end in a nuclear fest.

2050 or 2100, the world will not sustain itself. Nowadays the eco crisis has somewhat stalled the development of CPUs.


Don't think about fantastic things, people. People in the '30s thought that in the year 2000 we would be living in floating cities. Are we?
A hundred years ago people would've laughed at the idea of television.
____________
If you have any more questions, go to Dagoth Cares.

TheDeath


Responsible
Undefeatable Hero
with serious business
posted July 07, 2009 04:51 PM

Quote:
A hundred years ago people would've laughed at the idea of television.
Or a metal that explodes with more power than kilotons of dynamite.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Lord_Woock


Honorable
Undefeatable Hero
Daddy Cool with a $90 smile
posted December 27, 2009 01:52 PM

By the power of necropost!

I've looked this thread up and read a good portion of it (skipping the seemingly irrelevant walls of text), what with me having to write a paper on the ontological/metaphysical consequences of weak/strong AI for my ontology class. Because of this, I've been reading Kurzweil's "The Age of Spiritual Machines". As a quick aside, if anyone has any tips regarding said philosophical consequences, I'm all ears.

Now, back on topic, I can't pretend to not have noticed this thing here:
Quote:
Even 20 years from now, when computers are frictastically fast and circuits are quantumly small you still have another fundamental problem.  And that problem is that all the hardware is still being programmed by humans.


Actually, not all software currently in use is written by humans. Instances of evolved programs can be found as early as Windows 95, in several of its components.

What I mean by software evolution is that you start by randomly generating a million or so instances of the program, each with different, randomly selected rules for decision making (e.g. buy/sell triggers), and then feed them data (e.g. historical stock market records). Then you prune out the ones that perform below a certain threshold (e.g. those that returned a loss in the stock market simulation), copy the most successful programs to get back to a million instances, allow for some random mutations, and repeat the whole process for a million generations or so. You end up with a program that outperforms anything a human could have consciously written.
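
Here is a toy sketch of that loop (a simple genetic algorithm). The population size, mutation rate, and fitness function below are placeholders; a real system would score each rule set against historical market data instead.

import random

def random_rules():
    # e.g. a "buy" threshold and a "sell" threshold
    return [random.uniform(-1, 1), random.uniform(-1, 1)]

def fitness(rules):
    # Stand-in for "simulated profit"; peaks when the thresholds are (0.3, -0.3).
    buy, sell = rules
    return -((buy - 0.3) ** 2 + (sell + 0.3) ** 2)

population = [random_rules() for _ in range(1000)]
for generation in range(200):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[: len(scored) // 2]           # prune the weakest half
    population = []
    while len(population) < 1000:                    # refill from the survivors
        child = list(random.choice(survivors))
        if random.random() < 0.1:                    # occasional random mutation
            child[random.randrange(2)] += random.gauss(0, 0.05)
        population.append(child)

best = max(population, key=fitness)
print(best)   # drifts toward roughly [0.3, -0.3]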

Here's a quote from Kurzweil's book, published 1999.

Quote:
In the real world, a number of successful investment funds now believe that the surviving "creatures" from just such a simulated evolution are smarter than mere human financial analysts. State Street Global Advisors, which manages $3.7 trillion in funds, has made major investments in applying both neural nets and evolutionary algorithms to making purchase-and-sale decisions. This includes a majority stake in Advanced Investment Technologies, which runs a successful fund in which buy-and-sell decisions are made by a program combining these methods. Evolutionary and related techniques guide a $95 billion fund managed by Barclays Global Investors, as well as funds run by Fidelity and PanAngora Asset Management.

____________
Yolk and God bless.
---
My buddy's doing a webcomic and would certainly appreciate it if you checked it out!

TheDeath


Responsible
Undefeatable Hero
with serious business
posted December 27, 2009 10:18 PM
Edited by TheDeath at 22:19, 27 Dec 2009.

Here's a nice way to put it: humans only design the device on which the supposed AI would exist; they design the rules for how it works, the algorithms it uses. It's like designing some laws of physics for that device. They don't design the intelligence itself. They design the rules by which a neural net operates, and then train it, or let it train itself from data ("experiences").

You may only need the design of one neuron (the "laws" it should follow) to make it work correctly, then simply upscale that to billions, and you'd have a device that can learn by itself.
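
A minimal sketch of that "one neuron's law, upscaled" idea (the names and numbers are illustrative): the single rule below never changes; building a bigger network just means copying it.

import math, random

def neuron(inputs, weights, bias):
    # The one fixed "law": weighted sum followed by a squashing function.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, n_neurons):
    # Upscaling is just replication of the same rule with different weights.
    out = []
    for _ in range(n_neurons):
        weights = [random.gauss(0, 1) for _ in inputs]
        out.append(neuron(inputs, weights, random.gauss(0, 1)))
    return out

signal = [0.2, 0.7, 0.1]
print(layer(signal, 5))       # the same rule, copied five times (or billions)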
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

