Thread: Artificial Intelligence
Corribus

Hero of Order
The Abyss Staring Back at You
posted June 23, 2009 09:57 PM
Edited by Corribus at 01:32, 24 Jun 2009.

Artificial Intelligence

There have been some calls recently for a thread on this topic, and it's one that I'm interested in, since it's pretty much a staple of every science fiction story ever made.

I would say we can use this thread to talk about anything related to AIs but I'm (personally) more interested in talking about ethics/philosophy.

Here are a few questions to start us off.

(1) Can AIs become self-aware?

(2) If AIs become self aware, would it be morally wrong to terminate them?

(3) If AIs become self-aware, do the principles of evolution apply to them?  I.e., can self-aware AIs evolve?

(4) If AIs become self-aware, do we (the creators) have anything to fear from them?

(5) Are self-aware AIs alive?

(6) Can self-aware AIs feel emotion?

(7) Can you have a "society" of individual AIs on a single "piece" of computer hardware?  What are the properties of such a society?  Could it be like a human society?

(8) Do self-aware AIs need computer hardware to exist?

Thoughts?
____________
I'm sick of following my dreams. I'm just going to ask them where they're goin', and hook up with them later. -Mitch Hedberg

JollyJoker


Honorable
Undefeatable Hero
posted June 23, 2009 10:07 PM

Thoughts? Sure, but this takes one hell of a lot of consideration.

I mean, if it's not really clear what intelligence is, how can we then argue about ARTIFICIAL intelligence?

So I'm going to post here at a later time.

Binabik


Responsible
Legendary Hero
posted June 23, 2009 10:18 PM

JJ beat me to it, but yeah, you need to define intelligence. Otherwise it will just be another debate with everyone speaking a different language.

Intelligence is one of those things that we can recognize in other people (in all its many facets), but it's extremely hard to define.

One question is: if we recognize it in other people, will we recognize it in machines? Currently there are no machines I would recognize as intelligent. I'll leave that stand as it is for now.

Intelligence in people has many facets. I don't think most people would consider it necessary for someone to be intelligent in all of them to be considered intelligent. So with AI, could we consider it truly intelligent if it meets some or only one of those facets?

Binabik


Responsible
Legendary Hero
posted June 23, 2009 10:22 PM

Oh, and for the ethical questions it might not hurt to define biology. Does it need to be biological to be "alive"?

Off the top of my head I'd say the answer is yes. And if it isn't biological, then it has no "rights" that we would normally assign to living organisms.


Corribus

Hero of Order
The Abyss Staring Back at You
posted June 23, 2009 10:26 PM
Edited by Corribus at 22:28, 23 Jun 2009.

Ok, good point about intelligence.  I guess what I mean when I say "intelligence", as it pertains to self-aware AIs, is the ability to operate (think) beyond the boundaries written by the original programmer.  Hmmm... I'm not really expressing myself well here, am I?  Adaptability might be a pertinent word.

And regarding the issue of biology, well, I don't think it would necessarily have to be organic in order to have rights.  If the AI can think for itself, if it has a personal identity of which it is aware, doesn't that mean it has some inherent right to existence?  Humans/animals don't have rights because they are biological organisms, do they?
____________
I'm sick of following my dreams. I'm just going to ask them where they're goin', and hook up with them later. -Mitch Hedberg

Geny


Responsible
Undefeatable Hero
What if Elvin was female?
posted June 23, 2009 10:47 PM

Even at this point in time there already are machines that can adapt or evolve, i.e. learn. Heck, half of my friends at the faculty are taking a course on learning systems this semester, and my brother studied learning AI in the past. That being said, they don't develop emotions or personalities (unless you count the logic strings encoded in them as personality). Therefore, in my mind they will always remain tools in the hands of humans, with no feelings or emotions, and therefore no rights.
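
To give a flavor of what "learning" means at its most basic, here's a toy perceptron sketch (my own illustration, nothing to do with the actual course material; the OR "lesson" and the parameters are made up):

# Toy perceptron: adapts its weights from examples instead of being told the rule.
def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights, adjusted by experience
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out        # how wrong was the guess?
            w[0] += lr * err * x1     # nudge the weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# teach it logical OR purely from examples
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
for (x1, x2), _ in data:
    print((x1, x2), 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)

It "learns" the rule, but there's no understanding, emotion, or personality anywhere in it -- just numbers being nudged.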
____________
DON'T BE A NOOB, JOIN A.D.V.E.N.T.U.R.E.

Binabik


Responsible
Legendary Hero
posted June 23, 2009 10:56 PM

This is an extremely difficult subject to get my arms around. But to be quite honest, I'm the type that basically believes that no, a machine is not alive and has no "rights". It's just a bunch of hardware.

In my mind this largely boils down to a subject we try to avoid around here (religion...shhhhhh). You mentioned awareness, but I don't consider awareness a brain (or other biological) function. And therefore a computer could never truly duplicate it. I'm not offering a suggestion as to WHAT awareness is, because I don't know. I'm just separating the two.



(1) Can AIs become self-aware?
see the above

(2) If AIs become self aware, would it be morally wrong to terminate them?
As above, the condition isn't met. But no, just unplug the thing.

(3) If AIs become self-aware, do the principles of evolution apply to them?  I.e., can self-aware AIs evolve?
As above, the condition isn't met. But I think awareness and evolution are independent of each other, and that AIs could evolve, depending on how you define it.

(4) If AIs become self-aware, do we (the creators) have anything to fear from them?
As above, the condition isn't met. But fear? That's pretty general. I already fear both existing and potential future aspects of computers. Again, define fear. Maybe the word concern would better describe what I feel about it.

(5) Are self-aware AIs alive?
No, I tend to view things from the hardware side, and the functioning of the hardware doesn't change no matter what program is running.

(6) Can self-aware AIs feel emotion?
I think the two are independent of each other. But I would say no, AIs do not feel emotion, nor pain.

(7) Can you have a "society" of individual AIs on a single "bit" of computer hardware?  What are the properties of such a society?  Could it be like a human society?
Not sure what you mean by this. Also, I kind of view it like there is no such thing as a "bit" in a computer. I see electricity flowing one direction or another (or not at all). It's either sourcing or sinking or neither.

(8) Do self-aware AIs need computer hardware to exist?
I already mentioned awareness. Are you asking if an AI can "escape" the bounds of the computer?


Lord_Woock


Honorable
Undefeatable Hero
Daddy Cool with a $90 smile
posted June 23, 2009 10:56 PM

Feeding the fire

The following is a quote from the wikipedia article on John Searle:

Quote:
Searle is widely credited for having stated what is called a "Chinese room" argument, which purports to prove the falsity of strong AI. (Familiarity with the Turing test is useful for understanding the issue.) Assume you do not speak Chinese and imagine yourself in a room with two slits, a book, and some scratch paper. Someone slides you some Chinese characters through the first slit, you follow the instructions in the book, write what it says on the scratch paper, and slide the resulting sheet out the second slit. To people on the outside world, it appears the room speaks Chinese -- they slide Chinese statements in one slit and get valid responses in return -- yet you do not understand a word of Chinese. This suggests, according to Searle, somehow that no computer can ever understand Chinese or English, because, as the thought experiment suggests, being able to 'translate' Chinese into English does not entail 'understanding' either Chinese or English: all which the person in the thought experiment, and hence a computer, is able to do is to execute certain syntactic manipulations.[20]

Since then, Searle has come up with another argument against strong AI. Strong AI proponents claim that anything that carries out the same informational processes as a human is also conscious. Thus, if we wrote a computer program that was conscious, we could run that computer program on, say, a system of ping-pong balls and beer cups and the system would be equally conscious, because it was running the same information processes.[citation needed]

Searle argues that this is impossible, since consciousness is a physical property, like digestion or fire. No matter how good a simulation of digestion you build on the computer, it will not digest anything; no matter how well you simulate fire, nothing will get burnt. By contrast, informational processes are observer-relative: observers pick out certain patterns in the world and consider them information processes, but information processes are not things-in-the-world themselves. Since they do not exist at a physical level, Searle argues, they cannot have causal efficacy and thus cannot cause consciousness. There is no physical law, Searle insists, that can see the equivalence between a personal computer, a series of ping-pong balls and beer cans, and a pipe-and-water system all implementing the same program.[citation needed]


This isn't to say that I personally agree with anything Searle says.
____________
Yolk and God bless.
---
My buddy's doing a webcomic and would certainly appreciate it if you checked it out!

blizzardboy


Honorable
Undefeatable Hero
Nerf Herder
posted June 23, 2009 11:06 PM
Edited by blizzardboy at 23:09, 23 Jun 2009.

This subject is difficult to comment on, because our understanding of human intelligence is still very limited. So we are comparing something which we haven't yet invented to something that we largely still don't understand.

On the matter of whether we "should fear them": this is something that sparks my interest. Thus far, mankind has progressed with the general opinion that progress always leads to something better, even if it can be frightening, and so far that seems to have held true. But eventually, isn't progress bound to damn us?

The danger of technology is that given enough time, it will eventually become available to everybody. The creators of the nuclear warhead knew this and dreaded it. That trickling-down effect is still in the process of happening. Nuclear technology is trickling down to more and more nations, and thus is becoming more and more readily available to everybody and anybody. Nukes are a terrifying prospect just by themselves, and they become increasingly dangerous in the hands of many. The same can be said for A.I. The potential risks of it are limited when strongly controlled by a select few people, but what do we do when it becomes more widespread? What if people can program an A.I. to be "devious" by nature? Do humans become obsolete?
____________
"Folks, I don't trust children. They're here to replace us."

Corribus

Hero of Order
The Abyss Staring Back at You
posted June 23, 2009 11:36 PM
Edited by Corribus at 23:38, 23 Jun 2009.

@Bin

Quote:
machine is not alive and has no "rights"

Well I'm not sure "aliveness" is the minimum requirement for rights.  Or rather, "aliveness" certainly is a multifaceted thing, and not all of those facets are required for rights.  For that matter, humans certainly don't treat all living things as if they have rights.  We eradicate bacteria by the zillions every day (not to mention plants, fungi, etc., etc.), for example.  What creatures DO we treat as if they have rights?  Clearly, it's the ones that, among other things, feel pain.  Probably most people would say it's the ones that are aware - conscious - of their existence and can interact with humans.  I.e., ones that we can humanize as having "feelings".  In that regard, I would think conscious AIs would qualify.

Quote:
As above, the condition isn't met. But fear? That's pretty general. I already fear both existing and potential future aspects of computers. Again, define fear. Maybe the word concern would better describe what I feel about it.

Well what I meant was - right now humans don't really have to worry about competition from other organisms.  The only organisms that are going to bring about our destruction are ourselves.  We are at the top of the heap here.  There is a danger in creating something that could one day offer that competition, if not surpass us.  Especially considering how much we rely on computers in our daily lives.  Imagine the destruction a group of AIs could wreak on our society.  Certainly enough works of fiction have been based on this idea.

Quote:
Not sure what you mean by this. Also, I kind of view it like there is no such thing as a "bit" in a computer. I see electricity flowing one direction or another (or not at all). It's either sourcing or sinking or neither.

Let's say that AIs were to become self-aware.  Could you destroy the AI by destroying the hardware?  For instance, human consciousness is intimately tied to the hardware we know as a single human body.  Destroy the body and you destroy the identity, the person (well, let's assume that to be the case).  However, perhaps an AI is not so limited.  Given the extensive network (world wide web, internet, whatever), is it possible for an AI to delocalize its consciousness so that the only way to kill it would be to destroy the whole network (a virtual impossibility)?  In that sense, perhaps an AI can attain immortality.

What I meant about the society thing was - can more than one AI exist on a single network, and if so, could the AIs form a society of sorts?

@Blizzard

Well we already have the concept of cyberterrorism - malevolent AIs could just be the logical extension of that.

@Woock

Interesting article.  I'll have to read it again when I have more time to digest it.
____________
I'm sick of following my dreams. I'm just going to ask them where they're goin', and hook up with them later. -Mitch Hedberg

TheDeath


Responsible
Undefeatable Hero
with serious business
posted June 23, 2009 11:53 PM
Edited by TheDeath at 23:55, 23 Jun 2009.

Intelligence: the ability to think up/devise algorithms to solve new problems, not just apply algorithms it already knows -- the latter we have already. When an AI is able to devise its own, I'd qualify it as intelligent.

Also the ability to learn. We have that already, but it's too simple and underpowered.

Quote:
(1) Can AIs become self-aware?
not sure if awareness is ONLY the self-programming aspect although it would make it self-thinking.

Quote:
(2) If AIs become self aware, would it be morally wrong to terminate them?
Absolutely.
I think someone really ought to 'unplug' us by now so we know how it feels.

Quote:
(3) If AIs become self-aware, do the principles of evolution apply to them?  I.e., can self-aware AIs evolve?
I think evolution, as a philosophy, is much grander than the simplistic pass-down-your-genes thing. I think that it goes through steps. This was only one step (and there have been steps before, mind you: the earliest organisms/bacteria simply duplicated rather than 'passing their genes'). The next step is, obviously, mental. Usually going from step A to step B requires leaving behind things in step A that prevent us from reaching B. Instincts are one such example for step A. AIs are another example for step B.

As for AIs evolving I clearly have no idea about it. I have no idea how the "next next" step (that is, after this next step) will be. You can only look at the next step, but never at the step after that. To be able to do so, you must first walk through the next step. I will have an idea once we reach that step, but right now, I'm clueless. One thing I know though, we'll NEVER know if we don't want to go to the next step.

Quote:
(4) If AIs become self-aware, do we (the creators) have anything to fear from them?
Only if we are tyrants, then yes.

Quote:
(5) Are self-aware AIs alive?
My opinion, yes. But it's just an opinion because the word alive is arbitrary.

Quote:
(6) Can self-aware AIs feel emotion?
not sure. I would say it's not a primary priority, although I do tend to believe they will develop emotions. Maybe not in the human sense, but who says everything has to be like us or else it "lacks life"? That arrogant, narrow-minded view of life in general is what sickens me. If it ain't got what a human got, doesn't mean it ain't got a life -- yeah it's different, but who said that life == human traits?

Quote:
(7) Can you have a "society" of individual AIs on a single "bit" of computer hardware?  What are the properties of such a society?  Could it be like a human society?
no idea what you mean by this

Quote:
(8) Do self-aware AIs need computer hardware to exist?
Define "computer hardware"? Do you mean how we have it today? Clearly it's too underpowered for such a task.

Quote:
This is an extremely difficult subject to get my arms around. But to be quite honest, I'm the type that basically believes that no, a machine is not alive and has no "rights". It's just a bunch of hardware.
You're also "a bunch of hardware". What makes your hardware more special?

Quote:
In my mind this largely boils down to a subject we try to avoid around here (religion...shhhhhh). You mentioned awareness, but I don't consider awareness a brain (or other biological) function. And therefore a computer could never truly duplicate it. I'm not offering a suggestion as to WHAT awareness is, because I don't know. I'm just separating the two.
Awareness might be classified as the ability to self-program -- that is, the ability to understand the writing of code in a logical fashion, so it can do it without the programmer's aid.

Quote:
Not sure what you mean by this. Also, I kind of view it like there is no such thing as a "bit" in a computer. I see electricity flowing one direction or another (or not at all). It's either sourcing or sinking or neither.
That is a wrong interpretation of information theory. Information theory does not say HOW you have to store a bit. Electricity is just in the CPU; on the hard disk it's magnetization, in a flash drive it's trapped electrons, on a CD it's pits, burned or factory-pressed, etc... "bit" just means 1 binary digit of data. The form it's carried in is irrelevant.
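
For instance (a toy Python sketch just to make the point; the "media" here are stand-ins, not real device access):

# The same 8 bits "stored" in three different carriers; the information is identical.
bits = "01001000"                     # the abstract information: 8 binary digits

as_int = int(bits, 2)                 # "CPU" form: an integer (voltage levels, in reality)
as_byte = as_int.to_bytes(1, "big")   # "disk" form: a raw byte (magnetization, in reality)
as_holes = ["pit" if b == "1" else "land" for b in bits]   # "CD" form: pits and lands

# Reading each carrier back yields the same bit string:
from_int = format(as_int, "08b")
from_byte = format(as_byte[0], "08b")
from_holes = "".join("1" if h == "pit" else "0" for h in as_holes)

assert from_int == from_byte == from_holes == bits
print(from_int)   # 01001000 -- one bit pattern, three carriers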
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Binabik


Responsible
Legendary Hero
posted June 24, 2009 12:03 AM

TheDeath showed up, conversation over.

I'm outa here....


TheDeath


Responsible
Undefeatable Hero
with serious business
posted June 24, 2009 12:19 AM

Except that your post wasn't much different, apart from the fact that, apparently, it didn't expect a reply.
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

Galev


Famous Hero
Galiv :D
posted June 24, 2009 12:52 AM
Edited by Galev at 00:53, 24 Jun 2009.

If only I had more time... and less exams

TitaniumAlloy


Honorable
Legendary Hero
Professional
posted June 24, 2009 09:25 AM

It would absolutely not be morally wrong to terminate them.

That is mental.



I don't care how advanced they are; there is nothing wrong with deleting some bot on Second Life, closing a YouTube video, or shutting down MS Paint.
____________
John says to live above hell.

JollyJoker


Honorable
Undefeatable Hero
posted June 24, 2009 09:36 AM

I think that you are all making a very fundamental mistake here.
What we have and see NOW is the result of a couple hundred thousand years of development, and if you don't like that, it's at least the result of a couple thousand years.
If every human had to solve each problem anew, we'd still be quite primitive.
So what may APPEAR to be intelligence is - on second look - just the result of lots of time, memory, and the transfer of a certain percentage of that memory to the next generation.

Most members of the human species are definitely not able to find new solutions for new problems - they are not even able to identify a problem. Still, we agree that as a species we are intelligent, at least most of the time.

Intelligence is the ability of abstraction: to discern the PATTERN or the IDEA or fundamental NATURE behind a certain something. Grasping the idea or "defining properties" or fundamental intrinsic nature of a circle (as in, what makes a circle a circle) will allow you to RECOGNIZE the PATTERN circle everywhere. I believe that this is the key to genius as well: UNOBVIOUS pattern recognition.
Actually, this is what we are attempting right now (defining "intelligence"), and this IS intelligence.

Based on that, we are at least one full quality dimension away from real AI - I don't think that the algorithmic way works in that respect; my guess would be more on quantum effects and jumps.
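
To make the circle example above concrete, here is a toy sketch (the centroid estimate and the tolerance are crude simplifications of mine, not a serious recognizer):

import math

def is_circle(points, tol=0.05):
    # The "defining property" of a circle: all points share one distance
    # (the radius) from a common center. Recognize the pattern by checking it.
    cx = sum(x for x, _ in points) / len(points)   # crude center: the centroid
    cy = sum(y for _, y in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    mean = sum(dists) / len(dists)
    return all(abs(d - mean) <= tol * mean for d in dists)

# points sampled from a circle of radius 2 centered at (1, 1)
circle_pts = [(1 + 2 * math.cos(t / 10), 1 + 2 * math.sin(t / 10)) for t in range(63)]
# two parallel line segments: clearly not a circle
other_pts = [(x, 0) for x in range(5)] + [(x, 4) for x in range(5)]
print(is_circle(circle_pts))   # True
print(is_circle(other_pts))    # False

A program like this only checks a pattern someone already abstracted for it; the hard part, the UNOBVIOUS abstraction itself, is exactly what current AI lacks.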

TitaniumAlloy


Honorable
Legendary Hero
Professional
posted June 24, 2009 10:00 AM

Quote:
Absolutely.
I think someone really ought to 'unplug' us by now so we know how it feels.

TheDeath, you are a fundamentally scary person. Maybe it's in the name, but would you be offended if I told you that I would very much like to never meet you?
____________
John says to live above hell.

bixie


Promising
Legendary Hero
my common sense is tingling!
posted June 24, 2009 10:41 AM

Quote:

(1) Can AIs become self-aware?

(2) If AIs become self aware, would it be morally wrong to terminate them?

(3) If AIs become self-aware, do the principles of evolution apply to them?  I.e., can self-aware AIs evolve?

(4) If AIs become self-aware, do we (the creators) have anything to fear from them?

(5) Are self-aware AIs alive?

(6) Can self-aware AIs feel emotion?

(7) Can you have a "society" of individual AIs on a single "piece" of computer hardware?  What are the properties of such a society?  Could it be like a human society?

(8) Do self-aware AIs need computer hardware to exist?

Thoughts?


yeah

have you been watching The Matrix trilogy or Tron recently?
____________
Love, Laugh, Learn, Live.

TheDeath


Responsible
Undefeatable Hero
with serious business
posted June 25, 2009 01:10 AM

Quote:
I think that you are all making a very fundamental mistake here.
What we have and see NOW is the result of a couple hundred thousand years of development, and if you don't like that, it's at least the result of a couple thousand years.
If every human had to solve each problem anew, we'd still be quite primitive.
So what may APPEAR to be intelligence is - on second look - just the result of lots of time, memory, and the transfer of a certain percentage of that memory to the next generation.

Most members of the human species are definitely not able to find new solutions for new problems - they are not even able to identify a problem. Still, we agree that as a species we are intelligent, at least most of the time.

Intelligence is the ability of abstraction: to discern the PATTERN or the IDEA or fundamental NATURE behind a certain something. Grasping the idea or "defining properties" or fundamental intrinsic nature of a circle (as in, what makes a circle a circle) will allow you to RECOGNIZE the PATTERN circle everywhere. I believe that this is the key to genius as well: UNOBVIOUS pattern recognition.
Actually, this is what we are attempting right now (defining "intelligence"), and this IS intelligence.

Based on that, we are at least one full quality dimension away from real AI - I don't think that the algorithmic way works in that respect; my guess would be more on quantum effects and jumps.
You make valid points but I cannot say that I agree fully with what you said.

Because obviously, we already store most of our knowledge in computers these days, and access to the internet gives it all to a single computer -- but computers are, obviously, unable to think about what to do with it unless they are pre-programmed in advance.

Intelligence and knowledge are two unrelated things, mind you. Intelligence requires thinking or imagination; knowledge just requires a library of information (which we already have in computers -- actually that IS the thing they excel at compared to us: they never forget, etc. etc.). Making SENSE out of the data is the problem here, not storing the data itself (knowledge).

Quote:
TheDeath, you are a fundamentally scary person. Maybe it's in the name, but would you be offended if I told you that I would very much like to never meet you?
You think I have something personal against people, or that I'm a psychopath or something?
____________
The above post is subject to SIRIOUSness.
No jokes were harmed during the making of this signature.

baklava


Honorable
Legendary Hero
Mostly harmless
posted June 25, 2009 02:03 AM

I think that time and advancement will blur the line between what we call life and what we call machines.

If it acts like an intelligent being, if it reacts like an intelligent being, if it reasons, communicates and even goes far enough to improve itself, reproduce, simulate love...
Then does it really matter whether that's just a set of actions and reactions inside an ever-calculating, cybernetic mind? Can we be certain that, in fact, we are more than that? That everything we feel isn't a similar, though so far incomprehensibly advanced process?

The distinction between organic and cybernetic life then seems relevant only when it comes to materials they're made of, and their origin.

And even origin is negotiable. As I see it (and feel free to correct me if I'm wrong), there are two main theories of our origin - higher power and chance. The theory of a higher power tells us that a being incomprehensible and eternal to us created us; just like we could create the cybernetic organisms. In fact, perhaps that higher power would want that, perhaps that would fit the greater plan perfectly; and perhaps that's one of the reasons we were created too. To eventually learn to create a type of life ourselves.
And there's the theory of chance, which teaches us that we were created through chemical processes and, well... chance. Which would mean that we are, just like those robots, comprised of complex chemical and physical processes and materials.

As for whether it would be right or wrong to use them or treat them as property...
We have always used less advanced living creatures like we use machines. We feed cows and then milk them; we feed chickens and then take their eggs, we grow crops, we plant and cut down forests, etc. We use them as organic machines since we cannot sustain ourselves otherwise. Whether that is wrong or not depends on your point of view; more people will say that something is wrong if it's more similar to us. You don't have organizations which protect sugarcane; you have organizations which protect the rights of cows and sheep etc. Why? Because cows and sheep are more advanced, and thus more similar to us.
Incidentally, people would still see nothing wrong with using an oven or a dishwasher but some (many?) would feel that there is something not right about treating robots who think and act like us as something which isn't even alive. That is only natural, perhaps some robotic reaction of our mind to feel a closer link to something which is similar.

It's a complicated issue anyhow, and though many would say that we are a long way away from that, we advance faster and faster. At this pace, that "long way" might come in less than a century, and catch us completely unprepared. If we don't think it all through on time, the consequences might be quite unpleasant.
____________
"Let me tell you what the blues
is. When you ain't got no
money,
you got the blues."
Howlin Wolf
