  #255  
Old March 24th, 2003, 01:30 PM
dogscoff
General
 
Join Date: Mar 2001
Location: UK
Posts: 4,245
Re: [OT] Plato's Pub and Philosophical Society

Quote:
If humans have free will, then do dogs have free will? How about fish? Insects? Bacteria? Where do we draw the line? In my opinion, if we say that humans have free will, then all life forms must have it also.
I think there is a line somewhere, albeit a very fuzzy one. It all comes down to the complexity of the brain. Chimps, elephants, dolphins, dogs, cats... these animals and others have a lot more 'human' qualities than are generally ascribed to them. For example, I think some animals definitely have a sense of humour and a capacity for human-like emotion. I also think that complexity isn't enough - brain complexity is only potential intelligence/ free will/ sentience/ self-awareness/ whatever we're calling it. You need to fill that complexity with experience and memory for it to become self-aware. For that reason I don't think all dogs are necessarily sentient - just the ones which have had sufficient mental stimulation and interaction to become self-aware. Likewise, a new-born baby is not a sentient creature - it's just a potentially sentient one. (That doesn't mean I value babies any less than anyone else does, though.) At some point in their development they cross the barrier and become self-aware.

Quote:
If it is possible to arrange a collection of atoms in such a way as to have free will (as in a human brain), then in theory it must also be possible to construct a machine that has free will.
I think we will see AIs in our lifetimes. I agree with Quarian that they will have to operate in vastly different ways to us. The difficulty will not be building the "brain" - the potential intelligence - but in filling that brain with useful experience and interaction. Until we can build a viable robotic body for an AI, it will exist inside some static mainframe-box and will have to get all its experiences second-hand (through encyclopedias, the web, media, conversation with humans, etc.) or via virtual simulations. Either way, it will be very difficult for an emergent AI to relate to humans, because its store of experiences will be so different to ours. Also, early AIs will not have any need for motivation and so will not be given any - this will make them even more alien to creatures like us that are driven by biological and societal motivations. Later AIs, especially more mobile ones, will be given desires and drives - self-preservation, empathy, the desire to achieve, to learn and so on.

This will eventually make them easier for us to accept, but the first few years of human/AI relations will be very difficult. People will fear AIs as a threat (I can see the "Frankenstein" headlines in parts of the British press now), and their initial alien-ness will mean people either refuse to accept their intelligence and treat them as dumb machines (effectively consigning them to slavery), or block further development in AI tech, or both.

I would like to see human rights organisations pre-empt AI technology by defining NOW what constitutes an artificial intelligence for the purposes of assigning it certain rights and protections. Unfortunately I don't think this is likely to happen, and AIs will be used as cheap labour, no doubt programmed to obey (as in Asimov's Second Law of Robotics: "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.")

Once we are able to put AIs in human-like bodies, they will be able to gather their experiences in much the same way that we do; they will be much easier for us to relate to, and (some) humans will be able to accept their status and sympathise with them. Then the struggle for AI rights will begin, with economic interests trying to keep them in chains. However, I doubt this will manifest itself in the kind of Terminator 2-style apocalypse postulated by the likes of blatant self-publicist Kevin Warwick, because AIs will be fundamentally safe: although Asimov's positronic brain and Three Laws are really pure technobabble, I'm sure human fears will make sure some kind of coded inhibition against anti-social behaviour is implemented.

Which brings us back around to free will...

[ March 24, 2003, 11:35: Message edited by: dogscoff ]