Intelligent artificial intelligence
Witless
I am doing an essay on this subject and have been doing a fair bit of research on it.

Wow, I wasn't really aware of how deep this topic gets; it kinda forced me to re-evaluate a lot about how people are and what makes us tick. I have found myself falling into long vegetative states quite a lot recently, just pondering it all. I can only hope my drooling on the underground didn't scare my fellow rush hour friends on my morning run to university!

Ok... to clarify. I am basically bringing up the issue of whether you can, or could ever, consider artificial intelligence self-aware or conscious.
There are really only four possible answers I can think of, and those are:

  1. Artificial intelligence is not and will never be conscious.
  2. Artificial intelligence won't be conscious in the same way humans are.
  3. It may be possible one day, but we're not there yet.
  4. It already is, in a way; we just don't recognise it as consciousness.


I sit quite firmly on answer 3. I used to dance around answers 2, 3 and 4, but after drowning myself in the topic I now feel fairly comfortable with the idea that we just haven't got there yet.

However... I am no longer sure whether we should. In researching what it would take to do it, I feel rather uncomfortable at the idea of creating human-like self-awareness in the realm of digital space.

Ok, onto what I have learnt. The reason we apparently haven't got there yet is the type of mathematics and logic we use when we program computers. It's not the hardware; it's all the software. Maths is designed to fall apart at the first sign of a logical error. That's a useful feature: it means we know very easily when we've gone wrong. It's none too helpful for biological-style intelligence, though, which works on a very different system.

In a computer, blue equals a short line of code that tells the machine to stimulate the compounds on a screen to glow a blue colour. To a person, blue means sky; it may also mean water, or the colour of the wallpaper of your second girlfriend, or maybe your favourite piece of art. Things relate to each other in an organic way that reflects our experience. Plus, to make matters worse, they can relate to each other in an almost infinite number of ways.

Could we code all that into a computer? Doubtful. But what you could do is code a blank canvas, modelled on a young human, that created its own associations over time based on its own experiences. Such a piece of software would gain its own personality based on who and how it interacted with the things it was capable of interacting with.
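
To make that concrete, here's a toy sketch of the kind of thing I mean (entirely made up by me; a real system would be unimaginably richer). Nothing is coded in about what blue 'means'; the associations just accumulate from whatever the thing happens to experience:

CODE
# Toy associative memory: nothing about blue is hard-coded; links
# between concepts just accumulate from experience.
from collections import defaultdict

class Associator:
    def __init__(self):
        self.links = defaultdict(float)   # (concept, concept) -> strength

    def experience(self, *concepts):
        # every co-occurrence strengthens the link between each pair
        for a in concepts:
            for b in concepts:
                if a != b:
                    self.links[(a, b)] += 1.0

    def associations(self, concept, top=3):
        # what does this concept bring to mind, for THIS individual?
        related = [(b, s) for (a, b), s in self.links.items() if a == concept]
        return sorted(related, key=lambda pair: -pair[1])[:top]

mind = Associator()
mind.experience("blue", "sky")
mind.experience("blue", "sky")
mind.experience("blue", "water")
mind.experience("blue", "wallpaper")
print(mind.associations("blue"))   # sky first, because it came up most

Two such 'individuals' fed different experiences would end up with different associations for blue, which is the whole point.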

It would grow as an individual pretty much as we do. It would have favourite people, and probably emotions too.

That all scares me. I don't believe in the Terminator 3 scenario where it would attempt to destroy us all; I am more scared of the idea of modelling it so closely on us. A lot of evidence points to the idea that we're as intelligent as we are for social reasons. Social interaction takes a lot more brain power than most people realise, and burns up a lot of the brain's resources. Creating an AI that is so limited in the ways it can interact, yet would require just as much social interaction as we do, seems a bit wrong somehow.

I'd also worry that people would never see such an intelligence, no matter how like us it became, as anything but a glorified toy.

The more I read up on things like 'imprinting', meaning that the AI would indeed imprint on people the way we imprint on our parents and the people we care for, the more it sounds like we would be creating 'life', but then proceeding to treat it like an object.

I am both amazed and fascinated, but apprehensive about what would happen if things went ahead, which my research seems to suggest they will.

Anyways, this is all assuming it could happen, and as I said, I think it could. Computers are finally getting powerful enough to meet the brain on processing power, but the software on them is still too linear and rigid to truly do anything biological life can. But that doesn't mean things will stay that way.

Do you guys think it's never going to happen? And why? Also, what do you think the consequences of such 'life' existing would be?
Mata
Creating life but then treating it like an object? That sounds an awful lot like slavery to me, leading to the likelihood that there would eventually be an emancipation of the robots... which all makes 'The Matrix' sound a lot more likely!

Personally I think that there is already a form of consciousness online, but it's not self-aware so doesn't really have any form of expression. That's just a hunch rather than any reasoned argument, and may be a result of the lack of religion in society (lack of god leads to the mind looking for replacements, etc.).
pgrmdave
I believe that any looping feedback system will show 'intelligence'. Evolution is a great example of this: it finds the most successfully reproductive forms of life. The system (evolution) has a test case (a species). The test case has a success rate. The successes are fed back into the system and the failures are not. This continues for all time, simply with changing environmental variables. It is because of this that people see 'intelligent design' as a valid theory. It does seem to be the product of intelligence; they simply misplace where the intelligence is.
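
In code, the loop I'm describing is tiny. This is only a toy (the 'fitness' function and all the numbers are invented), but it shows the shape: successes fed back in, failures discarded, environment drifting underneath:

CODE
# Toy version of the loop: successes are fed back in, failures are
# not, and the environment drifts underneath the whole time.
import random

def fitness(creature, environment):
    # invented 'success rate': closeness to whatever the environment favours
    return -abs(creature - environment)

population = [random.uniform(0, 100) for _ in range(20)]
environment = 50.0

for generation in range(200):
    environment += random.uniform(-1, 1)   # changing environmental variables
    ranked = sorted(population, key=lambda c: fitness(c, environment), reverse=True)
    survivors = ranked[:10]                # the failures are not fed back
    population = survivors + [s + random.gauss(0, 2) for s in survivors]

print("population mean:", round(sum(population) / len(population), 1),
      "environment:", round(environment, 1))

Run it a few times and the population tracks the drifting environment. There's no intelligence in any of the parts, but the loop as a whole 'finds' answers.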

However, intelligence is not equal to self-awareness, or consciousness. Those things are much more difficult to understand and pin down into words. What exactly constitutes self-awareness? Are insects self-aware? Fish? Frogs? Birds? Mice? Dogs? People? I don't think that I'll ever see self-aware computers, nor do I know that I'd recognize the awareness if I saw it.
Witless
QUOTE (pgrmdave @ Mar 25 2007, 05:13 AM) *
I believe that any looping feedback system will show 'intelligence'. Evolution is a great example of this. It finds the most successfully reproductive forms of life.


However, human intelligence doesn't work like that, which is why (as I said in my first post) attempts to reproduce human intelligence with if-statement logic will fail forever.

A new form of mathematical logic has to be invented and converted into computer code in order to recreate anything similar to how humans function; at least, that's what my research suggests.
pgrmdave
I would say that human intelligence does work that way. We have the brain, which is in a certain state. Information comes into the brain in the form of our senses. We act based on that information. That action changes both the information coming into us and our brains. In addition to this, the brain feeds back into itself. Neurons that fire affect the neurons around them which in turn affect the neurons around them...I think the brain is definitely a feedback system.
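
Stripped right down to a sketch (nothing like real neurons, just the shape of the loop I mean), it's something like:

CODE
# Crude sketch of the loop: the new state depends BOTH on what comes
# in through the senses and on the state that was already there.
def step(state, sensory_input):
    internal_echo = 0.9 * state      # the brain feeding back into itself
    return internal_echo + 0.1 * sensory_input

state = 0.0
for sensory_input in [1, 1, 1, 0, 0, 0]:
    state = step(state, sensory_input)
    print(round(state, 3))           # the past keeps shaping the present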
Witless
QUOTE (pgrmdave @ Mar 25 2007, 03:05 PM) *
I would say that human intelligence does work that way. We have the brain, which is in a certain state. Information comes into the brain in the form of our senses. We act based on that information. That action changes both the information coming into us and our brains. In addition to this, the brain feeds back into itself. Neurons that fire affect the neurons around them which in turn affect the neurons around them...I think the brain is definitely a feedback system.


It's not that simple, sadly; it's down to the sheer number of ways everything relates to everything else in a person's mind. A computer's 'mental' process could be represented by if statements.

If situation A happens, execute action B; if not, execute option C.

The human mind would be closer to: if situation A happens, simultaneously bring up plans B-Z and compare them. Choose based on context, mood, experience (and other things specific to the individual), then execute and then reflect. Computers and physical evolution work in a very linear fashion; human thinking branches out like a spider diagram, with each situation causing a branching out of loads of possibilities, each of which in turn branches out, forming multitudes of reflections.
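
If I had to caricature the difference in code (a toy sketch; the plans, the 'mood' number and the scoring are all invented for illustration), it would be something like:

CODE
# Caricature of the difference. The computer: one linear chain of ifs.
def computer(situation):
    if situation == "A":
        return "action B"
    return "option C"

# The 'human': bring up plans B-Z at once, weigh them all against
# context, mood and experience, pick one, execute, then reflect.
def human(situation, context, mood, experience):
    plans = [f"plan {chr(c)}" for c in range(ord("B"), ord("Z") + 1)]
    def appeal(plan):
        # invented scoring: every factor pulls on every plan differently
        return context.get(plan, 0) + mood * (ord(plan[-1]) % 5) + experience.get(plan, 0)
    best = max(plans, key=appeal)
    reflection = f"remember how {best} went"  # would feed future 'experience'
    return f"{best} chosen in situation {situation}", reflection

print(computer("A"))
print(human("A", context={"plan D": 3}, mood=1, experience={"plan K": 2}))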

To try and describe it more clearly... a computer will keep asking the same question indefinitely, grinding through all the possible ways of figuring out a solution it has been programmed with, until it finally finds an answer. If a person encounters a problem and tries something that fails, that failure will cause them to question the problem itself. A computer based on pure linear if-statement logic would never go back, question a problem it hasn't yet got an answer for, and then analyse the way it has been trying to solve it up until that point.
Humans, and in fact all animals with brains, do. Humans just excel at it to such a level that we can create tools and call ourselves 'creative'. But even simpler animals, to lesser degrees, question the very way they have been attempting to solve a problem.

Linear computer programming cannot do that. Evolution cannot either. Humans would be great with wings; it would solve the problem of greenhouse gases today if we could fly ourselves. But without a systematic set of evolutionarily useful steps to get there, we're at no point going to evolve wings.

Neural networks are ingenious in their ability not to repeatedly feed back around the same path. A neural network can form so that "plan B has failed, time to go and edit situation A so that it is solvable". A true feedback system does not change the situation; it can only change the plan of execution.
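
In toy form (this isn't how a real neural network works; it's just the shape of the distinction I mean, with made-up plans and a made-up situation):

CODE
# Toy contrast: a pure feedback system can only swap plans; the other
# kind of system can go back and change the situation itself.
def try_plan(plan, situation):
    return plan in situation["solvable_by"]

situation = {"solvable_by": []}  # as posed, NO plan works

for plan in ["plan B", "plan C", "plan D"]:
    if try_plan(plan, situation):
        print("solved by", plan)
        break
else:
    # the re-questioning step: every plan failed, so go back and
    # edit situation A itself until it IS solvable
    situation["solvable_by"].append("plan B")
    print("reframed the problem; plan B now works:", try_plan("plan B", situation))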
Apollyon
QUOTE
The human mind would be closer to: if situation A happens, simultaneously bring up plans B-Z and compare them. Choose based on context, mood, experience (and other things specific to the individual), then execute and then reflect. Computers and physical evolution work in a very linear fashion; human thinking branches out like a spider diagram, with each situation causing a branching out of loads of possibilities, each of which in turn branches out, forming multitudes of reflections.


Yet that is simply an extremely complex feedback system, where the senses don't analyze just a few elements of the situation, but all of them. For example, when you bring up plans B-Z, you do have a logical order in thinking of them; you don't consider them all at the same time. You consider them individually in quick succession, rule out those immediately inaccessible, and then choose the best based on the entire situation, including previous errors and successes, etc.
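
Something like this toy loop, I mean (the plans, scores and numbers are all invented):

CODE
# Toy version of what I mean: plans considered one after another in
# quick succession, the inaccessible ones ruled out immediately, then
# the best of the rest chosen in light of past errors and successes.
plans = {"plan B": 4, "plan C": 7, "plan D": 9, "plan E": 6}
accessible = {"plan B", "plan C", "plan E"}   # plan D ruled out straight away
past_failures = {"plan C"}                    # previous errors count against

best, best_score = None, float("-inf")
for plan, value in plans.items():             # individually, in succession
    if plan not in accessible:
        continue
    score = value - (5 if plan in past_failures else 0)
    if score > best_score:
        best, best_score = plan, score

print(best)  # plan E: accessible, and not burned before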

The only reason I can see that the human brain would seem different from a computer is the sheer complexity of how it reacts, and its hundreds of millions of neural pathways.

And going back to the original question: I'm not sure if creating self-awareness would ever be possible, but I think if we did create it, we wouldn't recognize it. Actually, I'm quite sure we could eventually create a facsimile of self-awareness that would react in the same ways as something that was self-aware, but I'm unsure, in a situation such as that, where we could draw the line between "real" self-awareness and "fake" self-awareness, and whether there even is a difference between the two.

My views are actually somewhat conflicted on the subject, so I hope that makes sense...
Witless
For a slightly more comprehensive view of why computer and human minds are different, have a read here if you have time.

All attempts to date at creating intuitive AIs have been based on ever-smarter feedback systems and if statements, and what's been created so far isn't really an increase in intelligence; it's just systems that are ever more clever at mathematics. Hence, as above, there has been an effort to break programmers out of the frame of mind of thinking in terms of circular feedback systems, even complex ones.
Apollyon
I think I'm getting confused as to what we are classifying as "intelligence". Is there a common definition? Perhaps "the ability to come up with answers to problems without being told"?

Edit: Heh. I have now read the link, and one of the first lines is "setting aside the question of what thinking actually is..." Oh well...

Edit Edit: Very interesting, although at times quite self-contradictory. I see how a difference might be seen in the way human minds and current computer processes work, but I am still of the opinion that they are basically the same method, with one simply being fantastically more complex than the other.

Edit Edit Edit (maybe I should have just included these in the original post): This reminds me a bit of Poe's "Philosophy of Composition", but only in an abstract, human-mind-as-a-machine sort of way. A bit spammy, sorry...
pgrmdave
I wonder, how can one prove self-awareness? Can you prove that you are self-aware? Or do we simply believe that we ourselves are self-aware, and thus assume that all people are?
Apollyon
Sounds like a bit of quintessential solipsism there! ;)

/spam