Overcoming the Amygdala Part 70
If human beings had no inner worlds at all, they would probably be fairly logical creatures.
They would observe factors in their surroundings and come to rapid and accurate determinations of what needed to be done. If they made an error because something wasn’t quite perceived correctly at any particular time, it would simply be a matter of recalculating the situation and issuing new instructions. Very machine-like, in other words. I expect a science fiction robot would operate along these lines, and most artificial intelligence being developed today probably has this kind of operation in mind.
It’s well known, and easily observable, however, that human beings are not logical in this way. Yes, they observe their surroundings, but often come to erroneous or inadequate conclusions about them, leading to incorrect or non-optimum decisions and reactions. That’s the downside. The other aspect is that human beings are capable of interacting with their surroundings in ways which create all kinds of things far beyond the limited abilities of a machine.
Why is this so?
Well, it's pretty clear that something else is involved when a human being looks around at the external world. From somewhere within, human emotion and imagination, with elements of the unknown, enter the picture, blurring the scene or adding to it, depending on the context. Instead of seeing a street, we see the route we walked as a child to school, or a pathway resembling something faintly mythical, or a road leading to a hospital with associations of trauma. In other words, we see more than a robot would detect. We see the external world plus what we project onto it.
The amygdala, in this scenario, prompts the individual into fight/flight reactions often after he or she has already projected onto the environment in this way.
But if you were a robot, operating entirely logically, things would be a little different: your ‘scanners’ wouldn’t necessarily be projecting anything onto anything. As a robot, you would simply carry on going about your business — no projection, no sense of ‘departure’, no bypassing of consciousness to switch on your nervous system. A robot’s equivalent of a ‘projection’ would be its databank as programmed into it by a human being: the human being would determine the threat level of factors in the environment. In the absence of such items in the databank, there would be neither a reaction nor any need to bypass consciousness because ‘consciousness’ as we understand it would not be present.
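The robot's databank described above amounts to nothing more than a lookup table: each factor's threat level is fixed in advance by the programmer, and anything absent from the table produces no reaction at all. A minimal sketch of that idea (the entries, names and threshold here are invented purely for illustration):

```python
# A robot's 'projection' reduced to a programmer-supplied lookup table.
# All entries and values are invented examples, not real robotics code.
THREAT_DATABANK = {
    "snake": 0.9,
    "cliff edge": 0.8,
    "butterfly": 0.0,
}

ALARM_THRESHOLD = 0.5  # decided by the human programmer, not the robot


def react(observation: str) -> str:
    """Return the robot's response to a single observed factor."""
    level = THREAT_DATABANK.get(observation)
    if level is None:
        # No databank entry: no reaction, and no 'consciousness'
        # to bypass -- the factor simply does not register.
        return "carry on"
    if level >= ALARM_THRESHOLD:
        return "avoid"  # the programmer's pre-defined 'threat' response
    return "carry on"
```

Note that the "judgement" never belongs to the machine: change the table or the threshold and the behaviour changes with it, which is the sense in which the human being, not the robot, determines the threat level.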
The robot’s behaviour might be argued to be entirely logical, possessing no emotional baggage whatsoever. The robot isn’t doing any projecting, only receiving, and therefore it behaves ‘rationally’ in relation to the environment it can ‘see’. Its attitude makes perfect sense, but it is also flat and lifeless. Like the machine that it is, a logical robot would simply trundle on through the jungle, ignoring threats entirely (unless its human programmer had decided otherwise). The big difference between a robot’s behaviour and that of a human in the same conditions is that a human has this mysterious thing called ‘consciousness’, one of the results of which is that it is projecting something onto the surroundings.
Consciousness, then, is not just a rational interaction with an external environment.
Logic is all very well, in other words, but being entirely logical means not being completely human.
Being human means having a quality of consciousness which has something to do with an ability to mix inner and outer worlds.
Logic is the subject of reasoning. The ability to reason is vital to an individual: a person who cannot think clearly will not be able to reach the conclusions needed to make correct decisions. But what determines what a 'correct' decision is?
The projection of an ideal.
Agencies, governments, societies, groups and individuals can capitalise upon a lack of logic, and have done so for a very long time, keeping populations, personnel or partners ignorant. People who are unable to think or reason can be manipulated easily.
But thinking sensibly isn’t totally about logic.
Robots can ‘think’ logically and roll forward with actions, but the rightness or wrongness of an action can only be determined by comparing it against a desired goal or set of goals. Robots don’t and can’t ‘desire’ anything; they don’t have inner worlds, or dreams, or interconnected understandings of meaning.
If a human being wants to interact with the external world, he or she needs to project something onto it in order to decide on a right action.
That projection can be anything, but it needs to exist, because only against a projection can a departure be detected.
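The point can be put in almost mechanical terms: a departure only registers when there is an ideal to depart from. A tiny sketch of that comparison (the function name and values are purely illustrative):

```python
from typing import Optional


def departure(ideal: Optional[float], observed: float) -> Optional[float]:
    """Return the departure of an observation from a projected ideal.

    If no ideal has been projected (ideal is None), no departure
    can be detected: there is nothing to measure against.
    """
    if ideal is None:
        return None
    return observed - ideal
```

With a projected ideal, any deviation from it shows up immediately; remove the projection and the very notion of deviation disappears, which is the essay's point in miniature.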
There’s been a tendency over the last century to think that some kind of complex mathematics of human behaviour can serve as a logical template against which everything can be measured, but the sheer complexity of human life and the vast number of factors involved make mathematics utterly inadequate: even if it were somehow possible to compute using some kind of galaxy-sized algorithm the behaviour of human individuals and the world around them so as to predict futures — as is partly envisaged in Isaac Asimov’s Foundation stories — there is still an absence of morality, of right and wrong, of quality, integrity, good and evil, optimum and non-optimum.
Because there would be an absence of projection.
Computers are only servo-mechanisms: whether their answers are of any use depends upon who asks the questions and who reads the answers. They can't think, because even the complete rules of logic aren't enough to decide on correctness.
A computer doesn’t know good from evil.
However, having said all of that, logic is still tremendously important, as we will see.