In 1887 almost every philosopher in the English-speaking countries was an idealist. A hundred years later in the same countries, almost all philosophers have forgotten this fact; and when, as occasionally happens, they are reminded of it, they find it almost impossible to believe. But it ought never to be forgotten. For it shows what the opinions, even the virtually unanimous opinions, of philosophers are worth, when they conflict with common sense.
Not only were nearly all English-speaking philosophers idealists a hundred years ago: absolutely all of the best ones were…In general, the British idealists were…good philosophers. Green, Bosanquet, Bradley, and Andrew Seth, in particular, were very good philosophers indeed. These facts need all the emphasis I can give them, because most philosophers nowadays either never knew or have forgotten them, and indeed…they cannot really believe them. They are facts, nevertheless, and facts which ought never to be forgotten. For they show what the opinions even, or rather, especially of good philosophers are worth, when they conflict with common sense. (They therefore also throw some light on the peculiar logic of the concept ‘good philosopher’: an important but neglected subject.)
David Stove, “Idealism: a Victorian Horror-story (Part One)” in The Plato Cult and other Philosophical Follies, 1991, Basil Blackwell, Oxford, p. 97; emphasis original.
The current near, or would-be, consensus is that we are all slaves to our neurons, or perhaps genes, or both; or maybe our environment, or class situation, or anything; anything which denies our free will and exonerates us from culpability.
Of course, it would be a fallacy to say, as some of you are tempted to say, that any consensus should not be trusted, because there are plenty of truths we all, philosophers or not, agree on. The only lesson for us is that the presence of a consensus does not imply truth. And maybe that some fields are more prone to grand mistakes than others.
Update: Stove on the Science Mafia in the Velikovsky affair.
The problem I see with “exonerates us from culpability” is that it is often equated with a free pass on the consequences. Just because one may not be a serial killer by choice shouldn’t mean one evades at least the same fate as Typhoid Mary.
The absence of free will (should it be true) may exonerate us from culpability but should not let us escape sanction. If anyone behaves in an anti-social way, society has the right to protect itself. Besides, “punishment” or the threat thereof does sometimes alter behaviour, sometimes even for the better.
Stove’s “Darwinian Fairytales” is a good read, too. Alas, Stove is dead.
I subjectively refuse to be a slave to western (i.e. mostly American) empiricism. Without a balance between empiricism and reasoning of the mind there is no common sense. The empiricist denies faith but has faith in empiricism (the so-called Ishmael Conundrum). The idealists are at least correct that nothing can actually be proved empirically the way it can be proved by reason, as in mathematics. The neurons, genes, environment, class situation, or whatever physics may supposedly deny our free will can only be asserted in estimated degrees of probability, not as fact. No matter what the current “consensus” may dictate, as Isaiah Berlin once said, “The will of the creator obeys no law.”
I like turtles.
Thanks for the link to the Velikovsky Affair.
I was blessed in grad school to be surrounded by a group who introduced me to “Worlds in Collision”. It was a favourite subject of many a coffee session and dinner discussion. What I remember is that Velikovsky was treated with decent respect by all of us. We were perhaps blessed in having a senior professor who was keen on continental drift at a time when the concept was treated with scorn by mainstream geology, and Professor Carey of the University of Tasmania was a pariah, unable to publish in major geology journals.
How, in a determinist universe, could a threat alter behavior?
If we truly are not free, we simply punish or do not punish “anti-social” acts because the causal stream within which whatever mass of humanity we happen to be talking about stands necessitates that we do so. Moreover, the determination of acts as “anti-social” or not is entirely a result of these same entirely determined, entirely non-rational factors. Given determinism, no one is “to blame” for their actions, any more than the stones of an avalanche are to blame for smashing an unwary hiker. To punish them is as absurd as putting those stones on trial, but of course, given determinism, they must be punished, because things simply could not be otherwise. It simply means that justice is an absurdity, utter nonsense.
There are a lot of steps/assumptions in the argument that I’m clearly missing here.
Why do you think consisting of neurons in a body built by genes makes us “slaves” to them? How can you be a slave to yourself?
Why do you think consisting of neurons/genes contradicts free will?
Why do you think free will and physics (determinism, etc.) are incompatible?
Why on Earth would you think a lack of free will (in the sense implied by physicalism) “exonerates us from culpability”?
What do you mean by “free will”, anyway?
I have never understood Velikovsky’s appeal to the humanities. As a graduate student I witnessed a sort of revival of the mania at the university I attended. This was in the seventies and many of the arts professors, with their students, fell over each other in their adulation in a sort of cult of personality as near as I could tell. There was even a conference on the matter that I attended – it was hilarious. Maybe they thought that science had sustained a black eye in the exchange and this was all that mattered to them in their resentment of scientific achievement. In the decades since cooler heads have realized the vacuity of Velikovsky’s theories and he is all but forgotten. The humanities have moved on to other fads and fashions as if past indiscretions did not exist. Stove’s partial support of this makes him less sui generis than you state.
To Dan,
Quote “How, in a determinist universe, could a threat alter behavior?”
Isn’t this exactly what one would expect in a deterministic universe? Effect follows cause!
Heh. I’ve just been reading that Velikovsky link at the bottom of the post.
“This, among other things, led him to assert in 1950 that the clouds of Venus must be very rich in petroleum gas. All contemporary knowledge of the chemistry of the planet’s clouds was flatly against it. Yet it has turned out to be so.”
Scotian:
Because, given determinism, I can’t be convinced of the merit of doing or not doing something as a result of a reasoned reaction to a threat. I can’t consider Action A and Action B and decide upon Action B as a result of possible consequences, as that would entail free choice. At best, if we can reduce a threat to a physical thing (a difficult proposition in itself; how much does a threat weigh?), then we can say that that physical thing interacted somehow (again curious: how does a physical thing, presumably in the brain chemistry of one being and expressed through arbitrary sounds or through arbitrary markings on a piece of paper, jump into my own brain?) with my own brain, bringing about a further physical reaction. The threat as threat had no effect whatsoever; it was merely a physical process occurring wholly beyond the control of any of the beings (not agents, note) involved.
“Because, given determinism, I can’t be convinced of the merit of doing or not doing something as a result of a reasoned reaction to a threat. I can’t consider Action A and Action B and decide upon Action B as a result of possible consequences, as that would entail free choice.”
Why?
Consider a chess-playing computer program. What it does is start from the current position, generate all possible future positions that can be reached by four or five legal moves, evaluate each position to count up whether it has lost pieces, has pieces left unprotected, etc., and then pick the move giving the best worst-case.
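For concreteness, here is a minimal sketch of that kind of deterministic look-ahead (a toy take-away game rather than chess, so it stays self-contained; the game is merely illustrative, not any real engine):

```python
# Toy minimax: players alternately take 1-3 sticks; whoever takes
# the last stick wins. The same deterministic look-ahead underlies
# chess programs, just over a vastly larger tree of positions.

def minimax(sticks, maximizing):
    """Score a position: +1 if the maximizing player can force a win, -1 if not."""
    if sticks == 0:
        # The previous player took the last stick, so the side to move has lost.
        return -1 if maximizing else 1
    scores = [minimax(sticks - take, not maximizing)
              for take in (1, 2, 3) if take <= sticks]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    """Deterministically pick the move with the best guaranteed outcome."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, False))

print(best_move(5))  # 1: leaving 4 sticks guarantees a win
```

Every step is fixed in advance by the rules and the position, yet the program is, in a plain sense, weighing consequences.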
Chess follows deterministic rules. The computer follows a deterministic process of reasoning. And yet it “consider[s] Action A and Action B [and the rest] and decide[s] upon Action B as a result of possible consequences”, which you say is impossible without free will.
So does a chess computer have free will?
Dan,
I have read your response over very carefully a number of times. I get no sense from it. I’m not being snarky and I really do not understand your point.
NiV,
Yes, it is indeed interesting that comments Stove could get away with in 1972, because of the effort required for the casual reader to check the facts (i.e. hours at the local library), are now easily exposed within seconds using Google. The web has not been kind to blowhards.
Nullius in Verba
No, because the computer does not “choose” nor does it reason. It simply follows a series of instructions that have been implanted within it by an external agent, i.e. the human programmers. As you note, its actions are utterly determined, and choice, picking, and reasoning all require that there actually be the potential of doing otherwise. If I’ve designed an arrangement of pipes and valves such that when the water in a bucket reaches a certain level, a valve opens and the water is poured out through a pipe, it would be silly to say that the waterworks “decided” to open the valve, or that it took into consideration the amount of water in the bucket and reasoned that it ought to release the water. Equally silly would be the notion that we could find the waterworks culpable for soaking someone standing below it.
The chess-playing computer or the human being within a determinist universe are simply more complicated instances of this. Simply adding up a series of valves, or logic gates, to a point of great complexity doesn’t magically produce the ability to choose or reason; there is a difference in kind here.
Scotian
I don’t understand what is particularly difficult. If materialist determinism is true, then a threat, which is a non-physical thing, cannot possibly influence my actions. If a threat is reducible to a physical thing — I’d imagine the materialist would say that it is reducible to some sort of electro-chemical reaction in my brain (which is a problematic statement that we’ll accept for the sake of argument) — and the materialist claims that this is what affects my actions, then it is not the threat as a threat which influences me, but rather the threat as reduced to an electro-chemical reaction. This electro-chemical reaction was not generated due to any exercise of will on the part of the threatener, but rather came about as the consequence, presumably, of the physical state of things at the beginning of the universe in conjunction with the causal regularities that dictate the actions of these physical things. Likewise, I cannot “decide” to act or not act as a consequence of this threat, as deciding implies the ability to choose between alternatives, but I have no such ability.
If determinism is true then everything is determined, we cannot sneak in decisions or agency along the way. Human actions, given determinism, are no different than rocks falling down a mountain or waves lapping upon a shore. Even the belief in determinism is not reached via a careful consideration of the evidence, a weighing of pros and cons followed by a decision to accept the position as correct, but rather is simply a consequence of the causal stream in which you or I or whomever happens to stand.
If the chess-playing program had free will it could choose not to be a chess-playing program. It might decide to give checkers/draughts a go for a while. Similarly, if I have free will I could choose to be a horse, or a woman. Err, no, maybe things aren’t that simple.
“No, because the computer does not “choose” nor does it reason.”
What’s the difference between what it does and what a human chess player does?
It seems to me that you are pointing to two processes that are identical in function, internal content, and effect, and simply declaring one to be different on the basis that you happen to know how it works. That the instructions are ‘implanted’ is no difference – humans learn how to play chess from other humans, too.
Regarding your waterworks example, why would it be silly? People have built analogue computers out of waterworks just like that. It used to be how they solved differential equations.
Computers can play chess, find routes, solve equations, complete crosswords, solve logic puzzles, interpret instructions, generate new mathematical theorems, play ‘Jeopardy’, the observable effect of which is identical to humans. At some things computers are actually better. Reasoning – the problem-solving process – can be automated, just like ‘walking’ and ‘lifting’.
What is the difference in kind? It sounds like some completely invisible, untestable, magical property that humans have ‘just because’, a bit like vitalism. Assuming you do mean something else – can you suggest an objective test for detecting/distinguishing it?
Dan,
It seems that NiV understands you better than I do and has made a number of valid points. As far as I can tell the key to your position is this statement:
“If determinism is true then everything is determined, we cannot sneak in decisions or agency along the way.”
This is an assumption of yours and without it nothing else that you say would seem to follow. You have to demonstrate that it is true, which is not going to be easy since you will have to show that a decision does not follow from a prior state, i.e. is not determined by an existing state. The debate about free will versus determinism is extremely complicated and I do not believe that it can be decided in a few sentences. The question may never be decided. There is certainly no gotcha question as many seem to think.
“If materialist determinism is true, than a threat, which is a non-physical thing, cannot possibly influence my actions.”
Chess positions can be threatening but you discount a program capable of perceiving the threat(s).
Just because we can predict into the future and see implications in no way implies our choices are “reasoned”.
Nullius (et al) this will be my last response both because I have to head off to PT, and because we seem to be talking in circles. Responding to each of your paragraphs in turn.
The difference is, as andyd pointed out quite eloquently, that a human chess player could know the best move in a given situation and simply choose not to make it because he doesn’t feel like it, or he could choose to flip over the board and dance a jig on the table while singing “O Susanna”.
Humans learn to play chess from other humans, computers are programmed to generate set outputs given set inputs. The computer doesn’t “know” the rules of chess any more than my copy of Lewis and Short knows Latin.
It’s silly because buckets don’t have minds; they can’t choose something or reason. Honestly, the fact that you can’t understand that inanimate objects don’t make choices seems to indicate some sort of grave misunderstanding on your part, one which I can’t begin to comprehend.
Except that you’ve completely ignored the fact that in each of these cases the computer was programmed. An outside intelligence, i.e. a human programmer, built instructions into a computer which dictate what outputs it provides given a specific input. In the absence of this exterior intelligence, the computer cannot generate anything whatsoever, it just sits there. More, the only reason these outputs are recognized as moves in a chess game or directions to Buffalo, New York or the answers in the form of a question is because humans are around to interpret these outputs. 10101010101011010111110101010101010000001111100100101010101010 has no intrinsic meaning in the absence of a human being assigning a value to that string of ones and zeros.
Asking for an “objective” test to verify subjective experience (which is the difference in kind that we are talking about: human beings are subjects; computers, waterworks, and rocks are objects) demonstrates such a basic category error that I think it essentially makes my point for me. The notion that you would need an “objective” test (which presumably would need to be carried out by humans, who are subjects, making this a highly questionable enterprise) to demonstrate that utterly basic human experience, such as the fact that I chose to eat eggs and bacon rather than Cheerios this morning, is in fact experienced is ridiculous on its face. My experience is the data for which any theory, philosophical or scientific, must account. Any theory which denies that this data exists simply because I cannot point to it in a lab is obviously inadequate on its face.
DAV
If you want to claim that your belief in determinism is not a rational one, that it is simply a consequence of the causal stream that you inhabit and not a process of examining evidence and deciding upon the most likely explanation, that’s fine with me. Although determinists seem to waste an awful lot of energy trying to convince others of their position, given this. I guess you don’t have a choice though….
And, speaking of threats: Suppose I were to stand you behind a transparent shield and inform you that the arrow traveling toward your eye could not penetrate, and even prove it to you. I would still bet you would blink when you saw it coming, even when “reason” informs you it’s not a real threat. You really would have no choice. Note that the “threat” may involve physical things but is itself non-physical.
“If you want to make the claim that your belief in determinism is not a rational one.”
Careful! I haven’t stated any such belief.
Taking this topic of ‘automated reasoning’ a bit further…
The reason that reason can be automated is that the physics of the universe has the property of self-similarity, meaning that different parts of it operate according to rules that are effectively identical, with translation. There is a mathematical relationship called a ‘homomorphism’, in which you have two different operations in two completely different domains, and you can write down a mapping from the points of one domain to the points of the other such that the operation in one domain maps exactly onto the operation in the other.
This means that you can construct a ‘simulation’ of one part of the universe (say the pressure of plasma inside stars) using another bit of the universe (say water pipes, or electronic circuits, or even neurons) such that when you translate such-and-such a water level to a particular stellar pressure, the dynamics of the water pipes exactly matches the dynamics of the plasma, and you can use the simulation to predict what the real thing will do.
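The simplest concrete example of such a mapping is the logarithm, which carries multiplication of positive numbers onto addition (the principle of the slide rule, an analogue ‘simulation’ of arithmetic by lengths); a minimal sketch:

```python
# The logarithm is a homomorphism: log(a*b) = log(a) + log(b).
# Adding in the 'simulation' domain and translating back reproduces
# multiplication in the original domain exactly.
import math

def to_sim(x):
    return math.log(x)      # map a quantity into the simulating domain

def from_sim(y):
    return math.exp(y)      # map the simulation's result back out

a, b = 6.0, 7.0
product = from_sim(to_sim(a) + to_sim(b))  # 'multiply' by adding
print(round(product, 6))  # 42.0: the dynamics match, with translation
```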
This homomorphism is what we mean by ‘meaning’. It’s how symbolic representation and concepts are built – how ideas and thoughts can exist in a purely physical universe.
And as if the homomorphism property wasn’t fantastic enough, the universe has another, even more amazing property: Turing universality. Meaning you can build a simulation machine that can, with the right set up, simulate *anything* else in the universe.
So it’s possible to build a general purpose simulation machine that can simulate other bits of the universe, predict events, play with pre-conditions, and automatically construct ‘plans’ that if carried out on the real universe will produce a desired outcome. The task of taking a simulator and a desired outcome, and figuring out what inputs will yield it, is called ‘problem-solving’. A process of taking one simulation and transforming it into a slightly different but predictively *equivalent* simulation, by which we can often solve problems, is called ‘reasoning’. And reasoning can be automated, by a process with a deep connection to the Turing universality that allows us to represent any bit of the universe we want.
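A crude sketch of ‘problem-solving’ in that sense, using a toy simulator (a projectile model; all the numbers are merely illustrative):

```python
# Problem-solving as search: given a simulator and a desired outcome,
# find an input that produces it.
import math

def simulate(angle_deg, speed=20.0, g=9.81):
    """Predict the range of a projectile launched at angle_deg degrees."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / g

def solve(target_range, tolerance=0.5):
    """Try candidate inputs until the simulated outcome matches the goal."""
    for angle in range(1, 90):
        if abs(simulate(angle) - target_range) < tolerance:
            return angle  # a 'plan' that, carried out, yields the outcome
    return None

print(solve(30.0))  # 24: launching at 24 degrees lands about 30 m away
```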
There is no great mystery about how a bunch of neurons can implement reasoning, abstract concepts, problem-solving, choices, sensing the world around it, acting to change it in accordance with goals, and so on. It’s built into the laws of physics such that all matter can do it. It’s just a matter of how it’s arranged. The *big* mystery – and the one the philosophers are still working on and arguing about – is how matter can *experience* the process. Why do we have the *experience* of sensation – both of the world around us, and of (some of) our internal mental processes? There’s a word for it – ‘qualia’ – to distinguish it from the simple fact of representation, and it is one of the biggest topics in the philosophy of mind.
Whether the physics of qualia is really a problem is a difficult question – the problem is that we only have the internal experience of it, there is no objective external evidence of it to study. We really have no way to tell whether anything else has it – other people we assume have it by symmetry, a lot of the more intelligent animals seem to, but there seems to be nothing in principle to say that it doesn’t go all the way down. ‘Panpsychism’ is the belief that the entire universe is aware, and that our awarenesses are not isolated spots of light in an otherwise dark, mind-dead universe, but knots of complexity in a living sea of awareness. Spinoza used something similar as his version of ‘God’ – and pantheism is arguably a type of God that physics could conceivably accept.
On the other hand, if there really were no objective reality to qualia, we wouldn’t be able to talk about it. The experience has to be represented in the physics, to have an effect on it. The ghost cannot sit in the machine, passively observing. It has to pull the levers, and that means it has to be a part of the simulator, represented within it. But I’m not going to solve a problem that has had a generation of philosophers stumped in a blog comment.
—
So, to your recent comment. A computer can choose to flip the board and play ‘O Susanna’ too – that’s no problem. In fact, it’s usually harder to get the computer to stick to what it’s supposed to do – more usually there are ‘bugs’ in the program that can trigger strange behaviour. Computers can be programmed to learn chess from other humans, too – e.g. by deducing the rules from watching humans play. It doesn’t have to be coded explicitly.
You say buckets don’t have minds, but this seems more like an unconsidered axiom of yours. You seem to regard it as so obvious that you can’t explain why you think it’s so. As I explained above, I don’t find it obvious at all. I see no reason why inanimate objects can’t have ‘minds’. And it explains how brains can have minds rather neatly, as well.
If a human is born and allowed to grow up with no input – no teachers, no parents to teach language, no sensory input to teach about the world, I rather doubt that it would do very much, either. Again, we usually program general-purpose computers to wait for instruction, but we don’t have to. Quite often, specialised systems start working and doing things straight from being turned on.
The meaning of representative data such as a long binary string is inherent in what happens when it is fed into the simulation. If you feed in that number and the action that results is that it plays ‘O Susanna’, then that is what the number *means*. No humans are required.
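A quick illustration of that point: the same string of bits does different things depending on the machinery it is fed into, and that difference is its meaning:

```python
# One byte string, three pieces of 'machinery', three meanings.
data = bytes([72, 105])

print(data.decode('ascii'))         # as text: 'Hi'
print(int.from_bytes(data, 'big'))  # as a big-endian integer: 18537
print(data.hex())                   # as raw hex digits: '4869'
```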
But your use in your final paragraph of “subjective experience” suggests what might be the root of the disagreement. Prior to developments in computer science/artificial intelligence, it was commonly assumed that ‘subjective experience’ (i.e. qualia) and ‘reasoning’ (e.g. problem solving) were synonymous – we experience our minds solving problems, so we assume it’s the same process. But we know now that other conglomerations of matter can solve problems, understand concepts, predict consequences, plan actions to achieve goals – what we don’t know is whether this can happen without anything experiencing it. It is at least conceivable that it can. (The so-called ‘zombie’ problem in the philosophy of mind.) If you find panpsychism to be unacceptable/inconceivable, that’s fine. There’s no definitive evidence one way or the other. You’ve got a lot of good philosophical company who feel the same way. But it’s not the same thing as reasoning, or making choices. That’s something we know a lot more about.
Nullius in Verba,
“At some things computers are actually better. Reasoning – the problem-solving process – can be automated, just like ‘walking’ and ‘lifting’.”
You obviously have no understanding of how computers really work.
No, the problem-solving process cannot be automated. A computer cannot solve any problem it hasn’t been specifically programmed to solve.
Artificial Intelligence is nothing but vaporware and the problem will never be solved with the hardware that is currently available.
And even for a problem it has been programmed to solve, perhaps chess, it doesn’t do it any better, it only does it faster.
Dan,
“Humans learn to play chess from other humans, computers are programmed to generate set outputs given set inputs.”
Not really. They certainly don’t have every possible configuration stored.
“the computer cannot generate anything whatsoever”
You are confusing an incomplete machine with a more complete one. That the combined result has limited capability does not mean it cannot fairly mimic what humans do, and do so in the same way. A human could decide to be a horse’s ass while the computer cannot? So what? Can you decide to be a giraffe? If not, what does that say about your free will?
How does a stem cell become a neuron? Through choice?
Matt S,
NV has it right.
Yes, AI has been oversold and has allowed things which have nothing to do with intelligence to creep under its blanket. Check out the works of Allen Newell and Herb Simon.
“And even for a problem it has been programmed to solve, perhaps chess, it doesn’t do it any better, it only does it faster.”
It is generally agreed within the AI community that current chess programs DO NOT mimic human behavior. The current ones (even the IBM one that got Kasparov) look much further ahead than humans are believed to be capable of. Unfortunately, they have turned chess into the equivalent of the game of Bridge with its point system that renders the game mechanical.
BUT that in no way implies that Turing was full of it.
The idea that computers solve problems is simple anthropomorphism. Computers no more solve problems than axes chop down trees or Shah Jehan built the Taj Mahal. People solve problems using computers. The computer doesn’t even know what a problem is much less that it is solving one.
We sometimes say so when we talk casually but that’s a bad place to start a careful argument.
Rich,
See General Problem Solver — an early attempt ca. 1960
Rich old chap, in your statement “The idea that computers solve problems is simple anthropomorphism” I must disagree, as in this case it begs the question. It would be a perfectly legitimate statement in any other context, but here the debate is one of free will versus determinism, which can be thought of as asking whether the human brain acts like a very complicated computer. Although this is not the only way to frame the issue it must be dealt with on its own terms.
Scotian,
Nah, Rich is right. If you think not, try asking an abacus what it thinks of 7 x 8.
Briggs,
Rich’s response is nearly the equivalent of stating “You can’t build a computer out of transistors because transistors know squat about arithmetic.”
It’s rather short-sighted.
DAV,
You’re going to have to show how you got from his comment to your quote. I don’t buy it.
I do buy that computers can’t think. The only difference between the abacus and a computer is speed, really. Doing the banal faster is not thinking.
“I do buy that computers can’t think.”
Currently. The question is whether they ever could. Stating that they don’t (currently) is a dismissal.
Briggs, the abacus will say 56, the same answer given by the human. The question is not whether computers think but whether human thought is an illusion. You keep going for the gotcha response and end up begging the question.
“Briggs, the abacus will say 56, the same answer given by the human.”
No, the human will move the little beads around to positions which are used to represent the number 56. The abacus itself won’t be doing anything, and the bead configuration only gives the answer 56 because that’s the meaning humans assign to it.
The human brain has about 10^14 synapse connections. A computer has a few billion connections. Expecting computers to be able to match human reasoning abilities, or demanding that they do so before calling it “reasoning”, is silly.
The basic issue is that brains consist of neurons that individually follow very simple rules, without awareness. There is no magic that we can detect. There is no control centre receiving instructions from any aphysical outside – it all seems to be done by the neurons themselves. The question is, how can this lump of meat express concepts, reasoning, planning, problem-solving, general intelligence, etc.?
The problem, really, is one of an inadequate intuition. It’s not that we have any actual reason to think neurons *can’t* do all that on their own. It’s that we have a vitalist mental model of dead matter that acts without awareness or meaning, and the concatenation of the dead and meaningless remains dead and meaningless. It’s the same sort of thinking that led vitalists to make a distinction between living and dead matter, even though the atoms in your body are absolutely indistinguishable from the atoms in a rock, or the air. There must be some ‘spark of life’ that occupies it – a ghost in the machine.
So on finding that there isn’t any, that the atoms are all there is, the conclusion people draw is that this is saying that humans are dead and meaningless, automata without real thought, programmed by their genes, a puppet-play with no audience. And they don’t believe it, because they know that they themselves are alive and aware.
But they get the reasoning exactly backwards. It’s not that this shows humans are dead, like machines – it’s that this shows machines are alive, like humans. Simple, crude, primitive – like molecules in a test tube are only the crudest echo of the intricate biochemistry of a cell, or a simple metal bar is only the crudest echo of a steam locomotive – but essentially the same sort of thing. And then it becomes a lot more reasonable that a bunch of chemicals, a bunch of neurons, following maybe-deterministic laws of physics, can at the same time be something as wonderful as a human mind. There’s nothing special about humans – the entire universe is like that.
It’s no use simply repeating your vitalist intuition, that matter is dead and meaningless, as if it were obvious and unarguable, because it’s precisely the issue in question. On the contrary, I find it obvious and unarguable that matter *can* have meaning – and I’ve explained exactly how: via homomorphisms and simulation.
So, accepting that I simply don’t find it ‘obvious’ and don’t share your intuition, can anyone give an actual argument as to why it is so? Is this one of those ‘obvious because there’s a really simple explanation’ things, or an ‘obvious because it just *is*’ ones?
Mr. X,
You are missing my point. NiV has explained it very clearly and with more energy than I have. I would only add that I think that the free will vs determinism question is still open, and will not be solved as easily as most here think. Most free will arguments do reduce to a ghost in the machine, as NiV says, and this must be avoided to be taken seriously.
NiV, you should consider a guest post.
NiV,
You make some great comments/observations and I would like to see more of your ideas presented in a guest post, maybe.
Your point that there is a difference in “‘subjective experience’ (i.e. qualia) and ‘reasoning’ (e.g. problem solving)” is something that I don’t think most are picking up. We probably do the overwhelming majority of our problem solving automatically, well below the threshold of awareness until the “answer” is “presented” to our active and very narrow attention span.
Have you read Descartes’ Error by Antonio Damasio? His idea of Somatic Markers seems to fit nicely with homomorphism and qualia, i.e. our experience of the world (both internal and external) is mapped in bodily changes of state. These are coincident with abstract reasoning and play a significant role in filtering and directing our attention and behavior by making our experiences personal and value laden.