Judea Pearl Is Wrong On AI Identifying Causality, But Right That AI Is Nothing But Curve Fitting

Yep

I’ve disagreed with Judea Pearl before on causality, and I do so again below; but first some areas of agreement. Some deep agreement at that.

Pearl has a new book out (which I have not read yet), The Book of Why, which was the subject of an interview he gave to Quanta Magazine.

Q: “People are excited about the possibilities for AI. You’re not?”

As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.

This is true: perfectly true, inescapably true. It is more than that: it is tough-cookies true. Fitting curves is all computers can ever do. Pearl doesn’t accept that limitation, though, as we shall see.
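To see how little is claimed when we say “curve fitting”, here is a minimal sketch (my illustration, not Pearl’s; a humble polynomial stands in for the deep net, the difference between them being flexibility, not kind):

```python
# A minimal sketch of "learning" as curve fitting. Illustrative only:
# the polynomial plays the role of the network.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = np.sin(x) + rng.normal(0, 0.2, size=x.size)   # noisy "observations"

# Many knobs, one job: bend the curve until it passes near the points.
model = np.polynomial.Polynomial.fit(x, y, deg=9)
y_hat = model(x)

print("mean squared error:", np.mean((y - y_hat) ** 2))
# The fit can be excellent, yet the polynomial "knows" nothing of sines
# or pendulums or why the data wiggle. Neither does a deep net.
```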

Q: “When you share these ideas with people working in AI today, how do they react?”

AI is currently split. First, there are those who are intoxicated by the success of machine learning and deep learning and neural nets. They don’t understand what I’m talking about. They want to continue to fit curves. But when you talk to people who have done any work in AI outside statistical learning, they get it immediately. I have read several papers written in the past two months about the limitations of machine learning.

Don’t despair, Pearl, old fellow, I share your pain. I too have written many articles about the limitations of machine learning, AI, deep learning, et cetera.

Q: “Yet in your new book you describe yourself as an apostate in the AI community today. In what sense?”

In the sense that as soon as we developed tools that enabled machines to reason with uncertainty, I left the arena to pursue a more challenging task: reasoning with cause and effect. Many of my AI colleagues are still occupied with uncertainty. There are circles of research that continue to work on diagnosis without worrying about the causal aspects of the problem. All they want is to predict well and to diagnose well.

I can give you an example. All the machine-learning work that we see today is conducted in diagnostic mode — say, labeling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.

I felt an apostate when I developed powerful tools for prediction and diagnosis knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.

Associations, which are what statisticians would call correlations, are not enough, amen, but that’s more than just a mathematical fact. It is just plain true.

Q: “What are the prospects for having machines that share our intuition about cause and effect?”

We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans.

The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.

Robots, too, will communicate with each other and will translate this hypothetical world, this wild world, of metaphorical models.

Now I do not know exactly what Pearl has in mind with his “model of the environment” and “model of reality”, since I haven’t yet read the book. But if it’s just a list of associations (however complex) which are labeled, by some man, as “cause” and “effect”, then it is equivalent to a paper dictionary. The book doesn’t know it’s speaking about a cause, it just prints what it was told by an entity that does know (where I use that word in its full philosophical sense). The computer can be programmed to move from these to identifying associations consonant with this dictionary, but this is nothing more than advanced curve fitting. The computer has not learned about cause and effect. The computer hasn’t learned anything. It is a mindless box, an electronic abacus incapable of knowing or learning.

This is why I disagree with Pearl again, when he later says “We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable.” Evolution hasn’t found a thing, and you cannot have a feeling of free will without having free will: it is impossible. Robots, being mindless machines, can thus never have free will, because we will never figure out a way to program minds into machines, and minds are needed for feelings. Why? For the very good reason that minds are not material.

Nope

Not coincidentally, in the theological teleological sense of the word, Ed Feser has a new speech on the immateriality of the mind which should be viewed (about 45 minutes). I will take that speech as read—and not as the quale red, which is a great joke. Meaning I’m not going to defend that concept here: Feser has already done it. I’m accepting it as a premise here.

The mind isn’t made of stuff. It is not the brain. Man, paraphrasing Feser, is one substance and the intellect is one power among the many other physical powers we possess. (This is not Descartes’s dualism.) The mind is not a computer. It is much more than that. Computers are nothing more than abacuses, and there’s nothing exciting about that.

Man is a rational creature, and his mind is not material. Rationality is (among other things) the capacity to grasp universals (which are also immaterial). Cause is a universal. Cause is understood in the mind. (How we learn has been answered, as I’m sure you know as you’ve been following along, in our recent review of chapters in Summa Contra Gentiles.) Causes exist, of course, and make things happen, but our knowledge of them is an extraction from data, as the extraction of any universal is. Cause doesn’t exist “in” data. We can see data, and we move from it to knowledge of cause. But no algorithm can do this, because algorithms never know anything, and in particular no algorithm engineered to work on any material thing, like a computer, like even a quantum computer, can know anything. (Yes, we make mistakes, but that does not mean we always do.)

This means we will never be able to build any machine that does more than curve fitting. We can teach a computer to watch as we toss people off of rooftops and watch them go splat, and then ask the computer what will happen to the next person tossed off the roof. If we have taught this computer well, it will say splat. But the computer does not know what it is talking about. It has fit a curve and that is that. It doesn’t know the difference between a person and a carrot: it doesn’t know what splat means: it doesn’t know anything.

We can program this mindless device to announce, “Whenever the correlation exceeds some level, as it is in the tossed-splat example, print on the screen a cause was discovered.” Computers are good at finding these kinds of correlations in data, and they work tirelessly. But a cause has not been discovered. Merely a correlation. Otherwise all the correlations listed at Spurious Correlations would be declared causes.
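The announcer is easy enough to build. A sketch (mine, with a made-up threshold and invented numbers): two series that share nothing but a trend trip it happily.

```python
# Sketch of the "cause announcer" described above: my illustration,
# with an arbitrary threshold. Nothing here discovers a cause.
import numpy as np

def announce_cause(x, y, threshold=0.9):
    r = np.corrcoef(x, y)[0, 1]
    if abs(r) > threshold:
        print(f"a cause was discovered (r = {r:.2f})")
    else:
        print(f"no cause here (r = {r:.2f})")

rng = np.random.default_rng(1)
t = np.arange(100)
cheese = 30 + 0.1 * t + rng.normal(0, 0.5, 100)    # trending series A
bedsheets = 500 + 2.0 * t + rng.normal(0, 8, 100)  # trending series B
announce_cause(cheese, bedsheets)  # two mere trends; r near 1; "cause"!
```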

Saying computers can discover a universal like cause is equivalent in operation to hypothesis testing, though not necessarily with a p-value. If the criterion to say “cause” isn’t a p-value, it has to be something, some criterion that says, “Okay, before not-cause, now cause.” It doesn’t matter what it is, so if you think it’s not a p-value, swap out “p-value” below with what you have in mind (“in mind”—get it?). In the upcoming peer-reviewed paper (and therefore perfectly true and indisputable) “Everything Wrong With P-Values Under One Roof” (to be published in a Springer volume in January: watch for the announcement), I wrote:

In any given set of data, with some parameterized model, its p-values are assumed true, and thus the decisions based upon them sound. Theory insists on this. The decisions “work”, whether the p-value is wee or not wee.

Suppose a wee p-value. The null is rejected, and the “link” between the measure and the observable is taken as proved, or supported, or believable, or whatever it is “significance” means. We are then directed to act as if the hypothesis is true. Thus if it is shown that per capita cheese consumption and the number of people who died tangled in their bed sheets are “linked” via a wee p, we are to believe this. And we are to believe all of the links found at the humorous web site Spurious Correlations, \cite{vigen_2018}.

I should note that we can either accept that grief of loved ones strangulated in their beds drives increased cheese eating, or that cheese eating causes sheet strangulation. This is a joke, but also a valid criticism. The direction of the causal link is not mandated by the p-value, which is odd. That means the direction comes from outside the hypothesis test itself. Direction is thus (always) a form of prior information…

I go on to say that direction is a kind of prior information forbidden in frequentist theory. But direction is also not something in the data. It is something we extract, as part of the cause.
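The symmetry is easy to exhibit. In this sketch (invented numbers, and it assumes scipy is available), the test hands back the identical p-value no matter which variable you nominate as the cause:

```python
# Sketch of the direction point: the hypothesis test is symmetric in
# its two arguments, so the arrow cannot come from the p-value.
# Synthetic numbers, for illustration only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
t = np.arange(50)
cheese = 29 + 0.08 * t + rng.normal(0, 0.4, 50)
strangulations = 300 + 3.0 * t + rng.normal(0, 6, 50)

r_xy, p_xy = pearsonr(cheese, strangulations)
r_yx, p_yx = pearsonr(strangulations, cheese)
print(f"p(cheese 'causes' strangulations) = {p_xy:.3g}")
print(f"p(strangulations 'cause' cheese)  = {p_yx:.3g}")
# Identical p-values: the test cannot tell grief-eating from
# cheese-strangling. Direction is prior information.
```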

I have no doubt that our algorithms will fit curves better, though where free will is involved, or the system is physically complex, we will bump up against impossibility sooner or later, as we do in quantum mechanical events. I have perfect certainty that no computer will ever have a mind, because having a mind requires the cooperation of God. Computers may well pass the Turing test, but this is no feat. Even bureaucrats can pass it.

18 Comments

  1. Richard Hill

    What am I missing? If we have a correlation offset by time, surely we can assume the leading effect is the cause of the trailing effect.
    Therefore machine-derived correlations can give us cause and effect.

  2. Briggs

    Richard Hill,

    No, the cause is still in your mind. Try your example on the spurious correlations. It’s just correlations, mere signals.
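    A sketch of why the time-lag rule fails (illustrative numbers only): build two series guaranteed to be causally unrelated and watch a strong “leading” correlation appear anyway.

    ```python
    # Two independent random walks: neither causes the other by
    # construction, yet lagged correlations show up regardless.
    import numpy as np

    rng = np.random.default_rng(3)
    lag, best = 5, 0.0
    for _ in range(20):
        a = np.cumsum(rng.normal(size=300))       # independent walk A
        b = np.cumsum(rng.normal(size=300))       # independent walk B
        r = np.corrcoef(a[:-lag], b[lag:])[0, 1]  # A "leads" B by 5 steps
        best = max(best, abs(r))
    print(f"largest lagged correlation in 20 tries: {best:.2f}")
    # Typically well above 0.5, though neither walk causes the other.
    ```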

  3. “The mind isn’t made of stuff. It is not the brain. Man, paraphrasing Feser, is one substance and the intellect is one power among the many other physical powers we possess. (This is not Descartes’s dualism.) The mind is not a computer. It is much more than that. Computers are nothing more than abacuses, and there’s nothing exciting about that.”

    After a career working with mostly liberal artsy sort of educated people–whose backgrounds are heavy in psychology, philosophy, learning, education–I’ve been spending lots of time with businessy and techy sorts–whose backgrounds are heavy in math, physics, engineering, accounting, finance, etc.

    One group I worked with fell hard for the scammy “neuro-leadership institute” and its shiny pictures of brains lighting up in different colors.

    They bought the services of these new-age MRI-driven charlatans to provide “neuro-leadership” training.

    I went to the brain guys’ conferences, and tried to figure out what their schtick was.

    The bottom line was that they were peddling a simplified human behavior thesis to the poor befuddled business and tech experts.

    The Neuro-guys’ thesis boils down to: People are masses of protoplasm with electricity running through. Their brains are just bits of protoplasm. We can take pictures of electric charges in their brains. Aren’t they pretty?

    They implicitly reject the idea that the “mind” is separate from the gooey jelly in your skull.

    So, I made an attempt to help my colleagues learn about human motivation. The goal was to help them understand that we can adapt the learning solutions we create to varying needs of varying target audiences, based on their needs and motivations.

    In doing this, I introduced the concept of “the mind”. My tech and business immersed colleagues were dumbfounded. “Mind? What’s that? You mean the brain, right?”

    That was when I realized that it was a lost cause.

    I left them to be bedazzled and confused by the Neuro-charlatan’s pretty MRI pictures of electricity in brains.

    I think that was a microcosmic demonstration of the power that the promise of AI holds over certain types of people–those who have been mis-educated, or who misunderstand the mind, psychology, and related subjects.

  4. Sander van der Wal

    The computer also has no built-in knowledge of correlations, i.e. a computer without the current AI software won’t be able to find the correlations.

    To get a computer to see cause and effect you need to program that into it, which is giving it a model of how to understand the world.

    The argument here is that a mind is capable of understanding cause and effect, and an algorithm is not capable of understanding it. Which means that there is some aspect of causes and/or effects that makes it impossible for an algorithm to find a cause for an effect. Also, that same aspect offers no obstacle for a mind to be able to find a cause for an effect.

    Now, one wonders what aspect that would be? One wonders why the mind has no problem with it, and why an algorithm always fails because of that aspect.

  5. Ken

    When Artificial Intelligence (AI) fails, it often fails spectacularly. There’s a nice summary of the issues (with AI, and, how people are already attributing far too much to it without understanding what, or if, it really works): https://www.youtube.com/watch?v=TRzBk_KuIaM

    An example, among many given, describes how an AI algorithm ‘learned’ to associate the presence of ‘snow’ with “wolf” and thereby concluded that a photo of a dog (a husky) was a “wolf” because of the snow in the photo of the husky — the AI algorithm based its decision on the presence of snow alone, nothing about the actual animal!

    Ominous — some AI algorithms are being accepted/used for legal matters, and access to the underlying code for independent verification/validation is denied…but accept and use it people are… The human tendency to credulity is enhanced when the thing believed in helps in some way.

    We see something similar here — a rush to accept a philosophical basis [founded in ancient dogma/ideology] of “mind” as something distinct from/independent of “brain”; ideology/dogma almost always wins, and seldom goes down without a struggle even in the face of ample evidence: “I have perfect certainty that no computer will ever have a mind, because having a mind requires the cooperation of God.”

    When certain manifestations of “mind” are observed so is corresponding brain activity observed as increased oxygen consumption via fMRI. When those portions of the brain are damaged/destroyed (e.g. via stroke, injury, bullet, etc.) the corresponding manifestations of mind are no longer observed (when destroyed) or are impaired (when only damaged). That’s correlation showing cause-effect in both directions: a) Exercise the mind, we observe corresponding work in a specific portion of the brain; and, b) damage or destroy that specific part of the brain, we observe the exercised part of mind no longer functions as well or at all. Most people find that pretty compelling — and the amount of research and observation on such brain/mind examples is large and ever-growing (much going back to WWI observations of hundreds of head injuries where the then-slower bullets were far more selective/focused in the damage done than today; and, today’s stroke and other injury effects).

    Belief, founded on faith unsupported by evidence, is good at not seeing those independent but mutually reinforcing correlations for what they are. And such faith when threatened resorts to the usual pablum of false comparisons (e.g., refute or call into question a pseudoscience spin-off of the credible science to delude oneself and others like-minded that by refuting the pseudoscience ALL associated science is similarly wrong; using Orwellian generalizations such as “scientism” often helps to shut down one’s critical thinking apparatus by stimulating an emotional reaction).

    If one truly believes there’s no connection between a human brain and “mind” … then to where does “mind” disappear when it stops functioning? (we’ll just ignore the corresponding correlations with fMRI loci of brain damage & destruction as coincidences) With the aged and with those suffering compounding strokes, if the declines and outright losses of “mind” are independent of “brain” then where is the asserted immortal mind going? Recognizing that “mind” is a euphemism for “soul”, this suggests that God, or maybe Satan, or maybe even both! are harvesting souls in “bite size” incremental pieces … after human death in which “mind” is fully freed from the mortal coil, to be reassembled like The Blob in that 1950s horror movie??

    The ‘”mind” independent of body’ notion leads to some curious implications if that’s truly what’s going on.

  6. Joy

    To argue for the brain instead of the mind is not really to do anything but word switch.
    Nothing has altered in all practical terms. It’s not clear why people think fMRI is doing anything to disprove anything about God or the soul, or people, or the mind.

    Maybe some old strange notion about a puff of smoke in the head or some other non-viable and unlikely conviction. That is a misrepresentation of what is being claimed. It alters nothing.

    When you talk to a friend you are talking to a person. Not to a brain. The brain is the equipment you are using every day of your life, along with the rest of your body. What peculiar disjointed ideas people have about their bodies. The brain can’t do it without the body either!

    If people want to consider themselves as a brain talking to a brain then there’s not a lot of harm in it. Just a bit too much effort.

    Similarly, minds don’t speak of talking to minds. There is a meeting of minds. People know what this means too, they don’t claim difficulty with it.

    This is all wordplay and missing of the point.
    fMRI is hardly news in neurophysiology. It is strange that people bring it up as proof of the non-existence of something. What did people think was happening in there? It is proof of processing. People have always known this is happening in the head. You can feel it for a start.

    Cerebrovascular accidents aka strokes are also as old as the blood brain barrier.
    Each type of stroke gives classic signs and symptoms. There is a natural history for a stroke. Some last a few minutes only. Some go unnoticed.

    If people are treated like meat machines they do not take kindly to it. See Catholic treatment of babies in Ireland. See your daughter being treated like meat by a wolf in a church. Then see the phoney outrage when it’s a boy of the same age. See people insisting on manners on blogs. If people are just atoms then why bother with metaphysical niceties?

    You cannot say to a patient, “Hello brain number 3, I’m brain physio, coming to manipulate your brain via your central key point!” It just doesn’t work. People with dense hemiplegia respond to direct and close physical contact. It is as much an art as a science. Nothing is done without science informing it. Even when science is wrong! Dealing with the person in the process is not simply a nicety. Dealing with them ‘mindfully’, not as mindless meat, or brainless meat for that matter, is the way to a good outcome.

    So it’s fun to mock and parody talk of the mind and the soul but when your turn comes you will be grateful for the kindness as much as for the ‘brain treatment’.
    Furthermore, emotion physically affects tone. A big focus of treatment in stroke rehab.

    As for questions of God and claims of FMRI somehow showing anything about him?
    How queer that people think it would do that. Did they ever think that God would be visible?
    Of course they didn’t. Thoughts correlating with a ‘brain state’ is Geography.
    The thoughts themselves are not seen. They are experienced.
    There are neuro-tags in the brain that correlate with certain types of pain. It still explains nothing about the person in pain. The pain can be mapped very roughly. When it comes to trying to block it or treat it, the theory fails. Spectacularly, with success rates claimed at ten percent. Which is little better than leaving it well alone. Chronic complex pain, which is to say a pain state outlasting tissue damage, is treated using the assumption of the mind. Sorry. No smoke and mirrors. Well, there are mirrors, but that’s a different story.

  7. DAV

    Computers are good at finding these kinds of correlations in data, and they worked tirelessly. But a cause has not been discovered. Merely a correlation. Otherwise all the correlations listed at Spurious Correlations would be declared causes.

    Actually, no. All of the spurious correlations you listed are between two variables. You can’t use a correlation between only two variables to determine cause. Pearl never said you could. He DID say you can use the correlations between three variables in the special case where one of them is the cause of the other two.

    For example, the price of rum and New England preacher salaries seem to be a spurious correlation until one considers inflation, which is a common cause of the other two. X and Y are independent given C if C is the common cause.

    Yes, determining correlation and independence is problematic.

    Pearl uses ’cause’ in the restricted sense that it can be used to make predictions. Granted, he doesn’t always make that clear.
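    In code, the screening-off pattern looks like this (numbers invented purely for illustration):

    ```python
    # When C drives both X and Y, the raw correlation between X and Y
    # is large, but the partial correlation given C vanishes.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 5000
    c = rng.normal(size=n)            # common cause, e.g. "inflation"
    x = 2.0 * c + rng.normal(size=n)  # "price of rum"
    y = 1.5 * c + rng.normal(size=n)  # "preacher salaries"

    def partial_corr(x, y, c):
        # Correlate the residuals after regressing each variable on c.
        rx = x - np.polyval(np.polyfit(c, x, 1), c)
        ry = y - np.polyval(np.polyfit(c, y, 1), c)
        return np.corrcoef(rx, ry)[0, 1]

    print("corr(X, Y)     :", round(np.corrcoef(x, y)[0, 1], 2))  # large
    print("corr(X, Y | C) :", round(partial_corr(x, y, c), 2))    # near 0
    ```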

  8. DAV

    We can see data, and we move from it to knowledge of cause. But no algorithm can do this, because algorithms never know anything

    Considering that an algorithm is a set of transformation rules (processing steps), one wouldn’t expect it to know anything. That doesn’t prevent the output from being a piece of knowledge, e.g. there is an X in the image.

    A lot of course depends on what is meant by “knowledge”. Is a fact “knowledge”? What about a concept? What about a number of linked concepts? If one doesn’t have Einstein’s breadth and fullness of “knowledge” about say, what an X means or could mean, can one still claim to “know” X? If so, when does it become “unknown”?

  9. trigger warning

    “A lot of course depends on what is meant by “knowledge”. Is a fact “knowledge”? What about a concept? What about a number of linked concepts? If one doesn’t have Einstein’s breadth and fullness of “knowledge” about say, what an X means or could mean, can one still claim to “know” X? If so, when does it become “unknown”?”

    In psychiatry, this is called a “word salad”.

  10. Larry Geiger

    It’s automata all the way down.

  11. swordfishtrombone

    Okay, I’ve read the article, watched the Ed Feser video on YouTube, and even read the section from Saul Kripke’s analysis of Wittgenstein on which the main argument used by Feser is based, and I still haven’t got the slightest idea how the ‘mind must be immaterial’ conclusion is reached.

    As far as I can see, we can’t have any immaterial components because the immaterial bits wouldn’t be able to interact with the material bits. Lowbrow stuff, but true nevertheless.

  12. Ye Olde Statistician

    a philosophical basis [founded in ancient dogma/ideoology] of “mind”

    To what ancient dogma did Plato and Aristotle subscribe?

    When certain manifestations of “mind” are observed so is corresponding brain activity

    When certain manifestations of “hiking” are observed so is corresponding footprint activity. That doesn’t mean the footprints cause the hike. Similarly, every act of the intellect entails an act of the imagination; that is, the mind makes footprints in the brain. Suppose you conceive of “dog,” an immaterial entity. But when you do so, you cannot help but imagine a particular dog: Rover, Fido, Snoopy, or Spot — particular material entities, or at least a beagle or dachshund, generic representatives of a specific kind of dog. Perhaps you imagine the sound of the word “d-o-g.” Since the imagination, by ancient philosophical “dogma,” is seated in the brain, you will always produce brain patterns when you think.

    It remains to be seen whether imposing those brain patterns will induce the thought. From experience during stroke rehab, I learned that my forefinger could be induced to touch my thumb by dint of electrical current. The next day, the muscles having been awakened, I was able to make the same motion by an act of will. Not only did the electrical current not produce a “desire” to circle my thumb and forefinger, but it felt completely different. And this did not even involve conceptualizing.

    This is because the same thought does not elicit the same neural pattern. In fact, maintaining the same thought does not mean maintaining the same neural patterns. So there is not even a 1:1 correspondence.

    [such as] as increased oxygen consumption via fMRI. When those portions of the brain are damaged/destroyed (e.g. via stroke, injury, bullet, etc.) the corresponding manifestations of mind are no longer observed (when destroyed) or are impaired (when only damaged). That’s correlation showing cause-effect in both directions

    Correlation never shows cause and effect. If the switchboard is damaged or destroyed, what happens to the messages? They simply do not get where they were going; they do not cease to exist. More to the point, the switchboard does not cause the messages to come into being, nor are the messages “emergent” properties of the switchboard.

    I experienced this directly when I suffered a right-hand pontine stroke three months ago. A lot of messages dealing with the motions of the left body side were not getting through. I experienced the acts of will, but the arm would not extend itself, or it would extend partly to its goal but then stop as if at a glass wall. Whatever the contrapositive of “phantom limb” syndrome is, I had it. Incoming messages were okay: I could feel most things, but could not react to them.

    IOW, having recently suffered a stroke, I can testify that while my sensory-motor switchboard (brain) got scrambled for a while, my thinking (mind) was still humming along. I personally experienced this disconnect.

    An analysis, published in Nature Reviews Neuroscience noted that: “Weak statistics are the downfall of many neuroscience studies, according to researchers that analyzed the statistical strategies employed by dozens of published reports in the field. Especially lacking in statistical power are human neuroimaging studies—especially those that use fMRI to infer brain activity.”

    An interesting implication is that even if the brain can be modeled by a mechanistic system, recognizing its own Gödel sentences as true would require the mind to be something more than the brain.

    https://rifters.com/real/articles/Science_No-Brain.pdf

    http://www.pnas.org/content/early/2016/06/27/1602413113

  13. DAV

    An interesting implication is that even if the brain can be modeled by a mechanistic system, recognizing its own Gödel sentences as true would require the mind to be something more than the brain
    Assuming, of course, that recognition requires formal proof.

  14. DAV

    https://rifters.com/real/articles/Science_No-Brain.pdf asks:
    “Is Your Brain Really Necessary?”
    then goes on to talk about brain scans.

    The answer to the question is: Of course not — the idea of brains being necessary is sooo last century. They’re so easily damaged and just get in the way. Control your body directly!

    The question should have been: how useful are brain scans?

    IOW, having recently suffered a stroke, I can testify that while my sensory-motor switchboard (brain) got scrambled for a while, my thinking (mind) was still humming along. I personally experienced this disconnect.

    How do you know your thinking wasn’t affected? Isn’t this the ultimate in self-assessment which clearly is bias free (not!)?

  15. Joy

    Brain scans have clinical use. They aren’t a philosophical toy. Nor do they explain philosophy. Just a diagnostic tool, not a diagnosis, either, although it seems that way.

    They simply do not show enough detail, or confirm or deny the existence of a higher functioning element, of which consciousness, deliberation of purpose, and self-knowledge are features.

    In the case of certain strokes there is a symptom known as neglect, where the patient does not know they have a ‘left’ or ‘right’. Usually it is a visual disturbance which accompanies the stroke. It is not a matter simply of not feeling it. In rehab, efforts are made to alter the environment around the bed and chair to encourage the patient to notice their neglected side, to encourage functioning of that part of the ‘mind’/brain and body. Leaving objects that side, addressing people only from that side, transfers to and from that side, teaching them to feel the arm and hold their hand and so on. This is particularly problematic because of the fact that the ‘knowing’ part is affected.

    Consider that you don’t necessarily know, unless you think about it, that you have a left arm, not every moment of the day. A feature of getting better is forgetting which side you injured. So, when you can’t see anything on one side of the visual field, mild or total neglect results.

    That thing that happens when you walk too close to a doorway despite being able-bodied, or miss your mouth with a cup, which has happened to everyone at some point, is called absent-mindedness: coordination was absent for a moment. The focus was elsewhere. When the feedback isn’t happening normally, the person learns a way around the problem. The mind of the person is learning another way, or the same way, and nobody can control how it is physically done in the brain.

    In the case of missing limbs, phantom limb sensation and/or pain is as real as ordinary (nociceptive or peripheral neurogenic) pain. It is a central matter, not requiring any peripheral input at that stage. It happens in the case of stroke, too. It’s a feature which shows the processing is happening even when the peripheral tissue is perhaps normal, or even absent! However in the extreme case of the absent limb which is still felt, it becomes obvious that experience is what matters, in the end. The conscious person knows they should not feel sensation from a part which is missing. The experience of the limb is remembered and experienced.

    Imagine being missing all four limbs, hearing and eyesight, traumatically. Where would you be?
    In somebody’s arms.

    The sensory area in the cerebral cortex known as the sensory ‘homunculus’, which is an area that relates to the given part of the body, is directly proportional to the amount of input or sensitivity of the part, sensitive parts being bigger. When there are altered pain states, the areas shift and alter, which is shown physically. It still doesn’t explain the person who experiences the sensation.

    Faith that we are all similar is essential to empathy. Which is also a very rough approximation.

  16. Joy

    DAV is right, but so is YOS.
    This inspired my first comment:
    “much going back to WWI observations of hundreds of head injuries where the then-slower bullets were far more selective/focused in the damage done than today; and, today’s stroke and other injury effects)”

    ‘Today’s strokes’ aren’t different from yesterday’s strokes! Today’s bullets also vary!

    Cause of injury affects type of nerve damage. Squashing a nerve almost completely will not necessarily produce permanent effects. Nor does total severance.

    My first comment was inspired by Ken’s commentary.

    Something as basic as ‘normal’ blood pressure goes back to work done testing soldiers in the World War One era. It is now admitted they hardly represented ‘normal’ measures for resting blood pressure. That “normal” is higher than the true normal. Stick that in your statistical model and smoke it.

    Clinical experience informs that ‘norms’ are a very rough guide. A necessary approximation. People don’t read the normal values list before daring to attend a department labelled a human being!

    It’s a dogmatic thinker’s nightmare.

    …and minds still exist. The normal question is,
    “and you? How are you in yourself?” Watch that one when your Dr or physio asks you, he’s accusing you of having a mind…the luddite.

  17. A neural network is something like a very little section of the human brain. It is a functional unit. Not intelligent, but not a stupid amount of wires.
