Arguing with Automatons

Introduction

There is no metaphysical middle ground between libertarian free will and automatonism. I stress the metaphysical here because there is phenomenological (psychological) middle ground that backs up into the epistemological. By “phenomenological middle ground” I refer to what I take to be most people’s everyday experience of making choices. If you step into a taqueria and for a moment do not know whether you “feel more like” chicken, beef, or pork, you think about it and choose one. We each make these (and many other) sorts of decisions throughout our day. In this process I (for I can in the end only speak for myself) do not feel impelled by something, some combination of events in my past, to make one particular choice over another. I had chicken last week, so this week I’ll take the beef, or perhaps I liked the chicken so much I’ll choose it again. Whether or not you are committed to libertarian free will philosophically, the choice of chicken, beef, or pork feels at least superficially free. Whichever choice you make, you are at the same time aware that other choices (and so other futures) were potentially open to you.

I will not further address this experience of phenomenological freedom because it is conceivable that you can genuinely believe you are free without actually being free, just as genuinely believing you are Napoleon reincarnated does not mean you are Napoleon reincarnated. The issue, then, is not whether the alternatives appear open to you but whether they actually are open. Although you might have chosen beef or pork, and have done so in the past, on this occasion something stemming from your past (indeed going all the way back to the big bang) determined that you would choose chicken, and this determination was (and usually is, because otherwise the phenomenological room would also disappear) entirely subconscious if not in fact unconscious. On this occasion you were going to choose chicken, just as on prior occasions there were determinations that led to your choosing beef or pork at those times. Automatons are entities that, from a purely behavioral viewpoint, sometimes appear to make free decisions, but which we know not to be free because we understand all of what leads deterministically to those choices; that is, we know all of what underlies the behavior, both necessarily and sufficiently.

In this paper my goal is not to defend a view of libertarian free will, as I have done that before, here in this blog and in other places. What does interest me here are two related things. First, does it make any sense for a human being with free will to argue or debate with an entity who appears to be a human being but lacks free will? Second, if no human beings have free will, does any debate or argument between such entities have any meaning or significance? I am thinking of the following scenario. Two human beings are having a debate. The thought of the first being is freely expressed through speech in a language that both know. That speech, having some meaning in the common language, is grasped by the other being in her thought and leads to a free decision in the thought of the second being to accept the argument of the first being or to reject it and freely offer a counterargument of her own. Note that the freedom involved here entails that the second being might, besides agreeing or offering a counterargument, instead have chosen simply to remain quiet and abandon the discussion, among other options. What is crucial to meaning here is that the respondent understands the semantic relation between the argument presented and her response, whatever that turns out to be. “The semantic” is important here because the relation is not about the brain states of one party invoking brain states in the other, but rather about subjective states of consciousness whose form and content do not resemble brain states.

The Argument

An automaton is a “state machine”. Some combination of parts, each of which can reside in one of a finite number of states, together determines what the automaton “does” at any given moment. The parts here can be mechanical, electromechanical, or of any other constitution that can express a “state”. Automatons today range from such trivial devices as automated floor cleaners to sophisticated computers in which software initially constrains the possible “states” expressed in hardware: servomotors controlling a driverless car or making chess moves on a game board. Every automaton begins in some first state when it is “turned on”, and that state evolves in time from that point depending on what the automaton experiences in its inputs. Inputs include what it senses of the world’s response to its own outputs (a chess move, for example). Modern automatons “adapt” their behavior (within the range of mechanical possibility) by responding to their various inputs (given the potentials embedded in their programming), and that behavior can appear unexpected from the viewpoint of a human observer.
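
To make the determinism at issue concrete, here is a minimal sketch of such a state machine in Python. It is my own illustration; the thermostat, its setpoint, and its three states are assumptions, not a description of any particular device. The point is only that the next state is a pure function of the current state and the current input.

```python
# A minimal sketch (not any real device's firmware) of a
# deterministic automaton: a thermostat-like state machine.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint      # the fixed "program"
        self.state = "idle"           # the first state at "turn on"
        self.history = [self.state]   # every state remains traceable

    def step(self, sensed_temp: float) -> str:
        # The next state is a pure function of (current state, input).
        # A heating run continues until the setpoint is reached (simple
        # hysteresis), so the current state genuinely matters.
        if self.state == "heating" and sensed_temp < self.setpoint:
            pass                      # keep heating up to the setpoint
        elif sensed_temp < self.setpoint - 1.0:
            self.state = "heating"
        elif sensed_temp > self.setpoint + 1.0:
            self.state = "cooling"
        else:
            self.state = "idle"
        self.history.append(self.state)
        return self.state

t = Thermostat(setpoint=20.0)
for reading in [18.5, 19.7, 22.3, 20.4]:
    print(reading, "->", t.step(reading))
# Replaying the same inputs always reproduces the same history:
# the behavior is "necessarily and sufficiently" determined.
```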

That these behaviors appear unexpected does not mean they are not fully determined (causally) by the automaton’s present state, including all present inputs. In all the automatons we build, from thermostats to self-driving cars and game-playing computers, we take this “ultimately determined” status for granted. We know that if we examine the machine’s workings in fine enough detail we will find exactly which combinations of states determined any particular behavior, that those states were sufficient to cause the behavior in question, and that those states were recursively determined by prior states (plus inputs) back in time to the machine’s first state. Because we know exactly how one state causes another, we assume there is nothing more to the behavior of the automaton than its history of prior states. We assume, that is, that the automaton experiences no internal subjectivity. If two such automatons were to have an argument of some sort, a third party would see what looked like the argument and response between two humans illustrated above. But we have good reasons to believe that despite the behavioral similarity there is no subjective, semantic understanding occurring in either of them.

Back in the 1960s there was an automaton named ELIZA. Today there are far more sophisticated automatons, but ELIZA is illustrative for my purposes here because it was specifically designed not so much to “be intelligent” as to mimic intelligence well enough to pass the Turing test. Simply stated, the Turing test proposed that a machine would or could be taken to be intelligent if, in interacting with a human being, the human could not tell whether it was interacting with a machine or another human. But the Turing test proved a little easy to pass under domain-restricted circumstances, and ELIZA was proof of that.

ELIZA’s domain was psychotherapy of the “lite” sort in which a therapist speaks one-on-one with a patient. Humans (students) were the patients and were asked to talk (type) to their therapist.

Patient: I’m not sleeping well lately.
ELIZA: Have you any idea why?
Patient: I’m having bad dreams.
ELIZA: What are the dreams about?
Patient: My mother.
ELIZA: Tell me about your mother.

A simple program by today’s standards, ELIZA found subjects, verbs, and objects in patient sentences and wove questions around one or more of them. If the program could not find any specific word to incorporate into its reply, it output something more general like “why?” Most patients could tell that ELIZA was a machine, but only after enough interaction that they realized ELIZA’s answers weren’t getting at anything. Initially, and in brief transactions, many patients thought they were speaking (typing) to a human being. But here’s where it gets really interesting for this argument. There came a point in the work with ELIZA when some students, even knowing that ELIZA was a machine, not only continued to interact with it (some for long sessions) but reported experiencing therapeutic value! Some students said the sessions reduced stress and helped them think about their lives. The sessions “had meaning” in the broad sense; they had significance to the student.
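
The mechanism is simple enough to sketch in a few lines of Python. The toy below, with a keyword table of my own invention, is only a rough analogue of Weizenbaum’s program, not his actual script, but it reproduces the reply style of the exchange above.

```python
# A toy sketch of ELIZA-style keyword matching. The rules and
# fallbacks are illustrative assumptions, far cruder than the
# real program's decomposition and reassembly rules.
import re
from itertools import cycle

RULES = [
    (re.compile(r"\bmy (\w+)", re.I), "Tell me about your {}."),
    (re.compile(r"\bi'?m (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {}?"),
]
# If no keyword is found, fall back to general prompts in a fixed,
# fully deterministic rotation.
FALLBACKS = cycle(["Why?", "Please go on.", "How does that make you feel?"])

def reply(patient_line: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(patient_line)
        if m:
            # Weave the captured fragment into a canned question.
            return template.format(m.group(1).rstrip(".!?"))
    return next(FALLBACKS)

print(reply("I'm having bad dreams."))  # How long have you been having bad dreams?
print(reply("My mother."))              # Tell me about your mother.
print(reply("Nothing helps."))          # Why?
```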

The first question we want to ask is: were these interactions of meaning or significance to ELIZA? We assume not. We normally take it that there is “nothing it is like” to be ELIZA; there is no consciousness there, no free will, no subjectivity. All of ELIZA’s replies are necessarily and sufficiently determined by a few hundred lines of code controlling the CPU and memory registers of a non-conscious automaton. One alternative view (taken by Chalmers and others) is that there is something it is minimally like to be ELIZA, that there is some subjectivity there, though we cannot, from the human viewpoint, “get at” what it might be like. Thomas Nagel (“What Is It Like to Be a Bat?” 1974) deliberately chose an example (the bat) that most people would grant has a subjective experience of some kind. Nagel’s argument is that it is in principle impossible for us to access bat experience subjectively. His conclusion is taken to apply to any other subjective experience, including that of other humans.

What would happen if we made two ELIZA programs interact? From a third-party perspective it would be a conversation between a therapist and a patient, that is, between two persons. But we know that this is not the case. We can explain all the behavior of both sides with reference to nothing but algorithms and programmable hardware, and we have good reason to believe that these are both necessary and sufficient causes of the observed behavior. We wouldn’t normally think to say that either side experienced any “therapeutic value” or semantic understanding, or indeed had any internal experience of the interaction at all. Why not? Two reasons. One is that we do not impute any consciousness to ELIZA, and not having any consciousness, ELIZA cannot have any will at all. We normally take for granted that some consciousness is a necessary ground of any sort of willing. Will is only experienced, only exists, subjectively and never, like Hume’s cause, in the third person. My theme here focuses on the will, so I want to stress that the causal determinism (both necessary and sufficient) of the combination of algorithm and hardware is what robs the automaton of anything that could conceivably be called “will”.
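
A sketch makes the point vivid. The canned response tables below are my own illustrative assumption; what matters is that each side’s next line is a pure function of the last line it received, so re-running the loop reproduces the identical transcript every time.

```python
# A sketch of two automatons in "conversation": each side's reply
# is looked up from the other's last output. The response tables
# are illustrative assumptions, not any real program's script.

THERAPIST = {
    "I keep thinking about my mother.": "Tell me about your mother.",
    "She worries about me.": "Why does that trouble you?",
}
PATIENT = {
    "How are you feeling today?": "I keep thinking about my mother.",
    "Tell me about your mother.": "She worries about me.",
}

def converse(turns: int = 4) -> None:
    line = "How are you feeling today?"      # therapist's opening state
    # Pair each speaker with the *other* side's lookup table.
    sides = [("Therapist", PATIENT), ("Patient", THERAPIST)]
    for i in range(turns):
        name, other_table = sides[i % 2]
        print(f"{name}: {line}")
        # The other side's next line is fully determined by this one.
        line = other_table.get(line, "Please go on.")

converse()
# Therapist: How are you feeling today?
# Patient: I keep thinking about my mother.
# Therapist: Tell me about your mother.
# Patient: She worries about me.
```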

Now suppose we substitute real human beings for the two ELIZAs but stipulate that neither has free will. The interaction is, in a manner perfectly analogous to “algorithm and hardware”, causally determined by states of the brains of the two humans. This causal relation is both necessary and sufficient to bring about every question and response, there being no genuine “will” about it. So what is different about these two cases? Why (and where) can there be meaning and significance in the humans but not the automatons? The difference is that the humans are (or could be) conscious; I stipulated only that they had no free will.

In the literature on free will and philosophy of mind one often finds that deniers of free will are not always deniers of consciousness. That is, although there is no genuine will, there is experience, something subjective, and meaning arises in that arena. But consciousness itself is problematic for the same reason as free will. As Sean Carroll (“The Big Picture: On the Origins of Life, Meaning, and the Universe Itself” 2016) put it, “thought can’t cause physics”. But if consciousness is real, then by some mechanism physics causes thought, subjectivity, and that should be equally impossible. There is, to put it bluntly, no more evidence in all of modern science that physics causes thought (subjectivity) than there is (from a third-party perspective; remember the two ELIZAs) that thought causes physics. Consciousness and free will are two sides of the same coin.

If consciousness is real, and therefore experience can have meaning, then one must hold that physics causes [nonmaterial] thought. Rejecting this leaves only epiphenomenalism or eliminative materialism. The first makes experience (the subjectivity we experience every day) an illusion, while the second says it isn’t even illusory but nonexistent, something experience itself makes incoherent. Think of having a few orgasms in some clinical setting. The clinician asks you, “which orgasm was the most powerful?” You say, “the second.” The clinician, monitoring the behavior of every nerve in your body, says, “No, my instruments tell me the first was more powerful.” The question comes down to whom you are going to believe: the report of the clinician or the orgasm qualia you experienced? I stress here that it isn’t the orgasm, the measurable biological phenomena of nerve and muscle, but the subjective quality of the experience that matters.

The above example applies to qualia in general, but orgasms are particularly individual and subjectively qualified. It would be absurd to hold that the third-party measurement had logical priority over the subjective experience. The quality of an orgasm is in its subjective experience and nowhere else. It would also be absurd to hold that an orgasm was illusory (epiphenomenalism) or nonexistent (eliminative materialism). An “illusory orgasm” is no more possible than a “square circle”. But none of this means there isn’t some brain state associated with every experience, including experiences of thinking or choosing. If subjective experiences (think orgasms) are real, if they mean anything to a subject, there must be at least a logical separation between brain states and subjective experience. This is the gap so well described by David Chalmers (“The Conscious Mind: In Search of a Fundamental Theory” 1996 and “The Character of Consciousness” 2010), and it forces one to accept a property dualism of some sort.

In his book “Free Will: A Philosophical Reappraisal” (2015) Nicholas Rescher asks us to consider that there is some brain state literally simultaneous with “the thought”. The question is not which is physically antecedent (and so causal) but which is logically antecedent and so initiating. Rescher is a materialist, so his scheme must work from the side of physics. He argues that the relation between physics and thought is not causal in the normal sense that physics understands. Instead of a cause he calls it an initiation, and he makes two distinctions here. First, initiations are atemporal. Rescher (a process ontologist) holds an “event view” of cause in which events unfold (cause) other events. What is important about all event unfolding is its temporality. Events have duration (however short or long), and “causing events” must precede the unfolding of their results in time. By contrast, initiations are simultaneous with their physical expressions. Second, and crucially, they are not “events”; Rescher calls them “eventuations”. In Rescher’s view the eventuations go both ways. Brain states eventuate thoughts, and sometimes a certain class of thoughts, those we commonly call choices or decisions, eventuate brain states.

Although Rescher does not try to resolve the mystery of the interaction metaphysically, he doesn’t have to. What he shows is the reasonableness of the relation going both ways. If physics can evoke consciousness, then consciousness can, correspondingly, evoke physics. A second consequence of initiations is that there is some brain state just before a decision or choice in thought which is not sufficient to guarantee evocation of the brain state correlated with the thought. Of course the “thought correlate” is compatible with that prior state; it must be one of the states that can evolve from the prior state. That it does evolve requires the prior state (or some other compatible prior state) but also the initiating thought, which, remember, on Rescher’s view is not strictly a cause. This is important because the neuroscientist need not accommodate any thought. One brain state (an event with temporal duration and so causal powers) is traceable backwards through a (temporal) series of other brain states, the prior unfolding into the later (as in ELIZA), without the neuroscientist ever detecting the inflection point where a thought had non-temporal control.

Rescher’s distinction gives us the possibility of free will, but at the cost of some logical dualism. If one accepts such a dualism then there is no unique problem with free will. But if one rejects all dualism in favor of eliminative materialism, then not only free will but consciousness itself (and so the subjective orgasm) is impossible. The only escape from such a trap is the ad hoc move of declaring that physics causes thought but not the other way around. There is no particular reason to believe this is the case, however, for even in this view the basic metaphysical problem of the mechanism remains. If someday neuroscience does resolve the matter of how physics causes consciousness and demonstrates its sufficiency, it is reasonable to suppose it will discover at the same time how it is that consciousness [sometimes] causes (eventuates) physics.

My original statement, “no metaphysical middle ground between free will and automatonism”, has now come down to the identity between eliminative materialism and automatonism. We have no reason to suppose that consciousness is real (think orgasm) while free will is not. Each must interact with physics by what might well be the same mechanism, some non-temporal cause not yet identified but that crosses Chalmers’ gap. But where does all this leave us on the meaningfulness of arguing with automatons? If you accept that consciousness is in some sense real, then there is no choice but to accept some dualism. Once you accept that, there is no reason not to think that libertarian free will of some capacity is real also. If you reject this and insist on eliminative materialism, then neither free will nor consciousness is real, and you must accept this in the face of the very experience that leads you to this conclusion. In short, the conclusion is incoherent, and that means eliminative materialism is an epistemological nihilism.

Epiphenomenalism fares little better here. There are no epiphenomena in the physical universe apart (purportedly) from consciousness itself, and no evidence that physics can cause epiphenomena. If consciousness is epiphenomenal, so are its contents, including judgments, thoughts, and everything built upon them: our mathematics and all of what we take to be empirical knowledge. Suppose we (and who is this “we”, given the epiphenomenal nature of consciousness?) use our mathematics and science to build a real (not simulated) airplane, step into that airplane, and it flies.

Is our flight experience something real (remember the orgasm) or also an epiphenomenal illusion? If an illusion, what mechanism (the interaction problem) entails such a reliable connection between the illusion and the world? Physics produces an illusory phenomenon able, nevertheless, to make discoveries and use them to engineer devices that can only work if the discoveries (mental phenomena after all) match purportedly independent physics across time. Planes don’t fly only occasionally or by happenstance. Properly designed, built, and maintained, they fly every time. The only alternative to this extraordinary coincidence is that there is no “independent world” at all.

What saves epiphenomenalism from metaphysical nihilism is that its proponents must hold (being materialists) that it isn’t anything subjective (in this case discoveries and their connection to application) that results in these engineering marvels, but brain states determined in an engineer’s deep past. None of what we take to be “subjective experience”, for example thoughts about airplane wings, can have any causal relation to the production and flying of airplanes. Experience tells us this is patently absurd. Rescher’s notion of initiation might help here, but physics (and traditionally materialism) does not recognize any atemporal cause.

If eliminative materialism or epiphenomenalism is true, then human beings cannot be anything more than complex automatons whose “initial state” goes at least as far back as conception. Possibly it goes back further, but just as an automaton cannot know what states of the world led to its being “turned on”, it would be impossible for humans to know one way or the other whether what fixes their [illusory] choices goes back any farther than the conception of their bodies.

Either way, it doesn’t matter, because there is no you in anything that you do, choose, believe, or think. There is your body of course, but what issues from it is no different in principle than what issues from ELIZA or, for that matter, a robot floor cleaner. There is no reason for any conscious and free willed being to accept anything that issues from you as anything more than properly formed (let us say) propositions in the English language. The signs (words) carry standard meanings to the conscious recipient, but the issuer counts for nothing, being unable to hold any “genuine opinion”, that is, to consider anything subjectively one way or another (though it may falsely report having such opinions).

Note that this does not mean that propositions expressed by automatons are not true. They may well be true, but if they are, it is purely by chance that such truth is expressed through this particular channel rather than any other. There is no reason to credit the source other than to recognize that the expression came from this source. The expressive vehicle has no “stake in the game”. It makes perfect sense to take the propositions of automatons seriously in the same sense that it makes sense to take a chess move by Deep Blue seriously. But at the same time, it makes no sense to further argue or debate with an automaton or give it credit for being clever. As clever as their behavior might appear to us (who have consciousness and free will), the cleverness (though not the truth) is imputed to the automaton by us.

Consequences

So what happens if you debate an automaton and as a result your argument alters its behavior? Nothing is going on other than your output becoming its new input and deterministically re-vectoring the automaton’s report. There isn’t any mind there to change, and arguing with it becomes nothing more than a game played with the objective of affecting the course of its behavior. One might interact with ELIZA merely to try to invoke a particular response. But note that an automaton (or other determined entity) changes our free minds all the time. How many books have I read whose contents have persuaded me to alter my opinions or beliefs? Of course we normally assume that a conscious, free willed person writes the book, but there is no reason this must be the case.

Being free willed, I allow the arguments in the book (by accepting them as valid and good and choosing to alter my beliefs, behavior, and motives) to have the impact on me that they have. Linguistically, crediting the book with “changing my mind” is merely (usually) a proxy for according its author that credit. But the book is neither conscious nor free willed, and yet the book, by my reading of it, and not its author, is the proximate cause of my change of opinion.

At the end of the day, then, debating an automaton simply makes no sense. Winning such a debate is like winning a chess match against Deep Blue. On the conscious side it might be satisfying, and it provides new inputs to the automaton, but we have not thereby altered any mind. No person acknowledges any “good argument” on our part. If the automaton has a designer, she might come to recognize something novel about my argument; I might be impacting some mind at second order here. But among the foundational pillars of materialism is an insistence that there is no designer.

So what do we do with an entity who looks just like a free willed person but claims to be an automaton? There are three possibilities: 1) the entity is lying, 2) the entity is mistaken, and 3) the entity is an automaton. Notice the three alternatives concern only the status of the free will claim. An automaton can produce true propositions. Theoretically, a mind might fruitfully engage with an automaton, even learn something from it. But fruitfulness is precluded if the subject at issue is, or inevitably involves, the no-free-will claim. As it turns out, most philosophical issues are entangled with the no-free-will claim. Obviously metaphysics and epistemology, touched on above, but also ethics (any subject having any socio-political import; anything in our world involving interaction between entities that look like people) and aesthetics (can an automaton experience beauty?): all the classic philosophical sub-disciplines.

If the entity is lying, there is no point in arguing, because we do not know the motivation behind the lie, and thus even a knock-out argument serves no purpose. If the entity is an automaton, then again there isn’t any point in arguing, because no argument exists that would make the truth other than it is. Deep Blue is an automaton no matter how hard we try to convince it otherwise. Indeed, we might cause Deep Blue to report that it isn’t an automaton, a mistake by the machine. Reporting free will (or consciousness) when none exists does not change the fact of the matter. We have done nothing more than cause a deterministic system to mis-adapt in a small way, a Pyrrhic victory if ever there was one. Deep Blue’s mistaken report need not affect its chess-playing skills.

That leaves “being mistaken” by a conscious entity. Here at least there is, presumptively, a mind to be changed. In theory, some argument could affect it, could make the conscious entity recognize that it must in fact be free willed. While possible, such an argument isn’t likely to be found. Why? Because the individual concerned believes the falsehoods (often asserted by authorities like physicists and philosophers) that “there is nothing but physics” and that “thought cannot cause physics” (even bearing in mind the causal distinctions made above). Ironically, many of these same authorities see no inconsistency in physics causing thought. We cannot prove the reality of free will or even consciousness in any logically rigorous way, any more than we can disprove it. Human beings (I speak biologically here) who claim “no free will” believe this (typically) for metaphysical reasons. If physicists are correct as far as they (all science) can legitimately claim, and there is nothing but physics to be found by physical means, then the only possible evidence of the reality of consciousness and free will is what we experience subjectively in the daily business of our lives.

Either we assume that human beings on Earth who deny any free will are mistaken by intellectual error, a (free willed) choice to accept a falsehood, or we take them at their word and they are not, in fact, free willed. If we take the second alternative, continued interaction is nothing more than a game played with a sophisticated ELIZA. Of course in our real world some mix of these is also possible. Some of those who report lacking free will are simply mistaken, while others might genuinely lack it. But all of this only matters to free willed human beings on one side or the other. If a free willed being mistakenly believes she has no free will, she might be enlightened, liberated, saved by our interaction with her, however unlikely this is. If the being on one side has no free will, really is an automaton, arguing with it about this is a waste of time.

By contrast, if there is no free will on either side, then everything is a “waste of time” because all interaction would be meaningless: epistemological nihilism. There would be nothing “to know”, only determined physical behavior, a process physics does correctly recognize as purposeless and therefore also metaphysically meaningless. Why should all of us automatons bother to do anything at all? The answer should be plain. The capacity to ask that last question cannot issue from a true automaton. For an automaton, the answer must be determined, perhaps “to maintain its existence”; not a rationale or purpose (of a mind) but a blind switching of state. To question the meaningfulness of existence presupposes some subjectivity whose experience, and so existence, it is. If subjective experience is real, then physics causes (perhaps atemporally initiates, as in Rescher) thought, and, though the mechanism is obscure, there is no a priori reason why thought shouldn’t cause (initiate) physics by the same mechanism.

27 thoughts on “Arguing with Automatons”

  1. Oh ok… I agree such learning would lead to behavior changes in the automatons, but the desired direction of those changes would have to come from the outside. The relative *value* of various strategies (e.g. maximize reproduction for purposes of dominant control of resources vs. maximize resource sustainability for all) cannot be decided by algorithm. To bring the point back to philosophy, it is those philosophical issues that can’t be picked out, let alone resolved (except at random), by algorithms.

  2. Of course that would make sense. But then we’re not talking philosophy here. If Deep Blue always made a bad chess move in a certain situation, would it make sense to do something to change that behavior (not mind)? If you want it to win, then of course.

    I also said minded agents could learn from automatons: anything from medicine to cooking and auto mechanics. But in these cases we need not suppose there is a minded entity teaching us. We don’t have to suppose the machine is *responsible* in a minded way for what it has to teach.

    What I said makes no sense is any *philosophical* discussion with an automaton, because in philosophy we do have to presuppose that a free-willed mind is the source of its argument. If we do not, then the automaton’s response (like the teaching automaton’s) is fixed by algorithm + input. It isn’t *responsible* (except metaphorically) for it. Sure, we could still argue with it, but what would be the point?

    1. The point would be that there are algorithms that can alter themselves based on the input they receive. And philosophical arguments could be one of those inputs capable of causing those alterations, which would in turn alter (hopefully for the better) its behavior. IF humans are automatons, then surely we run on these self-learning algorithms.

      I’m not saying we’re automatons, because I don’t believe we are; but even if we are, we should still teach and learn from each other (even about philosophy) as a way to improve our algorithms, avoid violence, etc.

  3. You said this about your article:

    “What does interest me here are two related things. First, does it make any sense for a human being with free will to argue or debate with an entity who appears to be a human being but lacks free will? Second, if no human beings have free will, does any debate or argument between such entities have any meaning or significance?”

    I interpreted your article as arguing that the answers to these questions are “no”.

    So I asked (you can consider it to be addressing your first question if you want, because the second is more nuanced) why it wouldn’t make sense for a human to argue or debate with an automaton. If an automaton is causing problems in the world, and you have the possibility of providing it with new information (your arguments, new data) so as to alter its behavior, or to cause changes in its internal “program”, it seems to me it would make sense to try to do that.

  4. I do not understand your first question. I didn’t say it was pointless. But you keep saying “change its mind”, begging the question, which is whether it has a mind or not.

    More, the questions of mind and free will are separate. I happen to think they travel together, but there are many philosophers who think mind is real while free will is not.

  5. Why would it be pointless to attempt to change its behavior by giving it new sensory input (our words, arguments), especially if it were acting inappropriately?

    We know that there are algorithms that don’t halt. So if the automaton were stuck in one of those and had to instantly make a “life or death” decision without knowing the “answer”, randomness (which isn’t deterministic) could come into play.

    1. You’ve heard of “Buridan’s ass”? (Google it if not.) It is a paradox associated with the medieval philosopher Jean Buridan and amounts to an illustration of a non-halting problem. In the computer a decision (sans additional input) might have to be random, but even then something algorithmic (elapsed time perhaps) has to break the deadlock. Real minds (even a donkey’s) are rarely troubled by this problem. Perhaps you have discovered a possible test for a genuine machine-based mind.

        1. Okay, thanks; I’ll look into it. But I meant the two questions in my last comment to be independent of one another. Would you mind addressing the first?

  6. I agree with your statement that free will and consciousness are two sides of the same coin.

    A few questions:

    Why would an automaton not be able to “change its mind”?

    If it can, then why would it be pointless to try to do so (if it were, for example, the POTUS)?

    What would happen if an automaton had to make a decision based on an “algorithm” which was not halting but was pressed for time due to another aspect of its programming?

    1. The first answer is that it doesn’t have a mind (we suppose) to change. There might be an appearance of a change, but (again we suppose) some aspect of its algorithm, or additional sensory input combined with its algorithm, alters some choice. Second, sort of the same: some new input + algorithm halts the run-on computation.

  7. We’ll have to agree to disagree on that, unless you are speaking of “God’s consciousness”, in which case we (our consciousness) and the world would indeed be inside that.

  8. Consciousness is no mystic enigma, as Chalmers claims; the cause of it can only be physics and nothing else. It is our human experience that physics is the conditio sine qua non for all informational processes. Physics is not only mechanics and acoustics; there is also electrodynamics and the various transformations of energy. There is a dialectic between information and energy, but all is material; every piece of information is material and is therefore an object of observation by the natural sciences, especially physics. Is it so hard to understand that our waking up every morning is just like the switching on of a radio or TV? Should there be no proof that our brain works like a radio station? You surely know what it means if our brain permanently emits radiation: one can receive it with radio equipment. Would you not also believe that the reason the great enigma persists lies in the secrecy of state services, and that therefore no solution is in sight?
    Quoting you: “If consciousness is real, and therefore experience can have meaning, then one must hold that physics causes [nonmaterial] thought.”
    Consciousness would not exist without a physical base, and its generation is a physical process. The cause of thoughts is physics, not the inverse. The whole of consciousness and its contents, the subjectivity, and the false but also the correct of our thoughts, are determined by our biological, social, and cultural needs and by our capabilities and skills. Subjective experience is only one part of the content of consciousness. A special “qualia”, whatever it would be as a material phenomenon, does not exist.

    1. Well, if you say your qualia don’t exist, I can only take your word for it. No one denies that physics is necessary for consciousness; the argument is about its sufficiency. Even Chalmers believes that physics causes consciousness. His claim is that physics can never fully describe it, and so physics is *logically insufficient* but not physically insufficient.
