I’m putting this review on the blog not because there are dangling philosophical issues here, but because this book is so direct and exhaustive about its two most important themes:
China is not a State with a party. The party is the State, and increasingly since 2012, and absolutely since 2018, Xi Jinping has become, like Mao before him and Stalin in the Soviet days, the chairman of the party for life.
China, under Xi, is embarking on a serious attempt (using everything modern technology can provide) to build the ultimate surveillance State. Further, there is nothing unrealistic about this effort; they are mostly there.
This book is geopolitical in scope and theme. It is a warning to everyone, but particularly the West, concerning China’s international intentions and its present and future capacity to get what it wants. It is also about the West’s abetting China’s goals politically and especially economically. But make no mistake: China is not only a people and an industrial power, a State with a government. The Chinese Communist Party and the State are synonymous, and since 2013 the CCP has become more and more synonymous with the will of Xi Jinping.
Long as it is, this book is direct and to the point. Dr. Strittmatter does not spend chapters on Chinese history, alluding to it only where parallels pertain or where that narrative becomes part of the modern problem. There is enough reference to the period since 1949, and especially the Cultural Revolution (1966-1976), to convey a sense of what the Chinese people have been put through over the last three generations.
Following an unusual period of intellectual openness in the 2000s, China has, since 2013, been constructing the ultimate surveillance State, and it is now well on the way to completion. Not only are AI-driven systems watching everyone from the outside, but citizens are being made to carry apps on their phones tracking everything from travel to conversation. It isn’t even possible to opt out, because doing so in itself brands you as an enemy of China and blocks you from travel, jobs, apartments, and so on. Nor does complying with authorities guarantee your good standing. You will be docked social credit points if you do or say something you should not. As if this weren’t bad enough, what counts as good or bad behavior or speech is at the daily whim of the CCP, and of Xi in particular.
Strittmatter cites many examples and drills home the multi-faceted nature of the CCP program. If the system isn’t quite finished (it is not), it soon will be. But this isn’t the end of the story. The Chinese are doing their very best to extend this capability overseas. Chinese citizens must travel with these apps and connect them to foreign networks, which track them anywhere on Earth. When the Trump administration tried to ban certain Chinese payment apps, there was a huge outcry. Part of this came from Americans who now use those same apps, but a good measure was Chinese-sponsored propaganda. If the apps were blocked, the CCP would lose its best foreign surveillance asset!
So far, in many instances, foreign governments and corporations have backed off when China cries foul. The core motive is dollars flowing from China into NGOs on foreign soil and into the coffers of the world’s largest corporations (other autocratic governments can be paid directly). Unlike Russia, the Chinese, and particularly the CCP, which commands more capital than any other single entity in the world, are rich enough to buy much of what they want, including good press, and Western corporations are only too happy to sell it to them.
“We Have Been Harmonized” is about all of this and more. Strittmatter delves into the effect this is having on the psychology of the Chinese people. He hopes, of course, that this will not go on for very long, but he does not see any end to it. There is nothing to suggest the CCP will not ultimately succeed within China. He is not so hopeful about the world outside of China either. Democracy is under assault everywhere. Even where not Chinese-influenced, the present internal struggles, political polarization, and populism play into CCP hands, some greased by the money China is throwing around. Everyone working in Western executive and legislative institutions should read this book!
This review is not on the blog because of dangling philosophical issues, but to add to a series. “The Uninhabitable Earth”, “The Geography of Risk”, and now “Water”, each in their way tell us (boldly or in hints) about what is about to befall the Earth in the next 20-50 years and beyond.
Oddly, for me, this all began with Slavoj Zizek’s “The Courage of Hopelessness”. In commenting on that book, I pointed out that economic exhaustion precipitated by climate change mitigation will collapse the present capitalist world order long before the left ever has a chance to make a substantial impact. I then stumbled on these other books; reviews and Amazon links are all given above.
A long book methodically drilling down into an important subject. Of all Earth’s resources, air and water are the two most necessary to sustain life, and of the two, only water exists in three phases, gas, liquid, and solid, on, in, and above the surface. There have been other books covering the history of water (particularly freshwater) use since antiquity. Solomon goes the extra mile and looks at water from more than the usual angles. Learning to sail the oceans is part of the water story, as are the world’s inter-continental canals (Suez and Panama) and oceanic choke-points (straits like Hormuz and Malacca), and also the story of the steam engine. He also notes that food is “virtual water”. Not only is water a consumable input in growing crops, but it is also a component of the many steps needed to bring a crop to the table.
Solomon begins with a review of the freshwater situation on Earth and then visits every historical civilization digging into their history of freshwater management. A general cycle is visible everywhere. A civilization arises when its region’s water resources (including bordering seas if any) are successfully tapped to yield increased food, strategic trade or military advantage, or lower cost, usually all three in one mix or another. Successful water management results in population growth and territorial expansion until the population reaches the limits of its technology’s ability to maintain and expand its water management. Politics plays a role. Even where technology and knowledge exist, a society may become unwilling, politically, to do what is necessary to manage a degrading water system. As water management declines, so does the civilization, and this is so even where the needed water still exists. In the modern age, existing water, at least freshwater, is being increasingly used up or evaporating away as ancient glacial stores melt.
The real problem, of course, is not exactly water but population. Solomon notes but does not comment on this, treating it rather as an inevitable background to the whole story. On the one hand, an expanding population needs more water; on the other, it increasingly pollutes and otherwise abuses the freshwater still to be had.
Having reviewed water history around the world up to the end of the 20th Century, Solomon turns to the modern challenge. He revisits each of the world’s regions and summarizes their present and near-future water challenges. Climate change is re-arranging the freshwater balance around the world. Some places become much drier, others much wetter. Winter snows melt earlier in the season, and summer heat more quickly evaporates stored water. Mitigating water-related disasters, whether larger fires in dry places or bigger, longer-lasting floods in wetter ones, is consuming a larger percentage of the world’s resources. Technological and political success in managing these changes is key to the survival of each nation, and of the world collectively. There is no guarantee of success, and in fact the present trajectory does not bode well for anyone.
As noted in the review (included below), Lynch raises the question of intolerance in a tolerant society, but he does not answer it. “Must we listen to Nazis?” That is, must a tolerant society tolerate a social group that is itself intolerant? (Nazis are not the only intolerant group in the Western world, but they are the quintessential example of intolerance.) If the answer happens to be no, a related question follows: what sort of behavior constitutes intolerance that need not be tolerated?
North America, Europe, the associated “Western nations”, and India are presently the world’s more “tolerant societies”. These societies, taken as political entities, are beset by problems arising from the conflict between tolerance and intolerance, in particular the mistaken belief that a tolerant society must tolerate intolerance.
An ideal tolerant society would be one in which every social group and every political alignment is committed to tolerance of every other group, not merely in principle but in practice: in the group’s declarations, documents, political appeals, and so on. The people of a tolerant society need not agree with one another intellectually, need not have the same ideas of what constitutes a good or better society. They have the right to vote for their views and, if their numbers are sufficient, to dominate the society’s political process. Permissible differences include income disparity, at least up to the point where it becomes effectively intolerant by precluding those on the downside from acquiring the resources needed to continue their [tolerant] activities. The tolerant collective cannot advocate for an advantage that precludes any other group’s same right to support whatever social, political, or economic policy it happens to hold, provided only that the other group is likewise tolerant.
Since, in our ideal tolerant society, every other tolerant group must be tolerated, no motive to cheat on the political process can develop, because the rule of tolerance (everyone must have the same opportunity for social and political expression) would preclude it. No group could justify its social or political ends on the grounds that other [tolerant] groups have no right to their expression. Intolerant means never yield tolerant ends, except in the single case of ridding society of intolerance. In that one case, tolerant means cannot work, because the intolerant will always refuse to accede to the tolerant. Refusal on the part of a tolerant society to rid itself of intolerant groups is the source of the intolerant group’s political advantage. More on this below.
Obviously, in such a society, there could be no Nazis for the simple reason that what makes a Nazi a Nazi (speaking of the collective) is not their economic theories, but their intolerance of certain groups, notably Jews, people of color, homosexuals, and so on. In the end, their intolerance becomes intolerance of every other group that disagrees with them on any subject.
By intolerance (on the Nazi part) here, I mean the target group’s illegitimacy in the view of the intolerant group. The target group (or groups) have, in the eyes of the Nazis, no right to suffrage of any kind, even (ultimately) no right to exist, not merely as a social or political entity, but as individuals! Intolerance of this sort ends up asserting an “end justifies the means” social (and so political) attitude. If the target group does not even have the right to exist, the Nazi has no problem breaking the “rules of tolerance”, up to and including taking life.
An intolerant social or political group can only be composed of intolerant individuals. That intolerant individuals might exist in an otherwise tolerant society cannot be ruled out. So long as intolerance is confined to them personally, by criminalizing intolerant behavior (for example, hate crimes) and forbidding them to form collectives with any political or social voice, the tolerant society survives. Groups of intolerant individuals might come together to express their mutual intolerance, but no such group can apply to be a political party or a formal social group having any recognized political legitimacy, special tax status, or what have you.
When a tolerant society signals an intolerant group’s acceptance (socially or politically) by granting it political legitimacy, a certain inevitable, historically documented dynamic begins. The intolerant group has an inherent political advantage. Since, for the intolerant, the ends justify the means, they are free to cheat while those who are tolerant are not. Though it may take some time, the intolerant gain advantage, politically and economically, because their intolerance is [mistakenly] protected by the tolerant. This brings more people into the group (they sense an economic or political advantage in belonging) giving it even greater political influence. The cycle is self-reinforcing. The intolerant group eventually grows to overwhelm the formerly tolerant society.
This is why the answer to the original question (must we listen to Nazis?) is no! Tolerating intolerance, possibly defensible on some theoretical grounds, is illogical because the intolerant are intrinsically corrosive to any society that tolerates them. Intolerance, like cancer, is inevitably destructive of the body that harbors it. It is not logical to do anything but struggle to root it out.
This commentary is already long enough, but I would briefly address the second question only implicitly covered in the above discussion: what counts as legitimately disallowed intolerance? Suppose I am the publisher of an astronomy magazine. Must I allow the publication of an article arguing that the earth is flat and at the center of the universe? If I sponsor a conference of astronomers, must I allow the flat-earther an official voice with a formal presentation? Must I allow her to attend the conference at all?
To all but the last question, the answer is no. As noted above, the issue is political and social intolerance, not intellectual disagreement. In my view, intolerance of intellectual viewpoints (“your ideas are idiotic”), even ad hominem (“you are an idiot”), does not automatically count as intolerance of the disallowed sort. My position as conference sponsor allows me to reject papers and speakers whose intellectual views clash strongly with my own. I am not denying this person a political or social voice within her own social group, nor social interaction with my group.
Forbidding her even to attend my conference might amount to disallowed intolerance provided she has not proven to be a disruptive influence at past conferences; this because a conference is a social as well as an intellectual event. To avoid unrealistic restrictions on human psychology, the tolerance demanded of every social and political organization is limited to the right of each organization as such to exist legitimately in the eyes of every other organization. The association of astronomers is not intolerant of the flat-earth society politically or socially, only intellectually.
We might go on to examine a more complex and perhaps realistic case. Must the flat-earther be permitted to teach astronomy or earth science in a public school? Imagine she is otherwise qualified by having the appropriate teaching certificate. What complicates this example is the public nature of the school (supported by taxes on the community of all social groups in its district) coupled with the curriculum approved (presumably) by that community. I leave this example as an exercise for the reader.
Another book about the polarization of American politics, this time from the viewpoint of individual and social psychology. Lynch makes some excellent general points about extreme polarization and the unwillingness to listen to other views poisoning American politics. He well describes the harm this does to democratic polities in general and to the U.S. in particular. There is nothing new in this. There have been other periods of extreme polarization in American politics, but none like this one since before the Civil War.
Among the new features this time around, the Internet and the sheer scale of many modern corporations contribute to the problem. The Internet market is filled with people who actively seek to limit their exposure to ideas running counter to their own. Providing individuals the tools to build these barriers to alternatives (the same tools can be used to explore alternate viewpoints) is just good business. Individuals, of their own free will, choose to use them to limit the perspectives to which they are exposed.
The Internet is but one facet of this problem of know-it-all arrogance infecting polities all over the world. Still, the pain is both acute and different in the U.S. and Europe because these are among the few places in the world (Australia and Japan, among others) where political and ideological alternatives are not criminalized. Lynch lays out the problem and its consequences, both for the health of society and for “the truth,” which, he points out, is always out there even if not directly accessible, or utterly denied by postmodern critics.
While the book is good in general terms, Lynch elides specific problems. He asks at one point, “must we listen to Nazis?” In other words, must a tolerant society tolerate intolerance? He asks the question but never really answers it, other than to point out that opinion on this goes both ways.
If this is not a great book, it is a good one and another solid addition to the literature about dangerous sickness in Western cultures.
Whatever one believes about The Urantia Book, there is plenty of serendipity in the universe. Literally on the day I published “Problems with the Cosmology and Astronomy of The Urantia Book”, I received a link to Tom Allen’s “The Great Debate on the Scale of Orvonton”, one of the issues I discuss in my essay. Mr. Allen does this issue far more justice than do I. For example, he suggests that some of the confusion over The Urantia Book’s terminological usage stems from its describing two different Orvontons: today’s partly finished one, and the future finished version. This is an excellent point that I missed. The time factor, destiny, does help to interpret what The Urantia Book says about this matter. It does not, however, completely clear up the problem.
I have no quarrel with the content of Mr. Allen’s book. He does miss a few things when evaluating Urantia Book claims against modern cosmology (he has republished the book three times, last in 2020, to accommodate just such advances). Type Ia supernovae overlap with and supplement the Cepheid-variable “standard candle”, and have now for some thirty years, but they are not mentioned. It can be argued that what the papers call the Grand Universe is more substantially complete than he thinks [21:1.4]. His argument that the universe does not look (to modern astronomy) like the papers describe because we are very early in its history can be challenged. He does mention the big bang, but only to dismiss it as one of many mistaken cosmological theories soon to be discarded, as others have been in the past. I believe this is unfair. Allen fails to accommodate the enormous expansion, since 2000, of evidence in support of the big bang, though to be clear, the Orvonton debate and the origin of the universe are not directly connected issues.
Mr. Allen states his bias explicitly (as a good philosopher should) on page 8, where he says: “I crave philosophically to understand what the Urantia papers say about the cosmology, cosmogony, and cosmography of the universe. I am curious how current astronomy along with early 20th Century history validates or confuses revelatory articulation.” The revelatory status of The Urantia Book overall is assumed. While the papers do state that the cosmology presented is not inspired, it is assumed to mean something, to represent some truth-fact about the universe’s organization. If what Mr. Allen calls “surface errors” in The Urantia Book’s assertions conflict with modern astronomy, our job is to puzzle out what the book is really trying to tell us.
I do not make this assumption. Cosmology and astronomy have made longer leaps since 1965 than they did throughout all of human history prior to that year, including the development of powerful telescopes (optical and radio) in the first half of the 20th Century, when the papers were written. Throughout human history down to roughly 2000, all astronomy was electromagnetic (including the discovery of the CMB), light of one wavelength or another. Only since that date have two non-electromagnetic means of sensing the cosmos come into existence, neutrino and gravitational wave astronomy, the former in particular strongly reinforcing cosmology’s conviction in the truth-fact of the big bang.
As noted above, none of this bears directly on Mr. Allen’s exposition of the Orvonton scale issue. If, however, I am right (I do not insist that I am) about the deeper absurdity of Urantia Book cosmology (see the essay linked above), those problems reduce the significance of the Orvonton dispute to something like the medieval scholars’ debate over how many angels can dance on the head of a pin.
None of this is to gainsay Mr. Allen’s book. As concerns both the wider and narrower cosmological issues, he has set himself an impossible task. One simply cannot assume what The Urantia Book says is meaningful and contradiction free, and accommodate the discoveries of modern cosmology at the same time.
This delightful little book is written for a specific audience, readers of The Urantia Book, and specifically, readers interested in what The Urantia Book says about cosmology and astronomy.
The Urantia Book describes a [future] highly structured universe still very much in that structuring process. But to present this description, the authors were constrained to reveal it in the cosmological and astronomical language and knowledge of the times in which The Urantia Book was written, more or less the 1930s. Orvonton is a sub-segment of the present and future universe.
What The Urantia Book says about Orvonton suggests it might be the Milky Way galaxy and its satellites. Other statements suggest it includes (perhaps in the future) all the galaxies in our “local cluster”, or the “local sheet” (a peculiar collection of nearby galaxies all lying in a plane), the local volume, or even up to the Virgo supercluster! None of these collections was understood in the 1930s, astronomers of that time having discovered some of these galaxies but not their spatial relation.
Mr. Allen pieces together the clues leading to each of these hypotheses. He is meticulous and scholarly, carefully documenting all the various lines of evidence from The Urantia Book and evaluating them in relation to both 1930s and modern astronomy. His purpose here is to survey the territory. He does not argue for a particular favorite interpretation. His evaluation, if not exhaustive, is close to it. Overall a scholarly presentation, and while there are issues here and there with text formatting in my Kindle edition, given the narrow audience for this book, I will not count those against him. Bravo! Good job!
The purpose of this essay is to set the cosmology and astronomy of the Urantia Book against what modern, twenty-first-century cosmology and astronomy observe in the physical universe. I will also argue that even if today’s cosmology and astronomy have got some things wrong about the structure of the universe, there is enough evidence favoring cosmology’s fundamental insights to render the Urantia Book’s cosmology, and much of what it says about astronomy, impossible.
Conventions: The Urantia Book is UB or “the book”. References to scientific papers and images are linked. References to sections of the book are signaled by [UB paper:section.paragraph].
Updated on May 26, 2021 to include section “a missing superuniverse”.
SCIENCE AND THE URANTIA BOOK
The Urantia Book (UB) is about God. Its theology (presented primarily in the Foreword, papers 1-10, and papers 99-118) expands human ideas about God, revealing a more nuanced picture than any human-originated theology has achieved. Theology has consequences. For example, if God is good and what humans gain in this life has continuing value personally, there must be some mechanism for expressing a postmortal personality. The UB illustrates this with its story of the ascension scheme, coupled with an explanation of universe administration (God the Seven-Fold) terminating in the Creator Sons, which sets the context of our relationship to Jesus, Michael of Nebadon. The book’s last section, “The Life of Jesus”, is perhaps the most remarkable illustration of the relationship possible between man and God ever written!
The UB contains hundreds of scientific assertions. Readers of the book have for some time been aware that much of this science is problematic. In 2017, Geoffrey Taylor re-wrote (updated) “Scientific Predictions of the Urantia Book”, his 1987 paper co-authored with Irwin Ginsburgh. In this paper, he discusses 31 specific “scientific predictions” found in the UB. He compares them to what is known now, confirming (most), disconfirming (a few), or remaining an open question.
Part of the problem of assessing these UB assertions is dating them. UB history holds that the text of the book was completed before 1940. If this is true, then any matching discovery made after 1940 would be evidence for the UB’s veracity, or at least that its authors made a good guess. The UB was not published until 1955, and whatever physical precursors existed before that date (the original, datable notes and printing plates), the hard evidence that no changes were made during the 1940s and very early 1950s, was destroyed. I mention this because it is part of what is problematic about “UB science”. I do not attempt in this essay to resolve these issues. What is problematic about UB cosmology and astronomy has nothing to do with these date issues.
Here is a categorization and count of issues Taylor addresses:
The UB contains dozens of “scientific assertions” besides those Taylor mentions, and some of the above might fit different categories. To an extent, he cherry-picks his examples. For example, he makes no mention of [UB 65:6.1]: “Ever will the scientist come nearer and nearer the secrets of life, but never will he find them, and for no other reason than that he must kill protoplasm in order to analyze it.” The emphasis on never and must is mine, because categorical terms like these make the statement false. Biologists have been probing cells and measuring their living processes since the late 1960s! Surely revelators (who could “anticipate the scientific discoveries of a thousand years” [UB 101:4.2]) would know this? Why include categoricals like “must” and “never”?
Besides the “hard science” categories listed above (Taylor’s subject), the UB makes hundreds of statements in the arenas of soft sciences, anthropology, sociology, psychology, even “political science”, but none of these are Taylor’s subjects, nor mine. This paper focuses on cosmology and astronomy because the UB’s description of the mortal ascension scheme rests on these. I will cover the biology of human evolution (another major issue) in another paper.
In [UB 101:4.1], the book makes this statement: “Any cosmology presented as a part of revealed religion is destined to be outgrown in a very short time”, and 101:4.2 emphasizes that “The cosmology of these revelations is not inspired.” To me, “not inspired” means the revelators merely adopted and adapted the cosmology, primarily the steady-state idea, that they found in human sources before 1950. But neither the book’s morphology of the Master Universe (everything inhabited and not yet inhabited) nor its revelation of “space respiration” is to be found in astronomy or cosmology papers of the period. Where did the authors get this material? Except for the steady-state-creation idea, UB cosmology does not reflect scientific consensus, or even speculation, of the 20th Century’s first half. If “not inspired”, and not a product of early 20th Century science, how exactly are we to understand it? If it seems not to match observation, are we to accord it some credibility merely because it appears in the UB?
Briefly summarized, UB cosmology says:
The physical universe is a steady-state creation along the lines of human ideas popular in the first half of the 20th Century. [UB 9:3.4] [UB 42:4.9]
Space, presently filled with the material creation, respires in two-billion-year cycles: a billion years expanding (we are currently halfway through such a phase) and a billion contracting. [UB 11:6 whole section]
The material creation is not symmetrical except bilaterally around an axis perpendicular to Paradise. Paradise is an ellipse, and the universe as a whole rotates around Paradise (much more on this below). The axis perpendicular to Paradise is the only one close to symmetrical. The other two axes (an ellipse has three) are asymmetrical. [UB 11:7.3]
The physical universe astronomers and cosmologists see from Earth looks absolutely nothing like what the UB describes. What we see cannot be interpreted (rationalized) along the lines the UB claims, nor can the UB presentation be aligned with modern observations. It isn’t just that the UB is wrong as to details; much of it cannot be made sense of in the light of present observation, including astronomies invented only in the last few decades! At least this is what I now believe.
Cosmology is a purely observational science. The universe “happened” (slowly or suddenly), once, sometime in the past, and continues to the present day. We cannot experiment by setting initial conditions and seeing what sort of universe emerges from them. What cosmologists do is look. Having well understood the physics of light and the effect of gravity on it, they propose various theories about how the universe got going (like steady-state) and ask: “what are the consequences, to the light we observe, of that theory?” Dozens of theories have been tried (including those suggested by UB readers trying to rationalize the UB picture with present observations), and only the Big Bang survives. The Big Bang, whose first predicted consequence of many, the Cosmic Microwave Background temperature, was calculated 10+ years before it was found, is the only theory that survives all, and I mean all, the tests (see the note on Big Bang evidence at the end of the essay).
STEADY STATE versus THE BIG BANG
The big bang was, 100 years ago, a nascent cosmological competitor to the millennia-old idea that the appearance of matter in the cosmos is an ongoing process: new matter, hydrogen (or perhaps protons, neutrons, and electrons), slowly appearing throughout the universe. This “steady-state” creation would forever produce new material for the formation of stars and other entities, yielding today a universe of unknown size and age, possibly infinite and eternal!
In 1953 George Gamow contributed to the debate. Given the then-controversial (until the mid-1960s) notion of a big bang roughly ten billion years ago (a then-best estimate based on tracing the apparent recession speed of distant galaxies backwards in time), Gamow reasoned that there should be a left-over, cold, cosmic microwave background of roughly 7 degrees Kelvin (the CMB) throughout the universe. In 1965 the CMB was discovered accidentally by two Bell Labs engineers trying to figure out why they couldn’t get rid of a constant noise at 2.72 degrees Kelvin from a new, very sensitive antenna. The first “big evidence” for the big bang was not that distant objects appear to be racing away from one another (a steady-state creation also expands as more matter is added), but that there is a cold light at 2.72548±0.00057 K coming from every direction we look.
A singular origin is the only reasonable explanation for this light. Since the 60s, numerous other phenomena whose observation can only be explained by a singular origin have piled on, evidence upon evidence, to support the idea. As might be expected, at least into the last quarter of the 20th Century, “Steady State” aficionados suggested other explanations. All had (as good scientific theories must) testable consequences. The tests all failed, while the big bang has survived every test of its theorized outcomes. I bring up the big bang here not to hawk it (I list some independent evidentiary lines at the end of the essay), but because it has implications not only for the matter of “steady-state creation”, but also for the UB’s other cosmological assertions, space respiration and the shape (morphology) of the creation.
As concerns “steady-state”, the UB tells us that the Infinite Spirit can slow down energies to the “point of materialization” [UB 9:3.4]. Presumably, this is the source of all the matter in the universe. Any light produced by this process would cool as the universe expanded (the book tells us we are in an expanding phase due to “space respiration”). But since matter creation is constant, we would expect the temperature of such light to vary as we look across the sky: warmer coming from “newer regions” and cooler from “older”. Yet all the background light we see (strictly, “listen to” with radio telescopes, as it has cooled down to microwaves) is at the same temperature, which means it all had to begin simultaneously. To hypothesize that light from creations of different ages nevertheless all happens to reach Earth at 2.725 kelvin from every direction is ad hoc.
SHAPE OF THE UNIVERSE
The UB’s biggest problem is the shape of the universe (the Maltese Cross [UB 11:7.3]), its declaration that there is an upper and lower limit to “pervaded space” (the material creation) [UB 11:7.6]. A related problem is the missing consequence of a mass, Havona and its surrounding gravity bodies, “as great as the seven superuniverses (the presently inhabited “Grand Universe”) combined” [UB 12:1.4], not to mention the very existence of a different, “non-pervaded space” constituting a sort of reservoir to and from which pervaded space flows. The UB description, if authoritative, would have evident observational consequences. Given the UB picture, if we look in all directions from our position in space, we should see different things. In a direction outward along the plane of creation (as the UB tells it), we should see lots of galaxies (the “outer space” universes). But in a direction perpendicular to this plane (up or down), we should see nothing at all beyond our superuniverse. Even accepting a rationalization by UB readers that our galactic supercluster (see ASTRONOMY below for discussion) is the real superuniverse, we should see nothing beyond it.
Moreover (I thank my friend Charles Lamar for pointing this out), if we look above or below the center of our galaxy, above and below what the UB claims is the center of creation lying somewhere behind it, some substantial angle of arc would be a view at and through non-pervaded space. What would non-pervaded space (not to mention the mid-space zone that must also intervene [UB 11:7.3]) do to the starlight coming from its other side? The UB provides no clue, but the only thing these regions could possibly do, if our observations are to be believed, is manipulate the light so thoroughly that the universe of galaxies on their far side looks exactly like the universe we view in every other direction!
So what do we see? We see what cosmologists call an “isotropic universe”, meaning “the same in every direction”, one that is also homogeneous on very large scales. In every direction, we see billions of galactic clusters and streams of galaxies out to 10+ billion light-years. Even our supercluster is but one of billions of them in every direction (see illustration in this link). Every dot of light in that image is a galaxy or cluster of galaxies, and this is what we see in every direction we look. Moreover, even if this illustration (after all, a computer construct based on observation) is not quite right, what is indisputable is that what we see is the same everywhere! In every direction, including towards the Milky Way’s center, there are galaxies and galactic clusters at all distances. Even if individual estimates of distance are considerably mistaken, we cannot be mistaken about the shape of the overall distribution. This fact alone makes the UB picture of a bilaterally symmetrical universe unbelievable.
Some astute reader is going to object that the universe may not be precisely isotropic. There is in fact some evidence that matter density along one axis is about two percent greater than along the axis perpendicular to it. But two percent is nowhere near the all-and-none difference that follows from the universe architecture portrayed in the UB!
Suppose the UB has deliberately provided a fantasy cosmology (possible, if not likely, given the astronomical knowledge of the 1920s and 30s) on which to rest its description of the mortal ascension scheme? The problem is that what we see is so vastly different from what the UB describes that it is impossible to reconcile the two architectures. Furthermore, astronomers on a world a few billion light-years from Earth would observe, from their world, the same isotropic universe we detect from ours. A universe that appears isotropic from every position within it hasn’t any center! The whole of the UB ascension scheme ultimately rests on Paradise at the center of everything, a center that doesn’t appear to exist. Now one might argue that we cannot actually occupy another position from which to view the universe. We have good reason to believe the universe would appear isotropic from any place in it, but we cannot know this. There are, however, other observable consequences that would follow if Havona existed.
THE HAVONA GRAVITY PROBLEM
This issue of a center (and what the UB says about it) is integral to the book’s “shape story”. The central universe, Paradise, the billion Havona worlds, and the “dark gravity bodies” surrounding them are said to contain mass “far in excess” of the entire Grand Universe [UB 12:1.4]! That’s a lot of mass! I’m tempted to bring up the matter of gravitational waves here, but I demur. It is possible (I am no physicist) that the arrangement of a central mass surrounded by two rings of “dark gravity bodies” orbiting in opposite directions [UB 14:1.8] is set up precisely to cancel (by interference) the enormous gravitational waves that we would otherwise surely have noticed (and do not) coming from some particular direction in the sky. But while I can speculate my way around missing gravitational waves, there would be other consequences of such a mass.
According to the UB, Havona is presently on the other side of our galactic center (we are not told how far), where dust obscures what would otherwise be a view of a massive dark body occluding everything on the far side of it [UB 15:3.3]. Whoever constructed this part of the UB cosmological fantasy did not understand the effect of mass on light. Even if no gravitational waves emanate from Havona, the central universe has gravity [UB 11:8.7].
While we cannot see directly through the center of our galaxy (we do see behind the dust in X-ray light, but what is visible are stars still within our galaxy), we can see above and below the central band. What do we see? We see the same thing we see in every other direction: billions of galaxies out to more than ten billion light-years! But that is not what we would see if there were, lying behind the central band of the Milky Way, a collection of bodies whose mass equaled the whole of the Grand Universe. All the light coming from stars (superuniverses and outer space bands) on the other side of Havona (setting aside the issue of looking through non-pervaded space discussed above) and just above and below the Milky Way’s central band would be bent towards us and appear compressed together. What we would see is starlight fused into a bright band (see this image of a black hole lensing a galaxy lying somewhere behind it; now imagine that instead of a single galaxy, we saw the light of thousands smeared out by the gravity of Havona), a halo of light surrounding an empty (dark) region. We do not see anything like this.
Although our view through our galactic center is hazy and the few stars we resolve there are within the Milky Way, whatever is beyond the galaxy, it cannot be a mass collection as great as the rest of the Grand Universe combined. Any large gravitational mass would distort the light coming from galaxies on its other side. In short, what we would see looking in that direction would not look the same as what we see looking in the opposite direction. But what we see is the same. There cannot be a mass such as the UB describes anywhere on the other side of our galactic center.
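The size of the distortion can be ballparked with the standard Einstein-ring formula. The UB gives neither Havona’s mass nor its distance, so every number below (a supercluster-scale lens of 10^15 suns, 50,000 light-years away, lensing galaxies a billion light-years behind it) is an illustrative assumption, not a claim:

```python
import math

# Illustrative only: the UB gives neither Havona's mass nor its distance,
# so every figure below is an assumption made for the sake of a ballpark.
# Einstein ring radius: theta_E = sqrt((4*G*M/c^2) * D_ls / (D_l * D_s))
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
LY = 9.461e15        # metres per light-year

M = 1e15 * M_SUN     # assumed lens mass: supercluster-scale
D_l = 5e4 * LY       # assumed Earth-to-lens distance (beyond galactic center)
D_s = 1e9 * LY       # assumed Earth-to-source distance
D_ls = D_s - D_l     # lens-to-source distance

theta_E = math.sqrt((4 * G * M / c**2) * D_ls / (D_l * D_s))
print(math.degrees(theta_E))  # several degrees of arc
```

Even with these conservative guesses, the lensing ring comes out several degrees across, a structure no sky survey could possibly have missed.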
SPACE RESPIRATION
While perhaps not UB cosmology’s biggest problem, space respiration is a big one. Briefly put, the volume of “pervaded space”, the horizontal arms of the Maltese Cross, respires, expanding and contracting in alternating one-billion-year phases [UB 11:6], presumably expanding and compressing the material creation along with the space it occupies. Contraction does not end in a “big crunch” (everything crushed together, generating a new big bang), but is a partial inspiration lasting a billion years before expansion begins again. Neither the UB nor modern cosmology hints at any mechanism that could drive this process. We are told only that “non-pervaded space”, the vertical section of the Maltese Cross, also contracts and expands, inversely with the pervaded zone. As with other such assertions of the revelators, we are left only with the reasonable assumption that God knows the trick.
Even if real, space respiration cannot be measured directly. We have not been observers on Earth long enough to witness a transition from contraction to expansion (our present condition). If, however, our understanding of how light behaves in an expanding universe (red-shifted), and how it would behave in a contracting universe (blue-shifted), is correct, the alternating expansion and contraction would have visible consequences we do not observe.
There are two issues with space respiration. The first is again the temperature of the Cosmic Microwave Background light. The UB never says how old the physical universe is. Most readers take it to imply an age greater than the 13.8 billion years cosmologists calculate. But even given our age estimate, there would have been nearly seven complete respiration cycles by now. It might happen that the universe’s background light comes out at 2.725 K (setting aside the consequence of steady-state creation at different times noted above) in a universe that has been expanding and contracting in two-billion-year cycles. But remember that the calculated temperature (in 1953, before the CMB was discovered and measured), only about four kelvin off the measured value, was based on a model universe expanding continuously for roughly ten billion years.
Even if we assume the universe expands overall, each expiration leaving the universe a little bigger than it was when the prior inspiration began (more matter being created over time), it seems extraordinarily coincidental that the measured temperature of the light is very close to the theoretical result for light from a big bang followed by 13.8 billion years of continuous expansion. That coincidence is problematic.
The coincidence regarding the background light’s temperature is not the only observational consequence of space respiration. Suppose we take two very similar stars, A and B (same mass, composition, history, and spectrum), both a few billion light-years distant, but with star B one billion light-years farther from Earth than star A. Both stars exhibit red-shifted light because we are presently in an expiration (expansion) phase of the respiration cycle. But on its journey to Earth, star B’s light experienced one extra phase of the cycle, a contraction, and so one extra period of blue-shift. By the time star B’s photons were as far from Earth as star A, they would be a little bluer than they would have been had they not traveled that extra billion years through a contraction phase. Compared to A, star B would appear a little bluer than it should (remember, the stars are identical). By our theories, it should instead be a little redder, being one billion light-years more distant.
To be clear, star B’s light would still be red-shifted, just less red-shifted than star A’s. When star B’s light, the light we see today, reaches us, its redshift is distorted towards the blue. The star would appear closer than it is, because our theory of light says that “less red means closer”. Half of the millions of galaxies we see with our telescopes in every direction (roughly, those at odd multiples of a billion light-years from us) would be a little bluer than our cosmological theories predict. From our viewpoint, their cosmological distance would appear smaller (less red) than it is.
Looking outward from Earth, in every two-billion-light-year increment, half the stars in every direction would appear closer to us than they should! There would appear to be rings, like tree rings, extending every other billion light-years outward for as far as we could see. The rings would be an optical illusion, a mirage, an artifact of interpreting the stellar spectrum through our current theories, but the illusion would be unavoidable and noticeable to astronomers and cosmologists if space respiration were a fact. We do not see such rings. Space respiration, like the Maltese Cross, is a fantasy.
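The tree-ring effect can be sketched with a toy model. The two-percent-per-phase shift is an invented figure, and this is deliberately not a real cosmological calculation; it only isolates the alternating-phase effect described above:

```python
# Toy model, not a cosmological calculation: assume each billion years of
# travel during an expansion phase stretches wavelengths by 2% and each
# billion years during a contraction squeezes them by 2% (both invented
# figures); phases alternate along the light's path.
STRETCH = 1.02
SQUEEZE = 0.98

def net_shift(distance_gly):
    """Cumulative wavelength factor for light from distance_gly away."""
    factor = 1.0
    for leg in range(distance_gly):   # one leg per billion light-years
        expanding = (leg % 2 == 0)    # phases alternate leg by leg
        factor *= STRETCH if expanding else SQUEEZE
    return factor

for d in range(1, 7):
    print(d, round(net_shift(d), 4))
```

Odd distances come out clearly red-shifted while even distances are nearly unshifted: the alternating “tree rings” the paragraph describes, instead of the smooth redshift-with-distance relation astronomers actually measure.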
ALTERNATE ROTATION
According to the UB, everything in the universe other than Paradise is rotating. Indeed, every layer of the material creation from Havona outwards, the Grand Universe and the four outer-space levels, rotates in a direction opposite the layers adjacent to it [UB 11:7.9]!
There is nothing in big bang theory that would impart rotation, angular momentum, to the universe, and most cosmologists do not believe the universe is rotating. Rotation imparted by the big bang would leave a polarization fingerprint on the background light, the CMB. Cosmologists have looked, and so far see nothing, which does not mean it isn’t there. In fact there is recent evidence that rotation around multiple axes is possible (see link), while the UB claims but one axis (the semi-symmetrical axis perpendicular to Paradise). But nothing of what has been seen suggests opposite rotations at different distances from us.
Alternate rotation of successive space-level bands would surely be noticed. Between the Grand Universe and the first outer-space level, the additive effect of motion in opposite directions would be dramatic.
First, within a band, the proportion of galaxies rotating in the direction of the band’s motion would be far greater than the small differences actually observed. Between bands, an even greater, alternating difference would stand out: more rotation in one direction in nearby space, out to a hundred million light-years, and more in the opposite direction a billion light-years distant. The small statistical departure from rotational randomness so far detected (see link above) shows no variation by distance, nor does this earlier paper looking at a single-axis rotation. Second, and much more obviously, galaxies in the next outer band moving towards us would exhibit blue-shifted light, while those moving away would be more red-shifted than universe expansion could account for.
Despite some controversy over universe rotation as a whole, there can be little doubt the UB claim of alternating directions-of-rotation cannot be true. By any measure of distance, the lack of systematic difference in the color of light produced by bands of galaxies rotating in opposite directions is an irrefutable falsification of the UB claim.
DARK MATTER AND DARK ENERGY
There are two problems in modern cosmology, dark energy and dark matter, that are not mentioned directly in the UB but bear commenting on in relation to what the book does say. Dark energy (say cosmologists) is what pushes space apart, yielding the galaxy-recession observations made since the 1920s. The UB instead has space respiration, whose problems are discussed above. On the side of physics and cosmology, there is the quantum vacuum, which at least (despite controversy) points at a solution to the dark energy problem.
Dark matter is another problem. It arises from our observation that stars in the outer regions of a rotating galaxy move as fast as stars nearer the galactic center, which violates our understanding of how gravity works, unless there is much more gravity in and around the galaxy than we can account for by adding up all the stars and gas we detect. “Dark matter” was first proposed as a solution in 1933 by Fritz Zwicky (from the motions of galaxies within clusters; Vera Rubin’s rotation-curve measurements in the 1970s made the case for individual galaxies). Think of it as a stand-in for “we do not know what, but it has gravity”.
The UB tells us about entities, “Master Physical Controllers”, “force organizers”, and “Power Centers” [UB 29 all], whose job might just possibly include making that strange behavior happen as some sub-system of their larger-scale organization. Unlike space respiration, unless dark matter is one day directly detected, there is nothing to be observed that would let us tell the difference between the action of controllers and force organizers, and dark matter.
Dark matter is among the few cosmological problems not directly informed by the cold light of the CMB. But that light does pose an issue for controllers and other entities organizing physical matter as portrayed in the UB. The cold light is a fingerprint, left by the past, on the present distribution of galactic clusters we see throughout the visible (Earth’s “cosmic horizon”) universe. That fingerprint, plus momentum and gravity (including dark matter), all floating on dark energy, explains the present distribution of all the matter in the visible universe.
Unless the real goal (at least at the fourteen-billion-year stage) of the entities revealed in the UB is the present isotropic and homogeneous distribution we observe, they aren’t doing very much besides turning galaxies into pinwheels. To be sure, this is not specifically a problem for UB cosmology, as the matter of dark matter lies for the moment beyond our grasp.
ASTRONOMY
In a hierarchy of “big science”, astronomy falls below cosmology. Cosmology is about universe origins and structure overall; today, on Earth, it is focused on the background light. Astronomy is about the light of stars, not the background. In this essay, and in the UB, the two disciplines cross over in the implications of space respiration and alternate band rotation. But those ideas are found nowhere in modern cosmology or astronomy, apart from the possibility of some overall universe rotation and the notion of a permanent, gravity-driven reversal of expansion into a “big crunch” and new big bang.
There is a lot of astronomy in the UB, much of it problematic. As with cosmology, the problem is that what the UB says conflicts with our observations. Here, I refer to a more “local neighborhood”: hundreds of thousands of light-years up to a few hundred million, but not billions.
What exactly corresponds to the superuniverse of Orvonton? Tom Allen has written “The Great Debate on the Scale of Orvonton” addressing this question in a far more thorough and systematic way than I do here. He also makes a point about time. It is quite reasonable to suppose that the book speaks of two different Orvontons, one as it exists now, and the other as it will exist in the far future. The UB does not differentiate between these, but Mr. Allen’s point is worth bearing in mind in the discussion below.
The UB usually implies Orvonton is the “Milky Way Galaxy”, the issue being what counts as the Milky Way? Our superuniverse is about five-hundred-thousand light-years across [UB 32:2.11]. Now introductory astronomy texts will say the spiral arm galaxy we think of as the Milky Way is about one-hundred-thousand light-years across, but that does not include the now-discovered dozens of satellite galaxies orbiting the spiral part. If Orvonton includes all of these, five-hundred-thousand light-years is a fair (possible) estimate.
About two and a half million light-years from the Milky Way is the spiral galaxy Andromeda and its collection of satellites. Is Andromeda another superuniverse? If it is, the fact that our two galaxies and their satellite collections are careening towards one another at over a hundred kilometers per second should be troubling. It will take billions of years for them to collide, but in the UB’s picture, they shouldn’t be drawing closer to one another at all; they should be proceeding in an orderly orbital fashion around the central universe! The UB says Andromeda is not yet inhabited [UB 15:4.7]. Yet to our telescopes it looks at least as well organized as our own Milky Way. Why should a well-organized star cloud so close to us, compared in particular to all the other galaxies in our supercluster, be uninhabited when we, clearly, are not?
There is another curious thing about the Milky Way and Andromeda. There aren’t any other big galaxies anywhere within a few tens-of-millions of light-years. Some hundred randomly scattered smaller galaxies are in this region, our “local galactic cluster”. Beyond the “local cluster”, there are some hundred-thousand other, mostly small, galaxies and other local clusters out to one-hundred-million light-years! This collection, our super-cluster, is not distributed smoothly in its space but looks more like a chaotic three-dimensional ink-blot. It is called Laniakea, and this link is a computer rendering of it.
This region of space, a bubble some hundred million light-years across, looks nothing like the UB’s description of Orvonton, its ten grand divisions [UB 15:3.4, 41:3.10], and so on. If the UB refers to the far future, it isn’t clear about it, especially if astronomers have supposedly identified eight of the ten divisions [UB 15:3.4]. This would have to mean “future Orvonton”, with the eight identified divisions being the few nearby galactic clusters identified in the 1930s. Some UB readers have seized on the hundred-thousand figure for the number of galaxies in the supercluster (a rough estimate which could be substantially high or low, we do not know) and suggest that these galaxies are really what the UB calls “local universes”, the domains of individual Creator Sons. Whatever part of the superuniverse these galaxies represent, the idea is that they will come to look like the UB description in time.
Entertaining this idea for a moment: it would explain the present chaotic distribution of the supercluster only if it could be shown that some order is being imposed. Is Laniakea more organized today than it was a billion or so years ago? We do not know, of course, but our limited observation of relative motions suggests no ordering pattern and can seemingly be explained by gravity alone. It might also be that the UB is just plain inconsistent! Moreover, the speculation about entities at work from our galaxy out to the supercluster does not explain why, looking outward, widening our focus beyond a few hundred million light-years to a billion or more, we do not see one or a few superclusters around us, but thousands of them in all directions. According to the UB, there are empty spaces, bands of lessened activity, in between the bands of galactic creation, the “outer space levels” surrounding the Grand Universe [UB 11:7.7]. Our view reveals nothing like this. To be sure, we observe gigantic voids, empty regions distributed like the holes in Swiss cheese, but nowhere laid out in neat concentric circles.
A NOTE ON THE GREAT ATTRACTOR
The Great Attractor is not a part of UB cosmology or astronomy. It does, however, illustrate what readers sometimes do with bits of space news in an effort to reconcile the UB with observations. Some decades ago it was discovered that our entire region of galaxies (what we now call Laniakea) is moving together in a definite direction and at a speed that was not, at the time, explainable. The term “Great Attractor” (GA) was coined in the late 1980s by Donald Lynden-Bell, Alan Dressler, and their collaborators as a stand-in for whatever unknown gravitational source was pulling us along. Some UB readers speculated that the GA was Havona. I have already noted the effect Havona would impose on our observations, and in the last few years the GA has proven explainable after all.
There are not one but two Laniakea-sized superclusters out ahead of us in our line of flight, one a hundred-million light-years ahead of Laniakea, the other a hundred-million beyond the first. By contrast, behind us, in a direction opposite these two superclusters, there is a void, a bubble of mostly nothing some five-hundred-million light-years wide. Two superclusters lie in one direction, with nothing to counterbalance their gravity in the other. That explains both the direction and speed of our motion.
A MISSING SUPERUNIVERSE?
If the shape of the universe is the UB’s biggest problem cosmologically speaking, nothing illustrates the book’s internal inconsistency better than this issue of Orvonton. The UB does not tell us whether the superuniverses evolve together or whether number one somewhat precedes number two, and so on down to number seven, Orvonton. Either way we should, from our perspective in Orvonton, see at least two other superuniverses: number one out in whatever direction we are moving, and number six on our other side.
If the Milky Way is Orvonton, then Andromeda is a natural candidate for one of the other inhabited superuniverses. But there is nothing comparable to Andromeda on the other side of us, and moreover, the UB explicitly denies Andromeda is inhabited! If Andromeda is not inhabited it cannot be superuniverse one or six. What then of Laniakea, our enormous hundred-million-light-year-spanning supercluster? Surely it is possible there are uninhabited regions of Orvonton, but of the hundred-thousand or so galaxies comprising Laniakea, the two largest and most obviously developed are Andromeda and the Milky Way.
If Laniakea is Orvonton, then there are two other superclusters (Shapley the nearest) out in one direction, but nothing, a gigantic empty void, in the other. If Shapley is superuniverse number one (we are moving in its direction), there is nothing on Laniakea’s opposite side to represent number six. Perhaps Orvonton is even bigger than Laniakea? Astronomers have recently mapped a gigantic supercluster outside Laniakea, wrapping more than halfway around it, which they call the “South Pole Wall” (see link). Such speculation can go on forever, but long before we reach the South Pole Wall we have left Orvonton’s association with the Milky Way far behind.
No matter what collection (the local cluster, the local sheet, and so on) we suppose might be Orvonton, the selection of what would have to be superuniverses one and six would be arbitrary. However we group the galaxies into inhabited superuniverses, everything around them would have to be “outer space”, and so moving in a direction opposite to our counter-clockwise rotation around Havona. We see no such behavior anywhere. The entire Laniakea cluster is moving in roughly the same direction.
The two scales, billions of light-years and hundreds of millions of light-years, are problematic for the UB. Neither should look like it does. Below these scales, in the millions of light-years and less, what the UB says is equally problematic.
If the Milky Way is Orvonton, even at five-hundred-thousand light-years across, the local universe of Nebadon is only a one-hundred-thousandth part of it [UB 15:13.1]. Even were the Milky Way a sphere half a million light-years in diameter, each local universe would have a diameter of roughly ten-thousand light-years (about 10.8 thousand by my calculation, but let’s be generous). Inside that volume must fit one-hundred constellation and ten-thousand system headquarters collections (100 constellations, each with 100 systems) [UB 15:2.4, 15:2.5]. Each system would be only a few hundred light-years across (500 by my calculation; see the note on the calculation at the end of the essay).
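The nesting arithmetic is simple cube-root division. Note that dividing the superuniverse’s diameter this way gives a local universe near eleven thousand light-years across (about half that if one divides the radius instead), and the 500-light-year system figure then falls out exactly:

```python
# Nesting arithmetic from the cited figures: a 500,000-light-year
# superuniverse holding 100,000 local universes, each holding 10,000
# systems (100 constellations x 100 systems). Dividing volume equally,
# an equivalent-sphere diameter shrinks by the cube root of the count.
ORVONTON_DIAMETER_LY = 500_000

local_universe = ORVONTON_DIAMETER_LY / 100_000 ** (1 / 3)
system = local_universe / 10_000 ** (1 / 3)

print(round(local_universe))  # roughly 10,800 light-years
print(round(system))          # 500 light-years (exactly 500000 / 1000)
```

The system figure is exact because the two cube roots multiply to one thousand: 100,000 × 10,000 = 10^9, whose cube root is 10^3.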
The UB claims that these headquarters worlds are lit by suns that give light but no heat [UB 15:6.3, 15:7.1], but it also says that the people of these worlds can see ordinary stars external to the headquarters. If they can see out, we can see in. Astronomers have mapped every star within a thousand light-years of Earth in every conceivable electromagnetic wavelength from the X-ray to the infrared. If, in the volume encompassed by that radius from Earth, any stars radiate visible light but no infrared, we would surely have noticed. A system-headquarters collection of 50 worlds [UB 15:7.5] with multiple suns, of any sort, only a few hundred light-years distant would stand out.
If the Milky Way is really the local universe, and Laniakea is the superuniverse, the nearest system headquarters could be thousands of light-years distant, and we have not mapped every star out that far. But there is no support for this idea in the UB. The book’s “Milky Way” is bigger than our Milky Way by a factor of between two and five (in diameter, not volume, depending on where one draws the satellite boundary), not the two-hundred times required by associating Orvonton with Laniakea! There is only a convenient coincidence: astronomers’ estimate (which could be far off) of about one-hundred-thousand galaxies in Laniakea.
Below the galactic scale, there are, in the UB, many troubling assertions about stars. Astronomers estimate our sun will remain stable for another four or five billion years. The UB says twenty [UB 41:9.5]. This discrepancy is nothing like the all-or-nothing universe morphology problem. If stars act on (or are acted upon) energies we cannot detect (next paragraph), they might well extend the stable life of a star.
The book says that “ordinary sun(s)” can give out heat and light for trillions of years [UB 15:6.4]. Not only is this in conflict with modern astrophysical theory supported by observation, it contradicts the twenty-five-billion figure in paper 41. To some extent, the contradiction depends on what is meant by “ordinary suns” (see below on red dwarfs). The book also says that suns, under certain (otherwise unspecified) conditions, transform and accelerate “energies of space which come their way established space circuits” [UB 15:6.4], implying the sun’s heat is being utilized, or augmented, in ways that should impact our observations. If stars, our own and others around us, were so affected by these energies that their lifetimes extended by one or two orders of magnitude, our measurements of their light would be inconsistent with our astrophysical theories.
Astrophysics deals with the physics of stars, what makes them tick. The UB’s brief description of the process is consistent with what was known in the 1930s and remains true, if over-simplified, today. The UB description includes the special role of carbon in the fusion process [UB 41:8.1], the carbon (CNO) cycle first proposed by Carl von Weizsäcker and Hans Bethe in 1938-39. The first sentence of this paragraph is put interestingly: “In those suns which are circuited in the space-energy channels, solar energy is liberated by various complex nuclear-reaction chains, the most common of which is the hydrogen-carbon-helium reaction.”
Bethe won a Nobel prize in physics (1967) in part for working out this process. Stars somewhat more massive than our sun burn hydrogen chiefly through this carbon-catalyzed cycle (in the sun itself the proton-proton chain dominates). Perhaps all the stars we see are circuited, but more problematic, the parameters of our equations and their theoretical results match our observations of stellar behavior without leaving any gap where contributions from “space-energy channels” could hide. If undetectable energies were affecting solar output, the stellar spectrum should not be what our equations predict and we observe.
There is no astrophysical evidence that any of the tens of thousands of stars we’ve examined and cataloged are affected by anything other than the gravity, pressure, temperature, and metal content reflected in their spectra. The first three determine the rate of hydrogen fusion, while the metal content, in conjunction with ongoing gravitational contraction [UB 41:8.2], determines what happens after the hydrogen supply is nearly exhausted. Our sun, says the UB, will undergo a period of stable decline as long as its present middle age and youth combined, a total of over fifty billion years [UB 41:9.5]. Do encircuited suns decline? If so, is encircuitment the difference between the UB’s fifty-billion-year stable life and decline, and the more disconcerting six-to-eight billion years before, in the estimate of astrophysicists, our declining sun becomes a red giant and swallows the orbits of the inner planets? The UB tells us that some suns go on forever [UB 41:7.7].
Modern astronomy does recognize that the most common stars in the universe (perhaps half of all stars) are red dwarfs, and some astronomers believe such stars might shine for a trillion years (a hundred billion being a more commonly cited figure) given only the hydrogen with which they begin their lives. Still, the red dwarfs we observe and have cataloged match our theoretical predictions, again with no contribution from unknown “energies of space”. They appear to use their fuel as we would expect, given their mass, temperature, and so on.
We also know that such suns are electromagnetic nightmares, not suitable for biological evolution as we imagine that process. But what the UB considers habitable and what present science takes to be “habitable limits” are very different (see UB paper 49). Unlike the cases of stars, galaxies, and the shape of the universe, neither our concept of life nor our extrasolar planetary astronomy is up to the task of comparing what we observe to what the UB says is the case. Holding judgment in abeyance, however, is not simply to accept the UB. As with the “Maltese Cross”, space respiration, and alternate-rotations, we would have to assume that God (and his agents) could foster such radical living forms. All the same, worlds without atmosphere [UB 49:6] hosting living beings may be as much a fantasy as the other three problematic claims noted above.
There are issues with UB “space science” at every level, and the problems get worse as one goes up in scale. Whether our sun is stable for six billion or twenty-five billion more years is immaterial to our short lives on Earth. The superuniverse problem is a little worse. If a seraphic transport can travel three times light speed [UB 23:3.2] and spiritually advanced mortals awaken three periods (Earth days?) after death [UB 49:6.8], then the system headquarters can be, at most, nine light-days distant (three days’ travel at triple light speed, a conundrum well noted by readers fifty years ago). Given that our nearest stellar neighbor is four light-years away, I should not have to explain why there can be nothing as significant as a system headquarters (fifty worlds and some number of suns) so close to us.
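The bound here is simple arithmetic, sketched below using the figures from the text (three periods read as Earth days; the nearest-star distance taken as roughly 4.25 light-years, a standard figure for Proxima Centauri, not one from the essay). However the transit figures are read, the reachable distance falls thousands of light-days short of the nearest star.

```python
# Transit bound for a system headquarters, using the essay's figures.
transport_speed_c = 3.0  # seraphic transport speed, multiples of light speed [UB 23:3.2]
travel_time_days = 3.0   # "three periods" after death, read as Earth days [UB 49:6.8]

# Maximum distance reachable: speed (in light-days per day) times time (days).
max_distance_light_days = transport_speed_c * travel_time_days

# Distance to the nearest star, converted to light-days (~4.25 ly assumed).
nearest_star_light_days = 4.25 * 365.25

print(max_distance_light_days)         # 9.0
print(round(nearest_star_light_days))  # 1552
```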
Other calculations produce similarly problematic results. There are one billion systems in Orvonton (100,000 local universes × 10,000 systems per local universe). Even if we assume Orvonton is a sphere (very generous) with a radius of 250,000 light-years, the average radius of each system could be no more than 250 light-years (see the note at the end of the essay on this calculation). Although out of reach for a three-day trip at triple light speed, that distance is close enough for us to detect a collection of so many worlds and suns.
Moving up the galactic scale to superclusters, the conflict between the UB and present astronomical data gets worse. George Park is one of the “UB astronomers” who introduced the idea that Orvonton is the Laniakea supercluster, spanning hundreds of millions of light-years, ignoring the UB’s plain statement of Orvonton’s size (500,000 light-years). John Causland also has a presentation on “UB Astronomy”, which he introduces by noting that the book’s claims do not match modern observations, but, he says, if we look at what is said in the context of 1920s cosmology, the book makes sense. That isn’t quite true. Even in the 1930s, astronomers understood enough about the physics of light, and telescopes were powerful enough, to reject UB claims about space respiration and alternate-rotations, had astronomers become aware of them. Back in those days, the book’s assertions about planets, suns, and even the Milky Way, plausible-seeming to the average educated reader, would have been rejected by real astronomers. In particular (and especially), the “Maltese Cross”, the idea of a bilaterally-symmetrical universe, is not to be found in the cosmological literature of any period.
Acknowledging the “times of the writing”, however, does not address the bigger problem: modern observations at both astronomical and cosmological scales make much of what the UB claims is the case not merely implausible, but impossible! There is nothing about Laniakea that looks like the UB’s description of a more-or-less orderly super-universe, and as we pan out to a view billions of light-years across, the universe seems nothing like what the UB describes! Not only are there millions of superclusters in every direction, but there isn’t the slightest evidence of a massive gravitational center in any direction!
UB theology is centered on God, who is spirit. But God himself resides on Paradise, which has to be the center of the physical universe! In the time-space realms, the UB informs us, spiritual beings live on physical worlds [UB 12:8.1]. We cannot elide the headquarters location problem by suggesting that collections of architectural worlds (and in particular their suns) cannot be detected with our physical instruments. Where are they? Moving up in scale, we have the problem of reconciling the UB picture with an isotropic and centerless universe originating in a big bang and now cooled down for nearly fourteen billion years.
Our theories of the big bang describe well everything we observe in the physical cosmos given an age of roughly fourteen billion years. All the present controversy surrounding the big bang concerns its earliest moments! Protons and neutrons condensed from the quark and radiation soup within the first second, and nucleosynthesis, the formation of the nuclei of hydrogen, helium, and a little lithium, was complete within the first few minutes. Despite many unknowns, the structure of the present universe we see follows comfortably from our theories beginning with nucleosynthesis! To suggest that all our evidence-based conclusions are an illusion is not credible, revelatory claims notwithstanding.
These cosmological and astronomical issues do not render post-mortal survival and ascension impossible. The UB’s God certainly has the power to arrange for survival and ascension into and through the universe as we perceive it, not to mention creation via a big bang. If the revelators could forecast our scientific progress for the next thousand years (their record is terrible less than 100 years out), why make up this fantasy universe architecture, and why say so much about cosmology and astronomy that today, only 66 years after publication, is so obviously false? If the revelators were not permitted to reveal the big bang, why make up a fantasy? Why not merely tell us about the soul, post-mortal personality reconstitution, the general nature of descendent personalities, and so on without embedding the descriptions in a fantasy universe?
My theory is that it all has something to do with drama. One purpose of the book’s ascension story is to drive home the truth that perfection in God’s terms is a long educational process. If the revelators merely described the survival mechanism without a physical stage on which it all plays out (not to mention many now-unlikely-to-be-true statements about the physical and biological history of Earth — a subject for another paper), the UB would be half its size. Readers would come away with little in the way of appreciation for the scope and complexity of the process. In short, the authors created a fantasy universe to emphasize the drama and adventure of the ascension. The purpose of the fantasy universe is literary!
______ A few notes ______
NOTE: Evidence for the Big Bang
(1) Assuming the big bang, the temperature of the “first light” (photons) in the universe, the Cosmic Microwave Background (CMB), was calculated in the late 1940s by George Gamow and his colleagues Ralph Alpher and Robert Herman, and found, in 1965, to be within four degrees (Kelvin) of its predicted value. Importantly, this radiation is identical (down to ten-thousandths of a degree) in every direction we look, impossible if UB cosmology were true.
(2) Assuming the big bang, physicists realized (in the 1970s) that there must be a background neutrino temperature a little cooler than the background photon temperature. The difference is due to the universe becoming transparent to neutrinos about one second after the big bang, while photons were not liberated from the radiation for about 370,000 years (see the “recombination event”). The neutrino background has not been detected directly, but its imprint on cosmological data, measured around 2010, matches the predicted temperature to within a hundredth of a degree.
(3) Assuming the big bang, the pressure of the early universe drove compression waves through the initially very dense and hot plasma. Sound is a compression wave, and this prediction means that the expanding universe would have “rung like a bell” for a period. As the universe expanded, the wavelengths of these echoes lengthened and their frequencies dropped. Eventually (at recombination, see above), the density of the expanding universe fell below the value required to support compression waves, leaving a frozen wave, a small density gradient in the distribution of matter reflected in the microwave background. Cosmologists predicted the frequency and amplitude of this frozen wave (and its first few harmonics) in the late 20th century. In the first decade of the 21st century, cosmologists measured both to be exactly what was predicted (see “The Music of the Big Bang” by Amadeo Balbi, and these links [graph], [article]).
(4) When instruments became sensitive enough, cosmologists found tiny differences (ten-thousandths of a degree) in the CMB. Big bang theory says these small differences, mapped accurately enough, should predict the present distribution of galaxies (the slightly cooler spots being where galactic clusters would form). Such accurate mapping was achieved in the 2010s, and the map does indeed predict where galactic clusters are found today.
The distribution of stars, their colors and sizes, along with our calculations of stellar lifetimes, matches well what we would expect to find in a roughly fourteen-billion-year-old universe!
Item (1) above was the first evidence of the big bang. Items (2) through (4) would be extraordinarily coincidental if the big bang were not real.
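As an aside, the neutrino-background prediction mentioned above follows from a standard textbook relation, T_ν = (4/11)^(1/3) × T_γ; the sketch below assumes the measured photon temperature of 2.725 K (both the relation and the figure are standard physics, not claims from the essay).

```python
# Predicted cosmic neutrino background temperature from the standard
# relation T_nu = (4/11)^(1/3) * T_gamma.
t_photon = 2.725                             # measured CMB photon temperature, kelvin
t_neutrino = (4 / 11) ** (1 / 3) * t_photon  # predicted neutrino background
print(round(t_neutrino, 2))                  # 1.95
```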
NOTE: Calculation of system (for example, Satania) radius. Assume Orvonton is a sphere of radius 250,000 light-years.
Find the cubic light-years in Orvonton: π × (2.5×10^5)^3 ≈ 4.9×10^16 = culyOrv. (The volume constant is taken as π rather than the sphere’s 4π/3; since the same constant is divided back out below, the final radius is unaffected.)
Find the cubic light-years in a system. There are one billion systems in Orvonton (100,000 local universes × 100 constellations × 100 systems): 4.9×10^16 / 1×10^9 ≈ 4.9×10^7 = culySys.
Find the radius cubed of a system: culySys / π ≈ 1.56×10^7.
Take the cube root of radius-cubed for the radius: ≈ 250 light-years!
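A minimal sketch checking the note’s arithmetic (all figures are the essay’s own; the π-for-4π/3 shortcut is preserved because it cancels):

```python
import math

# Verify the system-radius estimate from the note above.
r_orvonton = 2.5e5                      # assumed Orvonton radius, light-years
vol_orvonton = math.pi * r_orvonton**3  # ~4.9e16 cubic light-years (note's V = pi*r^3)
n_systems = 100_000 * 100 * 100         # local universes x constellations x systems = 1e9
vol_system = vol_orvonton / n_systems   # ~4.9e7 cubic light-years per system
r_system = (vol_system / math.pi) ** (1 / 3)
print(round(r_system, 1))               # 250.0
```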
The reviews published here on the blog are but a few of those I write for Amazon. I republish here because these books leave dangling philosophical implications, interesting to me, that are not appropriate to note in a book review. But few books reviewed and commented upon here give me as much dangling bait as this one. As always, the full Amazon review and link to the book are included below.
Flanagan says there are two moral roots, biology (evolution) and culture. He is missing the third root, human mind’s contact with spirit, a concept I promise to flesh out below. Here I will say that this contact has nothing to do with the pronouncements, moral or otherwise, of existing religious institutions, but with a property unique to human mind.
Consider this: humans murder other humans. We say this is wrong (the inverse of right) and perhaps evil, the inverse of good. Every human society on earth avers this moral principle, with few exceptions permitted. War is one exception, nearly, but not entirely, universal. “Honor killings” among some peoples who do not deserve the appellation “civilized” (I am not even as much a moral relativist as Flanagan) are another, if rarer, exception. Chimpanzees, it is well documented, also sometimes murder other chimpanzees. Yet we, that is humans, do not think of this as wrong. We say that chimpanzees are amoral. Being mere animals (though we share 98% of our DNA with them), they are not, we say, able to tell right from wrong.
Why, having emerged (biologically) from the animals, do humans possess a conviction (which animals do not) that there exists a right and a wrong, a good and an evil? I refer here not to any conviction about specific wrongs, even murder. Apart from murder, there are numerous (as Flanagan so well shows us) culturally varying ideas of rights and wrongs. Why are humans, in particular, convinced that such things as right and wrong, or moral better and worse, exist in the abstract? I am not referring here to phenomenal experiences, whether pleasurable or painful. As organisms, we are biologically directed toward pleasure and away from pain. This is true of animals in some ways even more than humans. But only humans come to attach a moral facet to the experience, a conviction not only that I like pleasure more than pain, but that there is an abstract moral quality to pleasure lacking, or inverted, in pain.
The conviction cannot come from biology alone. We are animals. Could the 2% difference between our DNA and chimpanzees’ explain it by itself? That 2% has a lot of work to do; we do not, after all, look much like chimpanzees. Surely, the ubiquitous (at least widespread) belief in rightness and wrongness, a moral direction, lends itself to the fitness of larger, more complex societies. Yet Darwinian selection pressure comes (as we know) from outside the organism, even from outside the community of organisms. If the conviction that a moral direction exists exerts selection pressure, the conviction cannot be an illusion. If we merely “make it up” (the conviction is illusion), and yet those humans who have it pass on more of their genes, we have a case, contra Darwin, in which fantasy, having no mind-independent counterpart, is biologically adaptive!
What about culture? Of course, our specific notions of right and wrong are cultural. But the cultural evolution of moral specificities presupposes the belief in a moral direction, a better and worse. Chimpanzees have been around on earth much longer than humans. They live in complex (for animals) societies. Yet they have not evolved any socially-determined moral specificities other than the default “rule of the strongest male”.
The moral particulars Flanagan so eloquently describes evolve from both biology and society. But the conviction that there ought to be moral particulars (and, vaguely, the direction particulars take over historical time) comes from the human mind’s sensitivity to, apperception of, values: truth, beauty, and goodness. Like morality in the abstract as compared to its specifics, the values are not the truth of particular propositions, or the beauty-content of particular configurations of the material world, or the goodness of particular acts, people, or concepts (like justice). They are, rather, pointers in the direction of such things. They constitute our only phenomenal access to spirit, which, from the human viewpoint, amounts to sensitivity to the moral quality of God’s character.
Take justice (civil, economic, or criminal) for example. All cultures revere justice, and while what constitutes justice varies greatly between cultures, most strive for something resembling fairness which, in its turn, is also expressed in various ways. The vast majority of the world community would say that summary execution, the “justice” of Islamic terrorists, is not really justice at all, and they would be right.
Justice is not the value “goodness”. Rather, justice has goodness if a particular expression of it has something of the flavor of that value. Perhaps (in a criminal context), life imprisonment for a capital crime is more just (has more goodness) than execution. But execution after a fair and honest trial based on indubitable evidence is more just than summary execution! It isn’t the particular, nor even the concept “justice” that is supplied by our phenomenal experience of values, rather the conviction that there is something called goodness, and justice has [at least] some of it. From the human viewpoint, goodness is absolute only in the limited sense that it is mind-independently real. Our sensitivity to spirit has but the barest inkling of value-flavors as these would seem from God’s viewpoint, while the goodness content of anything (abstract or otherwise), as perceived by us, is a judgement of individual mind and subject to the relativity of individual (and socio-cultural) perspective.
I have written extensively on this. For a deeper discussion of the relation between values and mind, see “From what comes Mind” and “What are Truth, Beauty, and Goodness”, among other articles. Here I sketch the briefest summary of it all. There is a field in spacetime that, in conjunction with brains, is the source of mind, that is, phenomenal consciousness, the “what is it like to be…” experience. The richness of consciousness is proportional to the capacity of the underlying brain: mammals are richer than reptiles; dogs, cats, and some birds richer than mice; and chimpanzees richer than all the rest, except humans. The richness of the human experience includes sensitivity to the values, to spirit, something transparent to animal minds, and it is this sensitivity that constitutes the conviction that there is a “moral direction”, that some specific moralities are (or can be) theoretically better (meaning more true, beautiful, and good) than others.
The conviction that there is a moral direction does not automatically (or even often, after much thought) produce correct judgments concerning the value-content of particulars. There are, even between cultures, often broad agreements over the truth, goodness, or beauty of particulars. Just as often, there are disagreements over which morality has “more goodness”, which artform or natural phenomenon is “more beautiful”, or which human notions of the universe are “more true”. In our day-to-day, culturally embedded experience, we agree (mostly) on only two things: the particulars are relative (even within a culture), and yet values have an absolute quality to them.
Think of an old-fashioned magnetic compass. The pointer never locks on to “the north” (leaving aside “magnetic north”) but floats vaguely in its direction. Looking at a compass, we never know exactly in what direction north lies, but it does give us an approximate direction, and by implication (the needle is not completely random), a philosophical reason to believe that “the north” exists.
Truth, beauty, and goodness are properties of the universe (not merely the physical universe which would make beauty the only value) whose reality we apperceive. We do not merely make them up, individually or collectively. We do “make up” the particulars, individually for ourselves, and collectively for society. The particulars are relative, to each other, and to the direction of the value pointer. Some particulars are “better than” others because they better reflect what we sense of the reality of truth, beauty, and goodness.
What moral thinkers down through the ages have noticed is that particulars chosen because they are more aligned with [what we perceive to be] the value compass tend to have better and longer-lasting social outcomes. It is also true that the conviction of truth, beauty, and goodness’s reality, their objectivity, does not impart objectivity to our evaluation of their instantiation in particulars. The reality of a value direction is, like the material world, independent of human mind, though it is sensed, on earth, only in human mind. Moral particulars emerge (socially and individually) when social situations arise that appear to involve sensed values, particularly, in the case of morals, goodness.
Theism, grounding value apperception, is the tool Flanagan needs to complete his project.
It explains why humans can be moral, immoral, and amoral, while even the most advanced animals only have the last.
It explains the direction (however slowly things change) of moral evolution in human communities. It gives him his “ought from is”, not in detail (particular moralities) but as concerns their general direction towards more truth, beauty, and goodness.
It provides the reason for individual moral striving by answering the “why should I” question. In the long run, we are not dead. This life is but a phase of a much larger project in which we personally continue to participate.
I have not here discussed personal survival of material death (an implication of my theism; see “Prolegomena for a Future Theology” and “What is the Soul?”), but I note that Flanagan’s favorite alternate cultural example, Buddhism, answers this same question with reincarnation. According to the karma doctrine, we are all living in something like John Rawls’ (“A Theory of Justice”, 1971) “original position”: none of us knows into what social status we will be reborn (analogous to “Pascal’s wager”). It is wise in this life to follow a moral course that maximizes our chance for a good (in the sense of less onerous) next incarnation.
Some might see this book as an apology for moral relativism. It is not that, exactly, but it does struggle with the notion because Dr. Flanagan cannot quite get what he wants here. What he wants is an appreciation for the utility and value (to human well-being) of various moral particulars as found in cultures around the world. In addition, he wants to select out of this collection those particulars that are good for humanity as a whole, where “goodness” is not measured solely from the viewpoint of any particular culture nor by utility alone.
He cannot get what he wants, because the cultures chosen to illustrate his points are themselves selected from a certain range of what is, to us steeped in Western European culture, already acceptable. The Buddhist doctrine of “no-self” and its moral implications are acceptable, as is ancient Roman Stoicism. By contrast, leaving unwanted babies exposed to the elements to die (also an ancient Roman practice), or Wahhabist beheading of infidels, is not.
The book is divided into three parts. In the first, the author explores what he takes to be the two roots of morality: biological evolution to human status, with all its attendant adaptations for survival over the first nine hundred and ninety thousand years of human existence, and the cultural (social) accretions of the last ten thousand. This allows him to identify what he calls the moral or ethical “possibility space”. Yet (as he admits throughout the book) from these two alone he cannot identify an unambiguous “ought from is” (a well-known conundrum introduced by David Hume in the 18th century) without the selection bias introduced by his chosen examples.
In the book’s middle third, Flanagan chooses one emotion, anger (with manifest roots in biology), and explores its moral possibilities across his cultural examples. Anger, however, is one of the negative emotions that all cultures understand is better limited (at least) under normal circumstances. The issue in focus here is whether anger, or its expression, is ever morally justified, a virtue. It is easy enough to construct examples in which some action, taken “in anger”, results in a genuinely just outcome. Yet Flanagan understands that every culture seeks either to extirpate anger or at least to limit its expression, and that such expression as might be permitted makes sense only, if at all, because of some prior circumstance that warrants anger in the first place! The moral complications engendered by negative emotions like anger are perhaps good for cross-cultural comparisons, but not very good at helping us understand the more limited variation in positive moral virtues like compassion, which seem universally to be welcome.
In the final third, the author returns to the theme of finding some universal “oughts” in the combination of biological roots (which all the world shares) and cultural roots. He shows us that, broadly speaking, almost every culture agrees that it is better for all if each individual is kind, honest, just, compassionate, and so on, rather than envious, hateful, insincere, and selfish. He also uses this section to explore the implications of concepts of the self for moral motivation. But he can never answer the question of why, exactly, I should choose this “better course” if, in my personal opinion, I am better off (economically, politically, sexually, whatever) doing bad. He puts his finger right on the heart of the problem: there can be no ultimate answer if, as J. M. Keynes noted, “in the long run, we are all dead”.
Flanagan does what he can with the tools he has. Although he does address religion in the context of cultural and social forces, like Keynes, he believes that in the end, we are all dead, and this belief, shared by the vast majority of his peers, leaves him with little more than some interesting cultural comparisons, a description of, as he calls it, the “moral possibility space”. I will address the tool he lacks, and its implications for biology, culture, and morality in my blog.
This book (Amazon review and link below) is another attempt to find a solution to both the necessity and sufficiency of brains to minds. Gazzaniga is a materialist, and so by his supposition, there must be, in the brain itself, the secret to mind’s manifestation. He has written a very cogent examination of the brain’s layering and the complementarity of a rule-law combination that animates life and (he thinks) is the secret to the otherwise mysterious properties of consciousness. This theme is reflected in “Incomplete Nature” (Deacon 2011), while his connection between life, consciousness, and quantum mechanics brings Henry Stapp (“Quantum Theory and Free Will” 2017) and others to mind.
Gazzaniga is not a physicist but a neuroscientist, and his specialty is the connection between brain lesions, surgery, and consciousness. What he notes, profoundly enough, is that consciousness is not something that must be generated by a whole, healthy brain, nor does it arise from a specific part or even anatomical layer, but emerges from any parts of the brain that still work! When only parts of the brain are working, the affected individual reports (sometimes in very indirect ways depending on what damage there is) that they are conscious and feel mostly normal, despite considerable gaps in accounts of that experience’s content. For example, a patient may report feeling perfectly normal even though her awareness includes nothing whatsoever to her left.
In this book, we have a well-written account of the various ways in which the brain, a marvelously complex and mysterious thing, generates some “what is it like to be” inner world the individual reports as her subjectively-recognizable self, even when damaged! But even if the principles and mechanisms of this process are something like what Gazzaniga suggests, they are empirical evidence only of the brain’s necessity, not its sufficiency, to bring about the emergence of subjective experience.
Nor, it has to be said, are the limits of what we know about the brain evidence that it is not sufficient to bring about mind’s emergence. The problem here is metaphysical. In all other emergent phenomena identified by science, even the case of life, the point of emergence is identifiable, as are the properties of what emerges. There is always a physical connection between prior and post-emergent physics. Both are always physical. The one can be fully traced, with mathematical rigor, through to the other. The brain-mind connection is different. No one has identified where, in the chain of neurological causes, a subject appears, nor precisely what the subject is. The brain’s physics plays its essential role, but what emerges isn’t physical in any sense that physics understands that term.
Yet there is also no evidence (evidence taken to involve physical observation) that there is anything in the universe (besides brains) that contributes some other “necessary ingredient” which, together with the brain, becomes sufficient for the emergence of the individual mind. The hypothesis that such a phenomenon exists is speculative, grounded in the inability of physics to do the job thanks to causal closure, the principle that physics produces only physics.
Gazzaniga suggests that the emergence, in living matter, of translated information (in our case, DNA to RNA to proteins), what he calls a rules-based ordering, allows physics to violate the causal closure principle. Gazzaniga is saying, essentially, that the rules-based operation of, and interaction between, layers and sub-sections of the brain can and does produce a non-physical emergent reality, mind! But there is no evidence that rules-based violation of causal closure is possible. None of the other emergent phenomena in the universe, including life (the other “rules-based” phenomenon), violates causal closure. No one has suggested how information ordering as such would or could produce a violation. Physics has nothing here. “Mind exists, therefore physics must be sufficient to produce it” is the sum and substance of the claim.
There have been attempts to side-step this problem. Russellian Monism suggests that every object in the universe, from protons to galaxies, has “mental properties” (sometimes called “proto-mental properties”) that “add up” to mind of the sort familiar to us when brain-objects appear on the scene. None of these theories includes any suggestion as to the nature of these “mental properties”. David Chalmers (“The Conscious Mind”, 1996, and others) suggests “mental laws” built into physics (a view that collapses into Russellian Monism), or a set of laws parallel to physics and present with them from the moment of the big bang (collapsing into what Philip Goff [“Galileo’s Error”, 2019] calls “cosmological panpsychism”). Like mental properties, the form such laws might take, or how we might go about detecting their specific influences, is left unspecified.
Each of these suggestions has numerous problems besides leaving key requirements unspecified. I’ve addressed these in other papers (see “Fantasy Physics and the Genesis of Mind” and “For Every Theist there are One Hundred Materialists”). All of these ideas amount to a quasi-dualism (what Chalmers calls “property dualism”), and in every case, causal closure is violated. Materialism (if some of these ideas can be called materialistic at all) in the philosophy of mind comes down to a two-horned dilemma. Either mind is real and non-physical, in which case we must account for its apparent violation of causal closure, or mind isn’t real at all, leaving us nothing for which to account.
A few philosophers have made a go at the second horn, but it strikes most as prima facie absurd. If you accept the first horn (as do Gazzaniga, Chalmers, Goff, and many others), you are already a dualist no matter what your materialistic credentials. Substance-dualism is another alternative. There are more nuanced versions than the simple Cartesian “mind imposed on brains”, for example, detection, by brains, of some field with which brains, and only brains, interact. Individual minds are analogous to the sound (compression waves) issuing from radios whose antennae are sensitive to some electromagnetic radiation: the field is the radiation, the brain is the radio and antenna, and mind is the music (see “From What Comes Mind”).
The problem with substance dualism is that whatever the field is, it isn’t physical; its source must be something other than physics. Critics argue that this demands both a plausible source (for example, God; see “Metaphysical Stability in the Philosophy of Mind”) and an accounting of the field-brain interaction. But as noted in the papers linked above, the unspecifiable “proto-mental properties” of Russellian Monism and panpsychism, or the “psychic laws” of Chalmers’ property dualism, demand the same dual accounting (asserting that these qualities “just belong to physics” is not an account of their origin) while violating causal closure (they are purportedly physical, after all). Substance dualism preserves causal closure: physics is not required to be both necessary and sufficient for consciousness.
Yet even granting that such a model is correct, how the brain works to detect the field remains an open empirical issue. Gazzaniga and Deacon (see link above to “Incomplete Nature”) both have more nuanced views here than philosophers like Chalmers, Nagel, Russell, Goff, and many others; all moderns trying to make that first horn work.
This is a book about consciousness and, specifically, an attempt to find a solution to the qualitative difference between “minds” and brains from within physics. This is a consequence of the “materialist paradigm” (it can only be physics). Dr. Gazzaniga is a true believer. But this is the case for ninety percent of the philosophy of mind I read and review anyway. What distinguishes this one?
Gazzaniga reviews some history for us and brings forward insights from psychology, biology, medicine (in particular, observations of damaged or surgically altered brains), and physics, notably the notion (from quantum mechanics) of complementarity. Phenomena can have two aspects; they can exist as two sides of the same coin, yet one cannot always say how each becomes the other. The two sides are not mutually reducible.
Gazzaniga, along with many others in the field, believes that quantum phenomena have some connection to consciousness, but he also believes that this connection began way back at the origin of life. Life, like consciousness, rests in part on quantum behavior! I’ve been calling attention to this very reasonable idea for years, so it’s nice to see it expressed by someone with more credibility than I seem to have.
This is an important aspect of Gazzaniga’s theory because it allows him to trace the root of “the subjective” not merely to brains, but all the way back to the origin of life. Here he brings in the distinction between “rules” and “laws”. The mechanisms that characterize living things, all living things, are “rule-governed”, not “law-governed”. The distinction is important because a rule (in our case, how DNA sequences become specific protein sequences) adds an extra layer, an abstraction, on top of laws. Laws are fixed; rules can be changed. That is the secret of both life and consciousness. He is NOT claiming that early life was conscious. Instead, what makes life alive, its complementary double-sided nature (lawful rules), is the same principle operating in the emergence of consciousness from brains.
From medical brain research, he notes that damaged brains are still conscious. Aspects of the former consciousness will be missing, but the person (whose damaged brain it is) doesn’t notice what’s missing. From this, he concludes that consciousness is not produced by a particular part of the brain but rather is a product of every part of it operating to produce its own small part of the whole subjective experience.
Also incorporated is the idea of modularity and layers of neural activity. Consciousness bubbles up through the layers becoming progressively richer in richer brains, but existing in some sense from the times of the earliest true nerve ganglia. The book is crafted to carry us through the development of these ideas from both medicine and philosophy. Gazzaniga’s “instinct idea” is the last aspect introduced. He notes that, like consciousness, brain research points to instincts being distributed phenomena, hence, consciousness is an instinct! Logically this is a stretch and is not as important to the theory as his rules-laws distinction and synthesis of complementarity and modularity.
In the end, like other speculations referenced in the book, he fails to nail down the “how” or the “what” of consciousness. Gazzaniga’s approach might prove to be a useful addition in the quest to answer these questions, but all of them, including this one, are perfectly consistent with a dualism holding that brains are necessary but not sufficient to explain the appearance of the subjective from the objective. Every one of his ideas can be true, while still not giving him what he needs. Every other complementarity known to our physics can be physically measured on both “sides of the coin”. Not simultaneously, but that is beside the point. It remains precisely the problem with mind that physical measurement of the “other side”, the subjective side, is impossible! That makes mind different. That makes brains insufficient, or at least leaves open that possibility.
“Disunited Nations” is a forecast of the world’s geopolitical layout twenty to fifty years from now. The “global order” set up in 1946 (see review included below) is unwinding, but it is not unwound. Peter Zeihan doesn’t fix dates, but it is reasonable to suppose that, as he sees it, the complete unwinding will take another ten-to-fifteen years. Following that, it will be another fifteen-to-twenty-five years for the dust to settle into some new version of normal. Forty years (to ~2060) and the geopolitical world will stand transformed. Alas, Zeihan’s analysis of the status of nations in that future implicitly takes climate to be a constant. He mentions “climate change” only once, doesn’t discuss it, and misses its implications for the same time-frame.
“The Uninhabitable Earth” (2018 by David Wallace-Wells) makes climatological projections for roughly the same time-frame, twenty-five to fifty years from 2020. Wallace-Wells goes beyond that, but the near to medium-term climate future contains enough change to alter not merely the geopolitics of the world but the geography and geophysics of it! According to Wallace-Wells, a further two to three-degree centigrade rise in average temperature is now “baked into the system”. If we cease all industrial carbon output now, we will reach two degrees over the 1900 base (we are at one degree and change now) in thirty or so years, three degrees in seventy-five. But we are not “stopping all industrial carbon output now”, nor does it appear that we will even slow it appreciably over the next twenty-five years. As a result, we will hit two degrees in fifteen years and three degrees twenty or twenty-five years later, all of this well within Zeihan’s geopolitical time-frame.
The most direct climatological impact has to do with a weather concept called “wet-bulb temperature”, a measure that combines heat and humidity: roughly, the lowest temperature to which air can be cooled by evaporation. Human beings cannot survive the heat (in the absence of some mitigating technology) if their bodies cannot sweat their way to cooling down, which happens when the wet-bulb reading approaches 35C. At an air temperature of 35C, you will die if the humidity is near one-hundred percent; at 45C, fifty-percent humidity is enough to kill you. A few dozen cities around the world already reach lethal temperature and humidity levels on multiple days during their summers. In Wallace-Wells’ view, this condition will prevail over virtually all the tropical and much of the temperate Earth by 2075, possibly by 2050! Between now and then, starting within the next five or ten years, the number of places, and the number of days, on which people (always the elderly and other vulnerable groups first) die because it is too hot will continue to expand.
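The two lethal combinations cited above can be checked against a standard empirical formula. Here is a minimal sketch (my addition, not from Wallace-Wells) that estimates wet-bulb temperature from air temperature and relative humidity using Stull’s 2011 approximation:

```python
import math

def wet_bulb_c(temp_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%), using Stull's 2011 empirical fit (valid
    roughly for RH 5-99% and temperatures -20C to 50C)."""
    T, rh = temp_c, rh_pct
    return (T * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(T + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# Both conditions cited in the text sit at the ~35C lethal wet-bulb threshold:
print(round(wet_bulb_c(35, 100), 1))  # ~35.1 (35C at saturation)
print(round(wet_bulb_c(45, 50), 1))   # ~35.2 (45C at 50% humidity)
```

Either combination yields a wet-bulb reading near 35C, which is why both are described as lethal in the absence of mitigating technology.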
How much difference does a degree or two celsius make? In 2020, Phoenix had a record 50 days hitting 110F (43C), shattering the old record (33 days) set only 9 years earlier. The hottest it got was over 120F! At two degrees celsius of warming, Phoenix will experience one hundred or more days a year of such high temperatures, and on some days, temperatures will reach 130F (54C)! Phoenix is already pretty hot. In 2030 the outdoors will be very uncomfortable for a third of the year and, simply put, not survivable on the worst days. The city will require more electricity and water to cool buildings and sustain life. Electricity may, perhaps, be forthcoming. As for water, the Colorado river will by then be a fraction of its present volume (it is already below the level it held when the river was first connected to Phoenix). At three degrees celsius, no one will be able to afford to protect themselves from the heat in Phoenix!
When does a city, even one well above sea level, become unlivable thanks to heat and humidity? Does it happen when the temperature exceeds lethal levels ten days a year, fifty, or a hundred? In Zeihan’s terms, some of the impacts may be perversely beneficial! Hot weather that kills mostly the elderly might help correct a nation’s demographic decline by rebalancing the age distribution!
The people of Phoenix, and for that matter, much of the globe, will have no place to go. By 2075 coastlines the world over will be transformed; their megalopolises, presently the locus of most economies, will be gone. The rough triangle between Houston TX, Mobile AL, and St. Louis MO will be a permanent part of the Gulf of Mexico. Large-scale permanent “oceanification” will happen to low-lying places the world over. Bangladesh will be underwater, as will South Florida and a good deal of northwestern Europe. Today’s productive farmlands in temperate zones will be too hot and too wet (the central U.S.) or too dry (California) to grow many of the crops produced in those regions today. In California, people might retreat to the mountains’ relative coolness, but those places are burning down! A leading wildland fire expert said that every burnable [wildland] acre in California would burn at least once in the next ten to twenty years!
Zeihan projects a future based on fixed (geographic) and fluid but forecastable (demographics, present requirements for food and energy, resources) data. Like his data, some climate impacts (rising sea levels) are pretty much a sure thing, though exactly how fast this happens remains unknown. The physical geography of the world’s coastal plains (some extending inland hundreds of miles) will be very different. Food and the availability of freshwater will impact demographic trends. Zeihan makes it clear that the world of 2050 will not produce and transport as much food as it now does. He projects famine. Climate considerations suggest that famine will be global and not merely a regional problem. What will Indo-Pakistani populations do when all the Himalayan glaciers melt away, and the Indus and Ganges rivers are a tiny fraction of their present volume? In poorer food-producing (and especially water-scarce) regions, there will be mass starvation.
Rising water will not cover the Earth. There will always be coasts somewhere. Rivers will empty into the new coastlines; new port opportunities will arise. But some of those places will merge into the regions where it is too hot or dry or wet to survive without expensive infrastructure. Will even a rich country like the United States be able to afford any of this? Longer than others, perhaps, but not by all that much. There is more woe to be had. Will cropland problems (heat, drought, floods, crop-destroying winds) be as severe as the sea level problem twenty years from now, or will that take fifty years? Either way, the question becomes tangled with Zeihan’s projections for the relative worth of national economies, and American cropland will not alone be negatively affected by climate.
The answers to how this works out in the near term, say the next ten to fifteen years, lie in the economic intersection between Zeihan’s analysis and climatological effects (see the link to the Wallace-Wells book, and also “The Geography of Risk” for other discussions of it). Coastal populations will fight rising waters; others will wrestle with drought, fire, or floods from storms. All will battle the rising temperature, and at some point, varying in each part of the world, it will become too expensive to do so. We cannot predict exactly when the cost of climate mitigation will first exhaust a national economy (in the U.S., the barrier islands of the Eastern Seaboard, Florida, and the Gulf of Mexico), or when New York City begins abandoning large sections of itself. Still, that time is but a few decades away at most. Bangladesh has less time than that! The bottom line is this: Zeihan projects a specific global distribution of wealth and resources fifty years from now, along with massive refugee migrations, because the “global order” that presently sustains many populations by trade will be gone. Thanks to climate change, that wealth (broadly, with possible partial exceptions in places like Canada, Siberia, and the Nordic nations) will be perhaps one-tenth of what Zeihan projects, the difference consumed by early climate-mitigation efforts (money, energy, resources), and the refugee populations ten or a hundred times larger than he projects.
I haven’t the grasp of details I need to juggle Zeihan’s country-by-country analysis of the world after “the order” has collapsed plus the impact of climate change. I can make two generalizations with reasonable certainty. (1) While climate change will not alter the distribution of resources in the Earth’s crust, it will impact every other parameter Zeihan considers. (2) Everyone will come out much worse off than Zeihan predicts. I can only hope he will read this, and it will give him an idea for a follow-on book.
This book looks at the future of the Earth’s various nations and their relations over roughly the next 50-75 years. If you read other authors on international relations, you will recognize many of the same notes struck. But Zeihan is less interested in the relations between governments than in those between physical countries (and regions) situated in specific geopolitical settings, with their particular demographics and economic requirements, both on the selling side (outputs) and in the resources (inputs) needed to produce goods and feed their populations.
Zeihan opens in 1946 when the U.S. economy was half the world’s economy. The way Zeihan sees it, unlike the empires of the past whose conquests were mostly military, the U.S. offered the world a bribe. First, the U.S. would patrol the seas and guarantee freedom of navigation everywhere to all. Second, the U.S. would fight and bleed for any ally when necessary. Third, the U.S. would open its markets to its partners even if they partly protected their own. Fourth, the U.S. would provide financial liquidity to grease all the wheels and make this work.
This four-part bribe has worked for the most part to grow the economy of the planet, feed expanding populations, and in general, keep any tendency to militaristic conquest to a minimum. The trade relationships and supply chains developed over the last half of the 20th Century, and the first decades of the 21st, are a testimony to its success. It hasn’t been perfect. Not everyone wanted to be on board. But as it happened, the great majority of the world’s economies did get on board (even China since 1972) and have benefitted, over-all (not without hiccups) as a result.
The problem is, the bribe has run its course. The U.S. economy is now about twenty-five percent of the world economy, not half. The five-hundred-fifty-ship navy the U.S. had in 1946 is now down to about three hundred ships, one hundred of which are dedicated to supporting nine super-carriers. The U.S. can no longer afford to be the guarantor of the sea lanes, nor an open market for any import. The same is becoming true of standing military commitments around the world. The American people are tired of bleeding, or the threat of bleeding, for others whose interests are not often aligned with our own, and there are not enough dollars to float all economic boats.
Not only is the “great order” unwinding, but scatter-shot American foreign policy, a policy without any clear direction, is helping dissolve it even faster than it otherwise would go (not that other governments are much help). The question is, what happens when all of those U.S. guarantees are gone (the U.S. is, for now, still patrolling the seas)? That future is what this book is about.
Zeihan takes us on a tour of the world by country and region, describing what each will experience when the order is gone. His dominant considerations revolve around internal geography, location in the world, and population demographics over the next fifty years. The parents of that generation, the youth of fifty years from now, are already born. The economics of resources come next. What does a country (or region) produce? What inputs does it need to make whatever it makes? How does it feed its population, where do its energy and materials come from, and so on? As it turns out, by Zeihan’s analysis some nations and regions will do better than others. Most end up very badly, and the mix won’t be what you expect. To be clear about one thing though, “better” and “worse” are relative terms, as he makes clear at the end of his analysis. No one will be as well off in absolute terms as they are right now!
As refreshing and unexpected as it is, Zeihan’s analysis has a blind spot. It is strange that, except in the context of Japan, China, and the Middle East, he never focuses on India and the Indo-Pakistani region. Not sure how he missed that one, but he did. Meanwhile, his projections have a broader problem. He mentions climate change literally one time and says nothing about it. The impact of climate change is noticeable even now, and within the fifty-plus-year timespan covered in the book, climatic effects will be much more dramatic. The book Zeihan should factor into his analysis is not geopolitical but geophysical: “The Uninhabitable Earth” (2018 by David Wallace-Wells). Nothing in that book augurs against any of Zeihan’s analysis except to make the outcomes for everyone even worse than his broad brush paints them. I will address this intersection in my blog.
At the end of my review (see below), I said two things were missing from this book. The first is everything that had happened in American politics since 2008, when the book was penned. The second is the uniquely American socio-cultural factors that led to what the author calls “the great sorting out” of the American electorate supporting Congressional hyper-partisanship between roughly the late 1960s and 1995. In this essay, I address the second issue first, and then, imagining myself to be Ronald Brownstein, project what he might say about the election of Donald Trump. The two problems are related.
From the left, there is the politics of identity, the earliest example of which, the women’s movement, has deep socio-cultural roots but in modern terms begins with the suffrage movements of the late 19th and early 20th centuries. In America, the African American experience emerged from roughly one hundred years of political and economic suppression between the civil war and the 1960s.
These historical examples of identity politics began to fragment in the mid-1970s when the left, seemingly helpless against growing economic disparity in the United States, retreated to academia. To preserve the relevance of the humanities, they began offering courses in ever-narrower identities. Although these teachings never demanded the abandonment of “wider issues” (e.g., women’s rights as compared to “lesbian rights”), it was only natural that this would happen. People have limited time to devote to any one matter.
If the left has, inadvertently, turned the Democratic party into a herd of cats, the right has more deliberately unified the Republican party under a banner of racism, xenophobia, and social intolerance. There has never been a serious left in the United States. The right is another matter.
America has always been a racist nation, beginning with its treatment of the continent’s own natives. The extreme communist left has never been a significant force in American politics, but the Nazi (and pre-Nazi) right has been a force locally since before the Civil War and today has gained substantial strength.
In school, we Americans all learn we are a melting pot. Yet as each of the various races and ethnicities arrived in North America, they were beset by bigotry promulgated by those already here. Blacks were imported as slaves and have suffered the worst of the racism down to the present day, though one can argue that native Americans were treated even worse. In the latter half of the 19th century, the Irish and Italians were set upon by the English and Scottish already here. The later (early 20th Century) Eastern Europeans were terrorized by the Western Europeans, and all persecuted the Chinese and Japanese when they arrived on the West Coast.
From the time of the Russian Revolution, the left in the U.S. was heavily suppressed while the right was left free, especially in the South Eastern States both to organize and commit violent crimes against black Americans, and later (down to today) to utilize that organization for terror purposes against everyone who isn’t perceived as both white and Christian.
In 2020, there are three political classes with a real voice in the United States: liberals, conservatives, and right-wing extremists. There are a handful of left-wing extremists, but they do not have anywhere near the right’s political presence and organization. The extreme left is as intolerant as the right but has a different set of issues, not racial or cultural, but economic. The left has no political representation. Bernie Sanders is about as close as they get, and he is a mild democratic socialist. By contrast, the far-right has always had some political representation in state and national legislatures, governors (George Wallace), and now the President.
Of course, not all conservatives are Nazis any more than all liberals are communists or socialists. But the right is more closely connected than the left because both conservatives and Nazis advocate meddling in others’ personal lives. In contrast, on the left, real communists do likewise, but the liberals do not. Therefore, the left is more politically diffuse, while the right is more concentrated, which brings us back to Donald Trump.
What Brownstein calls “the great sorting out of the American electorate” was a crystallization and concentration of the electorate on the conservative side. The liberals have always been and still are a diffuse collection of various viewpoints, inevitably given the nature of liberalism, toleration for differing views. The shift went through many stages, especially in the South East, where racist, “conservative Democrats” were replaced by racist Republicans.
In the lead-up to the 2000 election (Bush vs. Gore), Karl Rove realized there existed a crystallized conservative bloc, which would tip every election if it could be persuaded to vote in large numbers. In 2000, Democrats still outnumbered Republicans in most of the U.S. (today, the two parties each register roughly one-third of the electorate, with ostensible independents making up the other third). The strategy worked again in 2004 with help from propaganda (the “swift boat” controversy) propagated by the media, social and otherwise.
In 2008 came the reaction to both Bush’s conduct of the Iraq war and the mortgage crisis (2006 in the U.S. followed by the rest of the world in 2008), the election of Barack Obama. The racist elements of the U.S. electorate went wild. Gun sales leaped (and are jumping again as Trump comes to the end of his first term), incidents of racial, ethnic (especially anti-Muslim), and anti-LGBT attacks increased nation-wide although school shootings, fueled less by bigotry and more by bullying, stole many of the headlines.
Obama, having served two full terms, looked forward to handing the reins of power to Hillary Clinton, or at worst a conventional Republican. What happened surprised everyone. Donald Trump (thanks to Steve Bannon) went Karl Rove one better. Trump discovered he could win (first primaries and then the election) not merely by consolidating and bringing out the “conservative vote”, but by giving voice to the nation’s most ardent racist and bigoted groups: anti-gay, anti-black, anti-Muslim. Trump didn’t entirely succeed. Hillary Clinton won the popular vote. But in four critical states, Trump tied enough of the conservatives and extremists together to put him over the top in the Electoral College.
The extremist subset of the “conservative vote” had not participated en masse in national elections because neither Republicans nor Democrats (except for the “deep south”) supported their extremism. Trump did support and encourage it. That has been the secret of his success even to this day, now a week before the 2020 election!
The Second Civil War is a book about hyper-partisanship in American politics, how (and why) it got to be the way it is, and what might be done about its problematic consequences. In 2020, no matter your political persuasion, we can all agree that American politics is hyper-partisan. But Mr. Brownstein isn’t speaking of Donald Trump (#45), or Barack Obama (#44), but George W. Bush (#43)! The book ends in 2008 before the election of Obama!
This well-researched and well-written book is mostly about the relation between the American presidency and Congress, House and Senate. It begins at the last election of the 19th century and moves rapidly forward, giving us more detail through the presidencies of the mid-to-late 20th century, ending with Clinton and Bush #43. There were partisan periods in American politics before, but also a long period from the early 20th century through roughly the Carter presidency when the parties were so diverse that one could not tell, by policy preferences, who was a Democrat and who a Republican.
All of this began to change in the late 1970s with various rule changes adopted by the House and Senate. The parties became more distinct and disciplined. In 1995, under Clinton, came Newt Gingrich, who, using the new rules established over the prior generation, crystallized the combative partisan style that still characterizes the political parties today.
In parallel with the evolution of the parties in Congress, there was (according to Brownstein) a great political “sorting out” of the American electorate into more rigid conservative and liberal camps. Brownstein covers this shift in popular focus from bread-and-butter issues to cultural issues that define the parties’ difference today, especially Republican conservatives. He does not give us reasons for this shift (except to say that it was cultural) but focuses on its effect, the acceleration, and solidification, of partisanship in Congress. It was Gingrich who most took advantage of this cultural change.
Back in the day when the parties were indistinct, it was painful for a president to get anything done, especially in domestic programs. In today’s hyper-partisan environment, it is also difficult for a president to get anything done unless his party has a significant majority in both houses, something that hasn’t happened since the partisan divide began! In his last chapter, Brownstein suggests what might be done to result, eventually, in a congress and administration empowered to pass significant legislation while each party retains its distinct character. I do not know if the Obama administration made any attempts at easing the partisan divide, but they were not particularly successful if it did. Clearly, Donald Trump has made the divide even more profound than it was under Bush #43.
Two things are missing from this book. First, everything that has happened since 2008 (for which the author cannot be faulted). Second, the history and socio-cultural factors that drove the development of hyper-partisanship within the electorate. Partisanship in Congress evolved and sharpened steadily over 30 or so years from 1970 to 2000. This evolution could not have occurred (particularly on the conservative side) without electoral support, and the electorate, over that time, was happy to give it. I will deal with both of these issues from a 2020 perspective in my blog.
I have said there is, in the philosophy of mind (PoM), no stable position between eliminative materialism (EM) and substance-dualism grounded in theism (T). Instability refers to a theory’s inability to suggest answers to fundamental grounding questions. I have in mind three grounding issues.
EM (and its cousin Functionalism (F)) are stable because they deny there is anything, any “mystery of mind”, to be explained (in the case of F that there is anything besides various functional descriptions we can cogently talk about). Nothing interacts, there is no need to specify anything, and there is nothing whose origin requires any explanation.
Property Dualism (PD), Russellian Monism (RM) of many forms, and panpsychism (P), also of many variations (some built upon RM, and others not) are unstable hypotheses because they float free of any metaphysical ground. Along with T, they accept there is something about mind to be explained, but they do not address any of the three issues (PD having only the first to worry about). T addresses all three of the issues, having answers to two of them, and provides good reasons for our inability to resolve the third. T is metaphysically grounded.
Each of these PoMs also has some relation to the principle of causal closure (CC) in the physical. CC consists of two fundamental axioms: (CCA) physics comes only from prior physics, and (CCB) physics can produce only more physics. CC is not one of the listed issues because each of these theories does address its relation to CC. Ironically, T is the only PoM, besides EM&F, that fully respects CC (see below and also “Fantasy Physics and the Genesis of Mind” for more detail).
All of the other PoMs have two things in common (I do not include idealism in this essay because it differs from the others in this respect). First, there is a mind-independent world fully subject to CC, and second, the mind is in some way a part of this world (exists in the physical universe), and its unique (seemingly non-material) qualities warrant explanation. Idealism denies a mind-independent world (a slippery slope to solipsism) and ends up having to fall back on T (or a “simulation scenario”, see later) for an explanation.
The interaction issue plagues every PoM (including T) other than EM&F. Every PoM apart from EM&F is offered to explain the existence of mind without resorting to T, because (among other things) T is taken to have an interaction problem! Yet to one degree or another, every PoM from PD on suffers from the same problem.
The interaction problem is particularly acute for PD, which also makes the most overt break with CC. In PD, plain-old-physics is the causal root of mind in violation of CCB. Most, but not all, PD advocates also accept the reality of “mental cause”, which violates CCA. Thanks to PD’s stipulation that CC is false, it manages to address the “origin issue”; brains cause (it must be a causal relation) mind, which is the end of the matter. Thanks to this emphasis, PD manages to avoid any need to specify anything mental before the brain’s appearance, but PD cannot escape its interaction problem. How exactly does plain-ordinary-physics produce and then subsequently interact with a subjective-anything at all?
The interaction issue is about mechanism. What exactly are the mechanics of the physical production of mind or mind’s capacity to cause physics? PD advocates assert that minds and brains do interact, and they admit that these are violations of what CC fundamentally says. Still, they are as unable as theists to say anything about how exactly this works. PD does not have any specification issue because there is nothing mental about the universe either in part or whole except when brains are present.
RUSSELLIAN MONISM AND PANPSYCHISM
RM and P both developed to avoid PD’s problems with CC and interaction. In the end, they fail to avoid either. The foundation of RM and P is the idea that there isn’t any “interaction problem” because the mental is in some way built into the physical. “Some way” hides a lot of skeletons. What we take to be CC already includes the effects of “the mental”, as these are taken to be either constitutive (often the un-observable essence) of the physical or to stand in a causal relation to it. There are many variations, but in every case the idea is that when brains come along, the whatever-it-is that constitutes the mental embedded in the physical results in consciousness as we experience it.
RM rests the mental on the micro-physical, quarks, leptons, bosons, and all their assemblies. While we’re being speculative, if we ascribe the mental to any one particle, the force bosons would be the logical candidates because they mediate the relation between the quarks that give matter (protons, neutrons, atoms) its dispositional qualities. But I digress.
Some advocates of RM ground the mental in quantum phenomena. Somehow, these "essences of the mental" add up in assemblies until, on reaching brains, consciousness as we know it emerges. How and what goes on in this assembling (the infamous "combination problem") is anyone's guess. Some variations of RM assign a special status to the total assembly, the whole universe. RM adds up to P.
Without RM adding up to it, P holds that there is nothing mental in the micro; instead, the mental appears only as a quality of the totality, the universe. Indeed, one might look at the present universe at vast scales (billions of light-years) and note its resemblance to a giant brain. In a recent paper, Philip Goff suggests that the cosmological settings, fixed in the opening second of the big bang, are the result not merely of cosmological mentality but also of intentionality, operating in and on a universe filled with undifferentiated radiation (see the link to his article below)! In effect, the claim is that in this state, the universe is mind in its purest form!
While this is not idealism, the idea of "mind independence" is shifted. There is a physical world independent of mind as we (and presumably the higher animals) experience it, but it is no longer shorn of mind altogether. The universe's (or particle's) mind is not our mind, but it remains the case that a "mental essence" lies at the heart of the material world. Yet (we are assured) this mind is not consciousness as we (or even fish) experience it.
This brings us to the heart of the "specification problem". In what, even partly, does this "mental essence" consist? No RM or P advocate that I know of has the slightest positive suggestion regarding any of it! They say "it isn't consciousness as we understand it", but that is only saying what it is not. I would not presume to demand that philosophers give us rich detail (as we can about our own mental properties), but they seem unable to specify even one of these qualities. This failure applies equally to RM and P. In the case of P, the quality in question is a mental property (at least one) of the universe. Again, no positive suggestion is forthcoming.
Every one of these philosophers acknowledges this problem. They all admit they cannot provide any positive quality specifications. Goff goes so far as to suggest that such qualities may remain forever unknowable. What these philosophers ignore are the consequences of this failure for the theory itself. To say that X, having properties we can never in principle discriminate, is responsible (causally or constitutively) for Y is to say that X can be neither confirmed nor disconfirmed! This admission empties the theory (a hypothesis about the physical world) of any possible physical content! Proposed solutions to the "combination problem" rest on nothing because no one can say (even speculatively) what is being combined!
This brings us again to interaction. The whole point of RM or P is to make mind, of our sort, un-mysterious. Biological mind, the interaction between mind and brains, is [supposedly] no mystery because atoms (or the universe) already encompass mind's potentials before its appearance: not merely its possibility, but also the mechanics of the process that produces it. How do they do this? The same mystery is merely transferred to the micro-physical or the universe.
No one disputes that brains are physical. How does the micro (or universal) embedded-mental interact with the physical? What does it do to physics, not merely to produce the mental that we know (that it does so is by stipulation of the hypothesis), but to drive physics (cosmology) towards its emergence? How would physics (cosmological evolution) be different if the proposed mental-in-it was withdrawn? Philosophers have only pushed the interaction problem to another part of the rug, and even that not completely.
The origin issue is something else again. Physics has the quantum vacuum. Our physical theories show no sign of needing a mental component to operate as they do. We can trace the evolution of the entire physical universe to the big bang, and the physical equations describe all of the effects we observe without a “mental term”. From what ground does the mental essence of atoms or the cosmos arise? To say it just happens in the same big bang as everything else is merely to stipulate an answer and beg the question.
Of the three issues, interaction, specification, and origin, the last is the least acknowledged by the RM or P community. The properties whose specification we do not know, interacting we know not how, are merely stipulated to have been present since the big bang. Surely there is something about this that warrants inquiry? I suspect the problem is that any such investigation quickly backs up into something like God. CC is purportedly teleology-free (at least this is how physicists understand it). The moment one suggests there is a mental essence associated with physical causal relations, one throws this corollary of the principle into question. If the proto-mental has the effect of guiding physics towards brains and consciousness, then CC, as understood by physicists, is broken.
If the proto-mental does not affect physical unfolding until and unless brains happen along, if it is teleology-free, its existence is more mysterious still! There would then be an essential (if unspecifiable) quality of the physical that has no effect whatsoever over billions of years but happens to generate consciousness when brains, by sheer luck, come along. This is exactly how PD comes out! "The mental" emerges from brains and only from brains! RM and P are teleological hypotheses, or they are explanatorily redundant!
If there is a teleological direction, who or what sets it? Dr. Goff goes so far as to assert that the cosmos is not merely minded in some vague sense but intentional, as far back as the big bang ("Did the Universe Design Itself", Nov. 2018). Intentionality would leave no doubt about teleology. I pointed out to Dr. Goff that once he goes this far, he is nine-tenths of the way to God. He has never commented.
T addresses the origin question, and along with it, the teleological direction. Whatever it is that brings mind about when brains are on the scene (see “From what comes Mind”), God is responsible for it. What then is responsible for God? God is responsible for God, theism’s grand stipulation. There is a single entity at the top of the chain of being who is responsible (eternally) for its own existence. If this weren’t so, there would be yet something antecedent to God.
In this context, it is worth mentioning simulation scenarios (SS), tangential to PD, RM, and P. These break down into three broad varieties. Mind might be simulated directly (a brain-in-a-vat), in which case SS collapses into idealism (ultimately solipsism), with the simulators (whoever they are) being proxies for God. Next, the mind-independent world is simulated directly and our physical selves simulated within it (many computer games serve as the present metaphor for this). However, the appearance of first-person subjectivity in the simulated world is as mysterious as it is without hypothesizing the simulation. Finally, what is simulated is not the world as we find it but the big bang, the settings, and the standard model (what David Chalmers calls a "metaphysical simulation"). The world is left to evolve with the simulation providing whatever is required to evoke minds from brains. Except for the interaction mechanism, this scenario does remove the mystery from mind's appearance, but at the cost of making the simulators not merely proxies for God but, to all intents and purposes, actually God!
God is God if and only if he is his own eternal source, meaning there never was a time when God was not (see “Prolegomena to a Future Theology”). Of course, this “too pat” answer is among the reasons atheist philosophers (the majority these days) reject T. But in response, as an alternative, they offer nothing to ground their speculations!
God sets the settings, producing a physical universe where mind is possible. There is nothing else for God to do until, somewhere, conditions become life-compatible. From that point, God, or some proxy acting for him, might have something to do with life's origin and the evolution of mind-supporting brains. There is no mystery in a teleological direction if God is its source.
T also addresses the specification issue. Mind emerges from the functioning of brains as PD advocates imagine it, except that something God adds to the physical universe (see again "From what comes Mind") supports mind's emergence. There are no mental qualities or essences of atoms, stars, or even the whole physical universe. Interestingly, Dr. Goff suggests a field idea for whatever it is that embodies the intentionality of the cosmos (as do I in the aforementioned essay), but he says nothing about what might ground it metaphysically. If there is a fire in a fireplace, we say "the fire is intentional", but this is a metaphor. The fire has no intention. The intention belongs to her who lit it. T grounds intentionality because it belongs to God, who is minded, conscious!
That leaves the “interaction issue”, identical to that faced by PD, RM, and P. Under T, even brains have no “mental qualities” of their own. The mental is a response, an outcome of the interaction between brains and whatever supports its emergence. Consciousness is analogous to the movement of a pointer on a dial, a response to the interaction between some phenomenon and an experimental apparatus.
T can say no more about how this works or what "the something" does to brains to evoke mind than any other hypothesis. But T does have two arrows in its quiver that the others lack. First, it has God, who knows the trick! Second, it gives a good reason why we cannot fathom the interaction mechanism. Whatever it is that interacts with brains to produce mind, it is metaphysically antecedent to the mind, the subjective individual consciousness, that it invokes.
Whatever God (directly or indirectly, e.g., via a field in spacetime) does to produce consciousness is transparent to us. We have only the result, the output of the interaction that constitutes our experiential arena. That arena has phenomenal access to the structure of the material (mind-independent) world through qualia, the mediation of physical senses. It has no access to the phenomenon that invokes consciousness from brains beyond that which is invoked. Individual mind is our brain’s measurement or detection of the supporting phenomenon, whatever it is. Like the motion of the needle, that’s all we get!
T has one other advantage. It leaves CC alone! PD accepts that CC is false. RM and P try to save CC but do so only by stipulation — “the physical is fundamentally also mental”. In T, CC genuinely holds. The purely physical produces, and emerges from, only the physical! As Dawkins puts it (The Blind Watchmaker), “God is redundant” (he should add “in or for the physical as such”). Mind’s source is outside the physical, from God, directly or indirectly. God also happens to be the source of the material, including the properties of CC. This fact explains the ability of mind to both represent the physical and manipulate it. Mind is designed and intended, by God, to do that!
If atheist philosophers want to be taken seriously by theists, who are, at this time, the only philosophers who grasp that God alone grounds all the metaphysical questions, they must suggest reasonable answers to the questions posed at the beginning of this essay. PD, RM, and P all suffer by breaking with CC in one way or another. Only T (besides EM&F) fully preserves CC, and only T explains why the interaction mechanism is, in principle, out of our reach.
Both RM and P (especially P) could claim (along with T) that we cannot specify the proto-mental or how it interacts with brains because it is antecedent to the consciousness invoked. But unlike T, these qualities supposedly originate in a physical event, the big bang, which is historically, but not metaphysically, antecedent to mind. It may well be that we cannot fathom the interaction between mind and proto-mental assemblies (or the cosmological totality) in brains. But hydrogen atoms and planets, not to mention the universe, are of the mind-independent world, the world to which we have phenomenal (sensory) access. If we cannot even hypothesize about the effect of the proto-mental on the physical, if the proto-mental has no measurable effect on cosmological evolution (at least up until brains come along), then it is explanatorily redundant.
T is not an empty hypothesis. It posits no unspecifiable proto-mental qualities embedded in physics or cosmology. What evokes the mental comes from outside physics altogether, from God, who is not explanatorily redundant because he is the source and ground both of whatever-it-is that evokes the mental from brains and of the physical, ultimately the brains, from which consciousness emerges. And because that whatever-it-is is metaphysically antecedent to physics, its interaction with brains cannot be fathomed by the consciousness it invokes.
None of this impels us to say that God exists, let alone that he must exist. In this PoM context, T is a hypothesis like all the others. It happens to be the only hypothesis that answers most of the questions left unanswered by PD, RM, and P. Not only is it logical, it is also the best hypothesis we presently have, and it should therefore be taken seriously.