Review: Hamilton St. Lucian 2006 rum

I seem to have missed reviewing this rum and thought I’d better get to it while I yet have some left. Edward Hamilton is a modern adventurer of the old school. He has been everywhere and done a lot. Eventually he found his way to the Caribbean and fell in love with rum, its history, and its making. Sometime later (2006) he started “The Ministry of Rum” website (and then taught himself enough computer coding to make it better). The site is a fantastic source of rum information. A lot of its categories are inactive; if you look at his “rum of the month” you discover that only two were ever entered, the latest in 2006! But the forums are very active, with hundreds of rum and whiskey subjects discussed, and his informational essays (dozens of them) about rum will always remain relevant. There is a lot of education to be had here. Membership, which allows you to participate in the forums, is free.

Edward eventually put together a collection of rums under his own name. Alas, this is one of the moribund parts of his website; only 4 of the collection are listed and described there. There must be nearly a dozen by now, but all (I’m not sure of that) limited bottlings, some no longer easily found. The nice thing about buying any Hamilton rum is that you know you’re getting something honest, unadulterated, well made, and not on the mass market. His labeling is among the best in the industry. On the front of my bottle it says:

Hamilton St. Lucian Pot Still 2006
Distilled by Lucian Distillers batch 813-7CS Aged 7 years
63.8% ABV

Supposedly you can put that batch number in some field somewhere on his website and find out more about that particular batch, but I have been unable to find that entry point. Perhaps one of my readers here will have better luck. The back label says this:

Back label: Hand selected by Edward Hamilton for the Ministry of Rum collection from the cask aging warehouse at St. Lucia Distillers Ltd for its flavor and authenticity. Distilled from fermented molasses in a Vendome Pot Still, this medium-bodied rum was imported in the cask in which it was aged in St. Lucia.

As I understand it the barrels are imported to New York, where the rum is bottled. Let’s get to the rum.

Color: Medium amber to copper, a little orange rather than red. The rum in my glass is just slightly cloudy. Mind you, I opened this particular bottle about 3 months ago and I do not remember the rum being cloudy when it was fresh. I have one more bottle; I’ll try to remember to update this review when I get around to opening it.

Legs: Swirled, it forms the tiniest of droplets at the front of the glass that only slowly coalesce towards the back into thick legs that slowly drop down the glass.

The aromas of this rum are fantastic. Only a little alcohol (interesting considering the almost 64% ABV), no acetone, no “young rum notes”. It smells rich and sweet with ripe but not overripe fruit: apricot, orange, banana, allspice (or something like it), and a noticeable “pot still” funk of the kind so up front in rums like Pusser’s and Appleton 12.

The flavor strikes me as nothing like the aroma. The contrast is jarring. This rum is not sweet; it is very dry. I cannot taste any of the fruit I get on the nose. There is burnt brown sugar that isn’t sweet, tobacco, and a meld of oaky, smoky (as smoky as rum gets) notes I can’t tell apart, though I can tell there is a lot of nuance here I am not qualified to reach. The alcohol makes its presence known. It isn’t in the least harsh, but it comes up across the mouth and down the throat with a medium finish that speaks of oak. The fire stays with you for a bit after the swallow, slowly fading. The texture wants to be creamy but the alcohol cuts through the cream. There are lots of contrasts in this rum but they don’t fight; they get along and sum to something interesting and different.

Given the high ABV I had to see what a little water did to the flavor. Turns out not a lot! Adding 5 or 6 drops, and then twice that, to an ounce brings out a little of the fruit from the nose, but only a little. I get a little banana, maybe raisin, but it’s hard to tell. There is less heat, but not by much. All the melded richness, the flavors I can’t separate, is still there, and the funk is even a bit stronger. The rum still isn’t sweet, but maybe not quite so dry. Perhaps there is a little less oak bitterness, and the aftertaste gets a little longer.

At around $50/750ml here I’m on the fence with this rum. I have one more bottle to try, but I’m not sure I would buy more even if I could still find it. I think the price is very good given the complexity, depth, and balance of the rum. It is a very good rum if you like this kind of profile. I can enjoy it, and I can appreciate it, because to really get into rum you have to stretch your palate. But it isn’t something I sip and say “wow, I love this”. I have to work at it.

There is a good review of this rum by Inu Akena at his website. Hit the link.

The cigar, by the way, is a “Cinco Maduro” from Rodrigo Cigars. Medium strength, with the wrapper, binder, and all of the filler blend made from maduro leaves (5 different ones), if I recall a “lost creation” of Island Jim. Don’t know if there are any of these left, but George has a lot of good cigars and frequent discounts. It’s worth getting on his mailing list.

Arguing with Automatons

Introduction

There is no metaphysical middle ground between libertarian free will and automatonism. I stress the metaphysical here because there is phenomenological (psychological) middle ground that backs up into the epistemological. By “phenomenological middle ground” I refer to what I take to be most people’s everyday experience of making choices. If you step into a taqueria and for a moment do not know if you “feel more like” chicken, beef, or pork, you think about it and choose one. We each make these (and many other) sorts of decisions throughout our day. In this process I (for I can in the end only speak for me) do not feel impelled by something, some combination of events in my past, to make one particular choice over another. I had chicken last week, so this week I’ll take the steak, or perhaps I liked the chicken so much I choose it again. Whether or not you are committed to libertarian free will philosophically, the choice of chicken, steak, or pork feels at least superficially free. Whichever choice you make, you are at the same time aware that other choices (and so futures) were potentially open to you.

I will not further address this experience of phenomenological freedom because it is conceivable that you can genuinely believe you are free without actually being free, just as genuinely believing you are Napoleon reincarnated does not mean you are Napoleon reincarnated. The issue then is not whether the alternatives appear open to you but whether they actually are open. Although you might have chosen beef or pork, and have done so in the past, on this occasion something stemming from your past (indeed going all the way back to the big bang) determined that you would choose chicken, and this determination was (usually is, because otherwise the phenomenological room would also disappear) entirely subconscious if not in fact unconscious. On this occasion you were going to choose chicken, just as on prior occasions there were determinations that led to your choosing steak or pork at those times. Automatons are entities that sometimes appear, from a purely behavioral viewpoint, to make free decisions, but which we know not to be free because we understand all of what leads deterministically to those choices; that is, we know all of what underlies the behavior both necessarily and sufficiently.

In this paper my goal is not to defend a view of libertarian free will, as I have done that before here in this blog and in other places. What interests me here are two related things. First, does it make any sense for a human being with free will to argue or debate with an entity who appears to be a human being but lacks free will? Second, if no human beings have free will, does any debate or argument between such entities have any meaning or significance? I am thinking of the following scenario. Two human beings are having a debate. The thought of the first being is freely expressed through speech in a language that both know. That speech, having some meaning in the common language, the other being grasps in her thought, and this leads to a free decision in the thought of the second being to accept the argument of the first being or to reject it and freely offer a counterargument of her own. Note the freedom involved here entails that the second being might, besides agreeing or offering a counterargument, instead have chosen simply to remain quiet and abandon the discussion, among other options. What is crucial to meaning here is that the respondent understands the semantic relation between the argument presented and her response, whatever that turns out to be. “The semantic” is important here because the relation is not about the brain states of one party invoking brain states in the other, but rather subjective states of consciousness whose form and content do not resemble brain states.

The Argument

An automaton is a “state machine”. Some combination of parts, each having a finite number of states in which it can reside, together determines what the automaton “does” at any given moment. The parts here can be mechanical, electromechanical, or of any other constitution that can express a “state”. Automatons today range from trivial devices like automated floor cleaners to sophisticated computers in which software constrains the possible “states” expressed in hardware; servomotors controlling a driverless car or making chess moves on a game board. Every automaton begins in some first state when it is “turned on”, and that state evolves in time from that point depending on what the automaton experiences in its inputs. Inputs include what it senses of the world’s response to its outputs (for example a chess move), which become further inputs. Modern automatons “adapt” their behavior (within the range of mechanical possibility) by responding to their various inputs (given the potentials embedded in their programming), and that behavior can appear unexpected from the viewpoint of a human observer.
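
To make the “state machine” idea concrete, here is a minimal sketch in Python (my own illustration with made-up names, not the code of any real device) of a toy thermostat. Its next state follows entirely from its current state and its input, which is the point at issue.

```python
# A toy deterministic automaton: a thermostat with two states.
# The next state is fully determined by (current state, input);
# nothing else goes into what it "does".

TRANSITIONS = {
    # (state, reading relative to setpoint) -> next state
    ("HEAT_OFF", "too_cold"):    "HEAT_ON",
    ("HEAT_OFF", "warm_enough"): "HEAT_OFF",
    ("HEAT_ON",  "too_cold"):    "HEAT_ON",
    ("HEAT_ON",  "warm_enough"): "HEAT_OFF",
}

def step(state, temperature, setpoint=20.0):
    reading = "too_cold" if temperature < setpoint else "warm_enough"
    return TRANSITIONS[(state, reading)]

state = "HEAT_OFF"                            # the "first state" when turned on
for temp in [18.2, 19.5, 20.4, 21.0, 17.9]:   # a stream of inputs
    state = step(state, temp)
    print(temp, "->", state)
```

Run it twice with the same inputs and you get the same sequence of states both times; the only way to change its behavior is to change its inputs or its transition table.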

That these behaviors appear unexpected does not mean they are not fully determined (causally) by the automaton’s present state, including all present inputs. In all the automatons we build, from thermostats to self-driving cars and game-playing computers, we take this “ultimately determined” status for granted. We know that if we examine the machine’s workings in fine enough detail we will find exactly which combinations of states determined any particular behavior, that those states were sufficient to cause the behavior in question, and that those states were recursively determined by prior states (plus inputs) back in time to the machine’s first state. Because we know exactly how one state causes another, we assume there is nothing more to the behavior of the automaton than its history of prior states. We assume, that is, that the automaton experiences no internal subjectivity. If two such automatons were to have an argument of some sort, a third party would see what looked like the argument and response between two humans illustrated above. But we have good reasons to believe that despite the behavioral similarity there is no subjective, semantic understanding occurring in either of them.

Back in the 1960s there was an automaton named ELIZA. Today there are far more sophisticated automatons, but ELIZA is illustrative for my purposes here because it was designed not so much to “be intelligent” as to mimic intelligence well enough to pass the Turing test. Simply stated, the Turing test proposed that a machine would or could be taken to be intelligent if, in interacting with a human being, the human could not tell whether it was interacting with a machine or another human. But the Turing test proved a little easy to pass under domain-restricted circumstances, and ELIZA was proof of that.

ELIZA’s domain was psychotherapy of the lite sort in which a therapist speaks one-on-one with a patient. Humans (students) were the patients and were asked to talk (type) to their therapist.

Patient: I’m not sleeping well lately.
ELIZA: Have you any idea why?
Patient: I’m having bad dreams.
ELIZA: What are the dreams about?
Patient: My mother.
ELIZA: Tell me about your mother.

A simple program by today’s standards, ELIZA found subjects, verbs, and objects in patient sentences and wove questions around one or more of them. If the program could not find any specific word to incorporate in its reply, it output something more general like “why?” Most patients could tell that ELIZA was a machine, but only after enough interaction that they realized ELIZA’s answers weren’t getting at anything. Initially, though, and in brief exchanges, many patients thought they were speaking (typing) to a human being. But here’s where it gets really interesting for this argument. There came a point in the work with ELIZA when some students, even knowing that ELIZA was a machine, not only continued to interact with it (some for long sessions), but reported experiencing therapeutic value! Some students said the sessions reduced stress and helped them think about their lives. The sessions “had meaning” in the broad sense; they had significance to the student.
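
To illustrate the kind of keyword-and-template trick described above, here is a crude sketch of my own in Python. It is far simpler than Weizenbaum’s actual program (which used ranked keywords, pronoun reflection, and scripted reassembly rules), but it shows how a handful of string matches can carry a conversation with no understanding anywhere in it.

```python
import random
import re

# A crude ELIZA-style responder: find a keyword pattern, reflect a fragment
# of the patient's sentence back as a question; otherwise fall back on a
# generic prompt. No semantics anywhere, just string matching.

RULES = [
    (r"\bI am (.+)", "Why do you say you are {}?"),
    (r"\bI'm (.+)", "How long have you been {}?"),
    (r"\bI (?:feel|felt) (.+)", "What makes you feel {}?"),
    (r"\bmy (\w+)", "Tell me more about your {}."),
]

FALLBACKS = ["Why?", "Please go on.", "How does that make you feel?"]

def reply(sentence):
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return template.format(fragment)
    return random.choice(FALLBACKS)

print(reply("I'm having bad dreams."))  # How long have you been having bad dreams?
print(reply("My mother."))              # Tell me more about your mother.
```

Every reply is fixed by the input string and the rule table (plus a coin flip among the fallbacks); trace those and you have explained the whole “conversation”.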

The first question we want to ask is: were these interactions of meaning or significance to ELIZA? We assume not. We normally take it that there is “nothing it is like” to be ELIZA; there is no consciousness there, no free will, no subjectivity. All of ELIZA’s replies are necessarily and sufficiently determined by a few hundred lines of code controlling the CPU and memory registers of a non-conscious automaton. One alternative view (taken by Chalmers and others) is that there is something it is minimally like to be ELIZA, that there is some subjectivity there, though we cannot, from the human viewpoint, “get at” what it might be like. Thomas Nagel (“What Is It Like to Be a Bat?” 1974) deliberately chose an example (the bat) that most people would grant has a subjective experience of some kind. Nagel’s argument is that it is in principle impossible for us to access bat-experience subjectively. His conclusion is taken to apply to any other subjective experience, including that of other humans.

What would happen if we made two ELIZA programs interact? From a third-party perspective it would be a conversation between a therapist and a patient, that is, two persons. But we know that this is not the case. We can explain all the behavior of both sides with reference to nothing but algorithms and programmable hardware, and we have good reason to believe that these are both necessary and sufficient causes of the observed behavior. We wouldn’t normally think to say that either side experienced any “therapeutic value” or semantic understanding, or indeed had any internal experience of the interaction at all. Why not? Two reasons. One is that we do not impute any consciousness to ELIZA, and not having any consciousness, ELIZA cannot have any will at all. We normally take for granted that some consciousness is a necessary ground of any sort of willing. Will is only experienced, only exists, subjectively and never, like Hume’s cause, in the third person. My theme here focuses on the will, so I want to stress that the causal determinism (both necessary and sufficient) of the combination of algorithm and hardware is what robs the automaton of anything that could conceivably be called “will”.

Now suppose we substitute real human beings for the two ELIZAs but stipulate that neither has free will. The interaction is, in a manner perfectly analogous to “algorithm and hardware”, causally determined by states of the brains of the two humans. This causal relation is both necessary and sufficient to bring about every question and response, there being no genuine “will” about it. So what is different about these two cases? Why (and where) can there be meaning and significance in the humans but not the automatons? The difference is that the humans are (or could be) conscious; I stipulated only that they had no free will.

In the literature on free will and philosophy of mind one often finds that deniers of free will are not always deniers of consciousness. That is, although there is no genuine will there is experience, something subjective, and meaning arises in that arena. But consciousness itself is problematic for the same reason as free will. As Sean Carroll (“The Big Picture: On the Origins of Life, Meaning, and the Universe Itself” 2016) put it, “thought can’t cause physics”. But if consciousness is real, then by some mechanism physics causes thought, subjectivity, and that should be equally impossible. There is, to put it bluntly, no more evidence in all of modern science that physics causes thought (subjectivity) than there is (from a third-party perspective; remember the two ELIZAs) that thought causes physics. Consciousness and free will are two sides of the same coin.

If consciousness is real, and therefore experience can have meaning, then one must hold that physics causes [nonmaterial] thought. Rejecting this leaves only epiphenomenalism or eliminative materialism. The first makes experience (the subjectivity we experience every day) an illusion, while the second says it isn’t even illusory but nonexistent, something experience itself makes incoherent. Think of having a few orgasms in some clinical setting. The clinician asks you “which orgasm was the most powerful?” You say “the second.” The clinician, monitoring the behavior of every nerve in your body, says “No, my instruments tell me the first was more powerful.” The question comes down to whom you are going to believe: the report of the clinician or the orgasm qualia you experienced? I stress here that it isn’t the orgasm, the measurable biological phenomena of nerve and muscle, but the subjective quality of the experience that matters.

The above example applies to qualia in general, but orgasms are particularly individual and subjectively qualified. It would be absurd to hold the third-party measurement had logical priority over the subjective experience. The quality of an orgasm is in its subjective experience and nowhere else. It would also be absurd to hold that an orgasm was illusory (epiphenomenalism) or nonexistent (eliminative materialism). An “illusory orgasm” is no more possible than a “square circle”. But none of this means there isn’t some brain state associated with every experience including experiences of thinking or choosing. If subjective experiences (think orgasms) are real, if they mean anything to a subject, there must be at least a logical separation between brain states and subjective experience. This is the gap so well described by David Chalmers (“The Conscious Mind: In Search of a Fundamental Theory” 1996 and “The Character of Consciousness” 2010), and that forces one to accept a property dualism of some sort.

In his 2015 book “Free Will: A Philosophical Reappraisal” Nicholas Rescher asks us to consider that there is some brain state literally simultaneous with “the thought”. The question is not which is physically antecedent (and so causal) but which is logically antecedent and so initiating. Rescher is a materialist, so his scheme must work from the side of physics. He argues the relation between physics and thought is not causal in the normal sense that physics understands it. Instead of a cause he calls it an initiation. He makes two distinctions here. First, initiations are atemporal. Rescher (a process ontologist) holds an “event view” of cause in which events unfold (cause) other events. What is important about all event unfolding is its temporality. Events have duration (however short or long) and “causing events” must precede the unfolding of their results in time. By contrast, initiations are simultaneous with their physical expressions. Second, and crucially, they are not “events”. Rescher calls them “eventuations”. In Rescher’s view the eventuations go both ways. Brain states eventuate thoughts, and sometimes a certain class of thoughts, those we commonly call choices or decisions, eventuate brain states.

Although Rescher does not try to resolve the mystery of the interaction metaphysically, he doesn’t have to. What he shows is the reasonableness of the relation going both ways. If physics can evoke consciousness, then consciousness can, correspondingly, evoke physics. A second consequence of initiations is that there is some brain state just before a decision or choice in thought which is not sufficient to guarantee evocation of the brain state correlated with the thought. Of course the “thought correlate” is compatible with that prior state; it must be one of the states that can evolve from the prior state. That it does evolve requires the prior state (or some other compatible prior state) but also the initiating thought which, remember, on Rescher’s view is not strictly a cause. This is important because the neuroscientist need not accommodate any thought. One brain state (an event with temporal duration and so causal powers) is traceable backwards through a (temporal) series of other brain states, the prior unfolding into the later (as in ELIZA), without ever detecting the inflection point where a thought had non-temporal control.

Rescher’s distinction gives us the possibility of free will but at the cost of some logical dualism. If one accepts such a dualism then there is no unique problem with free will. But if one rejects all dualism in favor of eliminative materialism, then not only free will but consciousness itself (and so the subjective orgasm) is impossible. The only escape from such a trap is the ad hoc move of declaring that physics causes thought but not the other way around. There is no particular reason to believe this is the case, however, for even in this view the basic metaphysical problem of the mechanism remains. If someday neuroscience does resolve the matter of how physics causes consciousness and demonstrates its sufficiency, it is reasonable to suppose it will discover at the same time how it is that consciousness [sometimes] causes (eventuates) physics.

My original statement, “no metaphysical middle ground between free will and automatonism”, has now come down to the identity between eliminative materialism and automatonism. We have no reason to suppose that consciousness is real (think orgasm) and free will is not. Each must interact with physics by what might well be the same mechanism, some non-temporal cause not yet identified but that crosses Chalmers’ gap. But where does all this leave us on the meaningfulness of arguing with automatons? If you accept that consciousness is in some sense real, then there is no choice but to accept some dualism. Once you accept that, there is no reason not to think that libertarian free will of some capacity is real also. If you reject this and insist on eliminative materialism, then neither free will nor consciousness is real, and you must accept this in the face of the very experience that leads you to this conclusion. In short, the conclusion is incoherent, and that means eliminative materialism is an epistemological nihilism.

Epiphenomenalism fares little better here. There are no epiphenomena in the physical universe apart (purportedly) from consciousness itself, and no evidence that physics can cause epiphenomena. If consciousness is epiphenomenal, so are its contents, including judgments, thoughts, and everything built upon them: our mathematics and all of what we take to be empirical knowledge. Suppose we (and who is this “we” given the epiphenomenal nature of consciousness?) use our mathematics and science, build a real (not simulated) airplane, step into that airplane, and it flies.

Is our flight experience something real (remember the orgasm) or also an epiphenomenal illusion? If illusion, what mechanism (the interaction problem) entails such a reliable connection between the illusion and the world? Physics produces an illusory phenomenon able, nevertheless, to make discoveries and use them to engineer devices that can only work if the discoveries (mental phenomena after all) match purportedly independent physics across time. Planes don’t only fly occasionally or by happenstance. Properly designed, built and maintained they fly every time. The only alternative to this extraordinary coincidence is there is no “independent world” at all.

What saves epiphenomenalism from metaphysical nihilism is that its advocates must hold (being materialists) that it isn’t anything subjective (in this case discoveries and their connection to application) resulting in these engineering marvels, but brain states determined in an engineer’s deep past. None of what we take to be “subjective experience”, for example thoughts about airplane wings, can have any causal relation to the production and flying of airplanes. Experience tells us this is patently absurd. Rescher’s notion of initiation might help here, but physics (and traditionally materialism) does not recognize any atemporal cause.

If eliminative materialism or epiphenomenalism is true, then human beings cannot be anything more than complex automatons whose “initial state” goes at least as far back as conception. Possibly it goes back further, but just as an automaton cannot know what states of the world led to its being “turned on”, it would be impossible for humans to know one way or another whether what fixes their [illusory] choices goes back any farther than the conception of their bodies.

Either way, it doesn’t matter, because there is no you in anything that you do, choose, believe, or think. There is your body of course, but what issues from it is no different in principle than what issues from ELIZA or, for that matter, a robot floor cleaner. There is no reason for any conscious and free willed being to accept anything that issues from you as anything more than properly (let us say) formed propositions in the English language. The signs (words) carry standard meanings to the conscious recipient, but the issuer counts for nothing, being unable to hold any “genuine opinion”, that is, to consider the matter subjectively one way or another (though it may falsely report having such opinions).

Note that this does not mean that propositions expressed by automatons are not true. They may well be true, but if they are it is purely by chance that such truth is expressed through this particular channel compared to any other. There is no reason to credit the source other than to recognize the expression came from this source. The expressive vehicle has no “stake in the game”. It makes perfect sense to take the propositions of automatons seriously in the same sense that it makes sense to take a chess move by Deep Blue seriously. But at the same time, it makes no sense to further argue or debate with an automaton or give it credit for being clever. As clever as their behavior might appear to us (who have consciousness and free will), the cleverness (though not the truth) is imputed to the automaton by us.

Consequences

So what happens if you debate an automaton and as a result your argument alters its behavior? Nothing is going on other than your output becoming its new input and deterministically re-vectoring the automaton’s report. There isn’t any mind there to change, and arguing with it becomes nothing more than a game played with the objective of affecting the course of its behavior. One might interact with ELIZA merely to try to invoke a particular response. But note that an automaton (or other determined entity) changes our free minds all the time. How many books have I read whose contents have persuaded me to alter my opinions or beliefs? Of course we normally assume that a conscious, free-willed person wrote the book, but there is no reason this must be the case.

Being free willed, I allow the arguments in the book (by accepting them as valid and good and choosing to alter my beliefs, behavior, motives) to have the impact on me that they have. Linguistically, crediting the book with “changing my mind” is merely (usually) a proxy for according its author that credit. But the book is neither conscious nor free willed, and yet the book, by my reading, and not its author, is the proximate cause of my change of opinion.

At the end of the day, then, debating an automaton simply makes no sense. Winning such a debate is like winning a chess match against Deep Blue. On the conscious side it might be satisfying, and it provides new inputs to the automaton, but we have not thereby altered any mind. No person acknowledges any “good argument” on our part. If the automaton has a designer, she might come to recognize something novel about my argument; I might be impacting some mind at second order here. But among the foundational pillars of materialism is an insistence that there is no designer.

So what do we do with an entity who looks just like a free willed person but claims to be an automaton? There are three possibilities: 1) the entity is lying, 2) the entity is mistaken, and 3) the entity is an automaton. Notice the three alternatives concern only the status of the free will claim. An automaton can produce true propositions. Theoretically, a mind might fruitfully engage with an automaton, even learn something from it. But fruitfulness is precluded if the subject at issue is, or inevitably involves, the no-free-will claim. As it turns out, most philosophical issues are entangled with the no-free-will claim. Obviously the metaphysics and epistemology touched on above, but also ethics (any subject having any socio-political import; anything in our world involving interaction between entities that look like people) and aesthetics (can an automaton experience beauty?); all the classic philosophic sub-disciplines.

If the entity is lying there is no point in arguing, because we do not know the motivation behind the lie and thus even a knock-out argument serves no purpose. If the entity is an automaton then again there isn’t any point in arguing, because no argument exists that would make the truth other than it is. Deep Blue is an automaton no matter how hard we try to convince it otherwise. Indeed we might cause Deep Blue to report that it isn’t an automaton, a mistake by the machine. Reporting free will (or consciousness) when none exists does not change the fact of the matter. We have done nothing more than caused a deterministic system to mis-adapt in a small way, a Pyrrhic victory if ever there was one. Deep Blue’s mistaken report need not affect its chess playing skills.

That leaves “being mistaken” by a conscious entity. Here at least there is, presumptively, a mind to be changed. In theory, some argument could affect it, could make the conscious entity recognize that it must in fact be free willed. While possible, such an argument isn’t likely to be found. Why? Because the individual concerned believes the falsehoods (often asserted by authorities like physicists and philosophers) that “there is nothing but physics” and that “thought cannot cause physics” (even bearing in mind the causal distinctions made above). Ironically, many of these same authorities see no inconsistency in physics causing thought. We cannot prove the reality of free will or even consciousness in any logically rigorous way, any more than we can disprove it. Human beings (I speak biologically here) who claim “no free will” believe this (typically) for metaphysical reasons. If physicists are correct as far as they (all science) can legitimately claim, and there is nothing but physics to be found by physical means, then the only possible evidence of the reality of consciousness and free will is what we experience subjectively in the daily business of our lives.

Either we assume that human beings on Earth who deny any free will are mistaken by intellectual error, a (free willed) choice to accept a falsehood, or we take them at their word and they are not, in fact, free willed. If we take the second alternative, continued interaction is nothing more than a game played with a sophisticated ELIZA. Of course in our real world some mix of these is also possible. Some of those who report lacking free will are simply mistaken, while others might genuinely lack it. But all of this only matters to free willed human beings on one side or the other. If a free willed being mistakenly believes she has no free will, she might be enlightened, liberated, saved by our interaction with her, however unlikely this is. If the being on one side has no free will, really is an automaton, arguing with it about this is a waste of time.

By contrast, if there is no free will on either side, then everything is a “waste of time” because all interaction would be meaningless; epistemological nihilism. There would be nothing “to know”, only determined physical behavior, a process physics does correctly recognize as purposeless and therefore also metaphysically meaningless. Why should all of us automatons bother to do anything at all? The answer should be plain. The capacity to ask that last question cannot issue from a true automaton. To an automaton, the answer must be determined, perhaps “to maintain its existence”; not a rationale or purpose (of a mind) but a blind switching of state. To question the meaningfulness of existence presupposes some subjectivity whose experience, and so existence, it is. If subjective experience is real then physics causes (perhaps atemporally initiates, as in Rescher) thought, and though the mechanism is obscure there is no a priori reason why thought shouldn’t cause (initiate) physics by the same mechanism.

Comparing Foursquare Port to Zinfandel Cask Rum

Foursquare seems to come up with an endless variety of good rums. I discovered the Port Cask Finish last year, and then found a retailer who also carried the Zinfandel Cask Blend. I’ve reviewed each of these separately and also the Foursquare 2004. But these two in particular seemed so similar I wanted to see what they were like side-by-side.

As for information concerning the production and aging of these rums, I can do no better than to quote the blog of the fatrumpirate. The links following the quotes will take you directly to his reviews of these two rums, as my own are linked above.

“The Port Cask Finish is a blend of pot and column distilled rum all distilled, blended and bottled at Foursquare. The Port Cask Finish is actually a bit misleading. Many producers would rate it as “double aged”. The rum is aged for 3 years in Bourbon Barrels and is then re-casked into 220 litre Port Casks for a second maturation of 6 years.”

“[The] Zinfandel Cask Blend is a mix of Pot and Column distilled rum which has been first aged in Bourbon casks before being finished in Zinfandel casks. In total the rum has been aged for 11 years.”

Both rums are produced at the Foursquare distillery in Barbados. The Port Cask comes in at 40% ABV and the Zinfandel Cask at 43%. This isn’t much of a difference, but it is noticeable on the swallow. I am operating on the assumption that both of these rums start out as the same distillate and that the whole of their difference comes from the aging process, 9 years for the Port and 11 years for the Zinfandel. The above linked website does not say for how long the Zinfandel version ages in ex-zin barrels, but I have to believe that it is for many of those 11 total years.

Both rums come without additives, no extra sugar or coloring; “honest rums”, as the phrase is used on all the blogs these days. I have read that the ex-port and ex-zinfandel barrels used were dry; there was no wine sloshing around, as is often the case with other wine-finished rums. That isn’t necessarily a bad thing, by the way. I suspect there is a little bit of Spanish sherry in Dos Maderas 5+5 (review linked) and that is one delicious rum.

I’ve been through one whole bottle of the Port Cask at this point, but so far only this (pictured) bottle of the Zinfandel Cask. As a result, this particular Zin has evolved for about 3 weeks in its opened bottle, but the Port Cask only a week, and as you can see I’ve had only 3 or 4 glasses from the pictured bottle. So to some extent I am comparing apples to oranges, but I hope the comparison will still be useful.

I’ll not go into all the swirling and legs business here; I did that in the earlier reviews linked above. But I do want to call attention to the color of these rums, which you can see from the photos is about as nearly identical as it can be. I sometimes think the Port Cask is a tiny bit darker, but some photo experiments suggest this is just a trick of the light.

On the nose, the Zinfandel Cask is sharper; there is more alcohol. There is also raisin, grape, some burnt caramel and light brown sugar, more than a hint of tobacco, and oak. There isn’t a lot of sweetness in the aroma and only the barest hint of “pot still funk”. Oak is more prevalent, but the aromas are nicely distinguished. By contrast the Port Cask is much more mellow and melded. It’s harder to tease out separate notes, but they are definitely sweeter. No oak to speak of in the Port Cask, some dark fruit, molasses, a little vanilla, and maybe almond. I don’t notice any tobacco or funk, but something like a hint of milk chocolate. I don’t think of rums as smoky compared to bourbon, but between these two, the Zinfandel has more burnt notes.

The flavors are as different as the noses. The Port Cask is smoother with less fire on the swallow and a short to medium but sweet finish. There is dark plum and raisin, brown sugar, black cherry, and chocolate, but like the aromas, they are more melded than the flavors of the Zin. The Port gives very little alcohol taste on the tongue and no funk. The rum is creamy and gets creamier as you go through the glass. I don’t detect any oak in it.

The Zinfandel Cask is less sweet and less creamy, but it has some of both. It has a much longer aftertaste carrying a bit of oak bitterness, as does the initial flavor. It is a little less smooth, and the alcohol makes it hard to find distinct fruit notes, but I do get burnt brown sugar notes even without any sweetness. There is no funk I detect in the flavor. Although less sweet and more distinctly oaky than the Port Cask, the Zinfandel Cask is bolder, a more manly rum. It carries a forceful flavor kick compared to the subtlety of the Port Cask, which is more rounded and nuanced. At least this is so at their bottled strength.

I did try an experiment, adding just enough water to a half ounce of the Zinfandel Cask to bring it down to 40% ABV. As expected there was less alcohol on the nose and a little less fire on the swallow, but still not as smooth as the Port Cask. The water maybe brought out a little raisin in the flavor of the Zin, and there was some funk there too, but way in the background. The Zinfandel Cask was still more oaky and not as creamy as the Port Cask though. The two remained quite distinct so it isn’t only the ABV making these two good rums different.
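
For anyone curious how little water that actually takes, here is a back-of-the-envelope check (my own rough arithmetic, assuming a half ounce is about 14.8 ml and that volumes add simply, which is close enough for a splash of water):

```python
# Rough dilution check: how much water brings 43% ABV down to 40%?
v_rum = 14.8          # ml in a half ounce pour
abv_start = 0.43      # Zinfandel Cask bottling strength
abv_target = 0.40     # Port Cask bottling strength

alcohol = v_rum * abv_start       # ml of pure alcohol in the pour
v_total = alcohol / abv_target    # total volume needed at 40% ABV
water = v_total - v_rum           # water to add

print(f"add about {water:.1f} ml of water")   # roughly 1.1 ml, a good dropper-full
```

In other words, barely more than a milliliter of water does the job at this pour size.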

These are both excellent rums and both on the distinctly drier side of the rum world. In some ways they come out the way you might expect: port is sweeter and less acidic than zinfandel, and this comes across in the noses and the flavors. Both are in the mid-$40 price range near me, and I will certainly be keeping them around as long as I can. As I understand it both are generous (thousands of bottles) if limited bottlings. Once they are gone, well, you know…
