SIO27: What Grounds Our Morals? With Aaron Rabi

Joining me for the second time is philosopher and now podcaster, Aaron Rabi! Aaron specializes in ethics, and I have questions. I’m fairly certain that our morality ultimately boils down to consequences, for reasons I explain, but not all philosophers see it that way. Aaron and I have a very interesting discussion on what exactly grounds our morals, given that there are problems to be found in every moral system. Does consequentialism ultimately win out, as I have previously opined? Find out!

Find Aaron’s podcast here!

Leave us a Voicemail: (916) 750-4746!

Support us on Patreon at:

Follow us on Twitter: @seriouspod


For comments, email

Questions, Suggestions, Episode ideas? email:

Direct Download

5 Replies to “SIO27: What Grounds Our Morals? With Aaron Rabi”

  1. I’m not sure if I agree or disagree on consequentialism being the de facto fallback solution. What seems clear to me is that morality is something that arose out of our burgeoning self-awareness as we evolved and became capable of more complex conceptualizations about the pro-social behaviors that our species unwittingly adopted and inherited via natural selection.

    As far as I can tell, in overly simplistic terms, humans have spent thousands of years explaining and expanding on intuitions and urges that are the result of arbitrary/idiosyncratic demands which were necessary to survive in particular environments in the ways we happened to survive them, as well as those which are necessary to survive in the environments we currently inhabit.

    I think ethics is a muddy, slapped-together thing that we try to generalize and make objective-ish statements about, as if there were some coherent structure inside of it.

    In many ways, I think the experience of morally behaving is similar to something that came up on the episode about bias, in which the guest noted that given two job applicants, without articulating the thought “a man would be better for this job” our biases could draw our attention to a particular trait that inexplicably (but logically justifiably) seems relevant to the hiring decision, while ignoring a different (but equally justifiable) quality in the other applicant. Depending on context, what I consider to be “the right thing” might be a combination of muddled assumptions about virtue, consequences, preference (et al.), and which of those seems most relevant for any given decision could be a result of bias or priming or empathy or intuition (et al.).

    On a more specific point, isn’t consequentialism vulnerable to the same kind of criticism that’s being made of virtue ethics (i.e., “it all comes back to consequences”), which is to say, why is it good for our actions to result in good outcomes? I know there’s a way to read that which sounds like “why is it good that things are good?” or “why are good things good?”, but I think there’s a non-tautological question in there about whether we can assert that creating more good is in itself “good” without the assumption that goodness is a virtue.

    Maybe this can be accounted for by defining “good” in a way that makes consequential arguments consistent, but it seems like without some assumption of virtue, there’d be no reason why an ethics of creating the most good would be more valid than an ethics of inflicting the most pain.

    Another thought, and this might go without saying, but isn’t it possible that the tendency to revert to consequentialism is not the result of identifying consequentialism inherent in an ethical system, but of our ability/tendency to perceive existence in a narrative fashion, and as such, to process information using consequentialist language in our conceptualization?

    I’m reminded of internet trolls who are so focused on using “hypocrisy” and “inconsistency” to dismiss others’ opinions that they reflexively twist others’ statements into claims they can easily refute, rather than attempting to understand the claims being made.

    Just as you mention that deontologists may be mistaking the rules as the thing in itself, couldn’t it be the case that consequentialists are mistaking the explanation/mechanisms of understanding as the thing in itself?

  2. Thanks for the link to Aaron’s podcast! It may, in fact, be “The One True Podcast” (sorry, Irreligiosophy guys). Our morals being a moot point for the next eight years, we should, indeed, Embrace the Void.

  3. Loved it, Thomas. I really enjoy these philosophical discussions.

    Plus I am on your side – in the end it’s all about consequences, and those arguments like “they would take out all of your organs to save X people…” are bad because you just need to ask yourself, “Would people like to live in a world like this, where visiting a doctor might be a death sentence?” Same goes for those first-order rights: “Would you like to live in a world where…..”

    Anyway, as I said, loved this epi.

  4. Maybe someone has made this comparison before, but I thought of a metaphor that I think works pretty well, and is (roughly?) what Thomas was saying at the end.

    For a long time, we thought all the laws of motion were described by Newton’s laws. And they work almost perfectly almost all of the time. But in really extreme cases they break down. That didn’t make Newton’s laws wrong. It just turns out there’s a more fundamental underlying physics which works out to Newton’s laws in most cases, but can also handle the edge cases (and gives you a truer understanding of reality).

    Furthermore, it turns out there’s quantum mechanics under that, and we don’t know exactly how the two match up – though I’m not sure what that does to the analogy.

    I think this matches closely with rights-based morals (or I think rule consequentialism was the term Aaron used). Rights are a classical way of looking at morals. But they are somewhat arbitrary and decided by discussion, as Aaron said. Underlying all that, I think, is actually consequences, because as Thomas said you can always think of a more extreme situation where someone’s rights can be taken away. That to me seems like approaching the speed of light or a black hole: it’s extreme, probably almost never encountered, and unintuitive.

Leave a Reply