Morality Is Still on Very Flaky Foundations

This was a post I wrote several months ago in response to what I knew of the effective altruism movement in the Bay Area. I have now met other EA communities and am more uncertain. However, I believe it still raises some interesting points and could be useful to others.

The Original Problem

Most people I know consider themselves relatively moral individuals, no matter what kind of morality they subscribe to. Almost every piece of literature we read, and a large majority of politics, hangs onto the ideal of moral righteousness in one form or another. So when people like Peter Singer and Eliezer Yudkowsky go out and state that conventional morality is bullshit, people freak out.

The basic argument goes that we typically judge morality based on the intuitions we have, rather than on facts and rigorous analysis. This leads to strange things, like seeking moral vengeance against people who kill one or two individuals, while not blaming people who have the ability to save millions of lives but choose not to. Our intuitions were shaped by evolution optimized for cavemen, plus whatever social norms and media we are subjected to. They are faulty due to numerous cognitive biases, and a morality standing upon them is difficult to de-couple from complete moral relativism, an incredibly undesirable state for intellectual morality.

So in a way, most modern morality is built upon strong intuitions. The issue is that these intuitions are inconsistent and provably flawed for the modern era. If we live out our entire lives and build societies based on these moral intuitions, then we are arguably doing almost everything on top of some very shaky foundations. I’ll use a few comics from the excellent Logicomix to help illustrate my point. That book detailed the mathematical venture toward complete proof and certainty against intuition, which I feel is very similar to what we are now going through with morality.

[Image: basic]

Rationality to the Rescue

How do we fix this? The existing fight against moral intuition has come from utilitarianism and effective altruism. These groups try to deduce why our important intuitions exist, to help establish which ones are morally true and which are cultural and/or evolutionary artifacts.

Utilitarianism was somewhat of an attempt to create an overall structure that is somewhat intuitively reasonable but, most importantly, numerical. Basically, we define “goodness” in some way (often as “happiness”, but possibly “awesomeness” or any other word you might consider a reasonable ultimate human goal) and define moral good as whatever maximizes it. The Effective Altruist group more or less grew out of this, defining good life practices (like donating to efficient charities rather than cute ones) that help optimize this number. This often seems unintuitive (for example, it may be very justifiable not to give to homeless people close to you and instead donate to charities you will never see), but it can be defended well with math.

Basic practices here include encouraging people to live lives that favor large donations to efficient charities, rather than typical bleeding-heart goodwill. Others work on projects “orders of magnitude” more useful than anything in mainstream society, including researching the Singularity, ending human aging, and uploading human minds to computers. On paper these are incredibly useful activities, because they are so much better than what is mainstream. People doing them can feel quite morally superior to the rest of society, and still retain what feels like a logically sound position from which to complain morally about war criminals and resist moral relativism.
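Stated a bit more formally (this is my own paraphrase of the framing above, not a formula taken from any of these groups), the structure is: pick a goodness function G, whether that means happiness, “awesomeness”, or something else, and call the moral action the one that maximizes its expected value:

$$ a^{*} \;=\; \arg\max_{a \in \text{actions}} \; \mathbb{E}\left[\, G(a) \,\right] $$

Everything that follows is, in one way or another, an argument about what to plug in for G and how far to trust the maximization.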

[Image: line]

The diagram above shows how I see the “morality” of individuals under something like utilitarianism. While I chose “Expected Average Lives Saved (or Equivalent)”, what we are really talking about is some measure of goodness. Peter Singer argues persuasively that people who choose not to do good should face scrutiny similar to those who choose to do harm. By that logic, the only “moral” position is to do as much as possible, which is roughly what “Effective Altruism” means: that box on the right. The groups mentioned all fit into this general category, and from their perspective, they are all roughly in the same position.

Trouble In Paradise

This all works out well, until even more effective rationalists appear.  Perhaps they come in the form of strict AI-risk reducers, or something else we haven’t seen yet.

[Image: line2]

Keep in mind that each bar is an order of magnitude greater than the last, so in comparison to the “more effective altruists”, the “effective altruists” look 10x to 100x less effective. The “effective altruists” claimed that regular people were immoral because they only save 1 life rather than 1,000 (resulting in 999 unnecessary deaths), but here comes this other group complaining that the “effective altruists” are saving 1,000 rather than 100,000 lives (resulting in 99,000 unnecessary deaths).
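To make that arithmetic concrete, here is a back-of-the-envelope sketch using only the illustrative 1 / 1,000 / 100,000 figures above. The key point it encodes is that “unnecessary deaths” are counted relative to the best option known at the time, so the same behaviour looks worse the moment a more effective group shows up:

```python
# Toy arithmetic for the argument above: "unnecessary deaths" are counted
# relative to the best option known at the time.
# The 1 / 1,000 / 100,000 figures are just the illustrative ones from the post.

def unnecessary_deaths(lives_saved: int, best_known: int) -> int:
    """Lives the best-known option would have saved that this one did not."""
    return best_known - lives_saved

# Before the "more effective altruists" appear, 1,000 is the best known figure.
print(unnecessary_deaths(1, best_known=1_000))        # regular person: 999

# Once a 100,000-lives option appears, the effective altruists are judged against it.
print(unnecessary_deaths(1_000, best_known=100_000))  # effective altruist: 99,000
```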

The trouble is that one of the main justifications for the work of many effective altruists is that they are much “better in comparison” to everyone else. But being “better” only makes one “good” if one only cares about comparisons. Was the “best” Nazi a “good” person, because he or she should be compared to other Nazis, or a “bad” one, because we happen to have non-Nazis to compare him or her to? If one’s morality is rooted in comparison to that of others, then one can go from “good” to “bad” just by the appearance of a new group.

The Annoying Pupils

Rejecting intuitive solutions presents all kinds of other nasty issues if you take it to the extreme. It would mean there is almost no point where one can really say “oh, this makes basic sense, so I’ll accept this premise and work from there”. And when one decides to build an entire personal philosophy on preferring rationality to intuition, the rabbit hole can go much deeper than the individual prefers. At every level of logical thought, someone new can complain that it’s not completely rational, and there is little rational basis for arguing that it shouldn’t have to be.

Typically, when we make logical decisions, they ultimately come down to a few intuitive things. For example, we may optimize a business strategy for revenue, but we still make the intuitive decision that revenue is what we should be optimizing for. One can go further and “rationally” explain why revenue is statistically expected to be the best way for the business to grow, but from there, there’s the intuitive assumption that the growth of the business is a good thing. Several other intuitive assumptions are probably made in the rest of the model as well; it is almost impossible to build a complete model without them.

So the big question is how to decide where to make intuitive judgements versus rational ones. But to answer this question, there should ideally be a rational answer; if we settled it intuitively, that would mean an intuitive foundation, which leads us back to where we started. And finding a rational answer that we all agree upon can be incredibly difficult.

Levels of Goodness Intuition

Let’s begin with an example. When deciding between donating to a poorly run childcare charity and a well-run one, the choice is relatively obvious: within most definitions of “good”, the answer can be reached fairly easily.

But take something more extreme: what if we need to choose between the lives of 4 cows and 1 human being? Or 1,000 insects saved in expectation (though the actual rescue is highly unlikely) and 1 human being?

The typical answer I’ve heard for this is to make an intuitive judgement about how important a cow is relative to a human. Perhaps cows are one third as intelligent; then 4 cows’ “utility” would be greater than that of one human. But “utility” is a very difficult thing to define, and many people really only care to optimize human utility.
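For what it’s worth, here is a minimal sketch of that style of calculation. Every weight and probability below is invented for illustration (the 1/3 cow weight comes from the example above; the insect weight and rescue probability are my own made-up numbers), which is exactly the kind of intuitive judgement call under discussion:

```python
# Toy moral-weight arithmetic for the examples above.
# All weights and probabilities are invented, intuitive judgement calls.

HUMAN_WEIGHT = 1.0
COW_WEIGHT = 1.0 / 3        # "perhaps cows are one third as intelligent"
INSECT_WEIGHT = 0.002       # made-up tiny weight per insect

# 4 cows vs. 1 human: 4 * (1/3) is about 1.33 > 1, so the cows "win" under these weights.
print(4 * COW_WEIGHT > 1 * HUMAN_WEIGHT)            # True

# 1,000 insects in expectation, e.g. a 1% chance of saving 100,000 of them.
p_success, insects_if_saved = 0.01, 100_000
expected_insect_utility = p_success * insects_if_saved * INSECT_WEIGHT
print(expected_insect_utility > 1 * HUMAN_WEIGHT)   # 2.0 > 1.0, so True here too
```

Change the weights slightly and the answers flip, which is the whole point.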

[Image: human]

Even worse is when this is taken to the computational extreme. Many futurists advocate uploading human brains to machines. They claim that simulated cognition is equivalent to physical cognition, and that simulations of us could be much better taken care of than our physical selves. Others find this scenario repulsive and wrong.

[Image: physical]

Some take another step and say that instead of being uploaded into an amazing world, we should go straight to wireheading: a state where our brains receive pure “happiness” without any realized experiences. This is very controversial, as it is deeply counter-intuitive and at odds with other values we hold. As LessWrong points out,

“A civilization of wireheads “blissing out” all day while being fed and maintained by robots would be a state of maximum happiness, but such a civilization would have no art, love, scientific discovery, or any of the other things humans find valuable.”

Some take this further and ask: if simulations of us are more “optimal” than we are, wouldn’t simulations of even “happier” beings be more optimal still? Or perhaps an A.I. should just simulate “pleasure” itself, essentially ending living sentience as we know it and replacing it completely with machines simulating things we cannot now possibly imagine. Some feel that, for any given definition of “happiness”, this would be preferable; others feel it is morally wrong for us because it discards most existing human values.

[Image: terminator]

The one “completely rational” utilitarian answer that makes sense to me (that is, strikes me as intuitively rational) is just that: ultimately, the most utility would come from replacing all current sentience with completely artificial beings (or a single being). Essentially, not only letting the Matrix or Terminator overlords win the fight, but rushing to make it happen, fighting for it, and doing whatever is necessary. Under the strictest definitions of utility I can imagine, this would be the most analytically optimal thing. Of course, that is only for now, until I hear of another idea that could be 10x to 100x as “optimal” as this one.

[Image: line3]

But of course, essentially all people reject this final step. The wireheading promoters will say that they are promoting the best outcome (and thus doing by far the most efficient utilitarian work), and claim that the “terminator” scenario is morally reprehensible because of how wrong it _feels_ in comparison. The uploading enthusiasts will promote theirs as the best outcome while claiming that the wireheading outcome is unnatural and obviously wrong. The human utopians will promote theirs as the best outcome while claiming that human uploading is un-human in comparison. And local charity enthusiasts will insist that their work is the best, and that the intuitive good feelings it provides are something the “effective altruists” miss, and thus that their approach isn’t reasonable.

Also, I’ll note that quite a few people in the field of futuristic, high-technology “effective altruism” are very much intent on being a part of this themselves: that they personally would get to live forever or be uploaded. When asked why they prefer this to something that could mathematically produce much higher expected value, I hear answers like “this is already good enough!” But how do we possibly decide rationally what “good enough” means, especially when there is something obviously better?

The great difficulty here, as I see it, is that the extreme moral stance, the one with as little intuition as possible, seems to be the terminator scenario or something similar: territory that seems completely morally strange and unintuitive to absolutely everybody. But if one does more “reasonable” things, one may accept order-of-magnitude losses in efficiency for the sake of a more intuitive answer, which is the entire problem we started out against. We could make a trade-off somewhere in between (as many people I know do), but such a trade-off would itself be an intuitive one.

[Image: second]

Where to Go From Here

If one does accept some intuitive limits that stop short of terminator support (or other intuitive choices, like keeping 10% of income because it “feels right”), at least we could admit it and realize that we aren’t that much different from those “inefficient altruists” or even “regular people” who simply draw their intuitive limits in different places than we do. This is in many ways moral relativism, and it leads to a vague and somewhat frustrated form of morality, but at least not a hypocritical one. Being honest with ourselves is a good first step.

I believe that truly finding a rational end that we can agree upon will take quite a bit of change to our own intuitions. It may just be a matter of time, as it was with reducing other limitations (nationalism, sexism, racism, etc.). Of course, the issue is that the time required may be longer than what is available before an apocalyptic or world-changing event, so it is quite likely this will never be settled.

I do believe that this should be figured out and discussed much more rigorously, because from what I’ve seen, many of the most important moral issues are still surrounded by relatively emotional and intuitive debate. And for us “rationality nerds”, that is quite a big deal.
