The Need for a Rigorous Morality

When making moral decisions, reasons alone aren’t always good enough. We need numbers. A quick challenge: try calculating answers to the following questions, optimizing for the benefit of mankind.

• How much should you tip waiters?
• Should you visit a family reunion or stay at home and do extra work?
• Should you go to a party at least once this month, or stay home and do work?
• Should you become a doctor or a politician?

These are four everyday questions for someone trying to do good in the world in a somewhat optimal way. For any of them we could come up with a few basic hypotheses, followed by assumptions, followed by a very uncertain decision. However, I wonder what the optimal answers would be using all of existing human knowledge. My guess is that those answers would be much better than what any one of us could come up with in a few minutes, or even years.

I’m looking for a rigorous infrastructure for making choices that optimize a definition of what is good, with as high certainty as possible. I define “good” in a utilitarian sense, but this could be modified. Even within “utilitarian”, the definition of “utility” is quite flexible and open-ended. The important thing is that there is an efficient way of optimizing for definitions of good, many of which are quite similar anyway.

I expect many members of this site already use simple Fermi calculations when making important decisions. Right now that seems to be about the most rigorous thing one can do. However, Fermi calculations are very much constrained by the information you have available. I still think Fermi calculations are important, but I would like a system that gives us standards and heuristics to make them more feasible and useful.

For example, take the first problem above: “How much should you tip waiters?” A basic Fermi model might estimate all of the costs and benefits of tipping, then do a few calculations and arrive at an answer.

Advantages of tipping a waiter:
- a1) Help out the waiter financially
- a2) Make the waiter feel better
- a3) Make the waiter treat you better later on
- a4) Make yourself feel better
- a5) Convince yourself that you are a better person

Advantages of not tipping a waiter:
- b1) More money to support existential risk reduction or another good cause
- b2) Convince yourself that you are a better person (for some hardcore utilitarians)
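As a rough illustration, this cost-benefit comparison can be sketched as a tiny Monte Carlo Fermi estimate. Every number below is a hypothetical placeholder in “utils”, not a measurement; the point is only the shape of the calculation, not the conclusion.

```python
import random

random.seed(0)

def tip_utility():
    # Hypothetical util values for the main advantages of tipping.
    waiter_income = random.gauss(5, 3)  # a1: financial help to the waiter
    waiter_mood = random.gauss(2, 1)    # a2: the waiter feels better
    own_mood = random.gauss(3, 2)       # a4: you feel better
    return waiter_income + waiter_mood + own_mood

def donate_utility():
    # b1: the same dollar redirected to a cause you believe is better.
    return random.gauss(8, 6)

N = 100_000
tip = sum(tip_utility() for _ in range(N)) / N
donate = sum(donate_utility() for _ in range(N)) / N
print(f"tip ≈ {tip:.2f} utils, donate ≈ {donate:.2f} utils")
```

With these made-up inputs the expected values come out near 10 versus 8 utils, but the distributions overlap heavily, which is exactly the kind of uncertainty discussed below.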

Perhaps you conclude that by far the most important advantage is a4 and the most important disadvantage is b1, but even then this is a highly uncertain equation. How do you best quantify how much better the tip would make you feel, and how will that impact your productivity in life? Exactly how much will a few cents reduce existential risk? Simple answers to these questions lead to more questions, until one finally falls back on either intuition (“my salary is fixed, so a productivity increase would probably have a ‘small’ benefit”) or heuristics (“I’ve read somewhere that $1 is worth 3 future lives”), which are what finally allow one to solve the problem analytically. However, this rests on several assumptions, is highly uncertain, and takes a lot of time.

A substantially more rigorously thought-out moral system would be able to provide answers like: “When an average San Francisco waiter gets $1 of extra income, global utility increases by approximately 15 ± 7 ‘utils’. When an average San Francisco waiter is tipped $1, the emotional gratification of that waiter increases by approximately 5 ± 3 utils.”

I don’t think this is impossible, just difficult. The important thing to remember is that we don’t need a framework that is excellent or anywhere near perfect. We just need one that gives us answers a bit better than we would normally reach. Given that humans tend not to think about issues like this very much, that may not be that difficult.

Challenges with a Rigorous Morality

1. Accepting optimization as a moral system. Many people object to the idea of optimization as a moral framework. Because this community is so accepting of utilitarian ideas, I won’t spend much time on this.

2. Complexity. The tip calculation above got to be slightly complex. A full and complete calculation could be arbitrarily large.
For example, just to calculate the benefit of a1, you might first look at happiness-to-income curves, then estimate the average family and friend circle of the waiter to begin predicting the positive influence on the people around them. There may be clever ways to simplify and approximate this, but it would take a lot of thought and still result in a very complex answer. Most Fermi problems are relatively simple, and humans definitely aren’t used to figuring things like this out on a routine basis. A rigorous, scientific morality would do the heavy lifting and provide simple, but relatively correct, answers.

3. Uncertainty. Many calculations involved in deciding what helps the world most are very uncertain. Calculations may involve many assumptions, each of which carries some uncertainty, so the end result is incredibly open-ended. Most scientific statements aren’t statistical, but essentially every social statement is. Will purchasing a dog provide over $100 in extra productivity to you per day? We could run scientific studies on how dogs have made other people happy, and on how happiness in your specific specialization translates into productivity, but even then there would still be quite a bit of uncertainty. More difficult problems are obviously worse.

Any calculation of large social phenomena will carry a lot of uncertainty. Even with the best of existing knowledge, we may only be able to estimate the benefit of a dollar on existential risk reduction to within a few orders of magnitude. However, even though this is very messy, it should still be better than our basic intuitions. In my experience in engineering, most problems have relatively minor amounts of uncertainty (± 30 to 60% at most). Here it can be significantly more. This calls for methods that can handle large amounts of uncertainty.
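To see what order-of-magnitude uncertainty looks like concretely, here is a hedged sketch that models each factor in a hypothetical “benefit of a dollar on existential risk” chain as lognormal, i.e. known only to within a decade or so. The factor names and all numbers are invented for illustration.

```python
import random

random.seed(1)

def lognormal10(mid_log10, sigma_log10):
    # A quantity whose base-10 logarithm is normally distributed:
    # sigma_log10 = 1.0 means "uncertain by about an order of magnitude".
    return 10 ** random.gauss(mid_log10, sigma_log10)

samples = []
for _ in range(100_000):
    effectiveness = lognormal10(-6, 1.0)  # risk reduced per $ (hypothetical)
    value_at_stake = lognormal10(9, 1.5)  # utils at stake (hypothetical)
    samples.append(effectiveness * value_at_stake)

samples.sort()
p5, p50, p95 = (samples[int(len(samples) * q)] for q in (0.05, 0.5, 0.95))
print(f"benefit per $: median ≈ {p50:.1f} utils, "
      f"90% interval ≈ [{p5:.2f}, {p95:.0f}]")
```

Multiplying two factors that are each uncertain by one to two orders of magnitude yields a 90% interval spanning roughly six decades, even though the median is a perfectly ordinary-looking number. A wide answer like this can still beat raw intuition.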

Principles of a Rigorous Morality

I believe that this way of thinking about morality and optimization leads us to the study of complex systems. The tipping example shows the beginnings of what may be described as a large “system” of assumptions and calculations. The study of complex systems and networks has become quite popular in the last few decades and may help when structuring such a system.

1. A graph-like structure. A large system of Fermi calculations (the tip example describes a relatively small one) can be seen as made up of many similar structures, or nodes. In this case, a node represents a calculation, and a connection represents the use of one calculation’s result in another. We may then be able to understand the structure of the system using graph theory and systems theory.

2. Heuristics (simple outputs). While a rigorous morality may be very complicated, it should be possible to interact with it in a relatively simple way. A doctor may need years of medical experience, but a patient only needs to know how to take a pill. Likewise, a rigorous morality should be used to find takeaways and rules of thumb that can easily be applied to modern life.

3. Propagation of uncertainty. A rigorous morality would focus on the connections between different assumptions, each with a different level of uncertainty. We can expect this to create “bottlenecks”, whereby the assumptions with the highest uncertainty mostly determine the uncertainty of the result. Therefore it is important to reduce uncertainty throughout a chain of assumptions so as to minimize the uncertainty of the system, which generally means focusing on the most uncertain elements.
For example, if one has absolutely no idea how effective a nonprofit will be at existential risk reduction, most efforts to determine the exact amount of money saved by tipping cannot be expected to improve the system’s decision.

4. Integration of intuitions. A rigorous morality cannot be made up of only scientific data, because most important assumptions in this field have no scientific data. Instead there will be situations where the most accurate expected answer is an intuitive one. This may be a weighted sum of the intuitions of many people, and if so, that will need to be set up.

5. Value of information. Most assumptions will not be worth modeling. The time and effort spent extending a rigorous morality should be allocated according to the value of the information each extension would provide. For example, the tip question may not be worth spending much time on. However, if I were to find out that every single person in the world would take my advice, I would spend more time on it (though there would still be an ideal amount of time to spend).
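The bottleneck idea in principle 3 can be shown with a toy dependency model. The node names and every number below are hypothetical; the only real claim is the arithmetic: when independent estimates are summed, their variances add, so each assumption’s share of the total variance tells you which one to investigate first (which also connects to principle 5, value of information).

```python
# Each node is an (estimate, variance) pair, in hypothetical utils.
# Summing independent nodes: estimates add, and so do variances.
assumptions = {
    "waiter_benefit":   (5.0, 9.0),
    "own_productivity": (3.0, 4.0),
    "xrisk_per_dollar": (8.0, 400.0),  # barely known: huge variance
}

def combine(nodes):
    estimate = sum(v for v, _ in nodes.values())
    variance = sum(var for _, var in nodes.values())
    return estimate, variance

value, variance = combine(assumptions)
# Share of the final uncertainty contributed by each assumption.
shares = {name: var / variance for name, (_, var) in assumptions.items()}
print(f"total ≈ {value} ± {variance ** 0.5:.1f} utils")
print(shares)
```

Here the existential-risk node contributes about 97% of the final variance, so refining either of the other two estimates is nearly worthless until that bottleneck is narrowed.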

Conclusion

A rigorous morality, as defined in this document, is expected to be very difficult to fully create. Multiple revisions and a lot of research and development will likely be necessary to build something really powerful. However, I believe that it is definitely worth starting. Rationality should win.