San Francisco Party Ideas

Because most Silicon Valley parties are stale and repetitive.

The NSA Party!

By far the most secure party on this list.  Nothing will be stolen from anyone at this party, because all attendees and conversations will be recorded (on video and audio) and fully transcribed.  This information may or may not be “hacked”, making it available to everyone with an internet connection several days after the party ends.

The Anonymous Party

Everyone wears Guy Fawkes masks, making everyone anonymous and unrecognizable. The event will begin with a political campaign to extradite Bradley Manning, but people will likely get bored quickly, turning the event into a prank-calling exposition.  Illegal activity will be encouraged.  Then the police will arrive, and it may or may not be discovered that the party organizers were actually FBI informants.  Sign-up will be done publicly on Facebook, for convenience.

The Philanthropist Party

Here we publicly pledge to give away most of our money to charity, on the one condition that we become billionaires first.  

The Silicon Valley Job Creation Party

At this party, we’ll hire a well-paid programmer to build a machine to serve us drinks.  A minority of the profits from ticket sales will altruistically go to an immigration bill targeting Indian programmers with hydraulics experience.

The Fox News Party

This will actually just be a fun party, just to throw people off.

The Party Venture to Save the World!

The party managers create a brand new type of charity event that is incredibly effective.  The party becomes a surprising success and, within only 2 hours, sees tens of thousands of dollars pledged to Greenpeace. At a randomized time, a public company “purchases” the venture for branding purposes and the party abruptly ends, offering everyone 36 seconds to exit.

The Social Entrepreneur Party

At this party no one is permitted to talk about social ventures, but instead to discuss how important it is to have parties that promote social ventures.

The 1% Party

This is pretty much already a typical Bay Area party.

The 99% Party

Essentially the same group of people from the 1% party, but here complaining about not being invited to the 0.01% party (which is much fancier).

The Instagram Party

Picture taking is encouraged, especially through cell phones.  Attendees win prizes like “most popular” in online badge form.  Managers get all photos taken at the event and proceed to sell them on iStockPhoto to help pay for the party.

The “Dress Like an Anti-Internet Bill” Party

No description needed, use your imagination a bit.

The Global Warming Party

The party takes place in a huge hall with blocks of ice instead of chairs and tables.  Heat will be on full, so the party will transition into a pool party, then a hot tub party.  During this time we’ll discuss the importance of issues like bank corruption and SOPA.  

The Time Devil: A Very Depressing Look at How Time Discounting Can Ruin Us

This was originally my final research paper for The Economics of Technology and Growth class at Claremont McKenna College. Basically, what I aim to prove here is that technology can be harmful to us mainly because of our cognitive biases.  I chose time discounting because it is well studied and well known, but it is only one of several possible options.  I attempt to use mathematics and simulations to see just how sub-optimal the technological results of our time discounting could be for us.  Feel free to read it here, or download the PDF or Word document.  This work is now under a Creative Commons Attribution license.  Enjoy.

Depiction of Devil in Negotiation (Unknown)


There is widespread acknowledgement among economists and psychologists that humans use time discounting when making decisions (Frederick et al., 2002; Zauberman et al., 2009).  This research examines net-negative choices that humans may willingly make because of this discounting.  First, a “devil curve” model is described, which studies the maximum possible utility loss a human may experience when given very specific new choices.  The discount loss is calculated under different conditions of period number and discount factor.  The possibility and extent of trade between individuals with different time discount functions is then studied.  Finally, the implications are used to observe the impact of time discounting on the benefits of technology and choice in a mathematical simulation.  It is found that time discounting has the potential to cause great disutility, and may already be doing so.

A Short Fable

You’re walking through the woods when a devil appears.  Before you can run, he claims that there’s not enough freedom in society.  Humanity needs more choices, more ideas, more technologies.  You’re suspicious, but he swears that he won’t lie to or deceive you.  The Devil promises you an infinite number of choices, and he will tell you exactly how much good each will do you throughout your life.  You trust the devil to be honest and decide you might as well listen.  You’ve always had a strong belief in freedom and choice, and you trust yourself to be logical.  After all, how bad could this possibly be?

The First Choice

He offers you a large delicious juicy red apple.  He tells you that eating it will give you a small headache, but not for 20 years.  You are quite hungry.  The headache will result in about two times as much pain as this apple will bring you joy now.  It’s your choice.  Will you accept it?
Figure 2: Juicy Red Apple (Tembhekar)

Probably.  The most optimistic estimates put the human discount rate at around 5.6%.  Using hyperbolic discounting, this would mean that a typical human discounts utility from 20 years in the future by approximately 53% (Andersen et al., 2011).  Therefore the victim in this case would value the headache at approximately (1 − 0.53) × 2 = 0.94 times the enjoyment of the apple now.  This person would accept the deal with the devil.
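The arithmetic can be checked in a few lines.  The one-parameter hyperbolic form d(t) = 1/(1 + k·t) is an assumption here; the cited literature uses several variants.

```python
def discount_coefficient(k, t):
    """Hyperbolic discount coefficient for discount rate k at a delay of t years."""
    return 1.0 / (1.0 + k * t)

k = 0.056                                   # optimistic human discount rate (5.6%)
coeff = discount_coefficient(k, 20)         # weight given to utility 20 years out
print(f"discounted by {1 - coeff:.0%}")     # ~53%
print(f"headache valued at {coeff * 2:.2f}x the apple's joy")   # ~0.94
```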

The Utility and Discount Curves

Let’s consider how much damage these deals can do.  First we assume hyperbolic discounting of utility with time.  Economists often use exponential discounting for convenience, but hyperbolic discounting has been shown to be a more accurate model for predicting human behavior (Andersen et al., 2011).
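The contrast between the two families of curves can be made concrete with a small sketch.  The forms used, 1/(1 + k·t) for hyperbolic and (1 − k)^t for exponential, are common conventions and an assumption here.

```python
k = 0.1
hyperbolic = [1 / (1 + k * t) for t in range(50)]
exponential = [(1 - k) ** t for t in range(50)]

# The hyperbolic curve keeps a much fatter "tail": at year 49 it still
# weights future utility at ~0.17, while the exponential weight has
# collapsed to well under 0.01.
print(round(hyperbolic[49], 2), round(exponential[49], 4))
```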

Hyperbolic vs. Exponential Discount Factors (Moxfyre)


The graph and equations above show the differences between hyperbolic and exponential curves.  In general, a hyperbolic curve has a longer “tail” than its exponential equivalent.  Most of the results in this paper are shown only for hyperbolic discount functions. Humans on average lost significantly more utility when exponential functions were tried, although this was partly because hyperbolic and exponential curves with the same discount factors were compared to each other, rather than curves with similar integrals.

To model the deal with the devil, we assume that the human expects to live 50 years with constant utility for that time.  This constant utility is standardized to 1.  We assume a discount factor of 0.1, or 10%.  This is higher than the most optimistic estimates, but less than many others. The following curves represent utility and the “discount coefficient” (the amount that utility in a given period is discounted due to the discount factor).


Mathematical Theory of Devil Choices

Art of a Deal with the Devil (Pacher)
The individual can be modeled as having a decision function, D, which is used to decide whether or not to accept choices.  “Greedy choices” are ones in which a person would sacrifice utility from the future in order to gain utility in the present.  In these cases, the “decision equation” can be used to tell when the person will make a choice.

ΔD = Up·dp − Uf·df

where Up is the utility gained in the present period p, Uf is the utility sacrificed in the future period f, and dp and df are the discount coefficients of those periods.
When ΔD > 0, the choice will be made. In this case we are studying “devil choices”, which are “greedy choices” with maximum possible harm to the participant.  To achieve this, a decision is given in which just enough utility is given in the present to get the participant to agree to give up an amount of utility in the future.  This can be assumed to be a negligible amount, and thus ΔD ~ 0 and we find the following relationship.

Up = Uf · (df / dp)

The second important equation is the net utility calculation, which determines the net utility difference after a choice is made.  ΔU is the change in utility.

ΔU = Up − Uf

Combining these two equations, we find the following

ΔU = Uf · (df / dp − 1)

Finally, we set Uf to 1 (as previously assumed), and dp to 1 (which is always true).  Thus, we find the following relationship between utility loss and discount coefficient for each period.

ΔU = df − 1

The Devil Curve

The utility loss from devil choices can be shown graphically, as below.  Conveniently, the utility loss for each period can be thought of as the distance between the discount coefficient and the original expected utility at that period.

Maximum Discount Loss for One Period

The sum of losses for all periods (“The Region of Discount Loss”) can be represented in the region between the utility curve and the discount curve, as shown below.  In the condition of Devil Choices, the line separating both regions is considered to be the “Devil Curve”.

Devil Curve

In this case, with D = 0.1 and t = 50, the region of discount loss was calculated to be 63%.  This means that the devil would be able to steal 63% of the victim’s utility just by giving him specific choices that he accepted.  Given these reasonable parameters, and lots of specific freedoms, the otherwise rational victim will ruin most of his life.
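A quick numerical check reproduces this figure.  Summing over discrete yearly periods indexed 0–49 is an assumption about the original setup.

```python
D, T = 0.1, 50
coeffs = [1 / (1 + D * t) for t in range(T)]    # hyperbolic discount coefficients
lifetime_utility = float(T)                     # constant utility of 1 per period
region_of_loss = sum(1 - c for c in coeffs)     # gap between the two curves
print(f"discount loss: {region_of_loss / lifetime_utility:.0%}")   # → 63%
```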

It Gets Worse

The previous model had a simplifying assumption that is very convenient for the human: it was assumed that the human could only trade away positive utility, so the utility graph had a floor at 0 utils.  This may well not be the case if the human can experience pain and torture without the option of suicide. Below are models that show how utility floors of negative 1 and negative 2 would change the result of the encounter.  The devil curve is scaled to visually represent the greater possible trade region, with the calculations done accordingly.  This does not mean the time discounting curve itself changes, just that it applies a similar proportional impact on the available tradable utility.


Thus, if a human is capable of experiencing twice as much pain as the pleasure they were expecting throughout their average life, then he or she would give up 190% of their entire lifetime utility to the devil.  On average, their life will now be full of misery and suffering.  Of course, if the person could feel even more pain, it would be even worse.  This situation wouldn’t be much better if the human had a smaller discount factor, as will be shown in Figure 6.  For any non-zero discount factor, a very large discount loss is possible.
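Under the same assumed setup as before, lowering the utility floor simply scales the tradable range per period by (1 − floor), which reproduces these figures.

```python
def discount_loss(D, T, floor):
    """Fraction of lifetime utility lost to devil choices, given a utility floor."""
    coeffs = [1 / (1 + D * t) for t in range(T)]
    return sum((1 - c) * (1 - floor) for c in coeffs) / T

print(f"floor  0: {discount_loss(0.1, 50, 0):.0%}")    # ~63%
print(f"floor -1: {discount_loss(0.1, 50, -1):.0%}")   # ~127%
print(f"floor -2: {discount_loss(0.1, 50, -2):.0%}")   # ~190%
```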
Some Possible Results of Freedom (Angelico)

Parameter Variation in the Devil Curve

The devil curves with 0 floor utility were generated for all combinations of three values of period number and discount factor.  Period numbers of 5, 30, and 100 were used because they span a wide range of typical remaining human lifespans.  A very old person may only expect to live a few more years, and thus would only make decisions for utility in that time.  These period numbers can also be seen as the possible time that one can influence with current choices. The discount factors of 0.05, 0.1, and 0.3 provide a good range for what economists and psychologists typically assume it really is.  These factors can be thought of as representing people with different abilities of self-control, from a priest to a reckless daredevil.  It is possible that these numbers may even change with an individual’s mood.

Of course, these factors can also represent countries or organizations.  A government in power for a limited amount of time may only be interested in that time of power.  Fortune 500 companies have lifetimes of 40–50 years (Bloomberg Businessweek, 2008).  Governments may have different discount factors based on how frustrated their constituents are.  Businesses may have different discount factors according to how much they focus on monthly reports and short-term profits.

Figure 6: Devil Curves for Variations in t (period numbers) and D (discount factor)

These results generally conform to what is expected.  As the period number increases, the discount loss increases, as the individual has more time periods to sacrifice.  As the discount factor increases, the individual is more prone to trading away future utility. The specific numbers themselves are quite interesting.  There is significantly more variation in discount loss between the period numbers than between the discount values studied.  Even “saints” with very low discount values are still susceptible to great harm by this devil.  While it’s definitely not much of a factor for old people (the riskiest of whom have discount losses of up to 32%), it seems significant for everyone else.
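The grid itself is easy to regenerate under the same assumptions (0 utility floor, periods indexed from 0); the corner cases match the 32% quoted above for short horizons.

```python
def discount_loss(D, T):
    """Discount loss with a 0 utility floor over T periods indexed 0..T-1."""
    coeffs = [1 / (1 + D * t) for t in range(T)]
    return sum(1 - c for c in coeffs) / T

# Sweep the same grid of discount factors and period numbers as Figure 6.
for D in (0.05, 0.1, 0.3):
    row = ", ".join(f"t={T}: {discount_loss(D, T):.0%}" for T in (5, 30, 100))
    print(f"D={D}: {row}")
```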

The Devil Inside

Fortunately, appearances by the Devil don’t seem to be very common these days.  However, the Devil Curve model showed us how bad the choices available to us could be. Here we consider two ways that net-harmful decisions could still be made: the first is by trading “discount utility”, and the second is by chance, through common choices and technologies.

Discount Utility and Trade

In the previous model, the devil was assumed to know the human’s decision curve precisely, and to make offers accordingly.  In essence, the “trade surplus” was completely delivered to the supplier, the devil.  When humans trade with each other, the recipient of the trade surplus is not as clear, but the extent of trade should be similar to the devil deal. Consider two individuals with different discount curves.  One has a discount factor of 0.1; the other has a discount factor of 0.2.  In this case, one unit of utility in year 20 is worth approximately 0.33 in year 1 for the first individual, but approximately 0.2 for the second.

Decision utility of one util in year 20 — Person 1 (D = 0.1): ≈ 0.33; Person 2 (D = 0.2): ≈ 0.20
We can see that the different time discounting factors essentially act as comparative advantages for both people, but in willingness to make certain decisions rather than in production capabilities.  In this case, persons 1 and 2 will accept trades of between 0.2 and 0.33 present utils from person 1 to person 2 in exchange for 1 future util.  The net utility caused by both possible trades is shown below.

Utility gain from the 1 : 0.2 trade and the 1 : 0.3 trade, for Person 1 (D = 0.1) and Person 2 (D = 0.2)
This total “decision surplus” can be represented in graphical form, similar to the previous curves.  The entire discount loss region of the individual with the higher discount curve should always be exchanged from the low-discount rate individual to the high-discount rate individual.  The only region in question is the area between both individuals’ discount curves.
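The trade bounds can be sketched under the same assumed hyperbolic form, taking the delay as a flat 20 years.

```python
def d(D, t):
    """Hyperbolic discount coefficient for discount factor D at delay t."""
    return 1 / (1 + D * t)

d1, d2 = d(0.1, 20), d(0.2, 20)      # ≈ 0.33 and 0.20
# Person 1 pays x present utils to person 2 in exchange for 1 util in year 20.
# In decision utility, person 1 gains d1 - x and person 2 gains x - d2, so
# mutually acceptable trades lie between d2 and d1.
for x in (0.2, 0.3):
    print(f"x={x}: person 1 gains {d1 - x:.2f}, person 2 gains {x - d2:.2f}")
```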


As this graph makes apparent, for individuals with similar discount curves, this small region isn’t even that important.  It is simply the negotiation room.  On the whole, a large discount loss region will be exchanged between them, causing an incredibly large shift in utility. This is a disturbing finding because it implies that even very small differences between individuals’ discount curves could lead to massive utility imbalances.  Perhaps fortunately, these kinds of trades mostly aren’t possible yet.

One implication of this is that organizations with relatively low discount factors will seek out groups with high discount factors who are willing to trade, and greatly damage them by making very large exchanges: for example, credit companies finding at-risk individuals, or banks lending to political organizations in unstable countries.  These trades are often limited by the difficulty of holding long-term contracts.  For example, the ability to declare bankruptcy or to form a limited liability company could make it very difficult for “predatory” companies to collect on future profits, limiting their trading potential with susceptible individuals and thus protecting those individuals.  This may be one of the few cases where weak contracts benefit a society: here they ultimately protect people and organizations from themselves.

This may also hint at the presence of an informal “market of discount utility”.  For example, careers requiring different discount factors will be compensated accordingly.  Careers that require little, or enjoyable, training will be compensated less than those that require lots of upfront sacrifice, even if both groups of careers are equivalent in total net utility without the compensation.  Similar possibilities exist for infrastructure and technology improvements on behalf of countries and companies in settings of potential trade.

Technology and the Utility vs. Decision Matrix

Here we consider the impacts of time discounting on the net utility provided by technology.  In essence, technology can be modeled as the presence of choice.  The conception of a technology itself does nothing unless individuals choose to purchase and use it.  Thus new technologies can be modeled in similar ways to the choices in the devil model. We assume that technologies do not naturally have any moral or time bias.  People are always coming up with new ideas, each of which can be represented as a choice with varying utility contribution over a given time set.

To attempt to imagine all possible technology utility curves, we simulate each new technology as a random utility function over time.  Each point on this curve (or noisy line) has an equal probability of being anywhere between 1 and −1, representing the addition or removal of utility. To decide whether or not to accept a technology, a person multiplies their time discount curve by the technology’s utility curve, resulting in the “decision function”.  The sum, or integral, of this function is called the “decision factor”.  If it is positive the individual is willing to accept the technology; if it is negative, he or she is not.
The Utility Function for a new technology, the Time Discount Curve, and the resulting Decision Function

We also study the “total utility”, which is the integral of the lifetime utility of a given technology.  When this is positive the technology will be net-beneficial in utility, and when it is negative the technology will be net-harmful. In this case we are primarily interested in the impacts of “bad choices taken”: those with positive decision factors but negative total utility.  These are the choices we take because of our time discounting bias, but which will ultimately harm us. In regards to total utility and decision factors, four main categories emerge, one being the “bad choices taken”.  These can be put in matrix form as shown below.  The placement of each category matches its quadrant in the following simulation plot.

|                        | Negative Decision Factor | Positive Decision Factor |
| ---------------------- | ------------------------ | ------------------------ |
| Positive Total Utility | Good Choices Not Taken   | Good Choices Taken       |
| Negative Total Utility | Bad Choices Not Taken    | Bad Choices Taken        |

Using Matlab, several thousand “technologies” were simulated.  The decision factor and total utility of each can be plotted on a scatterplot.  White lines are placed to highlight the x- and y-axes, dividing the plot into 4 quadrants.  This plot is called a “Utility vs. Decision Matrix”.  First, we show what this plot looks like for a simulation using a period size of 50 and a discounting factor of 0.1.


As shown, the technologies generally span from the bottom left to the top right.  This makes sense, as the utility curves were generated with no correlation between present and future utilities.  People take technologies primarily based on what’s good in the short term, and since there is no negative correlation between short-term and long-term utility, on average these decisions result in positive utility.

The “decision average” is the mean total utility of all taken choices, which are all those to the right of the y-axis.  The “optimal average” is the mean total utility of all taken choices excluding the “bad choices” in the bottom right quadrant.  Both averages are marked in quadrant 1. The discount loss is the percentage difference between the optimal and decision averages, representing the average utility lost per choice due to the presence of the bad choices.  The percentage of all choices taken (the right half) that are bad (the bottom right quadrant) is also shown, as “bad choices taken”.  This number represents the percentage of choices that an individual will take that are net-harmful.

The results of this simulation tell us a few things.  There are more “good choices” than bad ones, with bad choices making up 16% of the total choices taken.  On average these reduce our benefits from technology by 22%, which is quite significant.  If this model were true, it would mean that approximately one of every six choices you make will make your life worse.
Does this still look delicious? (Evan-Amos, 2011)
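The Matlab simulation can be approximated in a few lines of Python.  The trial count, seed, and uniform per-period utilities are assumptions based on the description above.

```python
import random

rng = random.Random(0)                       # fixed seed for reproducibility
D, T, trials = 0.1, 50, 20000
coeffs = [1 / (1 + D * t) for t in range(T)]

taken = bad_taken = 0
for _ in range(trials):
    u = [rng.uniform(-1, 1) for _ in range(T)]        # random utility curve
    decision_factor = sum(ui * ci for ui, ci in zip(u, coeffs))
    total_utility = sum(u)
    if decision_factor > 0:                  # the technology gets adopted...
        taken += 1
        if total_utility < 0:                # ...but it is net-harmful
            bad_taken += 1

print(f"bad choices taken: {bad_taken / taken:.0%}")  # roughly 16%
```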

Parameter Variation in Utility Vs. Decision Simulations

Utility vs. decision simulations were run with all combinations of discount factors of 5%, 10%, and 30%, and period sizes of 5, 30, and 100.  The results are shown in the graphs below.
Utility vs. Decision Factor simulations for variations in P (period numbers) and D (discount factor)

The general patterns of these simulations correspond to what is expected.  In general, the U vs. D plot is more linear and less scattered the smaller the discount rate and the smaller the period size.  This makes sense: without either, there would be no “bad choices” and thus no points would fall in the fourth quadrant. These results are similar to what we would have expected from the devil models. The numbers don’t vary as much as might have been suspected.  As a rule of thumb, people may assume that at least 10%–20% of technologies are harmful to themselves, and should make decisions accordingly.

Room for Improvement

There are several possible factors that are not in this model, but would suggest a more harmful or beneficial view of choices if they could be incorporated.

Harmful Factors

  1. Other Biases
Time discounting is only one of the many biases humans have.  The impacts of our biases toward non-rational risk taking are not considered, nor are other cognitive biases, or even false popular beliefs.
  2. Marketing
This model assumed perfect information about the results of technological choices.  This is very rarely the case, as marketers often misrepresent their products.  It is unclear whether “bad choices” are misrepresented disproportionately, but it seems likely.
  3. Externalities
What about choices that, when taken, help or harm other people?  People seem to prefer their own utility to that of others.  This has poor implications for the devil curve, but the implications for technology would depend on the power people have over each other.

Useful Factors

  1. Group Regulation
The government occasionally bans or taxes items that are considered harmful for individuals, such as cigarettes and heroin.  Social groups often provide cultural influence to bias people away from actions generally seen as short-sighted (for example, adultery).
  2. Diminishing Marginal Returns
In this model people trade utility directly, but this is never really the case in real life.  Instead, technologies provide emotions and material benefits, which are bound by diminishing marginal returns, possibly restricting individuals from taking many specific harmful actions (eating cookies, for example).


This paper presents a rather stark analysis of humanity’s ability to deal with choice.  The devil curves show potential utility losses of up to 88% given reasonable parameters.  The inquiry into utility trade raises the possibility that, given better trading tools, incredibly large utility shifts may occur between otherwise logical individuals.  Finally, the Utility vs. Decision simulations reveal that due to time discounting, approximately 15% of our existing technologies could be net-harmful to us.  This demonstrates that time discounting may be incredibly damaging to modern society.  It is recommended that further analysis be done to understand ways to counter this force.

Appendix A.

Utility vs. Decision Matrix Parameter Variation using an Exponential Time Discounting Curve.


This uses the same parameters as the equivalent chart using a hyperbolic time discounting curve.  Note that the data is much more spread out on average, leading to more pessimistic results for humanity.

Works Cited

Andersen, S., Harrison, G. W., Lau, M. I., & Rutström, E. E. (2011). Discounting Behavior: A Reconsideration. Durham University.

Angelico, F. Das Jüngste Gericht (detail). Wikimedia Commons, Florence.

Bloomberg Businessweek. (2008). The Living Company: Habits for Survival in a Turbulent Business Environment. Bloomberg Businessweek.

Evan-Amos. (2011). A McDonald’s Big Mac hamburger, as bought in the United States. Wikimedia Commons.

Frederick, S., Loewenstein, G., & O’Donoghue, T. (2002). Time Discounting and Time Preference: A Critical Review. Journal of Economic Literature, 40(2), 351–401.

Moxfyre. Hyperbolic vs. Exponential Discount Factors. Wikimedia Commons.

Pacher, M. Michael Pacher 004. Wikimedia Commons.

Tembhekar, A. A Red Apple. Wikimedia Commons.

Unknown. Mayor Hall and Lucifer. Wikimedia Commons, public domain.

Zauberman, G., Kim, B. K., Malkoc, S. A., & Bettman, J. R. (2009). Discounting Time and Time Discounting: Subjective Time Perception and Intertemporal Preferences. Journal of Marketing Research, 46(4), 543–556.


This work is under the Creative Commons Attribution license.  This means you can do whatever you like with it, just cite me somewhere.

Morality Is Still on Very Flaky Foundations

This is a post I wrote several months ago in response to what I knew about the effective altruism movement in the Bay Area. I have now met other EA communities and am more uncertain. However, I believe it still raises some interesting points and could be useful to others.

The Original Problem

Most people I know consider themselves to be relatively moral individuals, no matter what kind of morality they subscribe to.  Almost every piece of literature we read, and a large majority of politics, hangs onto the ideal of moral righteousness in one form or another.  So when people like Peter Singer and Eliezer Yudkowsky go out and state that conventional morality is bullshit, people freak out.

The basic argument goes that we typically judge morality based on intuitions we have, rather than facts and rigorous analysis.  And this leads to strange things, like seeking moral vengeance against people who kill one or two individuals, while not blaming people who have the ability to save millions of lives but do not do so.  Our intuitions were created by evolution optimized for cavemen, plus whatever social norms and media we are subjected to.  They are faulty due to numerous cognitive biases, and a morality standing upon them is difficult to de-couple from complete moral relativism, an incredibly undesirable state for intellectual morality.

So in a way, most modern morality is built upon strong intuitions.  The issue is that these intuitions are inconsistent and provably flawed for the modern era.  If we live out our entire lives and build societies based on these moral intuitions, then we are arguably doing almost everything on top of some very shaky foundations.

I’ll use a few comics from the excellent Logicomix to help illustrate my point.  That book detailed the mathematical venture into complete proof and certainty against intuition, and I feel that it is very similar to what we are now going through with morality.


Rationality to the Rescue

How do we fix this?  The existing fight against moral intuition has come from utilitarianism and effective altruism.  These groups try to deduce why our important intuitions exist, to help determine which ones are morally true and which are cultural and/or evolutionary artifacts.  Utilitarianism was somewhat of an attempt to create an overall structure that is somewhat intuitively reasonable but, most importantly, numerical.  Basically, we define “goodness” in some way (often as “happiness”, but possibly “awesomeness” or any other word you might consider a reasonable ultimate human goal) and define moral good as whatever reaches that.  The effective altruist group grew out of this to help define good life practices (like donating to efficient charities rather than cute ones) that help optimize this number.  Often this seems unintuitive (for example, it may be very justifiable not to give to homeless people close to you and instead donate to charities you will never see), but it can be defended well with math.

Basic practices here include encouraging people to live lives that favor large donations to efficient charities, rather than typical bleeding-heart goodwill. Others work on projects “orders of magnitude” more useful than anything else in mainstream society, including researching the Singularity, ending human aging, and human computer uploading.  On paper these are incredibly useful activities, because they are so much better than what is mainstream.  People doing them can feel quite morally superior to the rest of society, and still feel the logically reasonable ability to complain morally about war criminals and resist moral relativism.


Above is a basic diagram of how I see the “morality” of individuals based on something like utilitarianism.  While I chose “Expected Average Lives Saved (or Equivalent)”, what we are really talking about is some measure of goodness.  Peter Singer reasons persuasively that people who choose not to do good should face similar scrutiny to those who choose to do harm.  Likewise, the only “moral” position is to do “as much as possible”, which is generally defined as “Effective Altruism”, that box on the right.  The groups mentioned all fit into this general category, and from their perspective, they are all somewhat in the same position.

Trouble In Paradise

This all works out well, until even more effective rationalists appear.  Perhaps they come in the form of strict AI-risk reducers, or something else we haven’t seen yet.


Keep in mind that each bar is an order of magnitude greater than the last one, so in comparison to the “more effective altruists”, the “effective altruists” look 10x to 100x as suboptimal.  The “effective altruists” claimed that regular people were immoral because they only save 1 life rather than 1,000 (resulting in 999 unnecessary deaths), but here comes this other group complaining that the “effective altruists” are saving 1,000 rather than 100,000 lives (resulting in 99,000 unnecessary deaths).

The trouble is that one of the main justifications for the work of many effective altruists is that they are much “better in comparison” to everyone else.  Being “better” only makes one “good” when one only cares about comparisons.  Was the “best” Nazi a “good” person, because he or she should be compared to other Nazis, or a “bad” one, because we happen to have non-Nazis to compare him or her to?  If one’s morality is rooted in comparison to that of others, then one can go from “good” to “bad” just by the appearance of a new group.

The Annoying Pupils

Rejecting intuitive solutions presents all kinds of other nasty issues, if you take it to the extreme.  It would mean that there is almost no point where one can really say “oh, this makes basic sense, so I’ll accept this premise and work from there”.  And when one builds an entire personal philosophy on preferring rationality to intuition, the rabbit hole can go much deeper than the individual prefers.  At every level of logical thought, someone new can complain that it’s not completely rational, with little rational reason for proving that it shouldn’t be.

Typically when we make logical decisions, we ultimately come down to a few intuitive things.  For example, we may optimize a business strategy for revenue, but we still make the intuitive decision that revenue is what we should be optimizing for.  One can go further and “rationally” explain why revenue is statistically expected to be the best way for the business to grow, but from there, there’s the intuitive assumption that the growth of the business is a good thing.  Several other intuitive assumptions are probably made in the rest of the model as well; it is almost impossible to make a complete model without them.

So the big question is how to decide where to make intuitive judgements vs. rational ones.  But to decide this question, there should ideally be a rational answer.  If there were an intuitive one, that would mean an intuitive foundation, which leads us back to where we started.  And finding a rational answer that we agree upon can be incredibly difficult.

Levels of Goodness Intuition

Let’s begin with an example.  When deciding between donating to a poorly run childcare charity and a well run one, the choice is relatively obvious within most definitions of “good”, so the answer can be made relatively easily.

But take something more extreme: what if we need to choose between the lives of 4 cows compared to 1 human being?  Or 1000 insects (expected, though highly unlikely) compared to 1 human being?

The typical answer that I’ve heard for this is to make an intuitive judgement about how important a cow is relative to a human.  Perhaps cows are 1/3rd as intelligent; then 4 cows’ “utility” would be greater than that of one human.  But “utility” is a very difficult thing to define, and many people really only care to optimize human utility.
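Under a utilitarian framing, the arithmetic being debated is trivial to write down.  The weights below are purely the illustrative guesses from above (the 1/3 figure for cows), not real estimates:

```python
# Hypothetical utility weights relative to one human; the 1/3 figure for
# cows is the illustrative guess from the text, not a real estimate.
WEIGHTS = {"human": 1.0, "cow": 1 / 3, "insect": 0.0001}

def utility(counts):
    """Total weighted utility for a population like {"cow": 4}."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# 4 cows at 1/3 weight: 4 * (1/3) ≈ 1.33 > 1, so under these weights the
# cows outweigh one human -- exactly the conclusion many find unintuitive.
print(utility({"cow": 4}) > utility({"human": 1}))
```

Note that all the hard work hides inside choosing WEIGHTS, which is precisely the intuitive judgement at issue.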


Even worse is when this is taken to the computational extreme.  Many futurists advocate uploading human brains to machines.  They claim that simulated cognition is equivalent to physical cognition, and that simulations of us could be much better taken care of than real ones.  Others find this scenario repulsive and wrong.


Some take another step and say that instead of being uploaded into an amazing world, we should go straight to wireheading: a state where our brains get pure “happiness” without realized experiences.  This is very controversial, as it seems counter-intuitive and at odds with other values we have.  As LessWrong points out,

“A civilization of wireheads “blissing out” all day while being fed and maintained by robots would be a state of maximum happiness, but such a civilization would have no art, love, scientific discovery, or any of the other things humans find valuable.”

Some take this further and say that if simulations of us are more “optimal” than us, then wouldn’t simulations of even “happier” beings be more optimal?  Or perhaps an A.I. should just simulate “pleasure” itself, essentially ending living sentience as we know it and replacing it completely with machines simulating things we can not now possibly imagine.  There are some who feel that, for any given definition of “happiness”, this would be preferable, and others who feel it is morally wrong because it lacks most existing human values.


The one “completely rational” utilitarian answer that makes sense to me (that is intuitive as being rational) is just that: ultimately the most utility would come from replacing all current sentience with completely artificial beings (or a single being).  Essentially, not only letting the Matrix or Terminator overlords win the fight, but rushing for this to happen, fighting for it, and doing whatever is necessary.  On the strictest definitions of utility I can imagine, this would be the most analytically optimal thing.  Of course, that is for now, until I hear of another idea that could be 10x to 100x as “optimal” as this one.


But of course, essentially all people reject this final bit.  The wireheading promoters will say that they are promoting the best outcome (and thus are doing by far the most efficient utilitarian work), and claim that the “terminator” scenario is morally reprehensible because of how wrong it feels in comparison.  The uploading enthusiasts will promote theirs as the best outcome while claiming that the wireheading outcome is unnatural and obviously wrong.  The human utopians will promote theirs as the best outcome while claiming that human uploading is un-human in comparison.  And local charity enthusiasts will insist that their work is the best, and that the intuitive good feelings it provides would be missed by the “effective altruists”, and thus their path wouldn’t be reasonable.

Also, I’ll note that quite a few people in the field of futuristic high-technology “effective altruism” are very much intent on being part of this themselves: that they personally would get to live forever or be uploaded.  When asked why they prefer this to something that could mathematically produce much higher expected value, I hear answers like “this is already good enough!”  But how do we possibly decide rationally what “good enough” means, especially when there is something obviously better?

The great difficulty here, as I see it, is that the extreme moral stance, with as little intuition as possible, seems to be the terminator scenario or something similar: territory that seems completely morally strange and unintuitive to absolutely everybody.  But if one were to do more “reasonable” things, one may take order-of-magnitude losses in efficiency for the sake of a more intuitive answer, which was the entire problem we started out against.  We could make a trade-off in between (as many I know do), but such a trade-off would also be an intuitive one.


Where to Go From Here

If one does accept some intuitive limitations short of terminator support (or other intuitive choices, like keeping 10% of income because it “feels right”), at least we could admit it and realize that we aren’t that much different from those “inefficient altruists” or even “regular people” who make different intuitive limitations than we do.  This is in many ways moral relativism and leads to a vague and somewhat frustrated form of morality, but at least not a hypocritical one.  Being honest with ourselves is a good first step.

I believe that really finding a rational end that we can agree upon will take quite a bit of change to our own intuitions.  It may just be a matter of time, as it was to reduce other limitations (nationalism, sexism, racism, etc.).  Of course, the issue is that the time required may be longer than what’s available before an apocalyptic or world-changing event, so it is quite likely this will never be settled.

I do believe that this should be figured out and discussed much more rigorously.  From what I’ve seen, many of the most important moral issues are still surrounded by relatively emotional and intuitive debate.  And for us “rationality nerds”, this is quite a huge deal.

Plotting Capability vs. Ambition

When I was in school (up to college), I was often frustrated by a lack of ambition in the students around me.  Surely, all these people could have much more awesome and useful lives if they strived for more? Of course, others see the world as a place of too much ambition, something I see here in San Francisco.  Many entrepreneurs have hopes that are unbelievably and unrealistically high, and it’s quite apparent that they will eventually get let down.  Also of note are the yuppies who may always hope to gain 30% more income.  Here the issue is perhaps over-ambition and eventual disappointment.

Basic Capability vs. Ambition Graph

Here’s a basic plot of Ambition vs. Capability.   To help illustrate its meaning, I’ve broken it up into 4 quadrant archetypes.  Delusional people have lots of ambition (“I’m going to invent a singularity!”) but very little capability.  Heroes have lots of ambition and the capability to make it happen.  “Genius slackers” are people who have lots of resources to do good, but lack the care to do it.  And the “cripples” are those who aren’t in a position to do much, and don’t care to anyway.


Theoretically, we could imagine a line drawn to match each level of capability with exactly that much ambition.  Basically this would state, “a person who can make at most $90k per year would hope to make $90k per year”.  Here I consider this line to be the “ideal”, and will repeatedly show it as a reference for comparison.


Individuals above this line have ambition exceeding capability, leading to eventual disappointment.  Individuals below this line have capability exceeding ambition, leading to a missed opportunity for their capability.  Here I consider this a deadweight economic loss.
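As a sketch (with made-up 0-to-1 scales and an arbitrary 0.5 midpoint for the quadrants), the archetypes and the two kinds of mismatch could be coded like this:

```python
def archetype(capability, ambition, midpoint=0.5):
    """Quadrant archetype from the Ambition vs. Capability plot.
    Both axes on an arbitrary 0-1 scale; 0.5 is a made-up midpoint."""
    if ambition >= midpoint and capability >= midpoint:
        return "hero"
    if ambition >= midpoint:
        return "delusional"      # lots of ambition, little capability
    if capability >= midpoint:
        return "genius slacker"  # lots of capability, little care
    return "cripple"

def mismatch(capability, ambition):
    """Signed distance from the capability == ambition 'ideal' line:
    positive -> ambition exceeds capability (eventual disappointment);
    negative -> capability exceeds ambition (deadweight loss)."""
    return ambition - capability
```

The "ideal" line is simply where mismatch() returns zero; everything else lands in one of the two loss regions.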


Now, to be fair, it may be that this “Capability-Ambition Ideal” is not actually ideal for each individual or for society; it just represents the exact match of the two.  Hypothetically, an individual striving to do more than they are capable of may be more productive than they would otherwise be, and thus may benefit society more than if they strove only for what is possible.  This may be because they will only reach what is possible if they strive for much more, leading to personal disappointment but capability efficiency.  On the other hand, it may be optimal for an individual’s well-being to strive for less than they are capable of, and enjoy life or do other things instead.


So far “Capability” and “Ambition” have been talked about in very general terms, but let’s make them much more specific.  Say we want to look at the average ambition of each group of people with similar capabilities.  Then we can create a curve for the “Actual Average Ambition” within a society or set of people.  We can also look at “capability” and “ambition” for one specific variable, such as income. I imagine that the U.S. Monetary Capability vs. Ambition Plot would look something like what is shown above.  In general, I understand that people hope for more money than they get, leading to people working very hard and continually being disappointed.


On the other hand, I imagine that with regards to helping the world (utility maximization), people aren’t nearly as ambitious as their capability would allow.  A possible plot is above. That’s all I have for now.  There’s definitely a lot more that can be done here.  My guess is that it wouldn’t be too difficult to actually make some of these graphs.  Of course, knowing exactly how capable someone is is impossible, so for it to be realistic it would all have to be statistical, but I think these first iterations help show the basic points.

Don’t Bash Entrepreneurs for Not Being Ambitious Enough

Seriously, please stop bashing entrepreneurs for not being ambitious enough.  It’s like bashing nonprofit workers for not working hard enough to help people in need.

Hackers, clearly not being very ambitious. “If only these people were more ambitious, then our economy would be doing well.” (photo by Elena Oliva)

The basic argument, which I’ve heard time and time again, is that most entrepreneurs in the valley are working on BS photo-sharing apps with no grand vision or relevance to most people on the planet, and that it’s their fault for not caring enough. I know a lot of entrepreneurs.  Most are far more ambitious than anyone else considers reasonable.  But even so, here’s a convenient list to help change the conversation from somewhat personal attacks to a much broader systemic debate.

A short list of reasons not to demand entrepreneurs to be more ambitious:

  1. They’re told not to be.
    Every single VC and angel I’ve talked to has asked for specific business models out of huge ideas and markets.  They’re the ones who provide the capital, and it’s near impossible to make something big without them.
  2. They should be told not to be.
    The entire Lean Startup movement pretty much promotes the antithesis of grand visions.  And it has a point.  Major companies almost always begin small and with much more modest intentions than they eventually obtain.
  3. Photo apps sell.
    Educational apps often don’t.  There are thousands of ambitious and optimistic apps out there, and no one uses them.  Just because Instagram gets the headlines doesn’t mean that all entrepreneurs are trying to build Instagram.  It just means that Instagram is one of the few ideas that consumers like enough to make it successful.  Successes in the industry reflect the market much more than the creators.
  4. “Greedy” entrepreneurs are orthogonal to “ambitious” ones.
    Entrepreneurs trying to sell out quickly may be quite different from those with grandiose visions, but my guess is that they aren’t competing with each other.  The venture industry will grow as large as there are promising startups, so the photo-sharing entrepreneurs aren’t hogging the valley from the ambitious ones.  If there aren’t ambitious entrepreneurs, that is the issue itself.  The other entrepreneurs are practically a different class of people and are no more to blame for not being “ambitious entrepreneurs” than anyone else with the ability to learn entrepreneurship.  Which means almost everyone.
  5. Being ambitious gets very ugly.
    I know several people who have lost hundreds of thousands of dollars, and/or years of their lives (and tons of energy and optimism) failing at incredibly ambitious ideas.  We can’t expect what is essentially martyrdom from a group that’s already self-selecting.  For every Steve Jobs there are 10 others (some quite similar) who don’t make it for whatever reason, and most suffer dearly.  We all know stories where entrepreneurs have risked everything for small chances to succeed.  But the definition of small chances tells us that most people who max out their credit cards to do something incredible will fail horribly.
  6. Most people think ambition is crazy.
    While an interestingly vocal group of so-called intellectuals wonder at the lack of idealism in the tech sector, the vast majority of people who witness it dismiss it and often actively discourage it.  I bet many of these exact intellectuals have completely rejected or dismissed emails from people with incredibly ambitious hopes and claims.  These requests often sound delusional.  Consider an email that states “I’m making something that will replace email!” or similar.  There’s probably a very thin line between claims that are considered too uninteresting to pay attention to, and those that are too ambitious to be sane for most people. Both sides are very noisy.   The success of “non-ambitious” startups over “ambitious” ones isn’t personal, it’s systemic, like most issues people argue over.  Also, the moral shortcomings of entrepreneurs are significant, but not necessarily different from the moral shortcomings of everyone else.

How to Tell if a Web Company Is Evil

I’m sick of companies cherry-picking specific features to explain how they help people.  A few helpful features will always exist, and pointing at them misses the point.  The big question is: is it really the purpose of the company to help people, or is helping people an occasional externality of other goals they are trying to achieve?  It’s hard to give someone credit for a positive action that was done for a malicious or unrelated purpose.
Altruism vs. Power Venn Diagram

Here’s an Altruism/Power Venn Diagram.  On the left are decisions a company makes to help people; on the right are decisions that give it power.  Figuring out the intentions of a company may have little to do with how much it is helping people, because that intersection may be quite large.  The bigger question is how it treats the other parts of the diagram.

Optimizing for People

If a company were optimizing for people, its actions may look like this.

If a company optimized for its corporate shareholders, its actions would be here.

The big question is which of these two graphs better represents the actions of a company, not how much of an intersection there is to show the press.  Here’s a graph of what I see as the decisions in each sector; you can decide for yourself what the aims of your favorite web companies may be.

Web Application Altruism/Power Diagram

Doesn’t Power Eventually Help Users?

This is the main argument from all power grabbers, ever: “with more resources, we can do more good,” so decisions to increase power eventually do help users.  While this can be used to justify almost any action, it is also true that without power even the most idealistic or altruistic groups will not succeed.  And it makes sense that those in power will be those optimized to get power over all other things. It may be that most existing companies (organizations, really) only exist to gain power, with the argument that the power will eventually be (and occasionally is) used for the goals of these organizations (which are supposedly things other than pure shareholder value).

But here, how do you distinguish groups with good intentions from the rest?  From the outside view, they look almost exactly the same, because both are optimizing for power whenever options are presented.

It is true that trade-offs need to be made, but for them to be made well there should be some greater model and rigorous attention to the trade-offs.  Could companies sponsor studies on how to optimize the numeric trade-offs between short-term power grabs and long-term civilian value?  It may be hard, but given that these organizations sometimes seem to actually care (or want to care) about value to society, it probably wouldn’t cost much compared to other things they are doing.

The Economic Near-Ideal

Of course, one solution would be to create companies where there are few trade-offs.  One can argue that companies like 10gen have nice models that closely align their own power increases with societal benefit.

The economic near-ideal

Here companies aren’t defined by how good they are trying to be, but by how much their selfish actions are correlated with altruistic ones.  Here the decision is much more about deciding what kind of company to make (and, for outsiders, deciding rules and publicity) than deciding what features to implement.  Yet it also may mean that the more altruistic people will favor industries where this correlation is high, leaving other industries as easy prey for less idealistic people.

Designing the Feedhaven Ecosystem

Feedhaven Diagram 1.0

We’ve been working a lot recently to figure out what the Feedhaven system should look like.  Right now it’s relatively simple, but to properly work in an ecosystem of applications it would need some additional complexities.

Feedhaven Instances
A Feedhaven Instance / Server is shown to the right.  A user can set up their own Feedhaven instance, or can use a company’s hosted instance.  This instance is in charge of providing a GUI for the information, storing data on users, handling API keys, and handling the standard Feedhaven data.  This should be able to work on many NoSQL databases, though we plan to begin with MongoDB.

Mozilla Persona seems like a good fit for authentication, especially because it can be standardized among all Feedhaven instances and applications.  Hypothetically Persona could provide proof that an application has confirmation from a user (to send to other applications), but I’m not sure how this would work.  It’s also open source, distributed, easy to set up, and quite minimal.  With Persona integration in the browser, log-in may eventually be a relatively easy thing.

Data Validation
Hypothetically, all items and feed data would be validated against external JSON schemas with version numbers.  I imagine that for large amounts of data the schemas would have to be cached, and it may be that validation isn’t required by default, but is available as an extra function.
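As a stdlib-only sketch of what that could look like (the schema URL, field names, and cache shape are all invented for illustration; a real implementation would more likely lean on a JSON Schema validator library):

```python
# Invented example: each item names its schema and version, and the
# instance looks the schema up in a local cache before validating.
SCHEMA_CACHE = {
    ("http://example.org/schemas/email", "1.0"): {
        "required": {"from": str, "subject": str, "body": str},
    },
}

class ValidationError(Exception):
    pass

def validate_item(item):
    """Check an item against its declared (cached) versioned schema."""
    key = (item.get("schema"), item.get("schema_version"))
    schema = SCHEMA_CACHE.get(key)
    if schema is None:
        raise ValidationError(f"unknown schema/version: {key}")
    for name, typ in schema["required"].items():
        if not isinstance(item.get(name), typ):
            raise ValidationError(f"field {name!r} missing or wrong type")
    return True
```

The cache keyed on (schema, version) pairs is what lets validation stay cheap for large feeds: the schema is fetched once, then reused.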

Simple Translation Apps/ Simple Feedhaven Apps
The idea here is that users can create API keys with specific permissions at their Feedhaven instance.  These can then be given to an external application, allowing it to access Feedhaven data without registering with Feedhaven.  Applications that do this would have restricted API call allowances, but this should be fine for many apps.  This is how many major APIs already work: with API tokens for apps that don’t have to register. These simple apps could do things like read emails from Gmail, convert them to a common email format, and send them to a specific Feedhaven feed.  They could write or delete specific feeds, and read new common emails from these feeds and then send them off to Gmail.  Whether an app polls or subscribes to access this content from Feedhaven would be up to the application.  In addition, these apps can be used simply to view Feedhaven data.
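A minimal sketch of the server-side check this implies.  The key fields, scope names, and call allowance are all assumptions, since the real API isn’t specified yet:

```python
from dataclasses import dataclass

@dataclass
class ApiKey:
    """Invented shape for a user-created key: scoped to certain feeds
    and actions, with a restricted call allowance for unregistered apps."""
    token: str
    feeds: set            # feed names this key may touch
    actions: set          # e.g. {"read", "write", "delete"}
    calls_left: int = 1000

def authorize(key, feed, action):
    """Spend one API call if (and only if) the key permits the request."""
    if key.calls_left <= 0:
        return False
    if feed not in key.feeds or action not in key.actions:
        return False
    key.calls_left -= 1
    return True

# A Gmail-translation app's key that may only read/write the "email" feed:
key = ApiKey("abc123", feeds={"email"}, actions={"read", "write"})
print(authorize(key, "email", "read"))    # allowed
print(authorize(key, "email", "delete"))  # not in scope
```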

Translation Manager Apps
There are hundreds of thousands of applications to talk to, and much of the code to translate each one and send/receive to Feedhaven will be the same or similar.  I would imagine (and hope) that platforms could exist that accept hundreds or thousands of snippets for conversion between different apps and schemas, and run them as needed. For this, rather than using a standard API token, these apps would have to register directly with each Feedhaven instance.  This should be standardized and done via API calls, when needed.

Future Development
Of course, the system still needs to be built. The idea is that the ecosystem would be fully modular, so each part could be built independently against well defined interfaces.  However, this will take a lot of trial and error as the interfaces change based on the workings of the components.  Updates will come at this blog.

Consumer Fishtraps


Fishtraps are devices that are easy to enter and difficult or impossible to exit.  Many products and web apps work the same way. One example is the monthly service that requires a short website form to sign up but a very long phone call to cancel.  Web services can do something similar by making it really simple to sign up and start adding information, but almost impossible to export your data once you use the service.

Companies spend a lot of time and effort optimizing their entrances.  There are several books on landing page optimization, SEO, and A/B testing for acquiring users.  There’s basically nothing on how to allow users to gracefully leave.

Easy web exiting capabilities for consumers would mean data (and permissions and API) interoperability, which would mean open standards.  This is one big reason why open standards are important.  But open web standards over the last 10 years have been very rough and frustrating, in large part because of companies’ reluctance to let their users easily leave their platforms.

The Facebook Example

To start using Facebook, there’s one clean landing page.  Once you get in, Facebook will give you lots of help finding friends.  It will then recommend you to people who “might” know you, to help make sure you get comfortable quickly.  The entire process is incredibly simple and easy. But say you get tired of Facebook, and want to try Google+.  You can close your Facebook account, but they will keep the information forever (allowing you to re-enter as easily as possible).  If you search through the settings, you may find that at the bottom of the “general account settings” page, there’s a way to “download a copy” of your Facebook data.  This brings you to another page, as shown below.

Facebook Archive Page

If you click “Start My Archive”, you actually won’t get that much data.  The full archive link is actually that small “expanded archive” link on the bottom.  From there you enter your password and wait. Several hours later you’ll get an email letting you know your archive is ready.  It comes as a folder with several HTML webpages of your pictures, friends, private messages, and profile information.  Not CSV, not JSON, but HTML data.

Then you realize that even that doesn’t have the status updates of you or your friends.  I don’t know how many people use Facebook primarily as a photo sharing service, but I’m going to go out on a limb and say that the main thing people do on Facebook is post to and read status updates.  So Facebook is essentially not giving users their most important data.

And even then, there’s really no way I know of to import any of that data into Google+ or Twitter or Diaspora.  To do that, these services would have to make a program that you would download, just to parse this HTML and then upload it to another service. The saddest part is that even this is considered better than average for web services.
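To make the absurdity concrete, here’s roughly what that downloadable “program” would have to do, sketched with the stdlib HTML parser.  The markup (a plain list of names) is a stand-in, since the archive’s real structure is undocumented and changes:

```python
import json
from html.parser import HTMLParser

class FriendListParser(HTMLParser):
    """Scrape list items (stand-ins for archive entries) out of HTML."""
    def __init__(self):
        super().__init__()
        self.in_item = False
        self.friends = []

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_item = True

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_item = False

    def handle_data(self, data):
        if self.in_item and data.strip():
            self.friends.append(data.strip())

def archive_html_to_json(html):
    """Convert an archive-style HTML page into machine-readable JSON."""
    parser = FriendListParser()
    parser.feed(html)
    return json.dumps({"friends": parser.friends})

print(archive_html_to_json("<ul><li>Ada</li><li>Grace</li></ul>"))
```

All of this work exists only because the export is HTML rather than JSON in the first place.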


The only existing protections against this seem to be the EU’s legal action and occasional consumer sentiment. Hypothetically, a browser extension could warn users about lock-in when they visit a website’s sign-up page.  If enough people used these or did some research, companies might take notice and respond accordingly.  We really just need people to care. In the meantime, just realize: when you are on these platforms, you are essentially in a fishtrap.  Leaving may technically be feasible, but it’s made to be near impossible.   For more info on “evil web design” I recommend checking out “Evil By Design” by Chris Nodder. The Sloth section goes into detail on what I am calling the fishtrap.

An Information Availability Graph

Recently I’ve been thinking about the economics of sharing personal information.   I get the sense that one reason social networks are so important is that they allow you to share much more information than you otherwise would.  In general, people seem to think that this is a good thing, and I think the following diagrams may help explain why.   By looking at information sharing in an economic way, it may be easier to understand the existing inefficiencies of the “market” and some theoretical reasons for the emergence of social networks and the group privacy settings within them.

A person has the most information about themselves available (typically). A much smaller subset of information is available to the public.

First, I’ll start with the basic plot.  Imagine that you ordered everyone in the world by how willing you are to share information with them, and then plotted how much you would be willing to share with each person.  You would be all the way on the left, deserving access to everything.  All the way on the right is the general public.  You may share a few pictures of yourself and an email address, but many people choose to hide a lot of information from everybody.

The Comfort Curve shows the maximum amount of information that a person would feel comfortable sharing with X amount of people. The “optimal” setting would be for this to be completely filled with information.

The result of your ordering and plotting would be a “comfort curve”, which shows exactly how much information availability you would be comfortable with.  We can generally assume that this is what is optimal for the world, given the constraint that you are OK with it.  That constraint is particularly important because it determines what you may be willing to share.  Theoretically we could make a model showing how much information should be available in a utilitarian scenario instead, but that would be much more complicated and not as relevant.

Different platforms allow different groups to access different amounts of your data.

The privacy controls of Facebook (and other social networking tools) allow them to promote sharing that would not happen on open websites.  This privacy enables more people to gain information (that you are fine with them getting) than would be possible otherwise.

Facebook added the concept of “types” of friends to help promote sharing. More information availability is possible with more groups to share with.

One thing a lot of people noticed was that as Facebook became more popular, people became less comfortable sharing on it.  Facebook went from being a tool for college friends to one of general acquaintances and colleagues.  Later, Facebook allowed users to split their friends into groups and share with each group separately, to help mitigate this.  Other social networks (Diaspora, Google+) did this first (and arguably better).

The limitation of groups and mediums limits the information that can become available.

However, unless you have n groups for n friends, there’s always going to be some information you won’t share with some people because of the limited number of groups.  This “inefficiency” is shown in red.
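With numbers attached, that red region is easy to compute.  Here comfort levels and group sharing levels are in arbitrary “amount of information” units, and the five-friend example is made up for illustration:

```python
def sharing_inefficiency(comfort_levels, group_levels):
    """Information left unshared because each person must be lumped into
    one of a few groups: everyone gets the highest group level that
    doesn't exceed their comfort, and the shortfall is summed."""
    total = 0
    for comfort in comfort_levels:
        shared = max((g for g in group_levels if g <= comfort), default=0)
        total += comfort - shared
    return total

# Five friends with comfort levels 10..2, and only two groups
# ("close" shares level 8, "acquaintances" shares level 2):
friends = [10, 9, 7, 3, 2]
print(sharing_inefficiency(friends, [8, 2]))   # some loss remains
# With n groups for n friends, the inefficiency disappears:
print(sharing_inefficiency(friends, friends))  # 0
```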

The ability and convenience to share information to specific groups also limits information availability, as shown.

In addition, other information is either inconvenient or impossible to share.  You probably don’t have access to a lot of information that exists about you.  You probably don’t share anywhere near all the information with your Facebook friends that you would feel comfortable with them having, because the cost of putting much of it on Facebook far exceeds the benefit your friends would get from it.  For example, you may not mind at all if your close friends could get your GPS location most of the time, or what book you are currently reading, but it may be difficult or inconvenient to push that data to Facebook.


In conclusion, I think this may present an interesting model to build on.  It would be very interesting to make this quantitative, though I realize that would be very complicated.  Similar models could be made for businesses, organizations, etc.  This also doesn’t discuss what other people want to see; only what you are willing to show them. One of my goals with Feedhaven is to minimize both of these inefficiencies. Hypothetically, all of the information available to you (no matter the app) could have one generalized retrieval system.  In addition, one could imagine systems that make it incredibly simple and elegant to split friends into tiny groups and edit the settings for each one.

What ‘Value’ Should Mean

I would begin this post by stating that ‘value’ is one of the most misinterpreted words of this era, except that all words, ever, have been interpreted abysmally poorly.

The fact is that 'values' are really important if you define them correctly.  I think that most versions of the term are a bit wrong, so I'll take my shot at what a "value" is.

Value: A general concept that is associated with ‘goodness’.

This is a minor variation on the dictionary definition, which is:

Noun. ”The regard that something is held to deserve; the importance or preciousness of something: “your support is of great value”.”

With either of these definitions it seems difficult to say that "values" have declined at all, contrary to what our stereotype of grumpy old people would argue.  Only that they have changed.

Interestingly, people have studied the science behind memes (or ideas), but to my knowledge, values haven't been looked at much.  (Even meme science is still a relatively esoteric field, for some reason.)  The way I look at it, values are constructed from ideas.  Ideas are something like the bricks to values' buildings.  The ideas are important, but often they are hard to understand without seeing the bigger picture.  I believe that, in comparison, values represent the bigger picture.  This means that instead of looking at the profile of ideas (the 'meme-space') of a group of people, a more useful map would pay attention to their values.

Values guide people.  Values shape the differing ideologies of Republicans and Democrats.  I imagine that changes in values map more directly onto the actions of organizations than changes in ideas or personal qualities do.  A fundamental structure in the background of social organizations is their variance in specific values.

For example, look at Bill Gates compared to Larry Ellison.  They may have similar ideas in their heads, but what's really important right now are their differences in values.  I'm not saying that Bill Gates is more altruistic, but that his greater valuation of specific types of semi-charitable actions has led him to pursue those things to a great extent.  Larry Ellison has a different set of values, and lives accordingly.  Obviously he does not value open source software too much; if he did, that could make a dramatic change in the direction of Oracle.

Values can have positive feedback.  If you lightly value baseball, you may join a baseball team, where you will be surrounded by positive memes and associations to go with baseball.  You value it more, and gradually fall down the rabbit hole, assigning more importance to what may otherwise be a somewhat trivial game than to anything else in life.  You might have similar skill sets (besides some new muscle memory in baseball), similar intelligence, etc., as other people, but your value system will put you in an incredibly different situation.

Hitler and Winston Churchill were probably quite similar in most regards.  They probably knew much of the same history.  They both got enjoyment from sweetness and fat, and both had to sleep every night.  However, their value systems were different in a few key areas, and combined with their incentive systems, they ended up as opposing heads of a war.

Perhaps there is much more variation in human values than human genes, and if so it may be more worthwhile to emphasize spreading your values more than your genes.

This is a really messy subject.  I’ll come back to it later.  Most of the interesting work may come in future posts, but I wanted to define this a bit.

The Internet Is “Real Life”

The idea that the physical world some of us normally live in is “real” in any objective or absolute sense is a bit out of date and silly.  It seems like there’s a decent chance we may be in a simulation, and likely a greater chance we’re “in” something we couldn’t understand or comprehend if we were to find evidence for it.


So I’m sick of people considering online living as absolutely inferior to “physical” living.

There are some valid reasons for how people sometimes use specific technologies to unintentionally alienate and permanently damage themselves.  But these are specific to our existing state, and definitely not to "all of technology" or the entire concept of the internet.  Intellectuals used to warn that people were absorbed in the artificial world of books, which was tearing them from "reality".  Now books are associated with goodness and intellect, so no one really cares any more.  The same has yet to happen to the internet.

Gamers sometimes facilitate this illusion by using the term “IRL” (In Real Life) to refer to the non-gaming world.  I think this may be a destructive term.  The world they are referring to may not be less real than the one they are in, only something very different.  

This does leave the awkward question of what exactly to call this plane of existence.  It seems strange to call it a reality, unless one considers 'reality' a loose term that expands to online universes and ones we don't know about.  Comic writers used the word "universe" to distinguish the settings of different stories, but that signifies them as non-combining entities, while the 'realities' I'm referring to often take place somewhat inside each other.  I suspect this is part of how the term IRL came about: not as a definitive term, but as something people will use until someone comes up with something better.

One awesome quote from Wikipedia:

"Some sociologists engaged in the study of the Internet have predicted that someday, a distinction between online and real-life worlds may seem 'quaint', noting that certain types of online activity, such as sexual intrigues, have already made a full transition to complete legitimacy and 'reality'." [2]

Note: A few weeks after writing this post I read more into “Away from Keyboard”, which I think is an appropriate term. However, I prefer “meatspace”.

Bringing All Things Down to Wikipedia’s Level

You've probably heard several stories of people posting false information on Wikipedia.  It seems to be the number one type of story I hear about Wikipedia.  These stories are used to explain why we shouldn't be allowed to cite Wikipedia or pay much attention to it as a primary source of knowledge.

The trouble with this thinking is that most people seem to pay less attention to the stories of poor academic writing, or news articles, or just about every other thing written in any language ever.

Wikipedia acknowledges that it has problems with accuracy.  Most other groups have similar problems, but many won’t say so, for obvious reasons.

My main fear is that universities and organizations ban Wikipedia references as a way to feign truth where little exists.  The stark reality, from my perspective, is that the vast majority of information all of us see is misrepresented, incomplete, false, and irrelevant.  People, journalists, and scientists often tell stories instead of facts, and even the facts are carefully chosen (though not always intentionally).  Bias is absolutely everywhere, and so far Wikipedia seems to be the best tool I know of to try to counter it.  It's a tiny attempt in comparison to the size of the problem, but it seems unfortunate to pin it as the enemy.

I think people like to use Wikipedia as a scapegoat and throw it under the bus.  But the problem isn't Wikipedia; the problem is the inherently biased nature of humanity and the lack of care or infrastructure to address it.

So I agree that Wikipedia is uncertain.  But it's probably much better than what's currently in my head about almost everything.  I think we need to admit that most of what we know is very flawed, and accept that all communicated and known information is imperfect, because it is.  Incredibly so.

The Lean Blog Post

It pains me to no end that people consider blog posts unique snapshots in time rather than pure information, similar to a personal wiki.  By this I mean that it is quite rare for bloggers to continually change their posts in significant ways long after publication.  Perhaps posts could have version numbers and histories similar to Wikipedia's.  Rather than being time-stamped, they would be somewhat timeless. Here's one version of how I imagine iteration could work on a blog.

The Lean Blog Post method. I used Cacoo to make this; sorry it's cut off. If anyone requests, I'll make a nicer version :)

It's really difficult to figure out what people will like in advance.  But viral hits happen really quickly, so few people expect to be able to change content mid-storm.  Thus they sometimes try to future-proof everything, polishing it in advance so that if and when it goes viral, it will be ready and optimized.

But this means they write much less! That model may be fine in theory for long-term traffic, but it definitely is not optimized for "viral storms".  It's a pity.

Digital Monopolies and Dictatorship

Facebook may be colored blue because Mark Zuckerberg happened to be red-green colorblind.  Now over 1 billion people see a blue page for 5+ hours a month because of it.  This is one small and rather innocent example of what happens when one person gets incredible power, which the web enables in ways never really seen before.

Red-Green colorblindness test

Facebook, with no red or green.

In The Facebook Effect, David Kirkpatrick mentions a few stories of Mark Zuckerberg analyzing user behavior over social networks.  What Mark realized, very early on, was that social networks had incredible economies of scale.  I don’t think the book mentioned Reed’s Law, but I do think it is particularly relevant. From Wikipedia:

Reed's law is the assertion of David P. Reed that the utility of large networks, particularly social networks, can scale exponentially with the size of the network.

Phrased differently, social networks are essentially incredibly monopolistic.  While there used to be competition at the start of the social network space, the game is now rigged and the king of the hill is miles up.  It's as if, for some strange reason, the most popular coffee shop in San Francisco in 2014 gained permission to replace 80% of all coffee shops worldwide.  Even with an initially fair playing field and no obvious unfair play, the results can be very scary.
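The difference between Reed's law and a pairwise (Metcalfe-style) model can be sketched numerically, under the usual simplifying assumption that every pair or subgroup contributes equally to utility:

```python
# Why Reed's law implies winner-take-all dynamics: the number of possible
# subgroups (~2^N) dwarfs the number of pairs (~N^2) as N grows.

def metcalfe(n):
    return n * (n - 1) // 2  # possible pairwise connections

def reed(n):
    return 2 ** n - n - 1    # possible subgroups of size >= 2

for n in (10, 20, 30):
    print(n, metcalfe(n), reed(n))
```

Under Reed's law, a network twice the size is vastly more than twice as useful, which makes catching up to an entrenched leader nearly impossible.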

Now Facebook has over one billion users, who spend an average of over 10 minutes a day on the service.  That adds up to a lot of time by a lot of people.  I personally spend more time with Facebook than I do doing my taxes or relating to my government.  In many ways, Facebook is more important than most world governments, especially given how universal it is.  It's far larger and more time-consuming than the vast majority of social structures now and throughout history.

Time Spent on Social Networking Sites

The 10 minutes per day on Facebook leaves out Facebook's influence on the rest of the web, where its social buttons and integrations are heavily tied to sites that I bet most people spend far more time on.  Facebook has a strange and powerful control over the future of the internet.  It asks, and millions of websites put up metadata tags, in a way that seems unparalleled by anything seen before.

Even after the IPO, Mark Zuckerberg still owns over 58% of all Facebook voting shares.  He effectively controls everything Facebook does.  While it is refreshing that he wasn't screwed over (as happens to many entrepreneurs), this level of control is frightening.  Thousands of people worked on the product for possibly tens of thousands of man-years, and millions of people have directly participated in making the platform so incredibly successful, but because of the strange way the first few years played out, Mark was able to keep over 58% of absolutely all control to himself.  It's true that he "only" owns about 20% of the equity, but does that matter in comparison?


A particularly popular political comic about Facebook

What does it mean when one person possesses control over millions like this? Online locations are not that much different from physical ones.  Facebook has become a fundamental technology relied on by hundreds of millions of people.  It doesn't seem much different or less important than the beginnings of the electric system or the telephone network.  And one guy owns it.  When AT&T, controlled by a small management team, had too much power, the government split it up.  It was then worth about half as much as Facebook is now (adjusted for inflation).

Mark seems to be a benevolent dictator.  He's probably much better than the alternative: a set of mediocre managers and board members with interests mixed between corporate profit and self-promotion, which is the standard in the industry.  People admit that Facebook would have been sold long ago without him, and I believe his vision for Facebook's future is more long-term than that of anyone else conventionally put in charge of technology companies.  But it does open up the question of what governance systems we think are just and reasonable.

One of many Facebook protests that no one really paid attention to or cared about.

When Rome changed to a dictatorship (essentially), it started out quite well.  Augustus was a smart guy and an effective benevolent dictator.  But future leaders were not so kind or intelligent, and everything fell apart shortly thereafter.  The one saving grace with regard to Facebook is that, because of technological disruption, it is quite possible that Facebook's empire will fall or become obsolete before it runs into the worst problems of intensely personal control.  But that's not certain, and it's definitely not a good long-term fail-safe.

The question of the power roles of founders is the fundamental question of political philosophy.  While academics debate this in ivory towers, coders are on the ground making important decisions now.  And they seem to be deciding on dictatorships and anarchies.  For all the talk of how we love democracy, there's surprisingly little of it being created.  We jam democracy down the throats of tiny countries in Africa but don't seem to demand it for the emerging gigantic digital communities and infrastructure many times their size.  There are some reasons for this, of course, but I believe this represents a serious case of cognitive dissonance that should be rigorously investigated and understood.  My honest guess is that we'll determine that we very much need to disperse power online, but also realize that democracy isn't as important or holy to us as we'd like to believe.

Mark doesn't dress like this, but he may possess more power than almost all who have.

Facebook even attempted democracy, but did it poorly.  As in, worse-than-China-or-Iran poorly (though, to be fair, not with nearly as dire consequences so far).  It put its new data use policy and statement of rights up for a vote with the general public.  About 350,000 users ended up voting, and six-sevenths of them voted "no".  Unfortunately, Facebook required votes from 300 million+ people for quorum by the closing date, which obviously did not happen, so Facebook went ahead and made the changes anyway.  If something similar happened in a third-world country, we'd consider it completely corrupt and juvenile.  Here it happens to us and we don't care to notice.  Read this and this for more info.

The bottom line is that millions of people now have no say in a community they are all contributing to.  In fact, they vehemently oppose every single update Facebook gives them, but the one ruler doesn't care.  Perhaps he shouldn't.  But we need to think carefully about what power structures and governments we are willing to accept and promote.  We also should reconsider what democracy really is and how important it should be.  Things will only get messier from here.

Can Anyone Predict the Long-term Future of the Internet? Part 1

For most of my life I wanted to be a hardware engineer (at first, an inventor) because that's where I thought the future would be.  My basic theory was that all the great historic inventors worked with hardware, so that would be where the future was going as well.  I went so far as to spend four years getting an incredibly painful engineering degree to make this happen.

Halfway through that degree I took a study-abroad trip to Singapore, and there I realized I was probably making the wrong decision.  I continued to rationalize it for the next two years to stay motivated, but I realize now I probably would have been better off studying computer science.

In Singapore I took long walks every night and thought about science and futurism (rather than doing homework, which we weren't required to turn in).  During this time I was working on Holono; not because I wanted to work on software, but because I really wanted to see it get made.  The incredibly safe atmosphere made it convenient to walk around at night, which is my best time for coming up with ideas.  I'd spend 2-4 hours a night thinking, and, partially because of Holono, paying a lot of attention to the internet.

Here I came up with a lot of my own ideas for what the internet could do.  I'll share them later on, but it amazed me what I came up with.

Can Anyone Predict the Long-term Future of the Internet?

Here's a trend I've noticed: when discussing internet technologies, people talk about the next 1-5 years.  When discussing hardware, they talk about the next 200.

Take a look at these magazine covers.  Or these, or these.  They typically feature hardware or hard sciences that won't be developed commercially for 5-50 years, or web products that are out right now.  I had a club in college called Future Tech where we would discuss future technology, and we typically did the same thing.  All the cool "future tech" was either hardware or A.I.

The weirdest thing is that the internet companies are the ones getting funded.  They’re the ones doing ridiculously well, and they’ve only been around for the last 20 years.

The main explanation I have is that no one really understands the internet.  Compared to nanobots and hydrogen energy, the internet feels very confusing and unpredictable.  Lectures on long-term advances in hardware can present cool-looking graphics with relatable visions.  But the few discussions I've seen on the long-term vision of the internet are far more esoteric.  The internet-utopian movement seems long dead and forgotten, as does the movement behind the semantic web, which, to my knowledge, only about 50 people understand.  Yet everyone knows about electric cars.

The lean startup grew from the internet.  Unlike hardware and scientific tech, where breakthroughs come through optimizations and numeric improvements (this car is more efficient, this drug is far more effective), recent internet companies often create completely new value propositions.

One common assumption is that because we can't see the future of the internet, there won't be one.  I have many incredibly intelligent friends who consider the internet fairly complete, with the major future advances to be expected in fields like medicine, robotics, etc.  Given the current discussion of the internet's future, this makes a lot of sense.  Yet I don't see any numeric proof of it.  Very few people predicted any of the existing web companies (Facebook, Google, Twitter, Groupon, etc.).  Internet business incubators are growing (they make up about all of the incubators that exist, really).  Unlike clean-tech or biotech VCs, internet VC firms are doing well and staying bullish.

Yet the real evidence for me has come from my ideas.  When I consider the future of hardware, I typically find myself coming up with ideas that would either help a small subset of the population or take forever to create and require technology that doesn't exist yet.  When I consider the future of the internet, I come up with things that could be made today and possibly help billions of people.

In addition, I'm quite sure that many of the next big fields will happen inside the internet.  In the same way that the electric industry was an unexpected creation using the byproducts of the steel industry, the computer industry an unexpected creation using the byproducts of the electric industry, and the internet industry came from the computer industry, I bet new industries will come from the internet.

Awards Imply False Certainty

In high school I got sick of most award systems.  Prizes were given out, like "best science student" or "2nd best musician", as if "best" really meant something.  I also questioned whether the negative externalities created by these systems offset the positive benefits for the recipients and their close friends, but I think the more pressing issue is the implied certainty.

Obligatory average photograph of standard trophies

What the hell does it mean to be the "best science student"?  Most of the time people have little idea what "best" actually refers to, but rather assume it falls within a space of shared understanding that the local community will agree upon.  This is not always true, especially when there are disagreements about the winner.  Should the "best" book be the one with more character emphasis, or more verbal eloquence?  In some cases it's obvious how to rank items according to this vague space of shared understanding, but in other cases it's apparent that the space isn't narrow at all.

Even this assumes a level of consistency of mindset and competency among the judges, which may be, and often is, highly questionable.  Just because a person is an "expert" in their field doesn't mean they are any good at it.  They may just be the best of the ignoramuses.

We can of course overcome this in part by assigning high uncertainty to our prizes and phrases.

"This award is given to the entry chosen by our team of three specific, somewhat randomly chosen but presumably competent (according to our school's current department standards) judges.  Using our current understanding of what likely makes up 'excellent', we have predicted that this entry is the most likely to be the best one.  If we had chosen a different group of judges from a similar pool and run this again, we predict there would be a 40% chance of this entry still winning 1st place."

Of course, few organizations will ever do this.  It hurts their often false sense of their own authority.  It makes the competition seem rather unimportant, even though it likely is.

My CounterPlan

In high school I designed a small representation of this: a "Seal of Perceived Excellence".  I finally purchased a custom stamp at the beginning of college.  I considered it accurate for my homework, when I would spend a lot of time doing a job I thought was decent, knowing I probably got some fundamental elements wrong and had little idea of how well I actually did.  While I now look back and find the design a bit childish, I still like the point and would be interested in extending it elsewhere. Ozzie Gooen Seal of Perceived Excellence

Why Obsessives May Hate Scheduling Meetings

Not Harvey Mudd, but close.

I remember at Harvey Mudd College I would often be really interested in very specific technical topics and go through great difficulties to find anyone to talk to about them, especially the professors.  Many of the engineering professors would be out during all of the convenient times for me (after 5pm Monday through Thursday, and most of Friday, because most seemed to be out consulting).  Sure, I could schedule meetings, but that didn't feel right.  I wanted to walk in and start talking.

It recently dawned on me why very quick discussion was so important to me.  It was because I knew I would lose interest quickly and wanted to get my thoughts out while they were with me.  I would often spend hours or days on end obsessing over an issue, and all of it was fresh in my memory right then.  Scheduling a meeting a week out would throw everything off. Not only was it inconvenient, but I had no way of knowing what I would be interested in the next week.

I notice this now.  I plan meetings about partnerships, then get really excited about product development and lose interest in the meetings.  It makes it really hard to hold discussions with other people in the area.  Therefore it seems very important to have a large variety of people in my general vicinity who can handle my somewhat unpredictable interests, while being close enough that I never have to schedule an appointment more than a few hours away.

Fortunately I'm living with 17 other smart people in Rise SF, which definitely is a plus.  But the location is restricting (Twin Peaks, SF) and I don't routinely interact with large groups of people outside the house. Maybe this is one reason why Building 20 and similar tightly packed intellectual networks work so well.  Just because people can meet each other doesn't mean it will be convenient; being able to stop in on a whim may be significantly more efficient and productive.  This way they can discuss what they are interested in, in the moment they are interested in it.

More information on Building 20 here, toward the end.  It’s a fascinating read.

Better Copy for $5 With Sentiment Analysis

From Mechanical Turk: "Whether you want to track sentiment of tweets for a new product release or monitor sentiment of posts in a customer forum, Mechanical Turk makes it easy to assess sentiment quickly so you can make informed decisions."

Sentiment Analysis on Mechanical Turk

The interface is relatively simple.  You pick an item (a slogan, for example), a question ("rate how much you like this slogan"), and then upload a .csv file with a list of all the possible slogans you want to test out.  You can choose anywhere between 10-20 ratings for each, which we've found gives a pretty consistent response.

We tried out this technique with several slogan ideas we were considering.  But before I show you the response, try guessing the order in which people preferred the following list:

- "Eyes Save Lives"
- "Charity Done Right"
- "Donate Without Paying"
- "We Give You Superpowers"
- "Help Things You Care About"
- "Join the Mission, Save the World"

Our first test included 15 titles, each of which had 10 votes.  At $0.02 per vote, that's $3.00 (plus a bit extra for using Turk), which is ridiculously cheap for market research. When all the votes come in, you can see an analysis that looks like this:
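If you'd rather crunch the downloaded results yourself, a quick script can rank the slogans by average rating.  The column names ("slogan", "rating") here are assumptions for illustration, not Mechanical Turk's actual export schema:

```python
# Rank slogans by average Turk rating, from a results CSV.
import csv
from collections import defaultdict
from io import StringIO

# In practice you'd open the downloaded results file instead of this sample.
results_csv = """slogan,rating
Eyes Save Lives,4
Eyes Save Lives,5
Charity Done Right,3
Charity Done Right,2
"""

votes = defaultdict(list)
for row in csv.DictReader(StringIO(results_csv)):
    votes[row["slogan"]].append(int(row["rating"]))

# Average rating per slogan, highest first.
averages = {slogan: sum(r) / len(r) for slogan, r in votes.items()}
for slogan, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{slogan}: {avg:.1f}")
```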

Making a Really Confusing Hackathon in 24 Hours (#ifwehadglass)

Google's #ifihadglass competition page

After Google's #ifihadglass competition came out, I thought it might be fun to make some entries, given our research at Harvey Mudd in wearable computing.  While I was put off by the $1500 price tag, Rahul pointed out that the devices would likely have an eBay value quite a bit over that, so I decided to go with it.  We came up with the idea of organizing a small hackathon to produce entries, and after I asked a few people on Facebook, we decided to make it happen.  However, this was on a Wednesday, and the only day fit for the hackathon was Saturday (the competition ended the week after), so we didn't have much time to make it happen.

The domain was available, so I decided to go with it.  Then I searched iStockPhoto for an image somewhat similar to the Google one, but with more people.  I found one, made a Twitter page, made a Facebook event, and finally started a website.  Taking significant inspiration from the Google Glass website, I put together some custom CSS to make my own similar version, as shown below.

Original #ifwehadglass hackathon website

The hardest part was the hosting, which I did on Thursday evening, finally going to bed at 6.  After a bit of searching I found Site44, which hosts websites straight out of Dropbox.  Awesome. I submitted it to Hacker News, hoping to make the front page.  Kind of did.

Hacker News (selected) reactions below.
Hacker News comments on #ifwehadglass

The last thing was to edit the website so that everyone could have their own page for their own entries.  We made subdirectories for each entrant who signed up on Facebook.  Originally the plan was that each person would only be able to access their own folder.  However, we quickly realized this was a limitation.  First, it was impossible to share a folder from an app, so we had to switch from Site44 to one of Joe's many servers.  Then we realized it's impossible to share a folder with a few people while sharing its subfolders with other people.  So instead we just gave everyone access to everything.  In this case it's fine, but you can imagine that it's not very scalable.  Come on, Dropbox!

Some attempts were made at getting someone from Google to come to the event with a Google Glass, but they were made in vain.  Fortunately, the event was fun for the people who made it.