Machine intelligence, part 2

This is part two of a two-part post—the first part is here.


Although there has been a lot of discussion about the dangers of machine intelligence recently, there hasn’t been much discussion about what we should try to do to mitigate the threat. 

Part of the reason is that many people are almost proud of how strongly they believe that the algorithms in their neurons will never be replicated in silicon, and so they don’t believe it’s a potential threat.  Another part of it is that figuring out what to do about it is just very hard, and the more one thinks about it the less possible it seems.  And another part is that superhuman machine intelligence (SMI) is probably still decades away [1], and we have very pressing problems now.

But we will face this threat at some point, and we have a lot of work to do before it gets here.  So here is a suggestion.

The US government, and all other governments, should regulate the development of SMI.  In an ideal world, regulation would slow down the bad guys and speed up the good guys—it seems like what happens with the first SMI to be developed will be very important.

Although my general belief is that technology is often over-regulated, I think some regulation is a good thing, and I’d hate to live in a world with no regulation at all.  And I think it’s definitely a good thing when the survival of humanity is in question.  (Incidentally, there is precedent for classification of privately-developed knowledge when it carries mass risk to human life.  SILEX is perhaps the best-known example.) 

To state the obvious, one of the biggest challenges is that the US has broken all trust with the tech community over the past couple of years.  We’d need a new agency to do this.

I am sure that Internet commentators will say that everything I’m about to propose is not nearly specific enough, which is definitely true.  I mean for this to be the beginning of a conversation, not the end of one.

The first serious dangers from SMI are likely to involve humans and SMI working together.  Regulation should address both the case of malevolent humans intentionally misusing machine intelligence to, for example, wreak havoc on worldwide financial markets or air traffic control systems, and the “accident” case of SMI being developed and then acting unpredictably.

Specifically, regulation should: 

1)   Provide a framework to observe progress.  This should happen in two ways.  The first is looking for places in the world where it seems like a group is either being aided by significant machine intelligence or training such an intelligence in some way. 

The second is observing companies working on SMI development.  The companies shouldn’t have to disclose how they’re doing what they’re doing (though when governments get serious about SMI they are likely to out-resource any private company), but periodically showing regulators their current capabilities seems like a smart idea.

2)   Given how disastrous a bug could be, require development safeguards to reduce the risk of the accident case.  For example, beyond a certain checkpoint, we could require that development happen only on airgapped computers, that self-improving software get human approval before each iteration, that certain parts of the software be subject to third-party code review, etc.  I’m not very optimistic that any of this will work for anything except accidental errors—humans will always be the weak link in the strategy (see the AI-in-a-box thought experiments).  But it at least feels worth trying.
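A minimal sketch of the per-iteration human-approval safeguard described above. The `propose_change`, `apply_change`, and `human_approves` hooks are entirely hypothetical placeholders, not any real system's API; this only illustrates the shape of the gate.

```python
# Sketch: a self-improvement loop that refuses to apply any change until a
# human reviewer explicitly approves it. All hooks are hypothetical.

def improvement_loop(propose_change, apply_change, human_approves, max_iters=10):
    """Run self-improvement only under explicit per-step human approval."""
    applied = []
    for i in range(max_iters):
        change = propose_change(i)
        if not human_approves(change):      # the hard gate: no approval, no change
            break                           # halt rather than proceed unreviewed
        apply_change(change)
        applied.append(change)
    return applied

# Example: a reviewer who approves only the first three proposed changes.
result = improvement_loop(
    propose_change=lambda i: f"patch-{i}",
    apply_change=lambda c: None,
    human_approves=lambda c: int(c.split("-")[1]) < 3,
)
print(result)  # ['patch-0', 'patch-1', 'patch-2']
```

The key design point is that the loop halts entirely when approval is withheld, rather than skipping ahead to the next proposal.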

Being able to do this—if it is possible at all—will require a huge amount of technical research and development that we should start intensive work on now.  This work is almost entirely separate from the work that’s happening today to get piecemeal machine intelligence to work.

To state the obvious but important point, it’s important to write the regulations in such a way that they provide protection while producing minimal drag on innovation (though there will be some unavoidable cost).

3)   Require that the first SMI developed have as part of its operating rules that a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s zeroth law), b) it should detect other SMI being developed but take no action beyond detection, and c) other than as required for part b, have no effect on the world.

We currently don’t know how to implement any of this, so here too, we need significant technical research and development that we should start now. 

4)   Provide lots of funding for R&D for groups that comply with all of this, especially for groups doing safety research.

5)   Provide a longer-term framework for how we figure out a safe and happy future for coexisting with SMI—the most optimistic version seems like some version of “the human/machine merge”.  We don’t have to figure this out today.

Regulation would have an effect on SMI development via financing—most venture firms and large technology companies don’t want to break major laws.  Most venture-backed startups and large companies would presumably comply with the regulations.

Although it’s possible that a lone wolf in a garage will be the one to figure SMI out, it seems more likely that it will be a group of very smart people with a lot of resources.  It also seems likely, at least given the current work I’m aware of, that it will involve US companies in some way (though, as I said above, I think every government in the world should enact similar regulations).

Some people worry that regulation will slow down progress in the US and ensure that SMI gets developed somewhere else first.  I don’t think a little bit of regulation is likely to overcome the huge head start and density of talent that US companies currently have.

There is an obvious upside case to SMI—it could solve a lot of the serious problems facing humanity—but in my opinion it is not the default case.  The other big upside case is that machine intelligence could help us figure out how to upload ourselves, and we could live forever in computers.  Or maybe in some way, we can make SMI be a descendant of humanity.

Generally, the arc of technology has been about reducing randomness and increasing our control over the world.  At some point in the next century, we are going to have the most randomness ever injected into the system. 

In politics, we usually fight over small differences.  These differences pale in comparison to the difference between humans and aliens, which is what SMI will effectively be like.  We should be able to come together and figure out a regulatory strategy quickly.

Thanks to Dario Amodei (especially Dario), Paul Buchheit, Matt Bush, Patrick Collison, Holden Karnofsky, Luke Muehlhauser, and Geoff Ralston for reading drafts of this and the previous post. 

[1] If you want to try to guess when, the two things I’d think about are computational power and algorithmic development.  For the former, assume there are about 100 billion neurons and 100 trillion synapses in a human brain, and the average neuron fires 5 times per second, and then think about how long it will take on the current computing trajectory to get a machine with enough memory and flops to simulate that.
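Plugging the footnote's numbers in: a minimal back-of-envelope sketch, assuming (crudely) one operation per synapse per spike and a few bytes of state per synapse. These are illustrative assumptions, not a serious model of the brain.

```python
# Back-of-envelope estimate of compute needed to simulate a human brain,
# using the numbers in the footnote. All figures are rough assumptions.

NEURONS = 100e9          # ~100 billion neurons
SYNAPSES = 100e12        # ~100 trillion synapses
FIRING_RATE_HZ = 5       # average firing rate per neuron, in spikes/sec

# Crude model: each synapse does one operation per spike it receives.
ops_per_second = SYNAPSES * FIRING_RATE_HZ          # = 5e14 ops/sec
print(f"~{ops_per_second:.0e} synaptic ops/sec")

# Memory: suppose each synapse's state fits in ~4 bytes.
memory_bytes = SYNAPSES * 4                         # = 4e14 bytes
print(f"~{memory_bytes / 1e12:.0f} TB of memory")
```

One can then compare those figures against the current computing trajectory to guess a date; the answer is very sensitive to the per-synapse assumptions.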

For the algorithms, neural networks and reinforcement learning have both performed better than I’ve expected for input and output respectively (e.g. captioning photos depicting complex scenes, beating humans at video games the software has never seen before with just the ability to look at the screen and access to the controls).  I am always surprised how unimpressed most people seem with these results.  Unsupervised learning has been a weaker point, and this is probably a critical part of replicating human intelligence.   But many researchers I’ve spoken to are optimistic about current work, and I have no reason to believe this is outside the scope of a Turing machine.

Machine intelligence, part 1

This is going to be a two-part post—one on why machine intelligence is something we should be afraid of, and one on what we should do about it.  If you’re already afraid of machine intelligence, you can skip this one and read the second post tomorrow—I was planning to only write part 2, but when I asked a few people to read drafts it became clear I needed part 1.


Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity.  There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could.  Also, most of these other big threats are already widely feared.

It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it’s still many, many years away.  But it’s also extremely hard to believe that it isn’t very likely that it will happen at some point.

SMI does not have to be the inherently evil sci-fi version to kill us all.  A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out.  Certain goals, like self-preservation, could clearly benefit from no humans.  We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.

(Incidentally, Nick Bostrom’s excellent book “Superintelligence” is the best thing I’ve seen on this topic.  It is well worth a read.)

Most machine intelligence development involves a “fitness function”—something the program tries to optimize.  At some point, someone will probably try to give a program the fitness function of “survive and reproduce”.  Even if not, it will likely be a useful subgoal of many other fitness functions.  It worked well for biological life.  Unfortunately for us, one thing I learned when I was a student in the Stanford AI lab is that programs often achieve their fitness function in unpredicted ways.

Evolution will continue forward, and if humans are no longer the most-fit species, we may go away.  In some sense, this is the system working as designed.  But as a human programmed to survive and reproduce, I feel we should fight it.

How can we survive the development of SMI?  It may not be possible.  One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable.

It’s very hard to know how close we are to machine intelligence surpassing human intelligence.  Progression of machine intelligence is a double exponential function; human-written programs and computing power are getting better at an exponential rate, and self-learning/self-improving software will improve itself at an exponential rate.  Development progress may look relatively slow and then all of a sudden go vertical—things could get out of control very quickly (it also may be more gradual and we may barely perceive it happening).
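A toy numerical model of that shape: capability compounds exponentially, and once software improves itself, the improvement rate itself compounds too. The numbers below are arbitrary; the only point is the slow-then-vertical curve.

```python
# Toy illustration of compounding self-improvement. Capability grows
# exponentially, and the growth rate itself also grows each step.

capability = 1.0
rate = 0.01            # 1% improvement per step initially

history = []
for step in range(100):
    capability *= (1 + rate)   # exponential growth in capability
    rate *= 1.05               # self-improvement: the rate itself compounds
    history.append(capability)

# Progress looks nearly flat for a long stretch, then goes vertical.
print(f"after 26 steps: {history[25]:.2f}x")
print(f"after 100 steps: {history[-1]:.2e}x")
```

Under these (arbitrary) parameters, most of the total growth happens in the last few steps, which is the "all of a sudden go vertical" dynamic described above.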

As mentioned earlier, it is probably still somewhat far away, especially in its ability to build killer robots with no help at all from humans.  But recursive self-improvement is a powerful force, and so it’s difficult to have strong opinions about machine intelligence being ten or one hundred years away.

We also have a bad habit of changing the definition of machine intelligence when a program gets really good to claim that the problem wasn’t really that hard in the first place (chess, Jeopardy, self-driving cars, etc.).  This makes it seem like we aren’t making any progress towards it.  Admittedly, narrow machine intelligence is very different than general-purpose machine intelligence, but I still think this is a potential blindspot.

It’s hard to look at the rate of improvement in the last 40 years and think that 40 years from now we’re not going to be somewhere crazy.  40 years ago we had Pong.  Today we have virtual reality so advanced that it’s difficult to be sure if it’s virtual or real, and computers that can beat humans in most games.

Though, to be fair, in the last 40 years we have made little progress on the parts of machine intelligence that seem really hard—learning, creativity, etc.  Basic search with a lot of compute power has just worked better than expected. 

One additional reason that progress towards SMI is difficult to quantify is that emergent behavior is always a challenge for intuition.  The above common criticism of current machine intelligence—that no one has produced anything close to human creativity, and that this is somehow inextricably linked with any sort of real intelligence—causes a lot of smart people to think that SMI must be very far away.

But it’s very possible that creativity and what we think of as human intelligence are just an emergent property of a small number of algorithms operating with a lot of compute power.  (In fact, many respected neocortex researchers believe there is effectively one algorithm for all intelligence.  I distinctly remember my undergrad advisor saying the reason he was excited about machine intelligence again was that brain research made it seem possible there was only one algorithm computer scientists had to figure out.)

Because we don’t understand how human intelligence works in any meaningful way, it’s difficult to make strong statements about how close or far away from emulating it we really are.  We could be completely off track, or we could be one algorithm away.

Human brains don’t look all that different from chimp brains, and yet somehow produce wildly different capabilities.  We decry current machine intelligence as cheap tricks, but perhaps our own intelligence is just the emergent combination of a bunch of cheap tricks.

Many people seem to believe that SMI would be very dangerous if it were developed, but think that it’s either never going to happen or definitely very far off.   This is sloppy, dangerous thinking.

[1] I prefer calling it "machine intelligence" and not "artificial intelligence" because artificial seems to imply it's not real or not very good.  When it gets developed, there will be nothing artificial about it.

Startup advice, briefly

This is a very short summary with lots left out—here is the long version:

You should start with an idea, not a company.  When it’s just an idea or project, the stakes are lower and you’re more willing to entertain outlandish-sounding but potentially huge ideas.  The best way to start a company is to build interesting projects. 

On the other hand, when you have a “company”, you feel pressure to commit to an idea too quickly.  If it’s just a project, you can spend more time finding something great to work on, which is important—if the startup really works, you’ll probably be working on it for a very long time.

Have at least one technical founder on the team (i.e. someone who can build whatever the company is going to build).

In general, prefer a fast-growing market to a large but slow-growing one, especially if you have conviction the fast-growing market is going to be important but others dismiss it as unimportant.

The best startup ideas are the ones that seem like bad ideas but are good ideas.

Make something people want.  You can screw up most other things if you get this right; if you don’t, nothing else will save you.

Once you’ve shifted from “interesting project” to “company” mode, be decisive and act quickly.  Instead of thinking about making a decision over the course of a week, think about making it in an hour, and getting it done in the next hour.

Become formidable.  Also become tough—the road ahead is going to be painful and make you doubt yourself many, many times.

Figure out a way to get your product in front of users.  Start manually (read this:

Listen to what your users tell you, improve your product, and then listen again.  Keep doing this until you’ve made something some users love (one of the many brilliant Paul Buchheit observations is that it’s better to build something a small number of users love than something a lot of users like).  Don’t deceive yourself about whether or not your users actually love your product.

Keep your burn rate very low until you’re sure you’ve built something people love.  The easiest way to do this is hire slowly.

Have a strategy.  Most people don’t.  Occasionally take a little bit of time to think about how you’re executing against your strategy.  Specifically, remember that someday you need to have a monopoly (in the Peter Thiel sense).

Read this before you raise money:

Learn to ask for what you want. 

Ignore what the press says about you, especially if it’s complimentary.

Generate revenue early in the life of your company.

Hire the best people you can.  However much time you’re spending on this, it’s probably not enough.  Give a lot of equity to your employees, and have very high expectations.  Smart, effective people are critical to success.  Read this:

Fire people quickly when you make hiring mistakes.

Don’t work with people you don’t have a good feeling about—this goes for employees (and cofounders), partners, investors, etc.

Figure out a way to get users at scale (i.e. bite the bullet and learn how sales and marketing work).  Incidentally, while it is currently in fashion, spending more than the lifetime value of your users to acquire them is not an acceptable strategy.
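To make the lifetime-value point concrete, here is a quick unit-economics check. All the numbers are hypothetical, and this is the standard back-of-envelope LTV approximation rather than anything more sophisticated.

```python
# Quick unit-economics check: acquiring a user should not cost more than
# that user's lifetime value. All numbers below are hypothetical.

def lifetime_value(monthly_revenue, gross_margin, monthly_churn):
    """Simple LTV: margin-adjusted revenue over an average customer
    lifetime of 1/churn months (a common rough approximation)."""
    avg_lifetime_months = 1 / monthly_churn
    return monthly_revenue * gross_margin * avg_lifetime_months

ltv = lifetime_value(monthly_revenue=30, gross_margin=0.7, monthly_churn=0.05)
cac = 500  # hypothetical cost to acquire one customer

print(f"LTV = ${ltv:.0f}, CAC = ${cac}")  # LTV = $420, CAC = $500
print("sustainable" if ltv > cac else "not sustainable")  # not sustainable
```

In this hypothetical case, each user costs more to acquire than they will ever return, so growth spending is destroying value no matter how fast the user count rises.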

Obsess about your growth rate, and never stop.   The company will build what the CEO measures.  If you ever catch yourself saying “we’re not really focused on growth right now”, think very carefully about the possibility you’re focused on the wrong thing.  Also, don’t let yourself be deceived by vanity metrics.

Eventually, the company needs to evolve to become a mission that everyone, but especially the founders, is exceptionally dedicated to.  The “missionaries vs. mercenaries” soundbite is overused but true.

Don’t waste your time on stuff that doesn’t matter (i.e. things other than building your product, talking to your users, growing, etc.).  In general, avoid the kind of stuff that might be in a movie about running a startup—meeting with lawyers and accountants, going to lots of conferences, grabbing coffee with people, sitting in lots of meetings, etc.  Become a Delaware C Corp (use Clerky or any well-known Silicon Valley law firm) and then get back to work on your product.

Focus intensely on the things that do matter.  Every day, figure out what the 2 or 3 most important things for you to do are.  Do those and ignore other distractions.  Be a relentless execution machine.

Do what it takes and don’t make up excuses.

Learn to manage people.  Make sure your employees are happy.  Don’t ignore this.

In addition to building a great product, if you want to be really successful, you also have to build a great company.  So think a lot about your culture.

Don’t underestimate the importance of personal connections.

Ignore acquisition interest until you are sure you want to sell.  Don’t “check the market”.  There is an alternate universe somewhere full of companies that would have been great if they could have just avoided this one mistake.  Unfortunately, in this universe, they’re all dead.

Work really hard.  Everyone wants a secret to success other than this; if it exists, I haven’t found it yet.

Keep doing this for 10 years.

The Software Revolution

In human history, there have been three great technological revolutions and many smaller ones.  The three great ones are the agricultural revolution, the industrial revolution, and the one we are now in the middle of—the software revolution. [1]

The great technological revolutions have affected what most people do every day and how society is structured.  The previous one, the industrial revolution, created lots of jobs because the new technology required huge numbers of humans to run it.  But this is not the normal course of technology; it was an anomaly in that sense.  And it makes people think, perhaps subconsciously, that technological revolutions are always good for most people’s personal economic status.

It appears that the software revolution will do what technology usually does—create wealth but destroy jobs.  Of course, we will probably find new things to do to satisfy limitless human demand.  But we should stop pretending that the software revolution, by itself, is going to be good for median wages.

Technology provides leverage on ability and luck, and in the process concentrates wealth and drives inequality.  I think that drastic wealth inequality is likely to be one of the biggest social problems of the next 20 years. [2] We can—and we will—redistribute wealth, but it still doesn’t solve the real problem of people needing something fulfilling to do.

Trying to hold on to worthless jobs is a terrible but popular idea.  Trying to find new jobs for billions of people is a good idea but obviously very hard because whatever the new jobs are, they will probably be so fundamentally different from anything that exists today that meaningful planning is almost impossible.  But the current strategy—“let’s just pretend that Travis is kidding when he talks about self-driving cars and that Uber really is going to create millions of jobs forever”—is not the right answer.

The second major challenge of the software revolution is the concentration of power in small groups.  This also happens with most technological revolutions, but the last truly terrifying technology (the atomic bomb) taught us bad lessons in a similar way to the industrial revolution and job growth.

It is hard to make an atomic bomb not because the knowledge is restricted (though it is—if I, hypothetically, knew how to make an atomic bomb, it would be tremendously illegal for me to say anything about it) but because it takes huge amounts of energy to enrich uranium.  One effectively needs the resources of nations to do it. [3]

Again, this is not the normal course for technology—it was an idiosyncrasy of nuclear development.  The software revolution is likely to do what technology usually does, and make more power available to small groups.

Two of the biggest risks I see emerging from the software revolution—AI and synthetic biology—may put tremendous capability to cause harm in the hands of small groups, or even individuals.  It is probably already possible to design and produce a terrible disease in a small lab; development of an AI that could end human life may only require a few hundred people in an office building anywhere in the world, with no equipment other than laptops. 

The new existential threats won’t require the resources of nations to produce.  A number of things that used to take the resources of nations—building a rocket, for example—are now doable by companies, at least partially enabled by software.  But a rocket can destroy anything on earth.

What can we do?  We can’t make the knowledge of these things illegal and hope it will work.  We can’t try to stop technological progress.

I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than the edge that bad actors get.  If we can synthesize new diseases, maybe we can synthesize vaccines.  If we can make a bad AI, maybe we can make a good AI that stops the bad one.

The current strategy is badly misguided.  It’s not going to be like the atomic bomb this time around, and the sooner we stop pretending otherwise, the better off we’ll be.  The fact that we don’t have serious efforts underway to combat threats from synthetic biology and AI development is astonishing.

To be clear, I’m a fan of the software revolution and I feel fortunate I was born when I was.  But I worry we learned the wrong lessons from recent examples, and these two issues—huge-scale destruction of jobs, and concentration of huge power—are getting lost.



[1] A lot of the smaller ones have been very important, like the hand axe (incidentally, the hand axe is the longest-serving piece of technology in human history), writing, cannons, the internal combustion engine, atomic bombs, fishing (many people believe that fishing is what allowed us to develop the brains that we have now), and many more. 

[2] It is true that life is better in an absolute sense than it was a hundred years ago even for very poor people.  Most of the stuff that people defending current levels of wealth inequality say is also true—highly paid people do indeed make inexpensive services for poor people.

However, ignoring quality of life relative to other people alive today feels like it ignores what makes us human.  I think it’s a good thing when some people make thousands of times as much money as what other people make, but I also don’t resent paying my taxes and think we should do much more to help people that are actually poor.  The social safety net will have to trend up with the development of technology. 

[3] Or at least, one used to:


The most important story of 2014 that most people ignored was the Chinese economy overtaking the US economy.  (This is using the purchasing power parity metric, which incorporates differences in the price of goods, but the Chinese economy will overtake on other metrics soon enough.)

This shouldn’t have caught anyone by surprise; US growth has stagnated while Chinese growth has continued to do pretty well (the chart below shows inflation-adjusted economic growth rates in China vs. the US since 1978).  The US has become less competitive globally—for example, other countries have surpassed our education system, and we have structural and demographic challenges that other countries don’t have and that create significant expenses.

The historical track record of the largest economy being overtaken by another is not good.  Sometimes it’s violent.  (For example, Germany and the UK in 1914.  Neither was the largest in the world, but the world was less globalized; they were the largest in the region and very focused on each other.)  Sometimes it’s a long, slow slide of denial into stagnation and decreasing relevance.

For most people born in the US in the last 70 or so years, it’s almost unthinkable that the US would not be the world’s superpower.  But on current trajectories, we’re about to find out what that looks like. [1] The current plan seems to be something like “managed decline”, or gradual acceptance of reduced importance.

The US gets huge advantages from being the world’s largest economy (as mentioned earlier, other countries wanting this sometimes leads to major conflict).  For example, our currency is the most important currency in the world, and we can do things like run a trillion dollar deficit without anyone getting too concerned.  People generally have to buy energy (oil) in our currency, which adds a great deal of support (though we’ve already seen the very beginnings of the PetroYuan).  Also, we get to have the world’s most powerful army.

The current business model of the US requires the dollar to be the world reserve currency, though the Chinese currency is rapidly becoming a viable alternative. At some point, China will relax its currency controls, allowing trade and offshore investment to grow rapidly.  The Renminbi (RMB) will probably rise in value (though some people think the opposite will happen in the short term) and China will become an important financial center.

The most critical question, speaking as a hopeful US citizen, is whether or not it’s possible for one country to remain as powerful as another with a quarter as many people.  The US has never been the world’s largest country by population, but it has been the largest economy, and so it’s clearly possible for at least some period of time.

How has the US done this?  One important way has been our excellence in innovation and developing new technology.  A remarkable number of the major technological developments—far in excess of our share of the world’s population—have come from the US in the last century.

The secret to this is not genetics or something in our drinking water.  We’ve had an environment that encourages investment, welcomes immigrants, rewards risk-taking, hard work [2], and radical thinking, and minimizes impediments to doing new things.  Unfortunately we’ve moved somewhat away from this.  Our best hope, by far, is to find a way to return to it quickly.   Although the changes required to become more competitive will likely be painful, and probably even produce a short-term economic headwind, they are critical to make.

The US should try very hard to find a way to grow faster.  [3] I’ve written about this in the past.  Even if there weren’t a competitor in the picture, countries historically don’t do well with declining growth, and so it’s in our interest to try to continue to keep growth up.  It’s our standard advice to startups, and it works for organizations at all levels.  Things are either growing or dying.

The other thing to try to do is figure out a way to coexist with China.  Absent some major surprise, China and the US are going to be the world superpowers for some time.  The world is now so interconnected that totally separate governments playing by different rules are not going to work (for example, if people can manufacture goods in China without regard for environmental regulations, they’ll be cheaper than US goods, but they’ll harm the environment for people in the US eventually).  Instead of the normal historical path of increasing two-way animosity until it erupts in conflict, maybe we can find a way to both work on what we’re really good at and have governments that at least partially cooperate.

Thanks to Patrick Collison, Matt Danzeisen (especially to Matt, who provided major help), Paul Graham, and Alfred Lin for reading drafts of this.

[1] As a related sidenote, “exceptionalism” in the US has become almost a bad word—it’s bad to talk about an individual being exceptionally good, and certainly bad to talk about the country as a whole being exceptional.  On my last visit to China, the contrast here was remarkable—people loved talking about how amazing certain entrepreneurs were, and the work the country as a whole was doing to make itself the best in the world.

The other stark contrast is how much harder people in China seem to work than people here, and how working hard is considered a good thing, not a bad thing.

[2] One thing that I’ve found puzzling over the last ten or so years is the anger directed towards people who choose to work hard.  This is almost never from actually poor people who work two minimum wage jobs (who work harder than all the rest of us, pretty much) but from middle class people.  It’s often somewhat subtle—“It’s so stupid that these people play the startup lottery.  What idiots.  They should just consult.” or “Startups need to stop glorifying young workers that can work all day and night”—but the message is clear.  My explanation is that this is simply what happens in a low-growth, zero-sum environment.

[3] A significant and relevant headwind for US growth is that current US policy often encourages investment and job growth outside the US; domestic companies can borrow at low rates because of 0% Fed lending, build factories (and create jobs) in other countries, and then hold/reinvest those profits offshore rather than pay high taxes repatriating their funds.


I recently got to be a guest at a FarmLogs board meeting.  I was struck by how much of an impact the company was having on the world, and how just a couple of years ago it seemed like they were doing something so small.

FarmLogs is a great example of a company that started out with a seemingly boring idea--a way for farmers to store their data in the cloud--and has developed into a way for farmers to run their entire farm, gather all the data about a particular piece of farmland, and optimize production.  The company is now used by 20% of US row crop farms, and those farms are all more productive than they were without the software.

Eventually, FarmLogs can become the operating system for efficient, semi-autonomous farms.

Technology is about doing more with less.  This is important in a lot of areas, but few as important as natural resources.

We need technology like this to meet the resource challenges that the planet will continue to face as the population grows and standards of living continue to increase.  In fact, we need another hundred companies like this. 

The good news is that it’s doable.  FarmLogs is only three years old (YC Winter 2012).  The company has used less than $3 million of capital so far, and with it they have already helped farmers gain hundreds of millions of dollars in efficiency.  The software revolution is making it possible to create world-changing companies relatively quickly and with relatively modest resources.

And importantly, they started out doing something that any two programmers (with domain expertise in their market) could have done.

Policy for Growth and Innovation

I get asked fairly often now by people in the US government what policy changes I would make to “fix innovation and drive economic growth” [1][2] (for some reason, it’s almost always that exact phrase).

Innovation is obviously important—the US has long been the world’s best exporter of new ideas, and it’d be disastrously bad if that were no longer the case.  Also, I don’t think our society will work very well without economic growth, and innovation is what will drive growth from where we are now.  While it’s true that people are better off in an absolute sense than they were a few hundred years ago, most of us are more sensitive to our wealth increasing over short time-scales (i.e. life getting better every year) than how fortunate we are relative to people who lived a long time ago. [3]  Democracy works well in a society with lots of growth, but not in a no-growth (i.e. zero-sum) society.  Very low growth and a democracy are a very bad combination.

So here is my answer:

1) Fix education.  We have to fix education in this country.  Yes, it will take a long time to have an effect on output, but that’s not an excuse for continuing not to take serious action.  We currently spend about 4% of the federal budget on education.  The problems with education are well-documented—teachers make far too little, it’s too difficult to fire bad teachers, some cultures don’t value education, etc.  Many of these are easy to fix—pay teachers a lot more in exchange for a change in the tenure rules, for example—and some issues (like cultural ones) are probably going to be very difficult to fix.

Without good education (including continuing education and re-training for older people), we will never have equality of opportunity.  And we will never have enough innovators.

I think it’s most important to fix the broken parts of the current system, but we also need to decide to spend more money on education.

One bright spot is that most of the world now has Internet access at least some of the time and there are truly remarkable resources available online to learn pretty much anything anyone could want.  It amazes me that I can become relatively proficient on any subject I want, for free, from a $50 smartphone nearly anywhere in the world.  There is probably a way to combine online education with real-world mentorship, activity, and group interaction in a way that makes the cost of quality education far lower than it is today.

Spending money on education, unlike most government spending, actually has an ROI—every dollar we spend on it ought to return more dollars in the future.  This is the sort of budget item that people should be able to agree on.  As I wrote in the above-linked post, we will likely need both entitlement spending reductions and revenue increases to make the budget work.

2) Invest in basic research and development.  Government spending on R&D keeps decreasing.  There are certain things that companies are really good at doing; basic research is usually not one of them.  If the government wants more innovation, then it should stop cutting the amount of money it spends producing it.  I think current policy is off by something like an order of magnitude here.

Like education, this is in the category of an “investment”, not an “expense”.

3) Reform immigration.  If talented people want to come start companies or develop new technologies in the US, we should let them.  Turning them away—willfully sending promising new companies to other countries—seems terribly shortsighted.  This will have an immediate positive effect on innovation and GDP growth.  Aside from the obvious and well-documented economic benefits (for high-skilled workers especially, but for immigration more generally), it’s a matter of justice—I don’t think I deserve special rights because I happened to be born here, and I think it’s unfair to discriminate on country of birth.  Other than Native Americans, all of our families are fairly recent immigrants.

We need reasonable limits, of course, but our current limits are not the answer.  On our current path, in the not-very-distant future, we will be begging the people we are currently turning away to come and create value in the US. 

Many people say we don’t need immigration reform because people can work remotely.  While remote working works well for a lot of companies, and I expect it to continue to work better as time goes on, it doesn’t work well for all companies (for example, it would not work for YC), and it shouldn’t be the only option.  It also sends money and competency out of our economy.  The common answer of “let the US companies open overseas offices” always sounds to me like “further slow US economic growth and long-term viability”.

Companies in the Bay Area already largely hire from elsewhere in the country—companies are desperate for talented people, and there aren’t enough here to go around.  Even with this, tech wages keep going up, and good people who already live in the Bay Area keep getting jobs.

4) Cheaper housing.  This is not a problem everywhere in the US, but it is in a lot of places.  The cost of housing in SF and the Bay Area in general is horrific.  There just isn’t enough housing here, and so it’s really expensive (obviously, many people make the rational decision not to live here).  Expensive housing drives up the cost of everything else, and a lower cost of living gives people more flexibility (which will hopefully lead to more innovation) and more disposable income (which will hopefully stimulate economic growth).

Homeowners generally vote and want to preserve their property value; non-homeowners generally vote less often.  So efforts to build more housing, or make housing less attractive as an investment, usually fail when they go to a vote.  For example, a recent proposal to allow more house building in SF failed with an atrociously low voter turnout.

In general, I think policy should discourage speculation on real estate and encourage housing to be as inexpensive as possible.  I think most people would do better owning assets that drive growth anyway.

In the Bay Area specifically, I think policy should target an aggressive increase in the housing supply in the next 5 years and undo many of the regulations currently preventing this.

5) Reduce regulation.  I think some regulation is a good thing.  In certain areas (like development of AI) I’d like to see a lot more of it.  But I think it often goes too far—for example, an average of $2.5B and 10 years to bring a new drug to market strikes me as problematic.

Many of the companies I know that are innovating in the physical world struggle with regulatory challenges.  And they’re starting to leave.  The biggest problem, usually, is that they just can’t get clarity out of the massive and slow government bureaucracy.  In 2014, 4 companies that I work with chose to at least partially leave the US for more friendly regulatory environments (3 for regulatory violation or uncertainty, and 1 for concern about export restrictions).  Many more kept their headquarters here but chose somewhere else as their initial market (including, for example, nearly all medical device companies, but also drone companies, nuclear fission companies, pharmaceutical companies, bitcoin companies, etc etc etc).

This is not good.  We live in a global society now, and not all countries are as backward about immigration as we are.  If our best and brightest want to go start companies elsewhere, they will do so. [4] 

I think one interesting way to solve this would be with incentives.  Right now, as I understand it, regulators mostly get “career advancement” by saying “no” to things.  Though it would take a lot of careful thought, it might produce good results if regulators were compensated with some version of equity in what they regulate.

Again, I think some regulation is definitely good.  But the current situation is stifling innovation.

6) Make being a public company not be so terrible.  This point is related to the one above.  I’d hate to run a public company.  Public companies end up with a bunch of short-term stockholders who simultaneously criticize you for missing earnings by a penny this quarter and not making enough long-term investments.

Most companies stop innovating when they go public, because they need very predictable revenue and expenses.

In an ideal world, CEOs would ignore this sort of pressure and make long-term bets.  But the inanity on CNBC is distracting in all sorts of ways—for example, it’s always surprising to me how much employees react to what they hear about their company on the news.

I’ve seen CEOs do the wrong thing because they were scared of how “the market might react” if they do the right thing.  It’s a rare CEO (such as Zuckerberg, Page, Cook, and Bezos) who can stand up to public market investors and make the sort of bets that will produce long term innovation and growth at the expense of short term profits.

There are a lot of changes I’d make to improve the situation.  One easy one is that I’d pay public company directors in all stock and not let them sell it for 5 years.  That will produce a focus on real growth (in the current situation, making $200k a year for four days of work leads to directors focusing on preserving their own jobs).

Another is that I’d encourage exchanges that don’t trade every millisecond.  Liquidity is a good thing; I personally don’t see the value in the level of “fluidity” that we have.  It’s distracting to the companies and sucks up an enormous amount of human attention (one of the things I like about investing in startups is that I only have to think about the price once every 18 months or so).  If I had to take a company public, I’d love to only have my shares priced and traded once every month or quarter.

A third change would be something to incent people to hold shares for long periods of time.  One way to do this would be to charge a decent-sized fee on every share traded (and have the fee go to the company); another would be a graduated tax rate that goes from something like 80% for day trades down to 10% for shares held for 5 years.
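To make the graduated-rate idea concrete, here is a minimal sketch.  The 80% and 10% endpoints come from the text; the linear schedule between them, and the function name, are my own assumptions for illustration—an actual proposal could just as well use discrete brackets.

```python
def graduated_tax_rate(days_held: int) -> float:
    """Return a hypothetical capital gains tax rate (as a fraction)
    for a given holding period, falling linearly from 80% for a
    same-day trade to 10% for shares held five years or more."""
    max_rate, min_rate = 0.80, 0.10
    full_term_days = 5 * 365

    if days_held <= 0:
        return max_rate
    if days_held >= full_term_days:
        return min_rate

    # Rate declines linearly with the fraction of the five years held.
    fraction_held = days_held / full_term_days
    return max_rate - (max_rate - min_rate) * fraction_held

# A day trade is taxed at 80%; a one-year hold at 66%; five years at 10%.
print(graduated_tax_rate(0))        # 0.8
print(graduated_tax_rate(365))      # 0.66
print(graduated_tax_rate(5 * 365))  # 0.1
```

The steep early slope is the point: flipping a position in days costs most of the gain, while patient holders keep nearly all of it.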

Another thing the government could do is just make it much easier to stay private for a long time, though this would have undesirable side effects (especially around increasing wealth inequality).

7) Target a real GDP growth rate.  You build what you measure.  If the government wants more growth, set a target and focus everyone on hitting it.

GDP is not a perfect metric, especially as the software revolution drives the cost of goods lower and lower.  What we really need is a measurement of “total quality of life”.  This will be tough to figure out, but it’s probably worth the time to invent some framework and then measure ourselves against it.

There are obviously a lot of other policy changes I think we should make, but on the topics of growth and innovation, these 7 points are what I think are most important.  And I’m confident that if we don’t take action here, we are going to regret it.

[1] Incidentally, innovation does not always drive job growth, even when it drives GDP growth.  The industrial revolution was something of an anomaly in this regard.  I’ll write more about what this means later.

[2] There are a bunch of other policy changes I would make—for example, I’d increase the minimum wage to something like $15 an hour—that are important and somewhat related to this goal but not directly related enough to include here.

[3] While access to knowledge, healthcare, food, water, etc. for people in developed countries is far better now than any time in history, extreme inequality still feels unfair (I’ll save my social rant for another time, but I think the level of extreme poverty that still exists in the world is absolutely atrocious.  Traveling around the developing world is an incredible wake-up call.) 

[4] Sometimes the government people ask, with nervous laughter, “Would you ever move YC out of the US?”  I really like it here and I sure hope we don’t, but never say never.

A new team at reddit

Last week, Yishan Wong resigned from reddit.

The reason was a disagreement with the board about a new office (location and amount of money to spend on a lease).  To be clear, though, we didn’t ask or suggest that he resign—he decided to when we didn’t approve the new office plan.

We wish him the best and we’re thankful for the work he’s done to grow reddit more than 5x.

I am delighted to announce the new team we have in place.  Ellen Pao will be stepping up to be interim CEO.  Because of her combination of vision, execution, and leadership, I expect that she’ll do an incredible job.

Alexis Ohanian, who cofounded reddit nine and a half years ago, is returning as full-time executive chairman (he will transition to a part-time partner role at Y Combinator).  He will be responsible for marketing, communications, strategy, and community.

There is a long history of founders returning to companies and doing great things.  Alexis probably knows the reddit community better than anyone else on the planet.  He had the original product vision for the company and I’m excited he’ll get to finish the job.  Founders are able to set the vision for their companies with an authority no one else can.

Dan McComas will become SVP Product.  Dan founded redditgifts, where in addition to building a great product he built a great culture, and has already been an integral part of the reddit team—I look forward to seeing him impact the company more broadly.

Although my 8 days as the CEO of reddit have been sort of fun, I am happy they are coming to a close and I am sure the new team will do a far better job and take reddit to great heights.  It’s interesting to note that during my very brief tenure, reddit added more users than Hacker News has in total.

A Question

I have a question for all the people that use their iPhone or Android to complain on Twitter, Facebook, or reddit about the lack of innovation… 

Or message their friends on WhatsApp or Snapchat about how Silicon Valley only builds toys for rich people in between looking at photos from their family across the world in Dropbox and listening to almost any song they want on Spotify while in an Uber to their Airbnb…

What were you doing 10 years ago?

I think it’s remarkable how much of what people do and use today didn’t exist 10 years ago.  And I hope that 10 years from now, we’re using things that today seem unimaginably fantastic. 

And while I’d like to see us turn up the pace on innovation in the world of atoms, I hope we keep up the current blistering progress in the world of bits.   I’ve really enjoyed working with some of the energy and biotech companies we’ve funded at YC and hope we see a lot more companies like SpaceX and Tesla get created.

There are some things technological innovation alone won’t help with and that we need to address in other ways—for example, I think massive wealth inequality is likely to be the biggest social problem of our time—but it seems to stretch credulity to claim that we have a lack of innovation.

I’m always in awe of the remarkable technological progress we make decade over decade.  I think it’s important to try not to lose your sense of wonder about this.

Board Members

Over the last five years, there has been an incredible shift in leverage from investors to founders.  It’s good in most ways, but bad in an important few.  Founders’ desire for control is good in moderation but hurts companies when it gets taken to extremes. 

Many founders (or at least, many of the founders I talk to) generally want few to no outsiders on their boards.  A popular way to win an A round in the current environment is to not ask for a board seat.  Some investors are happy to do this—it’s certainly easier to write a check and go hang out on the beach than it is to take a board seat.  And this trend is likely to continue, because these new investors are generally willing to pay much higher prices than investors that want to be involved with the company.

But great board members, with a lot of experience seeing companies get built, are the sort of people founders should want thinking about their companies every day.  There are a lot of roles where experience doesn’t matter in a startup, but board members usually aren’t one of them.  Board members are very useful in helping founders think big and hire executives. 

Board members are also a good forcing function to keep the company focused on execution.  In my experience, companies without any outsiders on their boards often have less discipline around operational cadence. 

Finally, board members stick with the company when things really go wrong, in a way that advisors usually don’t.

Board members certainly don't have to be investors.  If founders do choose to take money without an involved board member, I think it’s very important to get an advisor with a significant equity position that will play the role of a board member (or better, actually put them on the board). 

Personally, I think the ideal board structure for most early-stage companies is a 5-member board with 2 founders, 2 investors, and 1 outsider.  I think a 4-member board with 2 founders, 1 investor, and 1 outsider is also good (in practice, the even number is almost never a problem).

As a side note, bad board members are disastrous.  You should check references thoroughly on someone before you let them join your board.

The companies that have had the biggest impact and created the most value have had excellent board members (and executives).  I don’t believe this is a coincidence.

It’s a good idea to keep enough control so that investors can’t fire you (there are a lot of different ways to do this), but beyond that, it’s important to bring in other people and trust them to help you build the company.


Thanks to Mike Moritz for reviewing a draft of this.