Financial Misstatements

First-time startup CEOs make a lot of mistakes, mostly due to ignorance.

One particularly bad one is misunderstanding or misusing basic financial terms.  I started noticing this in Y Combinator applicants a couple of years ago, but see it now in startups at all stages (including some YC companies). 

It is very important to make accurate financial statements to investors, and it is well worth the time it takes to learn the difference between concepts like “revenue” and “GMV” (gross merchandise volume), or between revenue from a “contract” and an “LOI” (letter of intent).  Most financial terms have very specific definitions, and it’s well worth a little bit of time learning what these are.  When in doubt, you will never get in trouble for being too precise about how you’re using a financial term.

I’ve seen people use GMV for revenue or refer to an LOI as a contract many times in the past year when talking to investors.  This is a felony.

Although investors should be doing more diligence than is currently in fashion, this issue is on the founders to fix.

The Post-YC Slump

At the end of a YC batch, the general consensus among the partners is that about 25% of the companies are on a trajectory that could lead to a multi-billion dollar company.  Of course, only a handful of them do.  Most go on to be decent or bad.

These companies have a beautifully exponential growth curve during YC, and then a few months after YC is over, it essentially flatlines.  Because it would be so much better for us if this did not happen, we wonder a lot about why.

The main problem is that companies stop doing what they were doing during YC—instead of relentlessly focusing on building a great product and growing, they focus on everything else.  They also work less hard and less effectively—the peer pressure during YC is a powerful force.

The startups justify this to themselves in all sorts of ways—“We’re doing some longer-term strategic work.  You wouldn’t understand.” “We’re cleaning up our technical debt.” “We’re building out the organization.” “We’re focusing on PR for this month.  I’m going to speak at 6 conferences and write two thought leadership pieces.” “We are different; growth isn’t our most important thing.” We’ve heard all of these from startups that have gone on to disappoint.

In general, startups get distracted by fake work.  Fake work is both easier and more fun than real work for many founders.  Two particularly bad cases are raising money and getting personal press; we’ve seen many promising founders fall in love with one or (usually) both of these, which nearly always ends badly.  But the list of fake work is long.

I tell founders to consider how directly a task relates to growing.  Obviously, building and selling are the best.  Things like hiring are also very high on the list—you will need to hire to sustain your growth rate at some point.  Interviewing lots of lawyers has got to be near the bottom.

During YC, we are ruthless about reminding startups that fake work does not count and will still get you a failed startup no matter how intensely you do it.  We are also ruthless about asking for your progress, and being honest with you if things aren’t working.  After YC, we have less contact with startups—you can go dark on us if you want.  This, by itself, is almost always a sign that a startup is doing badly.

Momentum is everything in a startup.  If you have momentum, you can survive most other problems.  If you do not have momentum, nothing except getting momentum will solve your problems.  Founders internalize this during YC; many seem to forget in the few years after YC.  Burnout seems to almost always affect founders whose startups are not doing well, and then becomes a downward spiral.  In fact, one of my top few startup commandments is “never let the company lose momentum”.

There are a few other common problems.  One is a feeling of “we made it” that comes after a big financing round and a reduction in intensity.  A related problem is that after you’ve raised a lot of money or become somewhat well-known, it’s harder to admit that things aren’t working and you need to change direction.  Also, very small startups can grow by sheer force of will, even with a bad product.  This stops working after a few months as the numbers get larger, and if you haven’t built something people love, you will not be able to continue growing.

So how can startups avoid this slump?  Work on real work.  Stay focused on building a product your users love and hitting your growth targets.  Try to have a board and peers who will make you hold yourself accountable—don’t lose the urgency that you developed during YC.  Keep sending updates on your traction to your investors and anyone else who will read them (in fact, we’re building some new software at YC to automate this for our startups in the hope that it prevents some of them from going off the rails).  Make the mistake of focusing too much on what matters most, not too little, and relentlessly protect your time from everything else.  Don’t ever let yourself feel like you’ve won before you have.  I still don’t think the Airbnb founders feel like they’ve won.  You have to keep up a high level of intensity for many, many years.

Many YC startups learn these lessons after a year or two in the wilderness, but for some it’s too late and for all it’s a waste of time.

The best startups we fund keep on doing exactly what they did during YC.  This sounds so simple and so obvious, but in practice so few founders do it.

The good news is it’s doable with deliberate effort.  If every founder (YC and otherwise) did it, the number of successful startups would probably double.

The U.S. Digital Service

A lot of us complain about how the government is not very good at technology.  The U.S. Digital Service is actually trying to do something about it, by applying the way startups build products to make government services work better for veterans, immigrants, students, seniors, and the American public as a whole.

This is clearly a good idea.  (See U.S. Digital Service Playbook for more details.)

Inspired by the successful rescue of HealthCare.gov, small teams get deployed inside government agencies to improve critical government software. 

It seems to be working.  To use HealthCare.gov again as an example, the Digital Service effort helped replace a $200 million login system that cost $70 million per year to operate (I know…) with one that cost $4 million to build and less than $4 million per year to operate, and worked better in every way.  In another example, at U.S. Citizenship and Immigration Services, a Digital Service team has been instrumental in enabling green cards to be renewed online for the first time and a growing number of other improvements to the immigrant experience.

The Digital Service attracted talent on par with the best Silicon Valley startups, including talented veterans from Amazon, Google, Facebook, Twitter, Twilio, YC, and more – engineers, designers, and product managers who have committed to do tours of duty serving the country.

As an American, I am grateful to these men and women for doing this.  Because of their work, the government will work better.

I often get asked about what people can do for a year or two to make a big impact between projects.  Here is a good answer.  Consider joining the ranks.  I think it’d be great if it became a new tradition that people from the tech world do a tour of duty serving our country at some point in their careers.  We need better technology in government.

Projects and Companies

In the early days of my startup, I used to get slightly offended when people would refer to it as a “project”.  “How’s your project going?” seemed like the asker didn't take us seriously, even though everything felt serious to us.  I remember assuming this would stop after we announced a $5 million Series A; it didn’t.  I kept feeling like we’d know we made it when people started referring to us as a company.

I now have the opposite belief.  It’s far better to be thought of—and to think of yourself—as a project than a company for as long as possible.

Companies sound serious.  When you start thinking of yourself as a company, you start acting like one.  You worry more about pretend work involving things like lawyers, conferences, and finance stuff, and less about building product, because that’s what people who run companies are supposed to do.  This is, of course, the kiss of death for promising ideas.

Projects have very low expectations, which is great.  Projects also usually mean fewer people and less money, so you get the good parts of both flexibility and focus.  Companies have high expectations—and the more money out of the gate and the more press, the worse off they are (think Color and Clinkle, for example).

Worst of all, you won’t work on slightly crazy ideas—this is a company, not a hobby, and you need to do something that sounds like a good, respectable idea.  There is a limit to what most people are willing to work on for something called a company that does not exist if it’s just a project.  The risk of seeming stupid when something is just a project is almost zero, and no one cares if you fail.  So you’re much more likely to work on something good, instead of derivative but plausible-sounding crap.

When you’re working on a project, you can experiment with ideas for a long time.  When you have a company, the clock is ticking and people expect results.  This gets to the danger with projects—a lot of people use them as an excuse to not work very hard.  If you don’t have the self-discipline to work hard without external pressure, projects can be a license to slack off.

The best companies start out with ideas that don’t sound very good.  They start out as projects, and in fact sometimes they sound so inconsequential the founders wouldn't let themselves work on them if they had to defend them as a company.  Google and Yahoo started as grad students’ projects.  Facebook was a project Zuckerberg built while he was a sophomore in college.  Twitter was a side project that started with a single engineer inside a company doing something totally different.  Airbnb was a side project to make some money to afford rent.  They all became companies later.

All of these were ideas that seemed bad but turned out to be good, and this is the magic formula for major success.  But in the rush to claim a company, they could have been lost.  The pressure from external (and internal) expectations is constant and subtle, and it often kills the magic ideas.  Great companies often start as projects.

Energy

I think a lot about how important cheap, safe, and abundant energy is to our future.  A lot of problems—economic, environmental, war, poverty, food and water availability, bad side effects of globalization, etc.—are deeply related to the energy problem. 

I believe that if you could choose one single technological development to help the most people in the world, radically better energy generation is probably it.  Throughout history, quality of life has gone up as the cost of energy has gone down. 

The 20th century was the century of carbon-based energy.  I am confident the 22nd century is going to be the century of atomic energy (i.e. terrestrial atomic generation and energy relatively directly from the sun’s fusion). [1] I am unsure how the majority of the 21st century will be powered, but I’d like to help get things moving.

Although a lot of people are working on solar, I don’t think enough people are working on terrestrial-based atomic energy, which has major advantages when it comes to cost, density, and predictability.

Given the potential importance, I’m making an exception to my normal policy of not joining YC boards for Helion Energy and UPower.  Both of these companies went through YC about a year ago.  Helion is working on fusion and UPower is working on fission; I’ve looked at many companies working on both and think these are the two best.  I’ll be the chairman of both companies and I’m also investing in the seed/A rounds for both companies. [2] 

Both companies hope to have a test reactor operating in a few years, and both companies are hiring.  If you’re interested in working on this, please get in touch.

[1] What I’m unsure of is what the split between sun-generated (I’m just going to call it solar but I use it to include wind and biofuels) and terrestrial-generated will be.  There will only be one cheapest source of energy, and history suggests whatever that is will be fairly dominant.  So it will probably be 80/20 one way or the other.

[2] I will save my thoughts about traditional technology investors being afraid to touch expensive, long-term, high-risk high-reward projects for another time.  A lot of people talk about the need to try new things that are hard but could have huge impact; it’s important to not just talk about them but to act.  I think it’s easier for individual investors to do this than for venture funds, at least given how they are currently structured.

I don’t think investors are doing nearly enough to fund atomic energy.  With the exception of China, new fission development has effectively stopped and very few plants have been built in recent memory.  Fission has been a remarkably safe and effective power source while generating 11% of the world’s electricity—the first time I saw the data on the safety of fission energy relative to other power sources, I thought there was an error.

On the fusion side, only about four US fusion companies have raised venture capital in the past few decades.  The big government projects, like NIF and ITER, unfortunately have the feel of peacetime big government projects.

The days are long but the decades are short

I turned 30 last week and a friend asked me if I'd figured out any life advice in the past decade worth passing on.  I'm somewhat hesitant to publish this because I think these lists usually seem hollow, but here is a cleaned up version of my answer:


1) Never put your family, friends, or significant other low on your priority list.  Prefer a handful of truly close friends to a hundred acquaintances.  Don’t lose touch with old friends.  Occasionally stay up until the sun rises talking to people.  Have parties.

2) Life is not a dress rehearsal—this is probably it.  Make it count.  Time is extremely limited and goes by fast.  Do what makes you happy and fulfilled—few people get remembered hundreds of years after they die anyway.  Don’t do stuff that doesn’t make you happy (this happens most often when other people want you to do something).  Don’t spend time trying to maintain relationships with people you don’t like, and cut negative people out of your life.  Negativity is really bad.  Don’t let yourself make excuses for not doing the things you want to do.

3) How to succeed: pick the right thing to do (this is critical and usually ignored), focus, believe in yourself (especially when others tell you it’s not going to work), develop personal connections with people that will help you, learn to identify talented people, and work hard.  It’s hard to identify what to work on because original thought is hard.

4) On work: it’s difficult to do a great job on work you don’t care about.  And it’s hard to be totally happy/fulfilled in life if you don’t like what you do for your work.  Work very hard—a surprising number of people will be offended that you choose to work hard—but not so hard that the rest of your life passes you by.  Aim to be the best in the world at whatever you do professionally.  Even if you miss, you’ll probably end up in a pretty good place.  Figure out your own productivity system—don’t waste time being unorganized, working at suboptimal times, etc.  Don’t be afraid to take some career risks, especially early on.  Most people pick their career fairly randomly—really think hard about what you like, what fields are going to be successful, and try to talk to people in those fields.

5) On money: Whether or not money can buy happiness, it can buy freedom, and that’s a big deal.  Also, lack of money is very stressful.  In almost all ways, having enough money so that you don’t stress about paying rent does more to change your wellbeing than having enough money to buy your own jet.  Making money is often more fun than spending it, though I personally have never regretted money I’ve spent on friends, new experiences, saving time, travel, and causes I believe in.

6) Talk to people more.  Read more long content and less tweets.  Watch less TV.  Spend less time on the Internet.

7) Don’t waste time.  Most people waste most of their time, especially in business.

8) Don’t let yourself get pushed around.  As Paul Graham once said to me, “People can become formidable, but it’s hard to predict who”.  (There is a big difference between confident and arrogant.  Aim for the former, obviously.)

9) Have clear goals for yourself every day, every year, and every decade. 

10) However, as valuable as planning is, if a great opportunity comes along you should take it.  Don’t be afraid to do something slightly reckless.  One of the benefits of working hard is that good opportunities will come along, but it’s still up to you to jump on them when they do.

11) Go out of your way to be around smart, interesting, ambitious people.  Work for them and hire them (in fact, one of the most satisfying parts of work is forging deep relationships with really good people).  Try to spend time with people who are either among the best in the world at what they do or extremely promising but totally unknown.  It really is true that you become an average of the people you spend the most time with.

12) Minimize your own cognitive load from distracting things that don’t really matter.  It’s hard to overstate how important this is, and how bad most people are at it.  Get rid of distractions in your life.  Develop very strong ways to avoid letting crap you don’t like doing pile up and take your mental cycles, especially in your work life.

13) Keep your personal burn rate low.  This alone will give you a lot of opportunities in life.

14) Summers are the best.

15) Don’t worry so much.  Things in life are rarely as risky as they seem.  Most people are too risk-averse, and so most advice is biased too much towards conservative paths.

16) Ask for what you want.  

17) If you think you’re going to regret not doing something, you should probably do it.  Regret is the worst, and most people regret far more things they didn’t do than things they did do.  When in doubt, kiss the boy/girl.

18) Exercise.  Eat well.  Sleep.  Get out into nature with some regularity.

19) Go out of your way to help people.  Few things in life are as satisfying.  Be nice to strangers.  Be nice even when it doesn’t matter.

20) Youth is a really great thing.  Don’t waste it.  In fact, in your 20s, I think it’s ok to take a “Give me financial discipline, but not just yet” attitude.  All the money in the world will never get back time that passed you by.

21) Tell your parents you love them more often.  Go home and visit as often as you can.

22) This too shall pass.

23) Learn voraciously. 

24) Do new things often.  This seems to be really important.  Not only does doing new things seem to slow down the perception of time, increase happiness, and keep life interesting, but it seems to prevent people from calcifying in the ways that they think.  Aim to do something big, new, and risky every year in your personal and professional life.

25) Remember how intensely you loved your boyfriend/girlfriend when you were a teenager?  Love him/her that intensely now.  Remember how excited and happy you got about stuff as a kid?  Get that excited and happy now.

26) Don’t screw people and don’t burn bridges.  Pick your battles carefully.

27) Forgive people. 

28) Don’t chase status.  Status without substance doesn’t work for long and is unfulfilling.

29) Most things are ok in moderation.  Almost nothing is ok in extreme amounts.

30) Existential angst is part of life.  It is particularly noticeable around major life events or just after major career milestones.  It seems to particularly affect smart, ambitious people.  I think one of the reasons some people work so hard is so they don’t have to spend too much time thinking about this.  Nothing is wrong with you for feeling this way; you are not alone.

31) Be grateful and keep problems in perspective.  Don’t complain too much.  Don’t hate other people’s success (but remember that some people will hate your success, and you have to learn to ignore it). 

32) Be a doer, not a talker.

33) Given enough time, it is possible to adjust to almost anything, good or bad.  Humans are remarkable at this.

34) Think for a few seconds before you act.  Think for a few minutes if you’re angry.

35) Don’t judge other people too quickly.  You never know their whole story and why they did or didn’t do something.  Be empathetic.

36) The days are long but the decades are short.

Bubble talk

I’m tired of reading about investors and journalists claiming there’s a bubble in tech.  I understand that it’s fun to do and easy press, but it’s boring reading.  I also understand that it might scare newer investors away and bring down valuations, but there’s got to be a better way to win than that. 

I would much rather read about what companies are doing than the state of the markets.  The gleeful anticipation of a correction by investors and pundits is not helping the world get better in any meaningful way.

Investors that think companies are overpriced are always free not to invest.  Eventually, the market will find its clearing price.

I am pretty paranoid about bubbles, but things still feel grounded in reason (the thing that feels least reasonable is some early-stage valuations, but it’s a small amount of capital and still nothing I would call a “bubble”).  Even my own recent comments were misinterpreted as claiming we’re in a bubble—that’s how much the press wants to write about this.

Although they cause a lot of handwringing, business cycles are short compared to the arc of innovation.  In October of 2008, Sequoia Capital—arguably the best-ever in the business—gave the famous “RIP Good Times” presentation (I was there).  A few months later, we funded Airbnb.  A few months after that, a company called UberCab got started.

Instead of just making statements, here is a bet looking 5 years out.  To win, I have to be right on all three propositions.

1) The top 6 US companies at http://fortune.com/2015/01/22/the-age-of-unicorns/ (Uber, Palantir, Airbnb, Dropbox, Pinterest, and SpaceX) are currently worth just over $100B.  I am leaving out Snapchat because I couldn’t get verification of its valuation.  Proposition 1: On January 1st, 2020, these companies will be worth at least $200B in aggregate. 

2) Stripe, Zenefits, Instacart, Mixpanel, Teespring, Optimizely, Coinbase, Docker, and Weebly are a selection of mid-stage YC companies currently worth less than $9B in aggregate.  Proposition 2: On January 1st, 2020, they will be worth at least $27B in aggregate.

3) Proposition 3: The current YC Winter 2015 batch—currently worth something that rounds down to $0—will be worth at least $3B on Jan 1st, 2020.
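The propositions above imply specific compound growth rates, which a quick back-of-envelope calculation makes concrete (figures are the bet's approximate thresholds, not exact valuations):

```python
# Back-of-envelope check of the growth rates the three propositions imply.
# Dollar figures are the approximate thresholds from the bet, in $B.

def implied_cagr(start, end, years):
    """Compound annual growth rate needed to go from start to end."""
    return (end / start) ** (1 / years) - 1

# Proposition 1: ~$100B -> $200B over ~5 years (a 2x)
p1 = implied_cagr(100, 200, 5)

# Proposition 2: ~$9B -> $27B over ~5 years (a 3x)
p2 = implied_cagr(9, 27, 5)

print(f"Proposition 1 needs ~{p1:.1%} per year")  # ~14.9% per year
print(f"Proposition 2 needs ~{p2:.1%} per year")  # ~24.6% per year
```

In other words, the bet is that the largest private companies compound at roughly 15% a year and the mid-stage ones at roughly 25% a year, well below the growth rates these companies have shown so far.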

Acquisitions at any point between now and the decision date are counted as their acquisition value.  Private companies are valued as of their last round that sold stock with at most a 1x liquidation preference or last secondary transaction of at least $100MM of stock.  Public companies are valued by their market capitalization.

There will be downward pressure on valuations as interest rates rise.  But I think it will be less than the upward pressure of the phenomenal innovation and earning power of these businesses.

Of course, there could be a macro collapse in 2018 or 2019, which wouldn’t have time to recover by 2020.  I think that’s the most likely way for me to lose.

This bet is open to the first VC who would like to take it (though it is not clear to me that anyone who wants to take the other side should be investing in startups).  The loser donates $100,000 to a charity of the winner’s choice.

Technology predictions

Some of these are probably apocryphal, but making predictions about the limits of technology is really hard:


Space travel is utter bilge.

- Dr. Richard van der Riet Woolley, space advisor to the British government, 1956

Computers in the future may...perhaps only weigh 1.5 tons.

- Popular Mechanics, 1949

X-rays are a hoax.

- Lord Kelvin, ca. 1900

I confess that in 1901 I said to my brother Orville that man would not fly for fifty years. Two years later we ourselves made flights. This demonstration of my impotence as a prophet gave me such a shock that ever since I have distrusted myself and avoided all predictions.

- Wilbur Wright, 1908

To place a man in a multi-stage rocket and project him into the controlling gravitational field of the moon where the passengers can make scientific observations, perhaps land alive, and then return to earth--all that constitutes a wild dream worthy of Jules Verne. I am bold enough to say that such a man-made voyage will never occur regardless of all future advances.

- Lee de Forest, inventor of the vacuum tube, 1957

There is not the slightest indication that [nuclear energy] will ever be obtainable. It would mean that the atom would have to be shattered at will.

-  Albert Einstein, 1932

That is the biggest fool thing we have ever done. The bomb will never go off, and I speak as an expert in explosives.

- Admiral William Leahy to President Truman 

Anyone who expects a source of power from the transformation of these atoms is talking moonshine.

- Ernest Rutherford, 1933 

The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient.

- Dr. Alfred Velpeau, French surgeon, 1839

Bitcoin is definitely going to be trading at $10,000 or more and in wide use by the end of 2014.

- Many otherwise smart people, November of 2013

Superhuman machine intelligence is prima facie ridiculous.

- Many otherwise smart people, 2015


(Most of these from: https://www.lhup.edu/~dsimanek/neverwrk.htm)

Machine intelligence, part 2

This is part two of a two-part post—the first part is here.


THE NEED FOR REGULATION

Although there has been a lot of discussion about the dangers of machine intelligence recently, there hasn’t been much discussion about what we should try to do to mitigate the threat. 

Part of the reason is that many people are almost proud of how strongly they believe that the algorithms in their neurons will never be replicated in silicon, and so they don’t believe it’s a potential threat.  Another part of it is that figuring out what to do about it is just very hard, and the more one thinks about it the less possible it seems.  And another part is that superhuman machine intelligence (SMI) is probably still decades away [1], and we have very pressing problems now.

But we will face this threat at some point, and we have a lot of work to do before it gets here.  So here is a suggestion.

The US government, and all other governments, should regulate the development of SMI.  In an ideal world, regulation would slow down the bad guys and speed up the good guys—it seems like what happens with the first SMI to be developed will be very important.

Although my general belief is that technology is often over-regulated, I think some regulation is a good thing, and I’d hate to live in a world with no regulation at all.  And I think it’s definitely a good thing when the survival of humanity is in question.  (Incidentally, there is precedent for classification of privately-developed knowledge when it carries mass risk to human life.  SILEX is perhaps the best-known example.) 

To state the obvious, one of the biggest challenges is that the US has broken all trust with the tech community over the past couple of years.  We’d need a new agency to do this.

I am sure that Internet commentators will say that everything I’m about to propose is not nearly specific enough, which is definitely true.  I mean for this to be the beginning of a conversation, not the end of one.

The first serious dangers from SMI are likely to involve humans and SMI working together.  Regulation should address both the case of malevolent humans intentionally misusing machine intelligence to, for example, wreak havoc on worldwide financial markets or air traffic control systems, and the “accident” case of SMI being developed and then acting unpredictably.

Specifically, regulation should: 

1)   Provide a framework to observe progress.  This should happen in two ways.  The first is looking for places in the world where it seems like a group is either being aided by significant machine intelligence or training such an intelligence in some way. 

The second is observing companies working on SMI development.  The companies shouldn’t have to disclose how they’re doing what they’re doing (though when governments get serious about SMI they are likely to out-resource any private company), but periodically showing regulators their current capabilities seems like a smart idea.

2)   Given how disastrous a bug could be, require development safeguards to reduce the risk of the accident case.  For example, beyond a certain checkpoint, we could require that development happen only on airgapped computers, require that self-improving software get human sign-off to move forward on each iteration, require that certain parts of the software be subject to third-party code reviews, etc.  I’m not very optimistic that any of this will work for anything except accidental errors—humans will always be the weak link in the strategy (see the AI-in-a-box thought experiments).  But it at least feels worth trying.

Being able to do this—if it is possible at all—will require a huge amount of technical research and development that we should start intensive work on now.  This work is almost entirely separate from the work that’s happening today to get piecemeal machine intelligence to work.

To state the obvious but important point, it’s important to write the regulations in such a way that they provide protection while producing minimal drag on innovation (though there will be some unavoidable cost).

3)   Require that the first SMI developed have as part of its operating rules that a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s zeroth law), b) it should detect other SMI being developed but take no action beyond detection, and c) other than as required for (b), it should have no effect on the world.

We currently don’t know how to implement any of this, so here too, we need significant technical research and development that we should start now. 

4)   Provide lots of funding for R+D for groups that comply with all of this, especially for groups doing safety research.

5)   Provide a longer-term framework for how we figure out a safe and happy future for coexisting with SMI—the most optimistic version seems like some version of “the human/machine merge”.  We don’t have to figure this out today.

Regulation would have an effect on SMI development via financing—most venture firms and large technology companies don’t want to break major laws.  Most venture-backed startups and large companies would presumably comply with the regulations.

Although it’s possible that a lone wolf in a garage will be the one to figure SMI out, it seems more likely that it will be a group of very smart people with a lot of resources.  It also seems likely, at least given the current work I’m aware of, that it will involve US companies in some way (though, as I said above, I think every government in the world should enact similar regulations).

Some people worry that regulation will slow down progress in the US and ensure that SMI gets developed somewhere else first.  I don’t think a little bit of regulation is likely to overcome the huge head start and density of talent that US companies currently have.

There is an obvious upside case to SMI—it could solve a lot of the serious problems facing humanity—but in my opinion it is not the default case.  The other big upside case is that machine intelligence could help us figure out how to upload ourselves, and we could live forever in computers.  Or maybe in some way, we can make SMI a descendant of humanity.

Generally, the arc of technology has been about reducing randomness and increasing our control over the world.  At some point in the next century, we are going to have the most randomness ever injected into the system. 

In politics, we usually fight over small differences.  These differences pale in comparison to the difference between humans and aliens, which is what SMI will effectively be like.  We should be able to come together and figure out a regulatory strategy quickly.



Thanks to Dario Amodei (especially Dario), Paul Buchheit, Matt Bush, Patrick Collison, Holden Karnofsky, Luke Muehlhauser, and Geoff Ralston for reading drafts of this and the previous post. 

[1] If you want to try to guess when, the two things I’d think about are computational power and algorithmic development.  For the former, assume there are about 100 billion neurons and 100 trillion synapses in a human brain, and the average neuron fires 5 times per second, and then think about how long it will take on the current computing trajectory to get a machine with enough memory and flops to simulate that.
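To make the footnote’s figures concrete, here is the back-of-envelope arithmetic as a short sketch.  The one-operation-per-synapse-per-firing and bytes-per-synapse figures are my own illustrative assumptions, not established facts about what simulation would actually require:

```python
# Back-of-envelope brain-simulation estimate from the footnote's figures.
# Assumptions (illustrative only): one floating-point operation per synapse
# per firing, and 4 bytes of stored state per synapse.
SYNAPSES = 100e12        # ~100 trillion synapses
FIRING_RATE_HZ = 5       # average firings per neuron per second

ops_per_second = SYNAPSES * FIRING_RATE_HZ   # 5e14 ops/sec
memory_bytes = SYNAPSES * 4                  # 4e14 bytes = 400 TB

print(f"{ops_per_second:.0e} synaptic ops/sec")  # prints "5e+14 synaptic ops/sec"
print(f"{memory_bytes / 1e12:.0f} TB of state")  # prints "400 TB of state"
```

Under those assumptions you get roughly 5×10^14 operations per second and hundreds of terabytes of state, which is the kind of number you can then compare against the current computing trajectory.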

For the algorithms, neural networks and reinforcement learning have both performed better than I expected for input and output respectively (e.g. captioning photos depicting complex scenes, and beating humans at video games the software has never seen before, with just the ability to look at the screen and access to the controls).  I am always surprised by how unimpressed most people seem with these results.  Unsupervised learning has been a weaker point, and it is probably a critical part of replicating human intelligence.  But many researchers I’ve spoken to are optimistic about current work, and I have no reason to believe this is outside the scope of a Turing machine.

Machine intelligence, part 1

This is going to be a two-part post—one on why machine intelligence is something we should be afraid of, and one on what we should do about it.  If you’re already afraid of machine intelligence, you can skip this one and read the second post tomorrow—I was planning to only write part 2, but when I asked a few people to read drafts it became clear I needed part 1.


WHY YOU SHOULD FEAR MACHINE INTELLIGENCE

Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity.  There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could.  Also, most of these other big threats are already widely feared.

It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it’s still many, many years away.  But it’s also extremely hard to believe that it isn’t very likely that it will happen at some point.

SMI does not have to be the inherently evil sci-fi version to kill us all.  A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out.  Certain goals, like self-preservation, could clearly benefit from no humans.  We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.

(Incidentally, Nick Bostrom’s excellent book “Superintelligence” is the best thing I’ve seen on this topic.  It is well worth a read.)

Most machine intelligence development involves a “fitness function”—something the program tries to optimize.  At some point, someone will probably try to give a program the fitness function of “survive and reproduce”.  Even if not, it will likely be a useful subgoal of many other fitness functions.  It worked well for biological life.  Unfortunately for us, one thing I learned when I was a student in the Stanford AI lab is that programs often achieve their fitness function in unpredicted ways.

Evolution will continue forward, and if humans are no longer the most-fit species, we may go away.  In some sense, this is the system working as designed.  But as a human programmed to survive and reproduce, I feel we should fight it.

How can we survive the development of SMI?  It may not be possible.  One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable.

It’s very hard to know how close we are to machine intelligence surpassing human intelligence.  Progression of machine intelligence is a double exponential function; human-written programs and computing power are getting better at an exponential rate, and self-learning/self-improving software will improve itself at an exponential rate.  Development progress may look relatively slow and then all of a sudden go vertical—things could get out of control very quickly (it also may be more gradual and we may barely perceive it happening).
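The “relatively slow and then all of a sudden vertical” shape is easy to see with a toy calculation (purely illustrative; the numbers make no claim about real timelines):

```python
# Illustrative only: compare exponential growth (systems improved by outside
# effort) with double-exponential growth (a system that improves its own
# improver, so the exponent itself grows).
for t in range(6):
    exponential = 2 ** t                 # capability doubles each step
    double_exponential = 2 ** (2 ** t)   # the doubling rate itself doubles
    print(t, exponential, double_exponential)
```

The two curves are nearly indistinguishable at the start, but by step 5 the double-exponential one (4,294,967,296) has left the exponential one (32) behind by eight orders of magnitude.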

As mentioned earlier, it is probably still somewhat far away, especially in its ability to build killer robots with no help at all from humans.  But recursive self-improvement is a powerful force, and so it’s difficult to have strong opinions about machine intelligence being ten or one hundred years away.

We also have a bad habit of changing the definition of machine intelligence when a program gets really good, to claim that the problem wasn’t really that hard in the first place (chess, Jeopardy, self-driving cars, etc.).  This makes it seem like we aren’t making any progress towards it.  Admittedly, narrow machine intelligence is very different from general-purpose machine intelligence, but I still think this is a potential blindspot.

It’s hard to look at the rate of improvement in the last 40 years and think that 40 years from now we’re not going to be somewhere crazy.  40 years ago we had Pong.  Today we have virtual reality so advanced that it’s difficult to be sure if it’s virtual or real, and computers that can beat humans in most games.

Though, to be fair, in the last 40 years we have made little progress on the parts of machine intelligence that seem really hard—learning, creativity, etc.  Basic search with a lot of compute power has just worked better than expected. 

One additional reason that progress towards SMI is difficult to quantify is that emergent behavior is always a challenge for intuition.  The above common criticism of current machine intelligence—that no one has produced anything close to human creativity, and that this is somehow inextricably linked with any sort of real intelligence—causes a lot of smart people to think that SMI must be very far away.

But it’s very possible that creativity and what we think of as human intelligence are just an emergent property of a small number of algorithms operating with a lot of compute power.  (In fact, many respected neocortex researchers believe there is effectively one algorithm for all intelligence.  I distinctly remember my undergrad advisor saying the reason he was excited about machine intelligence again was that brain research made it seem possible there was only one algorithm computer scientists had to figure out.)

Because we don’t understand how human intelligence works in any meaningful way, it’s difficult to make strong statements about how close or far away from emulating it we really are.  We could be completely off track, or we could be one algorithm away.

Human brains don’t look all that different from chimp brains, and yet somehow produce wildly different capabilities.  We decry current machine intelligence as cheap tricks, but perhaps our own intelligence is just the emergent combination of a bunch of cheap tricks.

Many people seem to believe that SMI would be very dangerous if it were developed, but think that it’s either never going to happen or definitely very far off.  This is sloppy, dangerous thinking.



[1] I prefer calling it "machine intelligence" and not "artificial intelligence" because artificial seems to imply it's not real or not very good.  When it gets developed, there will be nothing artificial about it.