probably quantum? NISQ and where we are today


In October, my colleagues Lars Fjeldsoe-Nielsen, Maxime Le Dantec and I were honored to co-host an awesome crowd of thinkers and builders in Quantum Computing at Balderton HQ alongside the UK's National Physical Laboratory, just a night before Google announced their achievement of quantum supremacy.

I won't get into the fray as to whether Google's result amounts to supremacy or speedup; this blog post by Leo at Rahko does a succinct job of summarizing the result and placing it in context. (For a more detailed take, see Scott Aaronson's post.) Needless to say, these are exciting times for the future of computing and for achieving a greater capacity to understand Nature.

Our gathering was motivated by John Preskill's paper Quantum Computing in the NISQ Era and Beyond. NISQ is an acronym that describes the current available quantum computing devices. They are Noisy Intermediate-Scale Quantum Computers that represent huge advances compared to the available technology a few years ago, but are still a far cry from a truly Universal Quantum Computer. In the paper, Preskill writes that "Now is an opportune time for a fruitful discussion among researchers, entrepreneurs, managers, and investors who share an interest in quantum computing." As capital has surged into this still-highly experimental field in ever greater quantities (from $70M in total quantum-focused VC in 2015 to $560M so far in 2019), it becomes critical to gather disparate viewpoints within four walls and try to separate signal from noise. (We were also inspired by BlueYard and Google's 2017 Munich gathering, A Quantum Leap.)

Over the course of the day we were lucky to have vigorous debate from company leaders like Christopher Savoie, CEO at Zapata, Ilyas Khan, CEO at Cambridge Quantum Computing, Leo Wossnig, CEO at Rahko and Justin Ging, CCO at Honeywell Quantum. These voices were complemented by many researchers from Oxford, Cambridge, UCL and other universities, by investors, and also by representatives of the UK government, including Roger McKinlay, the Challenge Director for Quantum Technologies at UK Research and Innovation.

Through the course of the afternoon we uncovered some of the challenges associated with measuring progress within quantum computing. What are the right metrics? The oft-reported total qubit number is almost certainly not a fair metric on its own. One also has to look at measures of connectivity, fidelity, and circuit depth. As with the specs on a new laptop, there is no one metric to rule them all.

We had a debate about the benefits and drawbacks of the various hardware approaches for quantum computing, including superconducting qubits, ion traps, and spin qubits. Notably, there was agreement that superconducting qubits are easy to design with microwave electronics, but can be inherently unstable and prone to calibration issues. Trapped-ion qubits have high fidelity and connectivity, but can be difficult and slow to control. Spin qubits in silicon have the benefit of a pre-existing fabrication supply chain that is already manufacturing silicon chips at massive scale and low cost.

To varying degrees, all approaches are experiencing challenges scaling devices to many high quality qubits. We also lack any sort of infrastructure to allow interoperability between different QCs with different types of qubits.

A recurring theme was the necessity for teams working on hardware, software, and end-users (customers) to maintain an open dialogue. A preference in one place in the stack can become a specification somewhere else.

On the software side, the discussion largely focused on the degree to which quantum algorithms will need to be combined with classical and machine learning algorithms in order to be usable in the near term. Many of us were excited by the scope for quantum computing and machine learning to augment one another (as an example of a hybrid approach, see this recent paper). All that said, we still have a ways to go in terms of demonstrating concrete value to customers.

Finally, we discussed the need for a deeper talent pool in quantum; quantum chemistry and other potential areas of near-term applications; and how quantum computing might best be regarded as a new frontier of generalized computation that is well-suited to problems requiring high dimensionality rather than high throughput.

Gathering perspectives from academia, industry, investors, and government is an important way to drive technologies further in a thoughtful fashion and we look forward to continuing the conversation with all those who joined us.


For decades theoretical physicists and computer scientists have explored what might be possible if we took a fundamentally quantum approach to computation. They’ve discovered algorithms that can, in theory, accomplish tasks no known classical algorithm can perform efficiently. In order to execute those algorithms we require usable quantum computers that are able to perform operations on quantum bits. While a truly general purpose quantum computer is still some time in the future, the pace of advances in quantum hardware in the past few years has been astonishing.

At Balderton, we have been following the rapid emergence of quantum computing with tremendous curiosity (and a healthy dose of skepticism) for several years.

While the future scope of disruption is potentially vast, many applications of quantum computing are contingent on clearing major engineering challenges, largely around scaling the number of error-corrected qubits required to perform many quantum algorithms.

However, there are areas emerging, in particular quantum chemistry and quantum machine learning, where quantum computing may have a disproportionate impact sooner than anticipated. Why? In chemistry, classical computers run into problems that appear intractable unless we use quantum approaches to model quantum phenomena. For example, today we fix nitrogen and create ammonia using the Haber-Bosch process, which is only 15% efficient with each pass, and we use it to produce 450 million tons of nitrogen fertilizer each year. Currently a ton of fertilizer goes for around $500. Plants are able to fix nitrogen more efficiently, but we don’t fully understand how because we can’t simulate nitrogenase. Using simulations to better understand nitrogen fixation is an opportunity of tremendous scale.

Within this backdrop, we were lucky enough to come across the team at Rahko. Leo, Ed, Miriam and Ian have gathered a small but world-class team in London. They are taking unique approaches towards unlocking quantum discovery for chemical simulation, with techniques rooted in quantum machine learning that don’t require fully error-corrected quantum computers. Their goal on the product side is to build a robust quantum chemistry platform that provides best-in-class toolboxes for running quantum algorithms. Their work cuts across an entire spectrum: from deploying classical machine learning techniques and quantum-inspired methods on classical computers, to hybrid approaches using both classical and noisy intermediate-scale quantum computers (so-called “NISQ” devices), and in time techniques that will utilize quantum computers exclusively. Academically, there is a growing body of research exploring the intersection of machine learning techniques and quantum circuits. Rahko is well positioned to help companies leverage breakthroughs in this area as they unfold.

We couldn't be more proud to be working with the entire team at Rahko and are looking forward to growing and learning together in the years to come.

a brighter take on privacy

I had the fortune of joining a many-to-one video conference with Ed Snowden in August in Berlin.


Perhaps counterintuitively, it left me feeling rather optimistic about our current direction with regard to privacy, both online and in our society, a feeling that contrasted sharply with the gloom I felt after first seeing Citizenfour by Berlin-based filmmaker Laura Poitras. My optimism was informed by a variety of thoughts, namely that:

(i) Ed Snowden is a controversial figure, and rightfully so. Yet here is an individual who sacrificed a tremendous amount in his own life to help bring systemic violations of human liberty into public awareness. However we feel about his initial acts, the facts he brought to light have helped us confront challenges to privacy with greater transparency. That he himself was originally a part of this system, and culpable within it, clearly lent urgency to, and raised the stakes of, the choices he had to make.

(ii) For most of us the relevant choices are trivial in comparison. They boil down largely to choosing privacy over convenience, participating in our societies in a fashion consistent with the idea that privacy is a human right, and holding the products and services we engage with online accountable to that same ideal.

Luckily, the gap between true privacy and high quality user experiences in consumer software is narrowing. Before, you may have had to mess around with your own PGP keys to send and receive encrypted emails. Today, for example, you can use ProtonMail. Before, WhatsApp, Facebook Messenger and WeChat were clearly the leaders in terms of mobile messenger UX. Today, Telegram and Signal provide far greater privacy with quasi feature parity (privacy is not to be confused with end-to-end encryption alone; Facebook still plans on opening WhatsApp up to “businesses in your community”). Organizers of protests in Hong Kong can now effectively communicate on Telegram without sacrificing convenience. What these privacy-focused solutions often don’t have are as-robust social graphs, meaning not everyone you care about will be on them, but that’s for us to change.

Speaking of social graphs, there are various worthwhile efforts at recreating social networks. Some are based on topics (like gaming) and take a pseudonymous approach, like Discord. Others are trying to create a truly p2p version of a social network, notably Scuttlebutt, which feature-wise is attempting to replace a lot of what FB did in the early days, or Mastodon, which is a more p2p version of Twitter. If you want privacy from your browser, you can look to Brave or Tor, and if you want privacy while you search you can use DuckDuckGo, which has built a profitable advertising business without resorting to “we know you better than you know yourself” targeting. Even if you want to stay with your current ecosystem of apps, but better manage configs and permissions, apps like Jumbo Privacy can help you do that.

So the trade-offs we have to make in favor of privacy are getting easier, even as awareness of the cost of the status quo (which supports surveillance, direct personal data monetization, and personal data vulnerability through poor security and storage) expands.


(iii) Since Snowden brought institutionalized online surveillance programs like PRISM and XKeyScore to our attention in 2013, privacy has become a daily front-page issue for publications and boardrooms around the world. Alongside this narrative has been the slow realization that most companies simply cannot be trusted with our own personal data (go have a look at Have I Been Pwned and see for yourself). Luckily, in the relatively short six years since, the European Union has put into law the General Data Protection Regulation (GDPR, implemented in May 2018), which states that “The protection of natural persons in relation to the processing of personal data is a fundamental right”. GDPR outlines a comprehensive framework that fundamentally changes how businesses and services must collect, process and treat personal data. Enforcement has so far been muted in my view, while authorities allow for some adjustment time, but I believe major enforcement is a question of when, not if.

Europe is not alone in terms of front-footed policy making on privacy. California passed the California Consumer Privacy Act (CCPA, enforceable beginning January 1, 2020) last year. The Act begins with a reminder that a fundamental right to privacy for all is recognized and protected by California’s constitution. These policies have been a critical impetus in ensuring that citizens and market participants treat counterparty data with more respect. The bills have also created huge opportunities for companies focused on privacy software that help businesses bridge the wide gap between what policymakers are signing into law and the privacy-jeopardizing status quo of the past few decades. Companies that are seizing this opportunity include Collibra, OneTrust and DataGuard.

(iv) I used to hear the oft-repeated defense of “Why should I care about privacy if I have nothing to hide?”. Slowly, that sort of naive collective thinking is starting to fade away. My partner remarked to me that you hear that refrain most often from citizens of countries like the UK and US, who have by and large not had a reason to fear their governments in the past few decades. You won’t hear that from Berlin residents who are old enough to remember the Stasi or older residents of Eastern Bloc countries. You won’t hear it from protestors in Hong Kong, or families in South Texas huddling in fear of ICE raids. Slowly we are all realizing that it is not just terrorists and criminals who have something to fear from unfettered surveillance. Slowly we are all realizing that the proper time to make decisions to safeguard our civil liberties is when doing so may seem foolish, because when it doesn’t it may be too late.

Luckily, the steps we have to take today don't seem as foolish and aren't as hard as they were yesterday.

Cross the River by Feeling the Stones | 摸着石头过河

There is a saying in China, 摸着石头过河, that translates to “cross the river by feeling the stones”. It is generally attributed to Deng Xiaoping, who used it as a metaphor to describe China’s approach towards the reform and opening (改革开放), which kicked off at the end of the 1970s. On one side of the river was China’s closed, Marxist, centrally-planned economy. On the other was an open, liberalized, market-driven one. China hadn’t crossed this river before, and so would need to do so slowly, thoughtfully and carefully, by feeling the stones.

Today, just over forty years later, it’s clear the approach has been hugely successful. So successful, in fact, that internet and software entrepreneurs across the world need to employ a similar strategy if they are to cross the river the other way, by successfully navigating the strongest cultural, linguistic, regulatory and technical rapids we’ve seen in recent years. This makes success more tenuous, but rewards perhaps more precious for those who can still find a way to incorporate the modern day Chinese behemoth into their supply chains, their user bases or their cap tables.

To that end, we were humbled to host an event at our Balderton offices recently that tried to shed light on how foreign internet entrepreneurs might best engage with China. We were joined by David Sullivan, Founder of ADG China, one of the top cross-border technology advisory firms focused on China. We were also lucky to have with us Joanna Shan, a Beijing-native and Peking University graduate who works on the Partnership team at Facebook, and Balderton’s own Jodi Yang, Head of Investor Relations, who has extensive experience both operationally on the ground and managing cross-border capital raising processes with China.

Below I’d like to share a few summary takeaways from our wide-ranging conversation.

  • Entering the Chinese market is better done wholeheartedly, with substantial resources and sufficient time allocated to the effort, or not at all. To consider China just another market on the path towards global leadership is to grossly underestimate the scale of the undertaking. Even companies with tremendous resources (see Facebook, Google, Uber) can fail in their efforts.

  • Partnering with provinces and local governments and cutting deals at the regional level can be a more sensible approach. These provinces often have the size and population of large European countries. China has more than twice the population of the EU’s 28 member states, divided across 23 provinces (plus 11 municipalities, autonomous regions and special administrative regions). For example, Anhui province has roughly the same population as Italy, the Beijing and Shanghai municipalities each have more people than the Netherlands, and Guangdong, Henan and Shandong each have populations substantially larger than Germany’s.

  • Allocating 6 months to 2 years from start to finish is sensible in terms of a realistic timeline for getting a deal done with a local partner. It will also mean either regular trips or the establishment of a permanent office in Beijing, Shanghai, Shenzhen, Hangzhou or Hong Kong. Too often foreign companies have set up a great schedule of meetings on a first trip and not returned to follow up, and so getting boots on the ground (or hiring them via an organization like ADG) is often a prerequisite to kicking off deals in earnest.

  • In negotiations, remain transparent, firm and pragmatic; your negotiating partner will expect the same. Also, until the final deal is signed you can expect issues you thought were settled to still be in play, as they might be used to bargain or trade for the outstanding terms.

  • The playing field in China for foreign firms is not level. Foreign companies need to obtain a specific license to be able to sell and deploy cloud software for example.

  • While China has in the last five years started to leapfrog the West in the sophistication of its social media, e-commerce, telecommunications and mobile payments infrastructure, b2b and enterprise software has lagged behind on a relative basis. That is changing, and fast. Expect increased local competition and more headlines proclaiming massive new enterprise software companies born and bred in China.

  • Given the degree of competition and the scale of China’s 800M internet users, startups sometimes have to prioritize strategy above product, particularly when they trade in substitutable goods (like ridesharing or delivery services). Making sure you are employing appropriate go-to-market strategies is critical to success.

  • Some of the high growth areas where foreign firms can still offer differentiated services in China include outbound travel and world-leading healthcare, both of which the Chinese have insatiable demand for but which have only recently become accessible to most.

It is a charged time: thirty years to the week after Tiananmen Square, forty years after the launch of reform and opening. Trade tensions are rising steeply, and newspaper articles highlight how the Chinese and American governments are working to keep critical components away from each other’s military supply chains. This is a scary time.

It is in times like these that private citizens and businesses can continue to work together to increase mutual understanding, and to engage each other with openness and respect.

We hope the above has given you a small toolkit to do just that.

anticipations for 2019

Und nun wollen wir glauben an ein langes Jahr, das uns
gegeben ist, neu, unberührt, voll nie gewesener Dinge...

"And now we would like to believe in a long year, given to us new, untouched, full of things that never before were..." -- Rainer Maria Rilke in a letter to his wife, January 1, 1907

I thought I'd start the new year with a few technology related anticipations.

1/ People will continue to awaken to the idea that social media and our social lives should not be synonymous with massive companies that monetize our attention and interactions with one another via advertising. I strongly believe we have the technical capability and, increasingly, the consumer demand for social platforms that will allow us to communicate and share content with people that we care for without giving an effectively free license for that media and data to be compromised or sold and our privacy and attention jeopardized. Protocols like Scuttlebutt demonstrate how truly p2p social media might be designed, platforms like Steemit show how you might build endogenous content-monetization structures, and the increasing popularity of messengers like Signal or Telegram, browsers like Brave, and search engines like DuckDuckGo is encouraging.

Contextual Data: From Q3 2017 to Q3 2018, FB DAUs in the US & Canada didn’t grow (stayed flat at 185M). Europe DAUs grew only 1.5% yoy from 274M to 278M. However FB is growing quickly in Asia-Pacific and ‘Rest of World’. FB makes approximately $24 per user per year globally, and is doing approximately $52B in annual revenue at 40-45% operating margins. It also has 4 of the world’s top 5 most downloaded apps.

2/ Mobile-first consumer subscription software will continue to soar, although with more scrutiny on predatory practices and annual renewal rates, particularly for those apps that have favored annual subscriptions over monthly. A majority of the time these annual subscriptions represent one-off purchases (renewal rates under 50% are very common). Consumers will start to demand better tools for monitoring, organizing and managing services they have subscribed to.

Contextual Data: Sensor Tower estimates that global app store revenue in Q3 2018 was $18B, up 23% from the year prior, of which $12B came from the iOS App Store. Apple takes between 15-30% of subscription revenue and 30% of in-app purchases, so it is safe to say that if Sensor Tower is correct, Apple is making ±$3B per quarter via App Store revenue. For its part, Apple breaks out Total Services revenue, which has been growing 20-30% yoy and in the most recent quarter reached $10B.
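The back-of-the-envelope behind that ±$3B figure (my own rough estimate, not an Apple disclosure) is simply:

```python
# rough estimate of Apple's cut of iOS App Store gross revenue, Q3 2018
ios_gross_quarterly = 12e9  # Sensor Tower's iOS estimate for the quarter
take_rate = 0.25            # a blended guess between the 15% and 30% tiers

apple_cut = ios_gross_quarterly * take_rate
print(f"±${apple_cut / 1e9:.0f}B per quarter")  # ±$3B per quarter
```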

3/ Relations between the world's two largest superpowers will continue to deteriorate. On the American side, misperceptions and poor leadership will plague negotiations. Unpredictable incidents like the arrest of the Huawei CFO, which Trump reportedly was not aware of prior to the incident, will destabilize attempts at unwinding tension, and may provoke nationalistic fury from the Chinese directed towards America, which we have largely avoided until now. On the Chinese side, the pursuit of 6%+ GDP growth at almost any cost (despite the fact that net new labourers have turned negative) will keep them at the negotiating table, but they will be increasingly sensitive to any actions that may jeopardize their own legitimacy and may therefore respond unpredictably.

Contextual Data: China's economy grew at 6.5% yoy in Q3 2018, down from 6.7% in Q2 and 6.8% in Q1.

4/ There will be a deserved increase of concern over smartphone addiction, accompanied by an increase in smartphone usage. More links will be found between smartphone usage and anxiety, particularly among children and adolescents. This will help fuel a renewed push towards a clearer understanding of our own mental health and wellness, mindfulness, meditation, and the impact of psychedelics on consciousness and their capacity to treat mental health issues. The irony will be lost on people who will turn to smartphones to try and solve their smartphone addiction problems.

Contextual Data:

5/ As investor confidence and valuations continue to fall, several tech stocks will start to look cheap on a FCF yield basis. Network effects and monopoly power will continue to buoy profits, and with a split legislature and profit-driven President in power, legislative action to counter monopoly effects will not come to pass in 2019.

Contextual Data: In the twelve months ending Sep 30, 2018, Apple had ±$63.4B in free cash flow (based on my rough workings). At today's market cap of $675B, that represents a 9.4% FCF yield.

back to basics > operating leverage

i've liked thinking about businesses in terms operating leverage since i started looking at internet companies seven years ago. why? it's a great framework for both founders and investors to think about profitability, scalability, and the stage of maturation of a business. it's also just a neat concept.

operating leverage is the rate of change of operating profit with respect to revenue. (in calculus speak d Op Profit / d Revenue) it is bound by 1 on the low end and infinity on the high end.

operating leverage is not to be mistaken for financial leverage. financial leverage is usually understood as debt. borrowing allows firms and funds to generate higher returns on equity by increasing the total amount of resources they can marshal. operating leverage on the other hand is something that is more inherent to a given business model, and in particular its cost structure. let's start with the equation. there are several definitions, and i prefer the following:

operating leverage = (contribution margin) / (operating profit margin)

so a lemonade stand that sells 10k cups of lemonade at $4 each with a unit cost of $2 and total fixed costs of $5k (stands are expensive) has operating leverage of 1.33x

how do i get there?

10k cups of lemonade * $4 revenue per cup = $40k in revenue
$40k - (10k cups * $2 cost per cup) = $20k contribution profit
$20k contribution profit / $40k revenue = 50% contribution margin

$20k contribution profit - $5k fixed costs = $15k operating profit
$15k operating profit / $40k revenue = 37.5% operating profit

50% contribution margin / 37.5% operating profit = 1.33x

this means that for every 1x unit increase in revenue, operating profit increases by 1.33x

you can also think about operating leverage more simply,

operating leverage = (fixed costs) / (total costs)

a company with a high proportion of fixed costs has high operating leverage. put another way, a company with a low proportion of variable costs also has high operating leverage.


why is that?

the classic example given for a business with high operating leverage was Microsoft back in the 90s. the R&D costs (developer salaries) incurred by creating enterprise software like Microsoft Word was relatively high even before a single CD of Microsoft Office was sold. once the software had been written, the incremental cost of each additional CD sold was essentially the cost of a blank CD (and the sales & marketing spend in order to get it onto store shelves and into companies). so Microsoft started in a hole of fixed costs, and each incremental copy of Office they sold was essentially pure profit. if they sold enough copies they dug themselves out of the hole and generated substantial profits on top.

other businesses that exhibit high operating leverage include gaming publishers (like EA or Supercell), software-as-a-service companies, pharmaceutical companies, and media consumer subscription companies (as long as they own the content they're selling).

i find operating leverage most helpful when used to compare two businesses within the same sector, like two software businesses offering the same service but one with an API-driven go-to-market and another focused on on-premise installations.

leverage can cut both ways, businesses with high operating leverage might be at risk of not recovering their fixed costs if a particular product or service doesn't perform well. it also makes the financial performance more sensitive to expectations and volatility in revenue growth (because a business with high operating leverage will recoup less overall costs in a downturn than a similar business with low operating leverage)

a short-hand way of determining whether a company has high operating leverage is to look at its gross margin. because COGS are generally variable costs, businesses with high gross margins also usually have high operating leverage (unless sales & marketing costs are unusually high).

as technology investors, most, but not all companies we look at have high operating leverage. incremental copies of the same strings of code have almost zero variable costs. API-driven businesses could have even lower variable costs than Microsoft used to have.

that said, people's attention increasingly does come at a cost. customer acquisitions cost (CAC) is a variable cost, and businesses with high CACs don't have high operating leverage. it's often not that simple to work out at an early stage what future CACs might look like. that tends to be a reason why venture investors like consumer businesses with an element of virality or strong network effects, but those deserve their own post.

in early stage venture, a companies cost structure is often still being built out, and it can be hard to ascertain whether or not particular company will have operating leverage. however, i still think the fixed cost vs variable cost lens is a really helpful way of thinking about businesses.

how many bitcoins does a nasdaq cost?

OK. You'll have to forgive me. This one is really because I was just curious of what the chart looks like. Why? I'm not sure. It's certainly not a fundamental analysis of anything, and I would be wrong to say you should read well, anything, into it.

On the other hand, somehow this chart is an attempt to answer questions as profound as

--Is Bitcoin a fraud?
--Is Facebook evil?
--If Bitcoin is a fraud, and Facebook is evil, what do we do now?

And other less important questions like

--Where are we at in the installation > deployment continuum of technological revolutions with regards to cryptocurrencies? What about with regard to 21st century software companies?
--Have we started to see one technological paradigm start to replace another?
--Have we seen financial capital decouple from productive capital in the context of software (or crypto)? Are we in bubble territory?
--If we are in a bubble territory, which is a bigger bubble? Open-source digitally-native money that has no cash flows or software companies trading at >20x earnings?

OK enough of that, can't think too hard here. Let's pull and normalize (grr Bitcoins trade on weekends and NASDAQs don't) the data, dig into the numbers, and look at some charts.

First off, this is what NASDAQ has done since Sep 13, 2011 (I started there because it was the furthest back I could find good BTC daily close data for). Basically it's had a steady grind higher for the last 7+ years, essentially tripling in value from ±2,500 to ±7,500, with recent turbulence bringing it down to ±7,200 (from an all-time high of ±8,000)


Secondly, this is what Bitcoin has done since Sep 13, 2011. It was basically worth nothing for a long time, and then it was worth very little until the end of 2013 when it became worth quite a lot quite quickly. And then it became worth relatively little again. And then over the course of 2017 it grew to be worth what many people now consider to be an absurd amount (±$20,000), and today people think that ±30% of that absurd amount sounds like the right number.


Finally, we turn to the question of how many Bitcoins a NASDAQ costs:


OK this first one isn't super helpful. From this perspective it's clear that a NASDAQ at one point cost a lot of Bitcoins and then later it's price appears to have collapsed (in terms of Bitcoins). In Oct 2011 in fact it would have taken 1,177 Bitcoins to buy a NASDAQ. And today you can buy a whole NASDAQ for only 1.31 Bitcoins. From this perspective it is fair to say that the NASDAQ looks really cheap on a long-term horizon (in terms of Bitcoins).

What if we zoom in?


This next chart shows us how many Bitcoins a NASDAQ has cost since Jan 1, 2014 (after that period in 2013 after Bitcoins became worth quite a lot quite quickly). This is a more interesting chart with probably 4 distinct phases.

(i) In the first phase, from Jan 1, 2014 to Jan 14, 2015, NASDAQs became 5x more expensive (in terms of Bitcoins) going from 5 BTC (Bitcoins) to 25 Bitcoins.

(ii) In the second phase, from Jan 2015 to Sep 2015, NASDAQs were bouncing up and down around 20 Bitcoins.

(iii) In the third phase, from Sep 16, 2015 to Dec 17, 2017 NASDAQ went into a complete free fall. You might have missed this in the headlines. On September 16, 2015 a NASDAQ would have cost you 21.48 BTC. Just two years and three months later, that same NASDAQ would only have been worth 0.36 BTC (an all time low), losing 98.3% of its value in terms of Bitcoin in the process.

(iv) The fourth phase spans from Dec 17, 2017 through to today and may also be of interest. Folks will often try and tell you that 2018 has been an unusual post-GFC year in that asset prices across indices like NASDAQ have sagged (actually NASDAQ is up ±5% in USD terms through Nov 16, 2018). Crypto investors, however, might look at NASDAQ and see tremendous performance! NASDAQ is +153% YTD in Bitcoin terms!
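The phase arithmetic above reduces to simple ratios. A quick sketch checking the 98.3% drawdown, and the year-to-date comparison (the BTC decline figure is an illustrative assumption chosen to match the numbers above, not an exact close-to-close return):

```python
# Phase (iii): drawdown of NASDAQ priced in BTC
start, end = 21.48, 0.36  # BTC per NASDAQ, Sep 16 2015 vs Dec 17 2017
drawdown = 1 - end / start
print(f"{drawdown:.1%}")  # 98.3%

# Phase (iv): the YTD return in BTC terms is driven almost entirely by
# Bitcoin's USD decline, not by NASDAQ's move
nasdaq_usd_return = 0.05   # NASDAQ roughly +5% in USD through Nov 16, 2018
btc_usd_return = -0.585    # assumed BTC USD decline over the same span
btc_terms_return = (1 + nasdaq_usd_return) / (1 + btc_usd_return) - 1
print(f"{btc_terms_return:.0%}")  # ~+153%
```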

Here's a chart dedicated just to the fourth phase.


Likely nothing of value to take away here

how german policymakers are hurting berlin's startups

this post first appeared as an article on venturebeat on july 1, 2018. for more thoughts on employee equity, go to

You can hardly go a day without seeing an article heralding the prodigal rise of Berlin’s startup scene. It is true that Berlin has tremendous momentum and potential. There is an iconoclastic streak to the city, and it is attracting young makers with creative ambitions from all over the world. But there is something at the core of Berlin’s startups that is limiting their potential. German corporate structure and inefficient tax treatment are restricting young Berlin startups’ ability to effectively incentivize talent.

Growing tech companies need to source talent globally, and it’s an incredibly competitive market for high performers. People of this calibre, particularly those who have worked in the US or the UK, are used to being offered options as part of their compensation package. However, in Germany standard employee share option plans often don’t exist. Startups often implement VSOPs (virtual share option programs) instead of standard option plans because the administrative overhead and employee tax liabilities associated with traditional option plans are very high.

Potential competitive hires perceive VSOPs to be overly complex, less tangible, and fraught with risks that don’t occur with standard options. Obligations to employees in a VSOP are often structured as an employee benefit or cash liability, putting them lower in the capital structure than common shareholders. And employees who leave the company often have to forfeit their virtual share options.

Several founders of Berlin-based startups have told me they have lost potential hires due to the disadvantage of virtual share options. If a hire has one offer for real options and another for virtual ones, which do you think she is more likely to choose?

If VSOPs are so disadvantageous, why implement them? The truth is, Berlin’s startup founders don’t really have viable alternatives. There is no tax-advantaged employee option scheme in Germany. Employees first have to pay to exercise their options, then the difference between their strike price and the market price of their shares is taxed at their income tax rate. Finally, when they sell their shares they are taxed a further 28 percent. Real option plans come with other burdens. Companies often have to create an entirely new share class, and minority shareholders must be consulted on major corporate decisions.
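To make the tax drag concrete, here is a rough, illustrative calculation of an employee's take-home under the treatment described above. All the numbers (share prices, a 42% marginal income-tax rate) are hypothetical assumptions for the sake of the sketch, and the model is deliberately simplified; this is not tax advice:

```python
# Hypothetical example: income tax on the exercise spread, then the
# ~28% rate cited above on the gain at sale. Simplified, not tax advice.
strike = 1.00           # EUR per share, paid by the employee at exercise
fmv_at_exercise = 5.00  # fair market value per share at exercise
sale_price = 10.00      # price per share at eventual sale
shares = 10_000

income_tax_rate = 0.42     # assumed marginal income-tax rate
capital_gains_rate = 0.28  # the further 28 percent cited above

exercise_cost = strike * shares                        # paid out of pocket
spread = (fmv_at_exercise - strike) * shares           # taxed as income
income_tax = spread * income_tax_rate
gain_at_sale = (sale_price - fmv_at_exercise) * shares
capital_gains_tax = gain_at_sale * capital_gains_rate

net = sale_price * shares - exercise_cost - income_tax - capital_gains_tax
print(round(net, 2))  # ~59,200 net on a 100,000 gross sale
```

Under these assumptions the employee keeps roughly 59% of the gross sale proceeds, before even counting the cash they had to front to exercise.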

Berlin startups are losing in the battle for competitive talent due to the lack of a tax-advantaged employee option scheme. Elsewhere in Europe policymakers are more supportive and offer schemes that don’t penalize the use of share options to incentivize teams. In France, startups can use the BSPCE (Bons de Souscription de Parts de Créateur d’Entreprise), and in the UK, the EMI (Enterprise Management Incentive) scheme offers the friendliest employee option tax treatment on either side of the Atlantic.

Due to all of the above, founders and investors creating and investing in Berlin-based companies often choose to domicile their companies elsewhere, like the UK or the US, despite the fact that the company may be based and headquartered in Berlin. It’s high time German policymakers recognized the tremendous potential of their startup ecosystem and gave the iconoclasts the tools to build world-class teams that will help shape tomorrow’s world.

JOMO and smartphone intent destruction

i thought i'd flag a new piece of research done by the android UX research team (summarized in this blog post)

essentially the gist is that even google is coming around to the idea that smartphone / mobile addiction is a problem and they are studying the different approaches people are taking to get off their devices (what they're coining JOMO - the joy of missing out). i would even go further and suggest that smartphones are responsible for something far more problematic for productivity, both on a personal and a system level: intent destruction. we are all familiar with taking out our phones to accomplish a specific task, only to find ourselves scrolling through a self-esteem destroying feed minutes later.

why is this relevant for us in the tech community, in particular investors? well, i expect that as awareness and acceptance of this problem increases, apps will either be policed or begin to self-police. hopefully the yardsticks will change, and successful apps might start measuring their health not based on "user engagement", but on user satisfaction or some measure of the quality of their users' time spent and their contentment with it. (see excerpt from paper below on reconsidering success metrics)

it's also relevant for us because governments have already shown their willingness to come in with a heavy hand when they perceive the problem as being out of control. the chinese government's crackdown on teenage gaming addiction has contributed to a $200B loss to tencent's market cap since January.

i also think it's important we think about this as we invest in the next generation's winners. whether we feel it in ourselves, our families or our friends, i do feel like technology that distracts rather than enables may turn out to be short-term profitable but ultimately long-term problematic

some great product and measurement thoughts from the paper:

Reconsider Success Metrics

"We feel that the technology industry’s focus on engagement metrics is core to this attention crisis that users are facing. The more that businesses are incentivized to increase user engagement, as measured through frequency and duration of use, the more it feeds the competition for users’ attention. Hakansson and Sengers [12] described user attention as a commodity sold to advertisers and stressed the importance of seeing the user as a non-consumer. Engagement metrics alone do not account for user satisfaction [2]; even when users enjoy an app, they can experience frustration and guilt from inability to cease engagement [26]. It’s important to consider alternative metrics to indicate success, relating to user satisfaction and quality of time spent."

product market fit

Copyright Bill Watterson

a lot of people smarter than me have written a lot of intelligent things about product market fit. i still get asked the question by entrepreneurs of how i would define it so i thought i'd lay down a summary of my thoughts here.

creative, but largely unhelpful definition:
where the rubber meets the road

or, as Andy Rachleff would say:
when the dogs start eating the dog food

this is the simplest definition. the market is ingesting what you're serving.

external (the investors) definition:

a non-trivial group of customers or users are engaging with your product or service, and have proven that they are willing to trade something valuable for it, usually time or money (or both)

internal (the founders) definition:

early observations of your original value hypothesis being proven correct. a value hypothesis articulates what exactly your product or service is, who will use it, and most importantly why they will value it.

when you have early signs that your product or service is being used by people who truly value it, you've achieved product market fit.

this framework may seem a bit at odds with lean startup methodology.

  1. what if the product or service that customers are valuing is not core to our business?
  2. what if my core product or service is being valued by customers we didn't build it for?

shouldn't these be celebrated? shouldn't that count as product market fit? yes, eventually. but in the first instance you need to rally the company around the product or service that is taking off, and potentially rethink your long-term mission, your medium term strategy and your short-term tactics. in the second instance you should run with it, but also ask yourself why your targeted audience isn't ingesting the product or service and another one is, and potentially rejig the customer facing aspects of your business accordingly.

the above definitions may seem vague and imprecise. the truth is, it's hard to find a universal definition of product market fit that applies to every company. it might mean 4 enterprise customers signing sizeable contracts, it might mean $80k MRR, it might mean participants in a network starting to share meaningful content with one another, it might mean one developer deploying your framework for the first time in production...

perhaps it's clear, but of the above, i think the internal definition is most vital. if you're confident you've achieved product market fit but an investor or external party doesn't agree, go find one that does.