The Valuer's Dilemma: Understanding the Tesla Stock Valuation

This morning I watched a fascinating debate on the YouTube channel “Tesla Daily” (https://www.youtube.com/watch?v=C0Fl6JBQgrc) between Rob Maurer and David Trainer. Rob is a self-confessed Tesla bull, a columnist on The Street who hosts this excellent and influential YouTube channel (full disclosure – I am a subscriber and Patreon supporter of his). David is a respected, long-time Wall Street analyst, the founder and CEO of New Constructs, an independent financial research firm, and a self-confessed fan of the Tesla car and its founder Elon Musk. However, in contrast to Rob, David has a very sober view of the current stock price, considering it highly speculative. His firm has authored a note warning that Tesla is “the most dangerous” stock for investors with fiduciary responsibility.

What struck me as I listened to the debate was that both Rob and David appear to talk past what I consider to be the root cause of the current Tesla stock price, and its probable future value. Elon Musk is famous for insisting on looking at every problem from first principles – and even more famous for being so successful in the application of those principles. Let’s follow Elon’s dictum here to learn why Tesla’s stock has reached such stratospheric levels and evaluate whether it is likely to remain as high or even grow further.

In the matter of the Tesla valuation, I believe the first principle that governs is the Innovator’s Dilemma, which for brevity we will refer to as InDi. It underpins the entire value proposition represented by Tesla, not just in the gigantic global automotive market, but in transportation in general, and perhaps – ultimately even more importantly – in the global energy markets.

InDi is not well understood by analysts, and is seldom accorded significant value, which is perplexing as it is an extremely well understood process. It was formulated by Clayton M. Christensen while he was Professor of Business Administration at the Harvard Business School, and popularized by him in many writings. Christensen writes:

When disruptive technologies emerge, dominant, well-run companies often stumble. These companies tend to use the same sound business judgment that has guided them through previous changes, including:

  • Listening to what current customers want
  • Providing more and improved versions of what customers want
  • Investing in projects that promise the highest returns

However, in the face of disruptive innovations, these strategies don’t produce the same results. This is the innovator’s dilemma: The approaches that lead to success in adopting most innovations lead to failure when confronting disruptive innovations.

Elaborating on those key findings, Wikipedia adds the following woes to the incumbent’s situation (each of which is readily identifiable with the OEMs!):

  • Small markets struggle to impact an incumbent’s large market
  • Disruptive technologies have fluid futures, as in, it is impossible to know what they will disrupt once matured
  • Incumbent Organizations’ value is more than simply their workers, it includes their processes and core capabilities which drive their efforts
  • Technology supply may not equal market demand. The attributes that make disruptive technologies unattractive in established markets are often the ones that have the greatest value in emerging markets

On the other hand, consider the position of the challenger:

  • They develop the disruptive technology with the ‘right’ customers. Not necessarily their current customer set
  • They place the disruptive technology into an autonomous organization that can be rewarded with small wins and small customer sets
  • They fail early and often to find the correct disruptive technology
  • They allow the disruption organization to utilize all of the company’s resources when needed but are careful to make sure the processes and values were not those of the company

It is easy to recognize Tesla in these qualities. Finally, Wikipedia points out:

  • Disruption is a process, not a product or service, that occurs from the fringe to mainstream
  • Originate in low-end (less demanding customers) or new market (where none existed) footholds
  • New firms don’t catch on with mainstream customers until quality catches up with their standards
  • Success is not a requirement, and some businesses can be disruptive but fail
  • New firm’s business model differs significantly from incumbent

Success, in simple terms, Christensen says, is “correlated with a business model that is unattractive to its competitor”. This is brilliantly true in the case of Tesla vs. OEMs.

A telling example of the business model problem of incumbents becomes apparent in the YouTube debate when Rob Maurer points to Tesla’s direct sales model, which captures the profit margin that would otherwise go to the dealer. David Trainer argues that the dealership network of the OEMs is a strength, allowing the OEMs to “concentrate on their main business” and providing them with broad distribution. What eludes him is that this same dealer network becomes uneconomic in an EV world, as the service and spare parts business on an EV is not 10% of that of an ICE vehicle. In simple terms, the dealership can only be run at a loss, and Tesla’s online sales are a significant advantage. Early evidence is that OEMs have been unable to persuade their dealership networks to sell EVs, contributing to the woeful sales of the legacy OEMs’ electric vehicles. The dealership network is at the core of an OEM business model; it would be legally, culturally and financially impossible to voluntarily sever, yet with it, the OEM EV future is probably doomed.

Another example of the business model problem is the “deep supply chain” mentioned by Trainer as a significant advantage to OEMs. Unfortunately for them, this may be the most serious problem for the OEM business model. It is this supply chain that inhibits the development of a vehicle with an integrated battery/drivetrain/HVAC/computer system, a vitally necessary step to creating a competitive EV offering (and the reason for the failure of so many “Tesla killers” to date). The key intellectual property the OEMs retain is their internal combustion engine designs, possibly the only component – apart from the sheet metal – that they develop and manufacture in-house. Ironically this is the technology that is of least value – in fact no value – in this new market.

To compound the problem, the challenge is not just a matter of replacing an internal combustion engine with an electric motor and simply adding a battery in the stead of a gas tank. Instead it is a highly complex problem of redesigning the drivetrain and vehicle into a single, comprehensive whole. Depending on a supply-chain network to provide this does not permit the iterative design/development necessary to rapidly evolve successful solutions to this very difficult problem. With well-evolved, century-old technology, depending on supply chains for R&D of everything but the engine made sense; but the situation has changed, and dramatically so: disruption is now occurring. OEM development cycles traditionally stretch to years. Tesla iterates its designs from month to month.

Large OEMs are not given to iterative design/development. This is a longer discussion, and perhaps key in differentiating Tesla from the incumbents. It is sufficient to point to the continuing and growing technological leadership of the company’s vehicles over the incumbents. It is instructive that the industry has not yet been able to manufacture a car to compete with the Tesla Model S, first sold seven years ago.

One could cite many other startlingly clear examples of InDi in the Tesla versus all the others debate. This is also true of Tesla’s work in the energy markets, but I won’t do that here; it is all well documented, and Rob is probably more of an expert than I am in this field. Instead I want to return to the question raised at the outset: the problem of valuation. How does one value the stocks of InDi companies – stocks that David Trainer is sure to label, as he has with Tesla, the “most dangerous” of all?

The answer, of course, is “with great difficulty.” Aswath Damodaran, the “Dean of Valuation”, the NYU professor famous for valuation methodology, talks about his struggle – and consistent failure – to value Amazon, that well-known bookseller (http://aswathdamodaran.blogspot.com/2018/04/amazon-glimpses-of-shoeless-joe.html).

Oh yes, of course I know Amazon is not a bookseller. But in the early days we were told – by professors of valuation – that Amazon would have to sell all the books sold in the world to justify its valuation. Amazon taught us several important points. It proved the Christensen principles “Disruptive technologies have fluid futures” and “Disruption is a process, not a product or service, that occurs from the fringe to mainstream”, and probably a few more besides. More important still about the Amazon example is the admission Damodaran makes – in the article cited above – of the extreme importance of this fact:

“Bezos …telling his stockholders that if Amazon built it (revenues), they (the profits and cash flows) would come. In all my years looking at companies, I have never seen a CEO stay so true to a narrative and act as consistently as Amazon has, and it is, in my view, the biggest reason for its market success.”

And further:

“I have consistently under estimated not only the innovative genius of this company, but also its (and its investors’) patience.”

So here we arrive (finally) at my thesis. Like Jeff Bezos, Elon Musk is an innovative genius who has clearly defined his objectives and methods. He has created an innovative, rapid-learning machine to create products with enormous market appeal and success, in gigantic global markets, delivered by a highly productive business model. The firm has found its stride in producing and delivering in volume and is in the process of demonstrating its ability to scale. It has built and successfully brought to production, in record time (at several times the speed of the legacy OEMs), a massive factory in China. It is in the process of – simultaneously – building three gigantic factories across the world and is demonstrating a confident touch in those buildouts. And, to top it all off, Tesla is finding ways to drastically reduce costs of manufacture (https://electrek.co/2020/08/25/tesla-start-operations-worlds-largest-casting-machine/).

All good. Rob aced these points. Here is the miss: Tesla DOES NOT INTEND TO MAKE PROFITS in the foreseeable future. Tesla has said in many fora that it intends to minimize profits. Elon said in the last conference call that the objective was to show no more than 1% or so of profit. He, like Jeff Bezos in the quote above, understands that InDi means focusing on market share, not profitability. Tesla is focused on, and is on a clear track toward, dominance of the automobile market. And I confidently predict that Tesla’s shareholders, like Amazon’s before them (many of them, after all, are the same people and institutions), will be quite content with that.

Tesla already has a huge cost advantage over its competition. Critics – David Trainer amongst them – do not yet realize that Tesla will shortly no longer be a “premium” priced vehicle. Tesla is persistently driving down the price of its vehicles, and their cars are rapidly approaching price parity with Toyota, even while their automotive gross margins trend higher than the OEMs’. Tesla has demonstrated significantly better price/value against their competition.

Why? Because Wright’s Law (the cost of a unit decreases as a function of cumulative production – https://spectrum.ieee.org/tech-talk/at-work/test-and-measurement/wrights-law-edges-out-moores-law-in-predicting-technology-development) is on Tesla’s side. They have 1 million EVs behind them. No other OEM approaches them, and they will deliver half a million vehicles this year, 5 times more than the nearest competitor, over a million next year, and on and on. VW hope to reach 1.5 million vehicles by 2025. At that point, Tesla are targeting to have delivered over 8 million vehicles (and have plants already built or in building stages to enable them to deliver those numbers). Tesla’s production cost should be dramatically lower than that of other OEMs. (ARK Invest, a fund manager dedicated to disruptive innovation, brought Wright’s Law to my attention – https://ark-invest.com/wrights-law/)
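To make Wright’s Law concrete, here is a minimal sketch; the 15% learning rate, the first-unit cost, and the cumulative unit counts are illustrative assumptions of mine, not Tesla’s actual figures.

```python
import math

# Minimal sketch of Wright's Law: unit cost falls by a constant
# percentage with every doubling of cumulative production.
# Learning rate, first-unit cost and unit counts are illustrative only.

def wrights_law_cost(cumulative_units: float, first_unit_cost: float,
                     learning_rate: float = 0.15) -> float:
    """Cost of the nth unit, given a per-doubling cost decline."""
    b = -math.log2(1 - learning_rate)  # progress exponent
    return first_unit_cost * cumulative_units ** -b

# A maker with 1,000,000 cumulative EVs vs one with 100,000:
leader = wrights_law_cost(1_000_000, first_unit_cost=100_000)
laggard = wrights_law_cost(100_000, first_unit_cost=100_000)
print(f"laggard/leader unit cost ratio: {laggard / leader:.2f}")  # ~1.71
```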

Tesla could translate this advantage into profitability but – and this is the point, according to Elon and the company – Tesla won’t. Tesla will drive the cost of the car down inexorably, while at the same time dramatically increasing the efficiency of their capital expenditure (https://cleantechnica.com/2019/10/26/capital-efficiency-teslas-obsession/).

So, throw away your spreadsheets. All those CPAs and valuation specialists carefully compounding profitability and cash flow – it’s not going to happen. Prices are going to be driven down, and free cash is going to be aggressively invested into plant, charging infrastructure, service centers, AI chip development for autonomy, supercomputers for AI training, Powerwalls, Solar Roof, the Autobidder energy trading platform, and on and on. But no profits, and no free cash flow.

Rob and David both rely on the spreadsheet method: apply a predetermined formula to discount some combination of a firm’s profits and cash flows for a given (presumed) scenario. In this they are joined by the Dean of Valuation, Aswath Damodaran.
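For readers unfamiliar with the mechanics, this is roughly what the spreadsheet method reduces to; every input below is a placeholder, not anyone’s actual model.

```python
# Bare-bones sketch of the "spreadsheet method": project free cash
# flows, discount them back, and add a terminal value.
# Every input below is a placeholder, not anyone's actual forecast.

def dcf_value(fcf_by_year: list, discount_rate: float,
              terminal_growth: float) -> float:
    pv = sum(fcf / (1 + discount_rate) ** (t + 1)
             for t, fcf in enumerate(fcf_by_year))
    # Gordon-growth terminal value on the final projected year
    terminal = (fcf_by_year[-1] * (1 + terminal_growth)
                / (discount_rate - terminal_growth))
    return pv + terminal / (1 + discount_rate) ** len(fcf_by_year)

# Five years of projected FCF ($B), 8% discount rate, 2% terminal growth.
print(f"${dcf_value([1, 2, 4, 7, 10], 0.08, 0.02):,.1f}B")
```

The point of what follows is that every cell in such a model presumes profits and free cash flows that a deliberate disrupter declines to produce.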

But Damodaran should have learned from his Amazon experience. In the case of Amazon, he regrets selling his shares in 2012 and missing the huge run that stock has enjoyed since. But in January of this year he once again applied his spreadsheet formulas, this time to his investment in Tesla, resulting in him selling his stock at $640. He said:

 “The momentum is strong, and the mood is delirious, implying that Tesla’s stock price could continue to go up. That said, I am not tempted to stay longer, though, because I came to play the investing game, not the trading game, and gauging momentum is not a skill set that I possess. I will miss the excitement of having Tesla in my portfolio, but I have a feeling that this is more a separation than a permanent parting, and that at the right price, Tesla will return to my portfolio in the future.”

In this, I believe he is wrong. I don’t believe this is a “story” stock, nor is it a “momentum” stock. It will certainly fluctuate very widely over time, given the emotions of its supporters and detractors. But, now that Tesla has demonstrated its ability to execute, it is highly unlikely that it will return to the “right price” according to his formulations.

Tesla today, like Amazon in the early 2000s, is a stock that has proven its ability both to innovate at the extreme and to execute. It truly deserves the Innovator’s title. It is on a clear path to dominance – not just in the EV market, but in the auto market, and beyond that in the energy market. What is missing in Rob, David, and Aswath’s calculus is the formula that values an Innovator that can disrupt large, established markets. We must re-examine the arc of Amazon, Google, Microsoft (in the early Gates years), Apple (in the Jobs years) and the sparse number of firms that have truly demonstrated the characteristics of Innovators. These characteristics are not ephemeral – they take some years to evidence themselves – and they have clear markers. They do not occur frequently, certainly not as frequently as venture capital sponsors would have us believe, given the number and rate at which they birth their “Unicorns”. But for legitimate disruptors I believe the appropriate valuation is a function of their Total Addressable Market (TAM). In the case of Amazon it is a function of the retail and technology markets in the geographies in which they operate. In the case of Tesla, it is the global automotive and energy markets in which they operate.
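As a hedged illustration of the TAM framing (every market size, share and multiple below is my own assumption, chosen purely to show the shape of the calculation, not a forecast):

```python
# Illustrative TAM-based framing: value as a function of addressable
# markets and eventual share. All numbers are assumptions for
# illustration only, not forecasts.

tam_annual_usd_b = {"global_auto": 3_000, "global_energy": 2_000}
assumed_eventual_share = {"global_auto": 0.20, "global_energy": 0.05}
assumed_value_per_dollar_of_revenue = 2.0

implied_value_b = sum(
    tam_annual_usd_b[m] * assumed_eventual_share[m]
    * assumed_value_per_dollar_of_revenue
    for m in tam_annual_usd_b
)
print(f"implied value: ${implied_value_b:,.0f}B")  # sensitive to every input
```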

Just Not FEELing it

At a recent DecisionCAMP conference, in the “ask a vendor” free-for-all, I – once again – voiced my concerns about the inclusion of FEEL (Friendly Enough Expression Language) in the DMN (Decision Model and Notation) specification. This is an issue that I raised at the outset of my involvement with the working group, and continue to raise, as I do now.

To the extent that an expression language is a necessity – and of course it is – FEEL is not the solution. FEEL creates a barrier to entry by requiring the initiate to learn yet another language, when there is at our disposal a perfectly acceptable expression language, originated in Excel, now used by many spreadsheet clones, and clearly a global standard as it is used by hundreds of millions of practitioners. It just so happens that the most effective users of Excel are the key users of decision management tools – business analysts.

The resort to FEEL in many cases is an attempt to overcome a lack of capability in the modeling notation. Some practitioners even regard the modeling notation as just a way for business analysts to “sketch” the models, so that they, the practitioners, may then “perfect” the models in the FEEL language! In other cases, FEEL is used to implement procedural functions, negating the declarative intent of the model.

These examples of the use of FEEL defeat what I consider to be the principal reasons for DMN – to enable businesspeople to express AND MANAGE business logic. The abstraction of that logic out of language formalism into a visually descriptive but accurate and rigorous notation is key to achieving that goal.

I am not alone in this. Carlos Serrano-Morales of Sparkling Logic opined in the discussion that in practice the use of FEEL leads to models that defeat the purpose of interoperability. Called on to defend FEEL, Gary Hallmark of Oracle – an original member of the DMN working group and a principal advocate of FEEL – made the acute observation that while he believed there was still a place for organizations like OMG, it is in large measure the Open Source community that propagates modern-day “standards”, suggesting to me that Gary’s views on FEEL have evolved. The glacial rate of evolution of DMN, compared to the dynamic demands of the decision modeling community, leads to vendor workarounds, and constantly creates customized evolutions of DMN/FEEL to meet client needs.

Over the last year or so, I have become acutely aware of the Babel of different scripting, expression and programming languages in the marketplace. This is due to my work on the ALE (Automated Language Extraction) product we, Sapiens Decision, are introducing to market. ALE uses Machine Learning and other methods to extract business logic from programming languages – and potentially natural language – and render that logic into normalized decision models, to be managed in a decision management tool.

We believed that the hundreds of billions of lines of code in legacy COBOL solutions were obvious targets for ALE to extract the business logic from, and then manage as decision models in Sapiens Decision. It turns out that the real problem is the logic embedded in any and all computer languages, even in the most modern of systems!

One representative, but striking, use case I will cite is a super-regional insurance company that recently implemented Sapiens Decision. This company is five years into a strategic re-organization that involved the implementation of a new, enterprise-wide, Policy Administration System (PAS), a classic step in modernization of a legacy insurance company. The CIO recently made the statement to a user group forum at Sapiens Decision that had they implemented decision management at the outset of their journey, they would have “saved tens of millions of dollars.” The reasons for the savings are multi-fold, a contributing factor being the proprietary language used by the PAS in implementing customized business logic. This leads to a combination of issues:

  • The need for a group of specialized (read: highly paid) developers with knowledge of the unique language used by the PAS to perform customizations (of which there are – of necessity – a great many).
  • A classic combination of business analysts working with developers to implement the solution, leading to a lengthy (and costly) implementation and change process.
  • Lengthy – and expensive – processes to upgrade the PAS as it progresses through its life cycle of version following version.

By removing the custom logic (and even a significant portion of the native logic) from the PAS, and having it rendered in decision models, the PAS is made “lightweight”. The decisions are exposed to the PAS as an API. The PAS becomes capable of being upgraded in compressed cycles, and able to be supported by a significantly smaller developer team. Business analysts become more effective, able to author and test their requirements directly; even more importantly they can manage the full scope of the business logic, and isolate change opportunities and process improvements without the deep archeological dives into the code previously necessary.
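A sketch of the pattern: the PAS supplies facts to the externalized decision service and applies the result. The endpoint, payload shape and field names here are hypothetical, invented for illustration; they are not Sapiens Decision’s actual API.

```python
import json
import urllib.request

# Hypothetical sketch of a "lightweight" PAS calling an externalized
# decision service. URL, payload shape and field names are invented.

def evaluate_decision(decision_name: str, inputs: dict) -> dict:
    """POST the decision inputs; return the decision outputs."""
    req = urllib.request.Request(
        url=f"https://decisions.example.com/api/{decision_name}",
        data=json.dumps(inputs).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The PAS holds no business logic of its own; it just applies the answer.
decision = evaluate_decision("policy-renewal-eligibility",
                             {"policyAgeYears": 3, "claimsLastYear": 0})
```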

Given this great use case for decision management, the problem remains that the client has a very large body of existing code to be converted to decision models. Of course, we could manually extract the logic, but cost and time to value are dramatically collapsed using ALE.

This is one example of the use of ALE. To date we have faced client requirements to extract the business logic from an extraordinary variety of rule platforms and languages. The list (not exhaustive) includes COBOL, of course, but also Java, Python, and proprietary languages in several different enterprise systems, ETL tools, and business rule languages (an aside – yes, rule languages are about as bad as programming languages in terms of their proprietary nature, highly specialized technical staffs, the difficulty of traceability, and the entropy in the structure of rules in the rule repository over time.)

Clearly, the last thing the client seeks to do is re-embed any part of the logic, once it is ultimately freed of language dependency, into yet another language, most particularly one supported by a relative handful of practitioners.

If anything, in this second decade of decision management, I am even more passionate in the belief that a declarative, visual representation of even the most complex business logic is the holy grail. I would be the first to admit that our current tools are far from sufficient to achieve this objective. In the next post I will provide – in the spirit of Open Source – my thoughts on the problems to be tackled together with ideas for solutions.

Elon Time? Part 1

Elon Musk has developed a reputation for so-called “Elon Time”, projecting new products or services in seemingly impossible-to-deliver timescales. While good-humouredly accepting the charge, he responds, “I may be late, but I always deliver!”

Despite this reputation, Musk consistently projects the financial performance of Tesla with amazing accuracy; in fact, one may say that he has been fantastically prescient, significantly besting even the most farsighted of fellow CEOs.

The following slide, taken from a February 2014 Tesla deck introducing the “Gigafactory” for the first time, projected 500,000 vehicle sales in 2020, six years in the future.

https://www.tesla.com/sites/default/files/blog_attachments/gigafactory.pdf

At the time this slide was produced in February of 2014, Tesla had never produced even 7,000 cars in a quarter. They had but a single vehicle model, at a price point that could not conceivably reach volume annual sales, and they were faced with having to invent and develop a range of technologies to enable and justify the projected 500,000 vehicle sales per year in 2020, only six years hence. Amongst the many technologies that would have to be evolved to make that possible were batteries, at volumes and specifications that were not considered practical or economic at the time.


An illustration of the scope of the challenge is that Tesla were planning to build as many batteries in a single U.S.-based plant as the entire global cell production at that time, almost all of which was based in Asia (and all focused on delivering computer/phone batteries).

So, Tesla had to (1) persuade a leading, Asian-based battery supplier to the computer industry to (2) invest a very significant amount of money in an unprecedented manufacturing plant in the US, to (3) manufacture batteries based on a new design and evolving technology for (4) a newly minted car manufacturer that claimed it was going to 15x its sales, (5) with a car that was not yet designed, but would contain (6) not only the novel batteries, but a whole new drivetrain (not to mention computer system), (7) and the success of the car would depend on a greater than 30% reduction in cell cost!

What could possibly go wrong?

It is quite stunning that Tesla will hit, and probably best, these delivery goals set in early 2014, despite the broad range of unknowns on that date. (As we sit here at the end of Q3 2020, the best estimates for deliveries for the year 2020 are a shade over the 500,000 targeted in 2014!)

Given the pressure of constant, rapid innovation, Tesla’s guidance has been quite reliable, at least in its broad brushstrokes, and despite some understandable lapses.

The company has had to learn how to introduce products and production lines. In the past, they faltered in the introduction of both the Model X and the Model 3, and in managing global delivery logistics for the Model 3, all of which had repercussions on Tesla’s financial expectations. However, that past negative must be balanced against the most recent quarters: the extraordinary speed of the Shanghai factory buildout, the smoothness of the Model Y launch execution, the launch of the Model Y in Shanghai, and the apparent high speed of the buildout of Giga Berlin and Giga Texas.

Given the gigantic scope of their 2014 ambition, achieving it has been truly astonishing, giving the expression “Elon Time” a completely different complexion.

In Elon Time, Part Deux we will explore what the future holds for Tesla, per Elon.

Placing a Value on Tesla

This morning I watched a fascinating debate on the YouTube channel “Tesla Daily” between Rob Maurer and David Trainer. Rob is a self-confessed Tesla bull, a columnist on The Street who hosts this excellent and influential YouTube channel (full disclosure – I am a subscriber and Patreon supporter of his). David is a respected, long-time Wall Street analyst, the founder and CEO of New Constructs, an independent financial research firm, and a self-confessed fan of the Tesla car and its founder Elon Musk. However, in contrast to Rob, David has a very sober view of the current stock price, considering it highly speculative. His firm has authored a note warning that Tesla is “the most dangerous” stock for investors with fiduciary responsibility.

What struck me as I listened to the debate was that both Rob and David appear to talk past what I consider to be the root cause of the current Tesla stock price, and its probable future value. Elon Musk is famous for insisting on looking at every problem from first principles – and even more famous for being so successful in the application of those principles. Let’s follow Elon’s dictum here to learn why Tesla’s stock has reached such stratospheric levels and evaluate whether it is likely to remain as high or even grow further.

In the matter of the Tesla valuation, I believe the first principle that governs is the Innovator’s Dilemma, which for brevity we will refer to as InDi. It underpins the entire value proposition represented by Tesla, not just in the gigantic global automotive market, but in transportation in general, and perhaps – ultimately even more importantly – in the global energy markets.

InDi is not well understood by analysts, and is seldom accorded significant value, which is perplexing as it is an extremely well understood process. It was formulated by Clayton M. Christensen while he was Professor of Business Administration at Harvard Business School, and popularized by him in many writings. Christensen writes:

When disruptive technologies emerge, dominant, well-run companies often stumble. These companies tend to use the same sound business judgment that has guided them through previous changes, including:

  • Listening to what current customers want
  • Providing more and improved versions of what customers want
  • Investing in projects that promise the highest returns

However, in the face of disruptive innovations, these strategies don’t produce the same results. This is the innovator’s dilemma: The approaches that lead to success in adopting most innovations lead to failure when confronting disruptive innovations.

Elaborating on those key findings, Wikipedia adds the following woes to the incumbent’s situation (each of which is readily identifiable with the OEMs!):

  • Small markets struggle to impact an incumbent’s large market
  • Disruptive technologies have fluid futures, as in, it is impossible to know what they will disrupt once matured
  • Incumbent Organizations’ value is more than simply their workers, it includes their processes and core capabilities which drive their efforts
  • Technology supply may not equal market demand. The attributes that make disruptive technologies unattractive in established markets are often the ones that have the greatest value in emerging markets

On the other hand, consider the position of the challenger:

  • They develop the disruptive technology with the ‘right’ customers. Not necessarily their current customer set
  • They place the disruptive technology into an autonomous organization that can be rewarded with small wins and small customer sets
  • They fail early and often to find the correct disruptive technology
  • They allow the disruption organization to utilize all of the company’s resources when needed but are careful to make sure the processes and values were not those of the company

It is easy to recognize the role of Tesla in these qualities. Finally, Wikipedia points out:

  • Disruption is a process, not a product or service, that occurs from the fringe to mainstream
  • Originate in low-end (less demanding customers) or new market (where none existed) footholds
  • New firms don’t catch on with mainstream customers until quality catches up with their standards
  • Success is not a requirement, and some businesses can be disruptive but fail
  • New firm’s business model differs significantly from incumbent

Success, in simple terms, Christensen says, is “correlated with a business model that is unattractive to its competitor”. This is brilliantly true in the case of Tesla vs. OEMs.

Rob provides a telling example of this when he points to Tesla’s direct sales model, which captures the profit margin that would otherwise go to the dealer. David Trainer argues that the dealership network of the OEMs is a strength, allowing the OEMs to “concentrate on their main business” and providing them with broad distribution. What eludes David is that this same dealer network becomes uneconomic in an EV world, as the service and spare parts business on an EV is not 10% of that of an ICE vehicle. In simple terms, the dealership can only be run at a loss, and Tesla’s online sales are a significant advantage. Early evidence is that OEMs have been unable to persuade their dealership networks to sell EVs, contributing to the woeful sales of the legacy OEMs’ electric vehicles. The dealership network is at the core of an OEM business model; it would be legally, culturally and financially impossible to voluntarily sever, yet with it, the OEM EV future is probably doomed.

Another example of the business model problem is the “deep supply chain” mentioned by Trainer as a significant advantage to OEMs. Unfortunately for them, this may be the most serious problem for the OEM business model. It is this supply chain that inhibits the development of a vehicle with an integrated battery/drivetrain/HVAC/computer system, a vitally necessary step to creating a competitive EV offering (and the reason for the failure of so many “Tesla killers” to date). The key intellectual property the OEMs retain is their internal combustion engine designs, possibly the only component – apart from the sheet metal – that they develop and manufacture in-house. Ironically this is the technology that is of least value – in fact no value – in this new market.

To compound the problem, the challenge is not just a matter of replacing an internal combustion engine with an electric motor and simply adding a battery in the stead of a gas tank. Instead it is a highly complex problem of redesigning the drivetrain and vehicle into a single, comprehensive whole. Depending on a supply-chain network to provide this does not permit the iterative design/development necessary to rapidly evolve successful solutions to this very difficult problem. With well-evolved, century-old technology, depending on supply chains for R&D of everything but the engine made sense; but the situation has changed, and dramatically so: disruption is now occurring. OEM development cycles traditionally stretch to years. Tesla iterates its designs from month to month.

Large OEMs are not given to iterative design/development. This is a longer discussion, and perhaps key in differentiating Tesla from the incumbents. It is sufficient to point to the continuing and growing technological leadership of the company’s vehicles over the incumbents. It is instructive that the industry has not yet been able to manufacture a car to compete with the Tesla Model S, first sold seven years ago.

One could cite many other startlingly clear examples of InDi in the Tesla versus all the others debate. This is also true of Tesla’s work in the energy markets, but I won’t do that here; it is all well documented, and Rob is probably more of an expert than I am in this field. Instead I want to return to the question raised at the outset: the problem of valuation. How does one value the stocks of InDi companies – stocks that David Trainer is sure to label, as he has with Tesla, the “most dangerous” of all?

The answer, of course, is “with great difficulty.” Aswath Damodaran, the “Dean of Valuation”, the NYU professor famous for valuation methodology, talks about his struggle – and consistent failure – to value Amazon, that well-known bookseller (http://aswathdamodaran.blogspot.com/2018/04/amazon-glimpses-of-shoeless-joe.html).

Oh yes, of course I know Amazon is not a bookseller. But in the early days we were told – by professors of valuation – that Amazon would have to sell all the books sold in the world to justify its valuation. Amazon taught us several important points. It proved the Christensen principles “Disruptive technologies have fluid futures” and “Disruption is a process, not a product or service, that occurs from the fringe to mainstream”, and probably a few more besides. More important still about the Amazon example is the admission made by Damodaran, in the article cited above, of the extreme importance of this fact:

“Bezos …telling his stockholders that if Amazon built it (revenues), they (the profits and cash flows) would come. In all my years looking at companies, I have never seen a CEO stay so true to a narrative and act as consistently as Amazon has, and it is, in my view, the biggest reason for its market success.”

“I have consistently under estimated not only the innovative genius of this company, but also its (and its investors’) patience.”

So here we arrive (finally) at my thesis. Like Jeff Bezos, Elon Musk has clearly defined his objectives and methods. He wrote them down – they are available on the Tesla web site in articles entitled “Secret Master Plan” and “Master Plan (Part Deux)”, written 14 and 4 years ago respectively. Tesla has followed the strategic path set out with remarkable fidelity.

Tesla has created an innovative, rapid learning machine to create products with enormous market appeal and success, in gigantic global markets, and delivered using novel but efficient business models.

The cars started out expensive, appealing to a coterie of ecologically aware, well-off customers, but have moved significantly down the cost scale and dramatically widened the appeal of the product. In the past two years the firm has found its stride in producing and delivering in volume, is in the process of demonstrating its ability to scale, and has consistently driven down the price – and cost – of its mass production vehicles. It has built and successfully brought to production, in record time (at several times the speed of the legacy OEMs), a massive factory in China. It is in the process of – simultaneously – building three gigantic factories across the world and is demonstrating a confident touch in those buildouts. Tesla is also finding ways to drastically reduce costs of manufacture, and is evolving new and emerging products into related, but huge, markets (as prescribed by Christensen).

All good. Rob aced these points. Here is the miss: Tesla DOES NOT INTEND TO MAKE PROFITS in the foreseeable future. Tesla has said in many forums that it intends to minimize profits. It’s in the published Master Plan. Elon said in a recent conference call that the objective was to show no more than 1% or so of profit. He, like Jeff Bezos, understands that the disrupter is focused on market share, not profitability. Tesla intends dominance of the automobile market, and is on a clear track toward it. NOT THE EV SECTOR – THE ENTIRE AUTO MARKET. Calculating the EV sector as a percentage of the auto market, then calculating Tesla’s market share of it, is missing the point. Tesla intends to ensure that the entire auto market becomes an EV market. In this I have little doubt they will succeed, in a leadership role, and in less than ten years. ICE cars are the flip phones of 2030. Value that.

A Tale of Two Countries…Part Deux

All data downloaded from worldometers.info 8/15/2020 09:00 EST

On May 9 we wrote about the great divide between the “hot” cities/states and the rest of the country, remarking on the significant gap between the minority of states with high rates of COVID-19 infection and the majority with low rates (https://www.linkedin.com/pulse/tale-two-countries-larry-goldberg/).

Now that the infection rates have risen significantly in many states and localities, it is time to revisit those infection rates.

Nothing more clearly illustrates the disparity across the divide than the graph above. It shows deaths per day from the commencement of the pandemic in early March until this week, comparing, to scale, the course of the pandemic in New York State and California from the first onset in early/mid March to the current time.

We see the early explosion of infections in New York State (principally the New York City area) to a peak of 1,000 deaths per day, followed by a rapid decline to deaths in the single digits, less than a month after the onset of the first infections. In California, on the other hand, there was a gradual rise to double digits over a period of a month, which was maintained over three months of lockdown, followed by a modest rise to a peak of 200 deaths a day by early August after a degree of loosening of the lockdown. Since California is twice as populous as New York, the disparity between the states is huge. To date, New York has suffered a state-wide mortality rate of about 1,700 per million of population, versus California at less than 300 per million.

The disease appears to have run its course in New York, and there is effective community immunity (given continued public precautions); this is not the case in California, where infections, and deaths, continue. But on IHME projections, California’s mortality count is unlikely to double. Thus, at best, New York will end up with about 3 times the mortality rate of California.
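The arithmetic behind that “about 3 times”, made explicit (the figures are the ones quoted above, with the IHME-based assumption that California’s count at most doubles):

```python
# Per-million mortality comparison, using the figures quoted above.
ny_deaths_per_million = 1_700
ca_deaths_per_million_now = 300
ca_deaths_per_million_worst = ca_deaths_per_million_now * 2  # "unlikely to double"

ratio = ny_deaths_per_million / ca_deaths_per_million_worst
print(f"NY vs CA mortality ratio, at best: {ratio:.1f}x")  # ~2.8x, "about 3 times"
```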

All data downloaded from worldometers.info 8/15/2020 09:00 EST

The table above shows the disparity between four of the principal population centers on the US North-East corridor (New York, New Jersey, Connecticut and Massachusetts) and the rest of the USA. These four states alone, with about 8% of the US population, are enough to have a huge impact on the Infection Fatality Rate (IFR) of the entire USA. If we were to discount the impact of these states, the US would have an IFR similar to, or better than, those of the European countries, none of which evidenced levels of concentration remotely approaching those of the N/E Seaboard.

What caused this spike? In the absence of detailed evidence, we can only guess. It appears that the concentration of international airports in the corridor, feeding directly into heavily trafficked mass-transit systems connecting dense population centres, had a role to play. Couple these factors with a lack of early warning, of preparedness, and of awareness, and with the placing of COVID-19 overflow patients into elder care facilities, and all of this contributed heavily to the disaster.

Each week new ideas and theories about COVID-19, its science and its treatments emerge, not to mention the ongoing debate about the politics of the pandemic. We try to provide the emergent data, and allow people to form their own conclusions.

Weekly Graphs

Each week, with some exceptions, we update our graphs tracking the disease. We do so with no comment this week.

All data downloaded from worldometers.info 8/15/2020 09:00 EST USA

This graph summarizes, in 7-day moving average trend lines, the state of the pandemic in the USA. The secondary peak of infections is shown to have occurred on 7/26. This chart indicates a 26-day lag time between a rise in infections and a proportionate rise in deaths. Given this, we may see a peak in the mortality rate around 8/21, with rates in excess of 2,000 deaths per day.

All data downloaded from worldometers.info 8/15/2020 09:00 EST USA

The states that are emblematic of, and comprise the major component of, the late-stage bloom in the US are California, Texas and Florida. We follow them as they are a strong indicator of the likely course of this stage of the pandemic. As in the total US numbers, mortality rates reflect the rise in the disease rate from 26 days prior, even if the mortality rate has departed somewhat from this correlation in the most recent weeks. In the above graph we have shifted the infection rates back by 26 days to show this correlation more clearly. It is not clear that there is yet a peak in the infection rate for these states, unlike that which we see for the US as a whole. This is of concern, and we will be watching this statistic closely over the coming period.
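For readers who want to reproduce this kind of chart, here is a minimal sketch of the two transformations involved; it assumes a daily series with ‘new_cases’ and ‘new_deaths’ columns (the column names are mine, the 26-day lag is the one discussed above).

```python
import pandas as pd

# Sketch of the two transformations behind these charts: 7-day moving
# averages, and shifting the infection series so it lines up with deaths.
# Assumes a DataFrame indexed by date with 'new_cases' and 'new_deaths'.

def prepare(df: pd.DataFrame, lag_days: int = 26) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    out["deaths_7d"] = df["new_deaths"].rolling(7).mean()
    # Shift cases forward so that each date shows infections from
    # lag_days earlier, aligning the two curves for comparison.
    out["cases_7d_lagged"] = df["new_cases"].rolling(7).mean().shift(lag_days)
    return out
```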

Uploaded from https://www.cdc.gov/nchs/nvss/vsrr/covid19/excess_deaths.htm 08/15/2020 0900 EST USA

The rise in excess deaths in recent weeks reflects the second, but lower, peak of infections in the US.

Uploaded from https://coronavirus.jhu.edu/testing/testing-positivity 8/15/2020 0900 EST USA

The above graph indicates those states (shown in green) that are reporting positive results for 5% or less of all tests conducted over the prior 10 days. This is the WHO goal that indicates a sufficient level of testing to enable a re-opening. As of this week only 17 states have met this goal, while most have re-opened their economies to some degree or another. The testing situation continues to show no material improvement.

Downloaded from rt.live 8/2/2020 0900 EST USA

This graphic is based on analysis compiled by rt.live, and indicates states that have an Rt of less than 1.0 (green), and those above (red). Rt is a key measure of how fast the virus is growing: it is the average number of people who become infected by an infectious person. If Rt is above 1.0, the virus will spread quickly; when Rt is below 1.0, the virus will stop spreading. Rt for any given state should be considered against the total number of infections in that state: an Rt over 1 in a state with a minimal number of infections may be less serious than an Rt under 1 in a state with a large number of infections. Perhaps also of consequence are the uncertainty bars above and below the bubbles, which have expanded considerably in the last two weeks, indicating that there should be some concern about the accuracy of the numbers.
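To see why the Rt = 1.0 line is the threshold that matters, here is a toy projection; the starting count and number of generations are illustrative.

```python
# Toy illustration of the Rt = 1.0 threshold: each generation of
# infections is the previous one multiplied by Rt. Numbers illustrative.

def project(initial_cases: int, rt: float, generations: int) -> list:
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(round(cases[-1] * rt))
    return cases

print(project(1_000, rt=1.1, generations=10))  # grows to ~2,600
print(project(1_000, rt=0.9, generations=10))  # shrinks to ~350
```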

All data downloaded from worldometers.info 8/15/2020 0900 EST USA

We end with our compilation of US pandemic statistics versus a set of European states, to contrast the experience of the two continents. Unless there is a significant new wave of infections, it is clear that the mortality rate in the US is going to outpace that of Europe as a whole. However, absent Germany on the one side, or the four N/E corridor US states on the other, the two continents would be balanced. The experience of Germany needs much deeper examination. We will be delving into that in the coming weeks.

Weekly Statistical Survey of COVID-19 in the US

This week we provide a brief update to the statistics we have followed over the past five months.

All data downloaded from worldometers.info 8/2/2020 09:00 EST USA

This graph summarizes, in 7-day moving average trend lines, the state of the pandemic in the USA. The secondary peak of infections is shown to have occurred on 7/26. This chart indicates a 26-day lag time between a rise in infections and a proportionate rise in deaths. Given this, we may see a peak in the mortality rate around 8/21, with rates in excess of 2,000 deaths per day.

All data downloaded from worldometers.info 8/2/2020 09:00 EST USA

The states that are emblematic of, and comprise the major component of, the late-stage bloom in the US are California, Texas and Florida. We follow them as they are a strong indicator of the likely course of this stage of the pandemic. As in the total US numbers, mortality rates reflect the rise in the disease rate from 26 days prior, even if the mortality rate has departed somewhat from this correlation in the most recent weeks. In the above graph we have shifted the infection rates back by 26 days to show this correlation more clearly. It is not clear that there is yet a peak in the infection rate for these states, unlike that which we see for the US as a whole. This is of concern, and we will be watching this statistic closely over the coming period.

downloaded from https://www.cdc.gov/nchs/nvss/vsrr/covid19/excess_deaths.htm 8/2/2020 09:00 EST USA

The modest rise in excess deaths in recent weeks is beginning to reflect the second peak of infections in the US.

downloaded from https://coronavirus.jhu.edu/testing/testing-positivity 8/2/2020 09:00 EST USA

The above graph indicates those states (shown in green) that are reporting positive results for 5% or less of all tests conducted over the prior 10 days. This is the WHO goal that indicates a sufficient level of testing to enable a re-opening. As of this week only 17 states have met this goal, while most have re-opened their economies to some degree or another.

downloaded from rt.live 8/2/2020 09:00 EST USA

This graphic is based on analysis compiled by rt.live, and indicates states that have an Rt of less than 1.0 (green), and those above (red). Rt is a key measure of how fast the virus is growing: it is the average number of people who become infected by an infectious person. If Rt is above 1.0, the virus will spread quickly; when Rt is below 1.0, the virus will stop spreading. Rt for any given state should be considered against the total number of infections in that state: an Rt over 1 in a state with a minimal number of infections may be less serious than an Rt under 1 in a state with a large number of infections. Perhaps also of consequence are the uncertainty bars above and below the bubbles.

All data downloaded from worldometers.info 8/2/2020 09:00 EST USA

We end with our compilation of US pandemic statistics versus a set of European states, to contrast the experience of the two continents. Unless there is a significant new wave of infections, it is clear that the mortality rate in the US is going to outpace that of Europe as a whole. However, absent Germany, the balance would change. The experience of Germany needs much deeper examination. We will be delving into these numbers in the coming weeks.

Testing, testing, testing…

Readers of these posts know that we are extremely disappointed in how the whole testing story unfolded throughout the entire COVID-19 pandemic in the US.

On the 7th May we wrote in these columns:

“If there is any single major failure of policy and implementation of the science at the CDC, and at every level of government, it is in the area of testing.”

Things have not improved much since then. Below is the chart we showed yesterday, updated to the current date, which shows states with a positivity rate above 5% – the level that indicates that only those who seek medical attention are being tested. Those states may not be able to understand whether the disease is spreading, and whether opening is recommended.

Uploaded from https://coronavirus.jhu.edu/testing/testing-positivity on 7/27/2020 at 13:00 EDT USA

There are other, profound problems with our testing. It is too difficult, requiring PPE-clad, trained professionals to administer, which means it is also too expensive and too ponderous to be used either universally or frequently. And it is too slow, taking days, sometimes weeks, to return a result, by which time it is too late.

The Fix

We believe there is a real solution to the problem on the horizon. A shout out to Daniel Gerson, whose relentless research turned up some exciting developments that offer great promise, if their advocates can overcome the huge bureaucratic and regulatory hurdles endemic in our regulatory and healthcare systems today.

Daniel pointed us to a video on the MedCram YouTube channel (https://www.youtube.com/watch?v=h7Sv_pS8MgQ). Apart from highly recommending the channel, and in particular Dr. Seheult, the host of MedCram, Daniel was excited to have me learn about the work of Dr. Michael Mina of Harvard.

We highly recommend watching the video, but if you wish the Cliffs Notes, we will summarize the key takeaways below.

First, it is necessary to talk about Dr. Mina’s credentials, as he comes as an expert. He is an Assistant Professor of Epidemiology at Harvard T. H. Chan School of Public Health and a core member of the Center for Communicable Disease Dynamics (CCDD). He is additionally an Assistant Professor in Immunology and Infectious Diseases at HSPH and Associate Medical Director in Clinical Microbiology (molecular diagnostics) in the Department of Pathology at Brigham and Women’s Hospital, Harvard Medical School. His professional background and published work may be found here: https://ccdd.hsph.harvard.edu/people/michael-mina/

The summary of the video: Dr. Mina shows that the COVID-19 tests currently being conducted are – apart from being too costly and taking too long – too accurate!

Yes, you read that correctly: too accurate. His thesis is simple: the accuracy makes the test highly susceptible to finding positive results from people who are no longer infectious, and to make the test that sensitive, we have sacrificed speed, ease of use, and economy. The video provides clarity by going into great detail on this issue.

Dr. Mina explains that we could produce, today, an extremely inexpensive paper-strip/saliva test that could be self-administered, comfortably, at home, and that would immediately – in a matter of minutes – indicate whether a person is infected with COVID-19 and is infectious, with an appropriate level of accuracy. In scientific terms it means printing monoclonal antibodies onto paper strips. The cost? At most, he says, a couple of dollars a strip.

Think about it. If every household in the US had a set of strips to use daily, then each and every one of us would be able to test ourselves, and our children, every day.

Each of us could daily determine whether we were clear of disease, and therefore free to attend school, work and recreation without fear of infecting anyone.

Sound too good to be true?

The Science and the Politics of Testing

Without going into the science in excruciating detail, we can say that we have done a great deal of due diligence, and to our understanding, Dr. Mina’s findings are solid. We have the means to deliver at scale the tests he mentions, and these tests are capable of being used by a lay person, at home, in about 10 minutes or so, with no discomfort or difficulty. In the simplest possible terms, all that is required is spitting on a paper strip.

In short, it is a reliable test that determines whether a person is infected with COVID-19, and is thus infectious.

The test is not as sensitive as the current clinical tests. However, the sensitivity and accuracy of the clinical tests are not an advantage – and in some respects may be a disadvantage. The reason is well explained in the video, but essentially the virus, below certain levels of viral load, is not communicable. So the highly sensitive tests that we give could well quarantine people who are no longer infectious.
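A toy model of the argument; the viral-load thresholds below are invented for illustration, and real viral-load dynamics are far more complex.

```python
# Toy model of the sensitivity argument: a highly sensitive clinical
# test flags anyone above a very low viral load, including people who
# are past the infectious stage; a cheaper strip test flags only loads
# high enough to transmit. All thresholds are invented for illustration.

INFECTIOUS_LOAD = 1e5   # assumed load above which a person can transmit
CLINICAL_DETECT = 1e2   # assumed clinical (PCR) detection threshold
STRIP_DETECT = 1e5      # assumed paper-strip detection threshold

def test_results(viral_load: float) -> dict:
    return {
        "clinical_positive": viral_load >= CLINICAL_DETECT,
        "strip_positive": viral_load >= STRIP_DETECT,
        "actually_infectious": viral_load >= INFECTIOUS_LOAD,
    }

# A recovering patient: clinically positive, but no longer infectious.
print(test_results(viral_load=1e3))
```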

But this is the rub: our bureaucrats are fixated on sensitivity and accuracy, and will have a hard time accepting and investing in a test that they consider less than perfect. So in this sad case, the perfect may drive out the good, when ironically the good is better than the perfect.

It’s Going Backwards…

Data uploaded from www.worldometers.info on 7/26/2020 at 11:00am EST USA

On Saturday, July 26, the US reported 908 deaths. Saturday and Sunday are generally light reporting days because many counties (all US death reporting is done at the county level) take the weekend off. Nonetheless, this level of mortality on a Saturday has not been seen in the US since the last weekend in May, two months previously, and is consistent with the continuous rise in the mortality rate since the beginning of July. The 7-day moving average of the mortality rate has climbed to the 900 per day mark from the early July low of 500, and it may be only a short time before we revisit the 1,000 per day point. Until we develop a vaccine – and it is fairly certain one will not be widely available before 2021 – our ability to continue opening up our economy depends upon three key factors: (1) state policies, (2) testing rates and (3) public behavior. These three factors are bound up, one with the other, and need to be considered together.

State policies

The question is whether the act of relaxing shelter-at-home policies, and permitting the re-opening of businesses, contributed to the ultimate rise in the mortality rate. Most states began the relaxation towards the end of April through the beginning of May. The mortality rate began to rise again in early to mid-July, a gap of about five to six weeks. This period is beyond the incubation period of the disease, but we know that the infection rates in most states have hovered close to the fine borderline between no growth in infections and very rapid growth. There has been a steady increase in Rt (the measure of the rate of spread of the virus) in most states, and this increase is what is being translated into ever higher infections, and ultimately deaths. The next charts, from rt.live, trace the Rt over the last 3 months across all 50 states, in a graphic display of how the infection rate has crept up across the US. As a reminder to new readers of this column, the Rt rate is an indicator of the likelihood the disease will expand its spread: below Rt=1, the disease will cease to spread; above it, it will spread, and the higher the rate, the faster the spread. States with low rates of infection are better off than those with higher rates of infection, even if they have a higher Rt. But what we are indicating here is the trend. The diagrams below show, for all 50 states, which states (in green) are below the Rt=1.0 line.

Rt Rate for all 50 States per Rt.live 3 months prior (4/25/2020) to current date (7/25/2020)
Rt Rate for all 50 States per Rt.live 2 months prior (5/25/2020) to current date (7/25/2020)
Rt Rate for all 50 States per Rt.live 1 month prior (6/25/2020) to current date (7/25/2020)
Rt Rate for all 50 States per Rt.live current date (7/25/2020)

The trend we can glean from this progression is that the Rt rate has crept up for most states since the relaxation, indicating that the relaxation has had a direct impact on the Rt rate. Under our constitution, each state has a specific responsibility to manage its public health, and the opening of the economy should be done in consideration of testing policies and influence over public behavior. Failure to manage all three factors in concert will result in a rise in infections, followed by a rise in mortality to unsustainable levels.
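
To make the Rt = 1.0 threshold concrete, here is a toy classification in Python; the state names and values are hypothetical, and this is not rt.live’s estimation method:

```python
# Below Rt = 1.0 each case seeds fewer than one new case and the outbreak
# shrinks; above 1.0 it grows. All values below are made up.
rt_estimates = {"State A": 0.92, "State B": 1.07, "State C": 1.31}

for state, rt in sorted(rt_estimates.items(), key=lambda kv: kv[1]):
    trend = "shrinking" if rt < 1.0 else "growing"
    print(f"{state}: Rt = {rt:.2f} -> epidemic {trend}")
```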

Testing rates

Since this column began we have deplored the poor execution of testing: at the outset it was a catastrophic failure of the CDC, to a degree that is yet to be generally understood by the media or the public. This is a failure of public policy that needs to be fully examined once the pandemic is behind us, as it is probably more directly responsible for the situation we find ourselves in than any other single factor. For the moment we have to look to the future, and inexplicably we find that we are still not being well served in this critical area of testing.

At this point in the pandemic we should be testing far in excess of the numbers of those infected, so that we have a full measure of the state of the pandemic in a given area – be it a city block, a town, or a rural area. However, we are still a long, long way from there. We know from the mortality rate that infections are at least 100 times the number of deaths, so we know that there are over 100,000 infections a day; yet we are only detecting about 65% of that, or about 65,000 per day. This means that there are some 35,000 infections per day escaping our attention. Unless we are testing enough people to capture a full picture of those infected, we will not be able to contain the pandemic. Below we set out a state-by-state measure, showing those states that are testing only those already infected – that is, those who have symptoms – resulting in this sorry situation. The graph indicates which states (the green bars) are achieving 5% or fewer positive results from their testing, indicating that they are testing a wider group, to a level considered sufficient to trace infection sources.
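
The arithmetic behind those figures is simple enough to set out explicitly; the sketch below uses our working assumption that infections run at roughly 100 times deaths, with round numbers throughout:

```python
# Back-of-envelope undercount arithmetic from the paragraph above.
daily_deaths = 1_000          # approximate current daily deaths (assumed)
infections_per_death = 100    # "infections are at least 100 times deaths"
reported_cases = 65_000       # approximate daily reported cases

estimated_infections = daily_deaths * infections_per_death   # ~100,000/day
missed = estimated_infections - reported_cases               # ~35,000/day
print(f"missed: {missed:,}/day "
      f"({reported_cases / estimated_infections:.0%} captured)")

# The 5% positivity rule of thumb behind the graph below: positivity above
# this level suggests a state is testing mostly symptomatic people.
def testing_widely_enough(positives: int, tests: int) -> bool:
    return positives / tests <= 0.05
```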

Failure of states to implement widespread testing regimes may render all their other policies pointless. Before considering the further opening of the economy, states would be well advised to examine – and re-examine – their testing capabilities.

Positivity Testing – downloaded from https://coronavirus.jhu.edu/testing/testing-positivity 7/26/2020

Public behavior

The third factor that will ultimately determine success in overcoming the pandemic, in the absence of an effective vaccine, is public behavior. The willingness to conform to social distancing and to wear masks enabled Germany to hold its mortality to less than 20% of US levels; in addition, Germany appears to have dodged any “second wave” to date.

The politicization of conforming behavior has been deeply regrettable, with severe negative impacts.

Almost every social sphere – bars (in particular), restaurants, hotels, public gatherings – is full of examples of behavior that flouts the guidelines.

The lack of leadership at the national level, and the inability of the States to take the initiative in influencing public behavior, have led to the current situation, which will continue to deteriorate if decisive action is not taken. Re-institution of lockdowns and closing of newly reopened businesses and activities will inevitably follow when mortality rates rise to politically unsustainable levels, and/or medical facilities become overwhelmed, as they did in New York and several other cities in late April.

We are not calling for a level of conformance and discipline such as that which was enforced upon the Chinese people after Wuhan, although that proved to be very effective. Rather, we are calling for leadership that guides and inspires people toward a responsible public commitment. The US has risen to greater challenges in the past; there is no reason not to do so now.

Vaccines and treatments

There are a number of treatments in late stages of development and approval for COVID-19 that may ameliorate either the symptoms or the death rates. These will become progressively more widely available in the forthcoming months, and ultimately will bring some relief to those who contract COVID-19.

The magic bullet will be an effective vaccine. We have little doubt that one or more will emerge from trials presently underway, and will then become available – if not by the end of this year, then certainly early in the next. It will take some time to manufacture, distribute and administer to the population as a whole before life returns to a semblance of normalcy.

Until that day, we should all be playing a role in dealing with this pandemic.

Errors in the COVID-19 Data?

The data remains as puzzling as ever, and in our view no one has yet been able to interpret them with any accuracy. Today we are going to review emerging aspects of data that raise questions.

We start with Figure 1, our own tracking graph, which seeks to find the correlation – if any – between testing, reported infections, and mortality.

Figure 1: Mortality vs infection vs testing
Data downloaded from www.Worldometers.info at 12:00 EST USA on 7/18/2020

Last week we noted that the 7-day moving average of the mortality rate showed a distinct upward trend, mirroring the sudden upward turn in the reported infection rate from 26 days earlier. We noted that this could indicate we had a renewal of the crisis looming, given the huge rise in the infection rate since that time.

However, the growth rate in the mortality data declined this week, which indicated at least a degree of detachment from the reported infection rate from 26 days prior.

Readers of this column know that we hold the reported infection rate to be unreliable, reflecting only a portion of those infected. Based on the mortality rate and the known IFR (Infection Fatality Rate) of the disease, we have high confidence that total infections in the US exceed 14 million, compared to a reported 3.8 million. The reported number reflects a complex mix of increased testing and an apparent genuine increase in infections.
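
A rough sketch of that back-calculation follows; the ~1% IFR and the cumulative death count are illustrative assumptions on our part, since the paragraph above cites only the resulting totals:

```python
cumulative_deaths = 140_000      # approx. US COVID-19 deaths, mid-July 2020 (assumed)
assumed_ifr = 0.01               # Infection Fatality Rate of roughly 1% (assumed)
reported_infections = 3_800_000  # reported cases as cited above

estimated_total = cumulative_deaths / assumed_ifr    # 14,000,000 infections
undercount = estimated_total / reported_infections   # ~3.7x
print(f"{estimated_total:,.0f} estimated vs {reported_infections:,} reported "
      f"(~{undercount:.1f}x undercount)")
```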

However, when it comes to mortality in general across the US, we have a paradox in the data, as yet unreported in the media.

Figure 2: Excess deaths in the US from all causes, highlighting COVID-19 deaths
Downloaded from https://www.cdc.gov/nchs/nvss/vsrr/covid19/excess_deaths.htm 7/18/2020 at 12:00 EST USA
CDC Data updated to June 15, 2020

The graph in the figure above is maintained by the CDC, and reports excess deaths from COVID-19 and other reported sources. It has been a source of data that we have used in these columns, and has – until recently – reflected the mortality rates being reported on COVID-19.

NOTE: the CDC warns that the data in most recent weeks may not be complete, and that while it makes statistical adjustments to correct for this, the data for those weeks may not be reliable. We have found over the last several months that we may rely upon data up to two weeks prior to the reporting date, and we have marked that point – June 20th, 2020 – on this and the following graphs with red arrows.

It can be seen, however, that the “Excess Mortality” – that is the number of deaths higher than the historic average (adjusted for population) – has fallen dramatically from the peak in late April. Mortality is now (or was, on June 20th) at the average expectation level for a “normal” year.
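
For clarity, the definition reduces to a single subtraction; the weekly figures below are invented for illustration:

```python
# Excess mortality as defined above: observed deaths in a week minus the
# expected (historic-average, population-adjusted) baseline for that week.
observed_weekly_deaths = 61_000   # invented figure
expected_weekly_deaths = 58_500   # invented baseline

excess = observed_weekly_deaths - expected_weekly_deaths
print(f"excess deaths this week: {excess:,}")   # 2,500
```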

The story is continued in the following CDC graphs.

Figure 3: Weekly counts of deaths due to select causes
Downloaded from https://www.cdc.gov/nchs/nvss/vsrr/covid19/excess_deaths.htm 7/18/2020 at 12:00 EST USA
CDC Data updated to June 15, 2020
Figure 4: Weekly counts of deaths by age group
Downloaded from https://www.cdc.gov/nchs/nvss/vsrr/covid19/excess_deaths.htm 7/18/2020 at 12:00 EST USA. CDC Data updated to June 15, 2020

These data indicate that the causes of death closely associated with COVID-19 have fallen off dramatically in recent weeks, to the point that deaths are at expected levels. The only exception is deaths due to Alzheimer’s disease, which show a mild rise. That rise has not impacted the total weekly count of deaths in the 85-and-older group, which remains at normal expected rates in recent weeks.

These declines – in aggregate, in absolute and in relative terms – indicate a far more dramatic drop in the mortality rate than the daily figures attributed to COVID-19 would suggest. Furthermore, the data also indicate that the deaths reported prior to the late-April peak included a significant number that were so-called “pulled forward” – that is, patients with comorbidities who succumbed to the disease, but who may have died in any event within a few months. This is implied by the decline to below-average mortality in recent weeks.

What about the western and southern states, which are reporting a very significant rise in infections and deaths? Figure 5 is our weekly tracking graph of infections versus deaths in California, Texas and Florida. We aggregate those states as convenient proxies for the southern and western states; their populations are so large that their numbers are decisive for any statistical analysis.

Figure 5: California, Texas & Florida: mortality vs infection
Data downloaded from www.Worldometers.info at 12:00 EST USA on 7/18/2020

In this graph there is a very strong correlation between the infection rate – pulled forward by 26 days – and the mortality rate, and the trend is indeed alarming: it indicates more than a doubling within a week, with no apparent end in sight. However, when we investigated the CDC Excess Death data for these three states, the pattern for all three remains similar to that of the U.S. shown in Figures 2-4: in the weeks leading up to end-April the mortality rate spiked to much higher than normal levels in the higher age groups and for the causes of death associated with COVID-19, and then subsided to below-normal levels by June 20th. The average number of deaths in all three states shown in the CDC graphs at June 20 is at normal levels.

Conclusion

The actual mortality rate from COVID-19 remains elusive, but not as much as the infection rate. Excess Death rates were cited early in the pandemic as an indicator of under-reporting. It now seems that the opposite may be true.

…and now it gets serious

There is no further mystery in the data: the relationship between the rate of infection and subsequent mortality from COVID-19 in the U.S. this late in the pandemic has become clear. The news is not good.

All data downloaded from www.worldometers.info 7/11/2020 at 22:00 EST USA

The spike in the death rate has arrived…and it is bad

From Tuesday of this week (July 7th), and every day since, the death rate has spiked to levels not seen in a month. For the first time since late April, the 7-day trailing average did not decline, but increased, and at a very significant rate. As can be seen in the graph above, the upturn in the moving average appears to mirror the increase in the rate of infection which occurred approximately 26 days prior.

This delay has puzzled us, as it does not reflect the general scientific consensus that death follows about 14 days after the onset of symptoms. There may be several explanations for this apparent anomaly, but the stark fact is that the data point to an alarming rate of increase in mortality. These rates could exceed the peaks we experienced in April, as we still do not know when the rise in the infection rate will peak.

Given this, the data also point to a likely increase in mortality to rates that significantly exceed those experienced in early to late April. This would put the US on a path towards a very serious situation. Because it is still early – the spike has been with us less than a week – it is unclear what the rate of increase will be. The initial numbers appear not to be linear in relation to the infection rate – that is, the slope of the curve may not be as steep – but only time will tell.
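
For the curious, the lag analysis behind our tracking graphs amounts to the following sketch, assuming infections and deaths are daily pandas Series built from the Worldometers data:

```python
import pandas as pd

def lagged_correlation(infections: pd.Series, deaths: pd.Series,
                       lag_days: int = 26) -> float:
    """Correlate daily deaths with infections reported lag_days earlier."""
    shifted = infections.shift(lag_days)   # infections lead deaths by lag_days
    return shifted.corr(deaths)            # Pearson correlation on the overlap

# One could also scan a range of lags to see where the correlation peaks:
# best_lag = max(range(10, 40),
#                key=lambda d: lagged_correlation(infections, deaths, d))
```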

Infection Rates

The infection rate, which appeared to be ameliorating last week, spiked again this week, exceeding 70,000 on Friday, July 10th – a single-day record – and driving toward the 100,000-per-day mark about which Dr. Fauci expressed concern.

On the All-In Podcast (Chamath Palihapitiya, David Sacks, Jason Calacanis and David Friedberg – four outstanding startup wizards – catch them at https://www.youtube.com/channel/UCESLZhusAkFfsNsApnjF_Cg/feed) this week, the suggestion arose that the US is headed for a Sweden-like experience with COVID-19, as an accidental outcome of the decentralized way we have dealt with the threat. The notion is that we will eventually arrive at a form of community immunity, which is what Sweden appears to have reached – though in Sweden’s case this was the objective all along.

Unhappily, though, it is apparent that this may happen in the US with a higher mortality rate than Sweden’s, which stands at about 547 deaths per million of population to date; the US stands at 415 deaths per million. It is possible, at current rates of growth of infections, and if the mortality rate continues to grow in sympathy (even if not in direct linear proportion), that the US number could double, or more.
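
In per-capita terms the comparison is straightforward; we assume a US population of roughly 330 million for the illustration:

```python
sweden_deaths_per_million = 547   # as cited above
us_deaths_per_million = 415       # as cited above
us_population_millions = 330      # assumed for the illustration

us_total_deaths = us_deaths_per_million * us_population_millions   # ~137,000
if_doubled = us_deaths_per_million * 2                             # 830/million
print(us_total_deaths, if_doubled > sweden_deaths_per_million)     # 136950 True
```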

We will be drilling deeper into the mortality rate in the forthcoming week, again exploring the CDC “Excess Deaths” numbers (which are still not showing any signs of increase – but those, too, are a lagging indicator). We will also be exploring the impact on the most vulnerable states. We close this briefer-than-usual report with a graph showing the mortality trendlines in Florida, Texas and California, which bears out the statistics from the rest of the country. In this graph we have shifted the infections forward by 26 days, showing the correlation between infection and mortality rates at that precise delay.

All data downloaded from www.worldometers.info on 7/11/2020 at 22:00 EST USA

Stay safe, every one! More next week.