This page provides a brief description of two webinars in which we discuss COVID-19, and various lessons on probabilistic modeling. In addition, links to relevant modeling topics, example models and outside articles are included.
Webinar 1: COVID is not a black swan, so why did it still catch so many off guard?
Abstract: Infectious disease specialists have long warned us about the pandemic potential of a novel respiratory virus jumping from wildlife to humans, possibly via wet markets. Why, then, was the international community taken by surprise when this finally occurred, even though business leaders such as Bill Gates had also raised the alarm about this pandemic potential and how to prepare for it? Epidemiologists use a wide variety of models to analyze and forecast the spread of diseases such as COVID-19. By now you have certainly heard modeling lingo such as R0 and social distancing, and it's likely you have also read about SIR, SEIR, agent-based, network, and statistical models. The epidemic modeling literature is rich, and there are hundreds of well-documented and thoroughly tested epidemic modeling apps in the public domain. Why is there not just one "overall model" that can help our leadership make informed decisions? In this webinar we will review different epidemic models and their applications to managing epidemics such as COVID-19, with an eye on the importance of choosing the right type of model and acknowledging parameter and model uncertainty. We will then extend this reasoning to non-COVID situations, and will use real-life case studies to discuss modeling aspects such as:
- Fit for purpose: importance of selecting the right technique and type of analytical model for the right problem.
- Acknowledging the limitations of models, including uncertainty: we should help decision-makers understand the strengths and limitations of models, and embrace uncertainty as a reality of any prediction, one that should be quantified rather than ignored.
- Mechanistic vs. statistical modeling: this important distinction, previously of concern mainly to those of us working in prescriptive analytics, has become a matter of great political debate, as demonstrated by the diverging COVID-19 mortality predictions from groups such as the University of Washington's Institute for Health Metrics and Evaluation (using statistical models) and Imperial College London (using mechanistic models). We will use the COVID-19 example to discuss the role of each modeling paradigm, and how this relates to effective prescriptive modeling in business.
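As a concrete illustration of the mechanistic paradigm, the classic SIR model tracks three compartments (Susceptible, Infectious, Recovered) with two parameters: a transmission rate beta and a recovery rate gamma, where R0 = beta / gamma. Below is a minimal sketch in Python using simple Euler integration; the parameter values are purely illustrative and are not calibrated to COVID-19.

```python
# Minimal SIR compartment model, integrated with a simple Euler scheme.
# Parameters are illustrative only, not calibrated to any real epidemic.

def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Return daily (S, I, R) fractions of the population."""
    s, i, r = s0, i0, r0
    daily = [(s, i, r)]
    steps_per_day = int(round(1 / dt))
    for _ in range(days):
        for _ in range(steps_per_day):
            new_infections = beta * s * i * dt   # S -> I flow
            new_recoveries = gamma * i * dt      # I -> R flow
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
        daily.append((s, i, r))
    return daily

# R0 = beta / gamma = 2.5 in this illustrative run
trajectory = simulate_sir(beta=0.25, gamma=0.1, s0=0.99, i0=0.01, r0=0.0, days=120)
peak_infected = max(i for _, i, _ in trajectory)
print(f"Peak infected fraction: {peak_infected:.2f}")
```

Changing beta or gamma (and hence R0) shifts both the timing and the height of the epidemic peak, which is exactly the kind of parameter uncertainty discussed above.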
Relevant example @RISK models:
- Case fatalities from the novel coronavirus have been closely monitored to estimate the impact of epidemics as the virus continues to spread. The Case Fatality Rate (CFR) appears to be heterogeneous, but does it statistically differ between populations? What conclusions can we draw, and can a global CFR be derived to estimate likely mortality in a population that has not yet had any cases? Is statistical uncertainty sufficient, or are there biases we should consider?
- We have historical records of Emerging Infectious Disease (EID) events and pandemics. Can we use these to model the next EID and/or pandemic event? We have data on EIDs from 1968-2003, and on pandemics from 1889 to the present day. How likely is an EID to occur in a ten-year time span, and how likely is a pandemic?
- This model shows a few different sales uptake curves, and how various curves can be "combined" in a financial forecasting model.
- Analyzing and using data - this topic explains how data can be used and analyzed, including evaluating the quality of the data, fitting distributions to data, and estimating model parameters.
- The Poisson process - this topic discusses the Poisson process, which is often used when estimating or simulating the occurrence of events such as epidemics or accidents (see also the EID Events model listed above).
- Additional probabilistic models - in this topic, you'll find links to many other case studies and example models.
- This McKinsey article provides some ideas of what effort and investments are needed to help prevent future pandemics.
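The EID Events question above lends itself to a Poisson-process view: if events occur independently at a constant average rate, the probability of at least one event in a given horizon is 1 - exp(-rate × years). A small sketch follows; the annual rate used here is a made-up assumption for illustration, not an estimate derived from the 1968-2003 record.

```python
import math

# Assumed (hypothetical) mean number of EID events per year.
rate_per_year = 0.3

def prob_at_least_one(rate, years):
    """P(N >= 1) over the horizon, for a homogeneous Poisson process."""
    return 1 - math.exp(-rate * years)

p10 = prob_at_least_one(rate_per_year, 10)
print(f"P(at least one EID event in 10 years) = {p10:.3f}")
```

In a full analysis, the rate itself would carry parameter uncertainty (e.g. a distribution fitted to the historical record) rather than being a fixed number.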
Webinar 2: Lessons from COVID: embracing uncertainty as a key part of the decision-making process
When will this lockdown end? Should I get tested? Should I disinfect my groceries? If I wear my PJs for that conference call, is there a chance they might want me to use video? In this new "COVID-19 era", many of us have had to get used to dealing with these and other uncertainties, some with serious potential consequences. In this webinar, we will start by discussing how probabilistic models helped guide COVID-19 decisions where the outcome was highly uncertain. But uncertainties are not only important when making decisions about epidemics; they also affect most business decisions. Using case studies, we will illustrate the importance of accounting for this uncertainty in our decision-making:
- How to use Monte Carlo (MC) simulation for your business planning given current COVID-19 uncertainties.
- More generally, probabilistic models (and probabilistic thinking) can help ensure that our plans are robust and can thrive under a variety of future scenarios. We will illustrate how increasing business resilience through probabilistic modeling can become a competitive advantage.
- Models can “learn” over time, and consider how new information may change our best course of action
- Risk management should be a key part of management's decision-making, perceived not as a separate cost center but as a way to add value to the organization.
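To make the first bullet concrete: a Monte Carlo simulation propagates uncertainty in the inputs (demand, price, cost) into a distribution of the output (profit), instead of a single point forecast. Here is a toy sketch in Python; all distributions and parameter values are illustrative assumptions, not recommendations.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Toy cash-flow model with uncertain inputs (all values are assumptions).
def simulate_profit():
    demand = random.triangular(5_000, 20_000, 10_000)  # units sold (low, high, mode)
    price = random.uniform(9.0, 11.0)                  # selling price per unit
    unit_cost = random.uniform(6.0, 8.0)               # variable cost per unit
    fixed_cost = 25_000
    return demand * (price - unit_cost) - fixed_cost

profits = sorted(simulate_profit() for _ in range(10_000))
p_loss = sum(p < 0 for p in profits) / len(profits)
print(f"Median profit: {profits[len(profits) // 2]:,.0f}")
print(f"P(loss) = {p_loss:.1%}")
```

The output is a full distribution, so a decision-maker can reason about the chance of a loss or a worst-case percentile, not just an expected value.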
- Bayesian statistics: As explained in Webinar 2, the field of Bayesian statistics is large, and rapidly evolving. This topic provides an overview of some of the key Bayesian concepts and techniques, including a number of @RISK example models.
- Financial risk models and examples: This topic contains links to a number of example financial risk models.
- Using expert opinion in modeling: Here, we discuss various aspects of the use of expert opinion in probabilistic modeling, including potential sources of errors / biases.
Model on the interpretation of a diagnostic test:
During the webinar we discussed the interactive model below (also found at https://epix.shinyapps.io/SeSpPr/). This model uses Bayes' theorem to calculate the confidence in individual test results given a certain probability of infection prior to running the test (labeled Prevalence here), the probability of a positive test if the individual is infected (test Sensitivity), and the probability of a negative test given that the individual is not infected (test Specificity). For example, if the pre-test probability of COVID-19 infection is 10%, and one uses a rapid test with 50% sensitivity and 95% specificity, we can say that we are roughly 95% sure that the individual is not infected if the test result is negative. Likewise, we would be only about 53% confident that the person is infected if the test is positive; in other words, under this scenario a positive test is about as predictive as tossing a coin!
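The numbers quoted above follow directly from Bayes' theorem. A short sketch reproducing the scenario from the text (10% prevalence, 50% sensitivity, 95% specificity):

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Return (PPV, NPV): P(infected | positive) and P(not infected | negative)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Scenario from the text above
ppv, npv = predictive_values(prevalence=0.10, sensitivity=0.50, specificity=0.95)
print(f"P(infected | positive)     = {ppv:.1%}")  # ~52.6%
print(f"P(not infected | negative) = {npv:.1%}")  # ~94.5%
```

Note how strongly the answer depends on prevalence: rerunning the same function with a higher pre-test probability makes the positive result far more informative.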