...

It might at first seem that we are getting something for nothing here. After all, we don't actually know anything more until we perform the extra tests. However, the decision we would make depends on the results of those extra tests, and those results depend on the true value of p. The analysis is therefore based on our prior for p (i.e. what we know about p to date) and on the decision rule. When the model generates a scenario it selects a value from the prior for p, in effect saying: "Let's imagine that this is the true value of p." If that value is below 2% we should of course develop the product, but we will never know the value of p until we have launched the product and accumulated enough customer history to estimate it. Extra tests, however, get us closer to knowing its true value, so we end up taking less of a gamble.

When the model picks a small value for p, it will probably generate a small number of affected people in the new tests, and our interpretation of that small number as meaning p is small will usually be correct. The danger is that a high value of p could, by chance, produce an unrepresentatively small fraction of the m tested people being affected; this would be misinterpreted as a small p and lead management to make the wrong decision. As m gets bigger, that risk diminishes. The trade-off is that the tests cost money. The model simulates twenty scenarios in which m is varied between 100 and 3,000, with the following results:


[Figure: expected value of the additional information (VOII) plotted against the number of extra tests m, for m between 100 and 3,000]
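
The figure above comes from the source model. As a rough illustration of how such a VOII-versus-m curve can be produced, here is a minimal Monte Carlo sketch in Python. The Beta prior, the launch payoffs, the per-test cost and the posterior-mean decision rule are all hypothetical stand-ins chosen for the sketch, not the values or rule used in the model above; only the 2% development threshold and the range of m come from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs -- NOT the values used in the source model
PRIOR_A, PRIOR_B = 3, 150      # assumed Beta prior for p from the tests done to date
THRESHOLD = 0.02               # develop the product if we believe p < 2%
VALUE_IF_GOOD = 5_000_000      # assumed payoff if we launch and p really is below 2%
LOSS_IF_BAD = -8_000_000       # assumed payoff if we launch but p turns out to be too high
COST_PER_TEST = 500            # assumed cost of testing one extra person
ITERATIONS = 20_000

def expected_outcome(m):
    """Expected net outcome if we test m more people and then decide."""
    p_true = rng.beta(PRIOR_A, PRIOR_B, ITERATIONS)   # "imagine this is the true value of p"
    affected = rng.binomial(m, p_true)                # results of the m extra tests
    # Assumed decision rule: develop if the posterior mean for p is below the threshold
    post_mean = (PRIOR_A + affected) / (PRIOR_A + PRIOR_B + m)
    develop = post_mean < THRESHOLD
    launch_value = np.where(p_true < THRESHOLD, VALUE_IF_GOOD, LOSS_IF_BAD)
    return np.mean(np.where(develop, launch_value, 0.0)) - m * COST_PER_TEST

baseline = expected_outcome(0)   # decide on the prior alone, with no extra tests

for m in range(100, 3100, 100):
    voii = expected_outcome(m) - baseline   # value of the extra (imperfect) information, net of test cost
    print(f"m = {m:4d}  expected VOII = ${voii:12,.0f}")
```

In a sketch like this the curve rises at first, because more data means fewer wrong launch decisions, and eventually falls as the cumulative cost of testing outweighs the extra discrimination. That is the trade-off the plot above displays.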


This plot tells us that the optimal strategy, i.e. the one with the greatest expected VOII, is to perform about another 700 tests. The saw-tooth effect in these plots occurs because the extra number of affected people one could observe in the new data can only take whole-number values, so the effective decision cutoff shifts unevenly as m increases. Note that if the tests had no cost, the graph above would look very different:


[Figure: expected VOII plotted against the number of extra tests m when the tests have no cost]


Now it is continually worth collecting more information (provided it is actually feasible to do so), because there is no penalty for running more tests (except perhaps time, which is not included as part of this problem). In this case the value of information asymptotically approaches the VOPI (= $2.12 million) as the number of people tested approaches infinity.
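
Using the same hypothetical inputs as the earlier sketch, the value of perfect information can be estimated by imagining we somehow learned the true p before deciding, so that the launch decision is correct in every scenario. This block is self-contained; the $2.12 million figure quoted above belongs to the source model and will not be reproduced by these made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same hypothetical inputs as the earlier sketch -- not the source model's values
PRIOR_A, PRIOR_B = 3, 150
THRESHOLD = 0.02
VALUE_IF_GOOD, LOSS_IF_BAD = 5_000_000, -8_000_000
ITERATIONS = 20_000

p_true = rng.beta(PRIOR_A, PRIOR_B, ITERATIONS)

# With perfect information we develop exactly when the true p is below the threshold,
# so we collect the launch payoff only in the scenarios where launching is right.
value_perfect = np.mean(np.where(p_true < THRESHOLD, VALUE_IF_GOOD, 0.0))

# Without any extra information we decide on the prior alone: here the prior mean
# (3/153, roughly 1.96%) is below 2%, so we would launch in every scenario.
value_prior_only = np.mean(np.where(p_true < THRESHOLD, VALUE_IF_GOOD, LOSS_IF_BAD))

vopi = value_perfect - value_prior_only
print(f"Estimated VOPI = ${vopi:,.0f}")
```

With free tests, increasing m makes the post-test decision approach the perfect-information decision, which is why the VOII curve in the last figure climbs towards the VOPI instead of turning over.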

...