Matt’s Dodgy Covid-19 Test Kit Problem [PRELIMINARY]

By Alessio Farhadi and Adam Lund

Matt has ordered 2 million Covid-19 antibody test kits at a cost of $7 per unit. Upon delivery he learns the test kits are only 85% accurate. How much does Matt need to spend on average per person to determine with 99% accuracy the presence of Covid-19 antibodies? Discuss how this may change with the prevalence of Covid-19 within the population.

Initially, we set this problem as a thought-provoking interview brainteaser. It became apparent that the solution was far from simple. Whilst it has become commonplace to approach many statistical problems through Machine Learning techniques, we show that a rigorous analytical approach is often the best starting point, yielding more complete solutions.

A good rule of thumb is to take 3 consecutive tests from the same batch of test kits. If one test result is inconsistent with the rest, take a 4th and stop. When the result is 3-1, go with the majority; if 2-2, throw the test kits away.

Comments and feedback are welcome.

Please note, the information contained in this article is solely for illustrative purposes and MUST NOT be interpreted as professional medical advice. Please consult your physician if you are unwell or suffering from Covid-19 type symptoms.

Alessio’s approach

I live in the Bayesian world for the purposes of this problem. This is a fancy way of saying I assume the probabilities in this problem are given: known unknowns, in the language of Donald Rumsfeld. The result of each Covid-19 antibody test is binary, meaning it may only be in one of two states, positive or negative (real antibody test kit results fall within a range). Whilst 85% certainty may be sufficient for the purposes of mass-population statistics, for the individual concerned it may present an unacceptable level of certainty, especially when false Covid-19 positives may lead to dire consequences and super-spreaders. For example, a carer at an elderly care home incorrectly being told they have antibodies present.

To solve this problem, I need to figure out how many binary (positive/negative) tests are required to get my error rate below 1%. Let’s start with the simplest case and approach.

Case (i): Consecutive test results consistent

We assume p is the probability that a Covid-19 sufferer's test result is correct (85%) and 1 - p (15%) that it is incorrect. Each test is an independent and identically distributed (i.i.d.) draw from the test kits available. We start with the simplest case, where n consecutive Covid-19 test results are consistent and correct. The minimum number of tests required to be inaccurate with less than 1% probability can be derived from the condition

1 \% \geq (1 - p)^n.

With p=0.85, we find 3 consistent consecutive test results ($21 cost) would be sufficient to attain 99% accuracy. However, this is a special case, and also the minimum number of tests required to achieve Matt's original desired accuracy.
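
As a quick sanity check, a few lines of Python recover the n=3 result. This is a sketch assuming p = 0.85 and the $7 unit cost from the problem statement:

```python
# Case (i): smallest number of consistent test results n such that the
# probability of all n being wrong, (1 - p)^n, falls below 1%.
p, cost_per_kit = 0.85, 7

n = 1
while (1 - p) ** n > 0.01:
    n += 1

print(n, n * cost_per_kit)  # 3 tests, $21
```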

Given the 15% probability that a test result is incorrect, we are highly likely to require more than 3 tests to reach our desired 99% accuracy threshold. In fact, the probability of at least one incorrect test result within our first 3 tests is 1-p^3 (39%). To complicate matters further, in practical terms we have a high degree of uncertainty around the true accuracies of antibody test kits given the product's infancy.

Case (ii): One correct test result

Let’s consider the case where we take 4 tests and just 1 result is correct. Once again, we restrict the discussion to known positive/negative cases. There are 3 ways (combinations) this may happen: the correct result may fall on the first, second, or third test. The probability P of this occurring is

P = 3p(1-p)^3.

For a test kit with 85% accuracy (p=0.85) the probability is P\approx 0.9\%. The cost is $28.

Case (iii): Two correct test results

Now consider the case where 5 tests are taken and just 2 of the results are correct. This can occur in 6 combinations. Thus,

P = 6 p^2 (1-p)^3

which gives a roughly 1.5% probability of this occurring, at $35 cost. Unfortunately, this leaves us just below our 99% accuracy threshold. We require an additional accurate test to get over the 99% hurdle, at a cost of $42. A subsequent incorrect test would start to send us down the rabbit hole, and the total cost starts to spiral. Matt should really consider the trade-off between the accepted accuracy threshold and the average cost per person. For now, we will set our primary objective to be accuracy over cost.
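
The case (ii) and (iii) probabilities can be reproduced directly; a sketch assuming p = 0.85 as above:

```python
# Probabilities of cases (ii) and (iii), assuming p = 0.85.
p = 0.85

p_case_ii = 3 * p * (1 - p) ** 3        # 4 tests, exactly 1 correct, 3 orderings
p_case_iii = 6 * p ** 2 * (1 - p) ** 3  # 5 tests, exactly 2 correct, 6 orderings

print(round(100 * p_case_ii, 2), round(100 * p_case_iii, 2))  # as percentages
```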

More generally

After a little more consideration we can envision how the accuracy and the implied costs evolve as more tests are taken. We stop testing once we are 99% confident we have correctly diagnosed the presence or absence of Covid-19 antibodies in a patient.

More generally, if we take n tests with k correct results, we continue testing until

1 \% > (Number of possible paths) \left[ p^k (1-p)^{n-k} \right].

This reminds me a lot of the optimal stopping problem traders face when considering the early exercise of an American-style option. With one notable difference: the penalty for early exercise of an American option decreases as the option's time value erodes, whereas for testing the cost increases with each incremental test taken.

As a policy, I would set a minimum of 3 tests, and a maximum of 4 should just one result be inconsistent, at an expected cost of

C = 0.614 × $21 + 0.386 × $28 = $23.70

per person.
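
This expected cost follows from the probability p^3 ≈ 0.614 that the first three results agree (and are correct). A sketch assuming the $7 unit cost:

```python
# Expected cost of the minimum-3, maximum-4 test policy, assuming $7 per kit.
p, kit_cost = 0.85, 7

p_stop_at_3 = p ** 3  # first three results consistent and correct (~0.614)
expected_cost = p_stop_at_3 * 3 * kit_cost + (1 - p_stop_at_3) * 4 * kit_cost
print(round(expected_cost, 2))  # ~23.70
```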

Much like Matt, I find myself out of my depth to do anything more statistically rigorous. Fortunately, I have a smart friend (Adam) with a PhD in Statistics who can help. Let’s ask the experts.

Adam’s Approach

Before we begin, we must clarify some terminology and what we mean by test accuracy. The sensitivity, a_1\in[0,1], represents the true positive rate, and the specificity, a_0\in[0,1], the true negative rate.

These are important factors doctors and statisticians need in order to determine the accuracy a of a test kit, expressed as

(1)   \begin{alignat*}{4} a = a_1\eta+a_0(1-\eta) \end{alignat*}

where \eta\in[0,1] is the prevalence, i.e. the fraction of positives in the population.
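
Equation (1) is a one-liner in code. The numbers below (90% sensitivity, 80% specificity, 10% prevalence) are purely illustrative and not from the article:

```python
# Overall accuracy from equation (1): a = a1*eta + a0*(1 - eta).
def accuracy(a1: float, a0: float, eta: float) -> float:
    return a1 * eta + a0 * (1 - eta)

# Illustrative values only: 90% sensitivity, 80% specificity, 10% prevalence.
print(round(accuracy(0.90, 0.80, 0.10), 2))  # 0.81
```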

If a test kit has 85% accuracy, the statistical errors become significant when the kits are used to estimate the prevalence and the true prevalence is below 15%. This would require larger-scale testing than the typical 1,000-2,000 sample sizes in most studies.

We will assume the test results are i.i.d., such that if a known Covid-19 positive (or negative) person takes m consecutive tests, on average am tests will be correct and (1-a)m incorrect.

Neyman-Pearson Lemma

Let us assume Jon is an individual who would like to know whether or not he has suffered from Covid-19 and, if so, with what degree of certainty he can know this.

Let us assume we have a time series of m consecutive Covid-19 test results t_1,\ldots,t_m for an individual such that for all t_j\in\{0,1\}

t_j = 0 when Covid-19 negative, or
t_j = 1 when Covid-19 positive.

How do we combine m test results into a joint decision D_m\in\{0,1\} in such a way that allows us to control the confidence level (accuracy)?

This is the classical hypothesis testing problem the Neyman-Pearson Lemma seeks to address. We let H_0 denote the null hypothesis that Jon is Covid-19 negative. By contrast, H_1 is the alternative hypothesis: Jon is Covid-19 positive. Given m observations t_1,\ldots,t_m, we need to find the hypothesis (model) which best fits our observations. In our case, we compare hypothesis H_0 with H_1.

Since each t_j is either 0 or 1 (a Bernoulli trial), and t_1,\ldots,t_m are i.i.d., it follows that their joint distribution must follow a binomial distribution of order m. We use the pre-determined accuracy a to define our null hypothesis. This reduces the problem to one of choosing between two fully specified binomial distributions, a situation termed a simple hypothesis.

We know the test result t_j is correct with probability a. This means that, given a patient is negative for Covid-19 antibodies (our null hypothesis), we require P(t_j=0)=a and P(t_j=1)=1-a. Similarly, given a patient is positive (the alternative hypothesis), we must have P(t_j=1)=a. If we let p=\sum_{i=1}^mt_i denote the total number of positive test results, and n=m-p the number of negatives, we obtain the following expression for the likelihood under the null \mathcal H_0

    \begin{alignat*}{4} f_0(t_1,\ldots,t_m) =\prod_{j=1}^mf_{0}(t_j) =(1-a)^{\sum_{i=1}^mt_i}a^{m-\sum_{i=1}^mt_i} \end{alignat*}

and under the alternative \mathcal H_1, the likelihood is

    \begin{alignat*}{4} f_1(t_1,\ldots,t_m) =\prod_{j=1}^mf_{1}(t_j)= a^{\sum_{i=1}^mt_i}(1-a)^{m-\sum_{i=1}^mt_i}. \end{alignat*}

So we see that we have a particularly simple instance of the simple hypothesis setup, with parameters a and 1-a.

Now classical statistical testing theory, in the form of the Neyman-Pearson Lemma, tells us that we should use the likelihood ratio test (LRT) quantity

    \begin{alignat*}{4} L_m :=\frac{f_1(T_1,\ldots,T_m)}{f_0(T_1,\ldots,T_m)} =\frac{a^{\sum_{i=1}^mT_i}(1-a)^{m-\sum_{i=1}^mT_i}}{(1-a)^{\sum_{i=1}^mT_i}a^{m-\sum_{i=1}^mT_i}} =\Big(\frac{a}{1-a}\Big)^{2\sum_{i=1}^mT_i-m} \end{alignat*}

when deciding which of the two hypotheses we should accept. The overall test, or decision rule, based on the m observations is then

(2)   \begin{alignat*}{4} D_m= \left\{ \begin{array}{cccc} 1& \text{if} & L_m> c \\ B&\text{if}& L_m= c, \\ 0& \text{if} & L_m< c \\ \end{array} \right. \end{alignat*}

where B\sim bern(q), q\in [0,1]. In particular, c and q are determined such that the decision rule D_m has statistical significance \alpha (probability of false positive) i.e. 

    \begin{alignat*}{4} P_0(D_m=1)=P_0 ( L_m>c)+ q P_0( L_m=c)=\alpha. \end{alignat*}

This decision rule is optimal in the following sense: it is the most powerful test (the lowest probability of making a type II error) at significance level \alpha, meaning it gives us the highest probability of rejecting a false null hypothesis.

Using (2) gives us a testing procedure but still leaves the question of how to obtain the desired accuracy. To answer that we need to know the distribution of our test quantity L_m or, if that's not easy to figure out or is a non-standard distribution, its asymptotic distribution. In our setup, however, if we consider an equivalent decision rule based on the logarithm of the likelihood ratio L_m, we can obtain analytical expressions for the error probabilities. To this end, let us first define the quantity

    \begin{alignat*}{4} \delta:= \log\Big (\frac{a}{1-a}\Big). \end{alignat*}

As a function of the accuracy a, \delta quantifies the amount of information embedded in the test variable T_j (its entropy, if you will) and in turn indicates whether we can learn anything from repeating the test. As such we can think of \delta as the learning rate of our overall test procedure. More specifically, maximal entropy is reached at a=1/2, the uniform binary distribution, yielding \delta=0, reflecting that repeating the test won't let us learn anything. On the other hand, as a\rightarrow 1 (minimal entropy), \delta \rightarrow \infty, reflecting that the procedure gives us the truth every time, i.e. it is an oracle procedure. Note that \vert \delta \vert is symmetric around a=1/2, reflecting that a binary test that is wrong more than half the time is just as good as one that is right more than half the time.

Now instead of using the rule (2) we will compare the test quantity

(3)   \begin{alignat*}{4} S_m:=\log(L_m)= \delta\Big (2\sum_{j=1}^mT_j-m\Big) \end{alignat*}

to \log(c) so that our decision variable is now given as 

(4)   \begin{alignat*}{4} D_m= \left\{ \begin{array}{cccc} 1& if & \sum_{i=1}^mT_i> \frac{m}{2}+\frac{\log(c)}{2\delta}\\ B&if & \sum_{i=1}^mT_i= \frac{m}{2}+\frac{\log(c)}{2\delta}\\ 0& if & \sum_{i=1}^mT_i< \frac{m}{2}+\frac{\log(c)}{2\delta} \\ \end{array} \right. \end{alignat*}

Clearly (4) is equivalent to (2) and makes intuitive sense. It shows directly that the threshold controlling the significance level (i.e. the accuracy of the test), for any fixed c, is in fact a function of the number of trials m. It is convenient to choose c=1 to obtain the majority rule: after m trials, if the majority of answers are Covid-19 positive we reject the null and accept that Jon is positive; in case of a tie we let chance decide; otherwise we accept that he is negative.

Now what remains is to find m^\ast such that the test procedure based on (4) has accuracy a^\ast. This is easy since \sum_{i=1}^mT_i is binomially distributed under each hypothesis. This means the probabilities of error, \alpha =P_0(D_m=1) and \beta=P_1(D_m=0), are easy to compute as a function of m under each hypothesis.

In particular, the error probability \alpha under \mathcal H_0 is

(5)   \begin{alignat*}{4} P_0( D_m=1)&= P_0(\sum_{i=1}^mT_i>m/2)+q P_0(\sum_{i=1}^mT_i=m/2) \nonumber \\ &= \left\{ \begin{array}{ll} \sum_{i=m/2+1}^{m}\binom{m}{i}(1-a)^ia^{m-i}+q \binom{m}{m/2}(1-a)^{m/2}a^{m-m/2}&m \text{ is even}\\ \sum_{i=(m+1)/2}^{m}\binom{m}{i}(1-a)^ia^{m-i} &m \text{ is odd} \end{array} \right.\nonumber\\ &= \left\{ \begin{array}{ll} \sum_{j=0}^{m/2-1}\binom{m}{j}(1-a)^{m-j}a^{j}+q \binom{m}{m/2}((1-a)a)^{m/2}&m \text{ is even}\\ \sum_{j=0}^{(m-1)/2}\binom{m}{j}(1-a)^{m-j}a^{j} &m \text{ is odd}. \end{array} \right. \end{alignat*}

The last expression is obtained by changing the index to j:=m-i and using the fact that 

    \begin{alignat*}{4} \binom{m}{i}=\binom{m}{m-i},\quad i\in \{0,\ldots, m\}. \end{alignat*}

Under \mathcal H_1 we get that the error probability \beta is

(6)   \begin{alignat*}{4} P_1( D_m=0)&= P_1(\sum_{i=1}^mT_i<m/2)+(1-q) P_1(\sum_{i=1}^mT_i=m/2)\nonumber \\ &= \left\{ \begin{array}{ll} \sum_{i=0}^{m/2-1}\binom{m}{i} a^{i}(1-a)^{m-i}+(1-q) \binom{m}{m/2} (a(1-a))^{m/2}&m \text{ is even}\\ \sum_{i=0}^{(m-1)/2}\binom{m}{i}a^i(1-a)^{m-i}&m \text{ is odd}. \end{array} \right. \end{alignat*}

As noted initially, to obtain overall accuracy a^\ast at any prevalence level we must have \alpha=\beta, according to (1). Comparing (5) and (6), this happens if and only if q=1-q, implying we must set q=1/2 in our test procedure. With q=1/2 fixed, to finally determine m^\ast we plot (5) as a function of m in Figure 1.
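
Equation (5) with q = 1/2 can also be evaluated numerically to locate m^\ast. This sketch assumes a = 0.85 and the 1% error target:

```python
from math import comb

# Error probability alpha(m) from equation (5) with q = 1/2. By symmetry this
# also equals beta(m), so 1 - alpha(m) is the overall accuracy of the rule.
def error_prob(m: int, a: float, q: float = 0.5) -> float:
    err = sum(comb(m, j) * (1 - a) ** (m - j) * a ** j for j in range((m + 1) // 2))
    if m % 2 == 0:  # tie term only arises for even m
        err += q * comb(m, m // 2) * ((1 - a) * a) ** (m // 2)
    return err

# Smallest m with error below 1% for a = 0.85.
m_star = next(m for m in range(1, 50) if error_prob(m, 0.85) < 0.01)
print(m_star, round(1 - error_prob(m_star, 0.85), 4))  # 9, 0.9944
```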

Figure 1. Error rates for the LRT as a function of sample size for four single-test accuracy levels: 0.65, 0.75, 0.85, 0.95.

Note that the first time we obtain the desired accuracy is always after an odd number of trials, for each a; i.e. it is always suboptimal to test an even number of times. This is good information to have since some places indeed recommend that you test twice if you want higher confidence in the result. According to Figure 1, under the decision rule (4), this does not make sense.

Finally, to solve Matt's original problem: for a^\ast =0.99 and a=0.85, he needs to test Jon m^\ast=9 times for a total cost of $63. Doing so he actually gets around 99.4 \% accuracy; however, he can save $14 if he is willing to accept a slightly lower accuracy of around 98.8 \% from 7 tests.

With 85% accuracy test kits, Matt can achieve 99% accuracy in Covid-19 diagnosis with 9 consecutive tests at a cost of $63 per patient.


We now have an answer to the question: Jon would need to get tested 9 times to obtain (more than) 99 \% accuracy with a Covid-19 test that is 85 \% accurate, using the LRT-based decision rule. To answer this we performed a power analysis to obtain the minimum number of tests needed for our likelihood ratio test to attain a given power.

However, we have not proved that this is the optimal test strategy in terms of cost, i.e. that the likelihood ratio test is in fact the most efficient test. Efficiency is determined by the sample size, i.e. the m^\ast above, needed to obtain the required power. So to fully answer the original question and obtain the minimal cost for Matt, we need a more efficient test.

During World War II, several groups of researchers were concerned with statistical efficiency, for example when testing batches of military equipment, in order to minimize the associated costs. In particular, the Statistical Research Group (SRG) at Columbia University worked extensively on this problem. Milton Friedman, the influential economist, and the statistician W. Allen Wallis conjectured a sequential testing strategy in which the number of tests required depends on the observed outcomes. In practical terms, this sequential approach yields higher time and cost efficiency than the preset sample size of the LRT, at an insignificant statistical cost (same error rates \alpha and \beta).

Optimal Stopping

Optimal stopping problems feature prominently in mathematical finance, as previously mentioned in the early exercise of an American option. European options, unlike American-style ones, have a fixed exercise/maturity date, which permits analytical pricing using the Black-Scholes equation. Pricing an American option is both mathematically and computationally taxing due to the inherent optimal stopping problem of early exercise.

As Alessio touched upon above, the Covid-19 testing problem differs from the American option in that each test carries an incremental cost, whereas the American option's time value decays as it approaches maturity. In Jon's case, for instance, using 85 \% accurate tests, 5 consistent test results in a row would remove the need for the additional 4 tests, saving $28, time, and resources. This demonstrates that, given an appropriate stopping rule, it is possible to obtain the accuracy we desire at a lower cost. However, as with an American option, setting an optimal stopping rule is non-trivial.

After hearing about Friedman and Wallis's conjecture, Abraham Wald proposed a mathematical framework, sequential probability ratio testing (SPRT), whereby a stopping rule is derived for such trial problems. Later, together with Jacob Wolfowitz, he demonstrated that his proposed stopping rule is also optimal for the simple hypothesis setting: it is the most efficient test for a given accuracy, see [2].

To introduce the SPRT procedure and stopping rule we do as in Wald’s original paper [1] and define the following two quantities

(7)   \begin{alignat*}{4} b_0=\frac{\beta}{1-\alpha},\quad b_1=\frac{1-\beta}{\alpha}, \end{alignat*}

where we note that with \alpha,\beta<0.5 we have 0<b_0<1<b_1. Next consider the random variable defined by

(8)   \begin{alignat*}{4} Z_j=\log\Big (\frac{f_{1}(T_j)}{f_{0}(T_j)}\Big ) = \delta(2T_j-1), \end{alignat*}

with T_1,T_2,\ldots i.i.d. Bernoulli. Then Z_1,Z_2,\ldots are i.i.d. with Z_j\in\{-\delta,\delta\}, and it follows that we can write S_m from (3) as

(9)   \begin{alignat*}{4} S_m =\sum_{j=1}^m Z_j. \end{alignat*}

Now the sequential probability ratio test procedure can be formalized as follows:

Given the single-test accuracy a and desired error rates \alpha and \beta, the constants b_0 and b_1 define the stopping rule: at each iteration, if S_m\geq \log(b_1) we stop and accept \mathcal H_1; if S_m\leq \log(b_0) we stop and accept \mathcal H_0; otherwise we continue to test.
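
The stopping rule can be sketched in a few lines of Python. Here a = 0.85 and \alpha = \beta = 0.01 are the values used in this article, while the input sequences are purely illustrative:

```python
import math

# A minimal sketch of the SPRT stopping rule described above.
def sprt(results, a=0.85, alpha=0.01, beta=0.01):
    """Return (decision, tests_used): decision 1 accepts H1, 0 accepts H0,
    None means the evidence never crossed either threshold."""
    delta = math.log(a / (1 - a))
    log_b0 = math.log(beta / (1 - alpha))   # lower stopping boundary
    log_b1 = math.log((1 - beta) / alpha)   # upper stopping boundary
    s = 0.0
    for m, t in enumerate(results, start=1):
        s += delta * (2 * t - 1)  # increment Z_j from equation (8)
        if s >= log_b1:
            return 1, m           # stop, accept H1 (positive)
        if s <= log_b0:
            return 0, m           # stop, accept H0 (negative)
    return None, len(results)     # undecided: keep testing

# One noisy but mostly positive sequence: stops after 5 tests, accepting H1.
print(sprt([1, 1, 0, 1, 1, 1]))
```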

In Machine Learning (ML) terminology, we can think of the SPRT as an online (sequential) learning procedure and, correspondingly, the classical Neyman-Pearson LRT procedure as batch learning. The SPRT algorithm is optimal in the sense that, on average, it requires significantly fewer tests to statistically accept a hypothesis than other methods. It turns out that such an online learning (in-sample) approach often yields greater efficiency through a reduced number of trials.

To compute the expected number of tests Jon would have to take using the SPRT procedure, we begin by defining our optimal stop as

    \begin{alignat*}{4} \tau =\inf\{n\in \mathbb N : S_n\leq \log(b_0) \text{ or } S_n\geq \log(b_1)\}, \end{alignat*}

where \tau\in \mathbb N is a random variable representing the number of iterations the procedure above performs before it stops, i.e. the number of tests Jon must take to obtain the desired Covid-19 test accuracy. As shown in [1], the SPRT minimizes E_i(\tau), the expected time to accept each hypothesis, for any desired accuracy level.

Next, we can explicitly compute the average number of tests required to achieve our desired (99 \%) level of accuracy. To do so, let Z represent a random variable with a probability distribution identical to that of the i.i.d. variables Z_1,Z_2,\ldots in (8). From (9), using Wald's identity, it follows that

(10)   \begin{alignat*}{4} E_i(\tau)=\frac{E_i(S_\tau)}{E_i(Z)}. \end{alignat*}

For the denominator

(11)   \begin{alignat*}{4} E_i(Z)= \delta(2E_i(T_j)-1) =\left\{ \begin{array}{ccc} \delta(1-2a),& i=0& \\ - \delta(1-2a),& i=1.& \end{array} \right. \end{alignat*}

To calculate the numerator, we observe that D_\tau=0 or D_\tau=1, such that Adam’s Law (Tower property) implies

(12)   \begin{alignat*}{4} E_i(S_\tau)&= E_i(S_\tau\mid D_\tau=1)P_i(D_\tau=1)+E_i(S_\tau\mid D_\tau=0)P_i(D_\tau=0)\nonumber\\ &= E_i(S_\tau\mid D_\tau=1)(1-P_i(D_\tau=0))+E_i(S_\tau\mid D_\tau=0)P_i(D_\tau=0). \end{alignat*}

Since \alpha is the probability of accepting \mathcal H_1 when \mathcal H_0 is true (under P_0) and \beta is the probability of accepting \mathcal H_0 when \mathcal H_1 is true (under P_1), we obtain

(13)   \begin{alignat*}{4} P_i(D_\tau=0)=\left\{ \begin{array}{ccc} 1-\alpha,& i=0 \\ \beta,& i=1. \\ \end{array} \right. \end{alignat*}

Note that S_m takes values in the set

    \begin{alignat*}{4} \{-\delta m,-\delta(m-1),\ldots,-\delta,0,\delta, \ldots,\delta(m-1),\delta m\}, \end{alignat*}

Since a, \alpha, and \beta are known, it is straightforward to determine n_0,n_1\in \mathbb N given by

(14)   \begin{alignat*}{4} n_i =\min\{n\in \mathbb N: n\delta \geq \vert \log(b_i) \vert\}. \end{alignat*}

As \vert S_m-S_{m-1}\vert=\delta, it follows from the definition of \tau above that

(15)   \begin{alignat*}{4} S_\tau =\left\{ \begin{array}{ccc} -\delta n_0 ,&\text{given}& D_\tau=0 \\ \delta n_1,& \text{given}& D_\tau=1. \end{array} \right. \end{alignat*}

Combining (12), (13) and (15) we obtain

    \begin{alignat*}{4} E_i(S_\tau)&= \left\{ \begin{array}{ccc} \delta n_1\alpha-\delta n_0(1-\alpha) ,& i=0 \\ \delta n_1(1-\beta)-\delta n_0\beta,& i=1. \end{array} \right. \end{alignat*}

Finally, using (10) and (11) we obtain

    \begin{alignat*}{4} E_i(\tau) =\left\{ \begin{array}{ccc} \frac{ n_1 \alpha- n_0(1-\alpha)}{1-2a},& i=0 \\ \frac{ n_0\beta- n_1(1-\beta) }{ 1-2a},& i=1. \end{array} \right. \end{alignat*}

In the special case where \alpha=\beta=1-a^\ast, so that we have overall test accuracy a^\ast according to (1), we get b_1=1/b_0 and \vert \log(b_1)\vert=\vert \log(b_0)\vert.

Using (14), it follows that n(a):=n_0=n_1, which yields

(16)   \begin{alignat*}{4} E_0(\tau) = E_1(\tau) = \frac{ n(a)(1-a^\ast)-n(a)a^\ast }{ 1-2a} = n(a) \frac{ 1-2a^\ast }{ 1-2a}. \end{alignat*}

The expression (16) gives the average number of tests required, as a function of the individual test kit accuracy a, to achieve our desired level of accuracy a^\ast. Moreover, it provides the minimum expected number of tests required to have error rates \alpha=\beta=1-a^\ast, i.e. accuracy a^\ast.

In (16), E_i(\tau) \rightarrow \infty as a\rightarrow 1/2, reflecting that one cannot learn much from test kits which are no more accurate than a series of coin flips. Also, for a=a^\ast we get \delta=\log(b_1), implying n(a)=1 by (14) and then E_i(\tau)=1, as expected. Figure 2 shows the function in (16) for a^\ast =0.99.
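
The curve in Figure 2 follows directly from (16), with n(a) computed from (14). A sketch assuming a^\ast = 0.99 and the $7 unit cost:

```python
import math

# Expected number of SPRT tests from equation (16), with n(a) from (14)
# and alpha = beta = 1 - a*.
def expected_tests(a: float, a_star: float = 0.99) -> float:
    delta = math.log(a / (1 - a))
    log_b1 = math.log(a_star / (1 - a_star))  # b_1 = a*/(1 - a*)
    n = math.ceil(log_b1 / delta)             # n(a) from equation (14)
    return n * (1 - 2 * a_star) / (1 - 2 * a)

e = expected_tests(0.85)
print(round(e, 1), round(7 * e, 2))  # 4.2 tests, $29.40 per patient
```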

Figure 2. Expected number of tests using the SPRT with 99% target accuracy, as a function of single-test accuracy a.

In Table 1 we compare m^\ast for a^\ast=0.99, the number of trials needed using the batch learning approach above (the LRT), with the expected number of trials needed under the SPRT approach.

Table 1. Expected number of tests with target accuracy a^\ast = 0.99 for the SPRT and the LRT.

We see from Table 1 that by following the SPRT approach we have at least halved the average number of test kits required to attain 99% confidence in our Covid-19 diagnosis compared with the LRT. In practical terms, $29.40 vs $63, saving Matt a total of $33.60 per patient.

With an optimal stopping approach Matt can still achieve 99% accuracy in Covid-19 diagnosis using less than half as many test kits (4.2 average), at an expected cost of $29.40 per patient.


[1] Abraham Wald. Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics, 16(2):117-186, 1945.

[2] Abraham Wald and Jacob Wolfowitz. Optimum character of the sequential probability ratio test. The Annals of Mathematical Statistics, 19(3):326–339, 1948.

