# Randomized Algorithms (Spring 2010)/Fingerprinting

## Checking identities

Many applications in computer science require efficiently checking whether two complex objects are identical, even though the objects are only presented implicitly (e.g., as black boxes). We consider two examples. One is to check the result of multiplying two matrices, and the other is to check the identity of two polynomials.

### Example: Checking matrix multiplication

Consider the following problem:

• Given as the input three ${\displaystyle n\times n}$ matrices ${\displaystyle A,B}$ and ${\displaystyle C}$,
• check whether ${\displaystyle C=AB}$.

We could compute ${\displaystyle AB}$ and compare the result to ${\displaystyle C}$. The time complexity of the fastest known matrix multiplication algorithm (in theory) is ${\displaystyle O(n^{2.376})}$, and so is the time complexity of this method.

Here’s a very simple randomized algorithm, due to Freivalds, that runs in only ${\displaystyle O(n^{2})}$ time:

 Algorithm (Freivalds)
 pick a vector ${\displaystyle r\in \{0,1\}^{n}}$ uniformly at random;
 if ${\displaystyle A(Br)=Cr}$ then return "yes"
 else return "no";

The running time of the algorithm is ${\displaystyle O(n^{2})}$ because it does only 3 matrix-vector multiplications.
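For concreteness, here is a sketch of the check in Python (not part of the lecture); the `rounds` parameter is our own addition, repeating the test with independent random vectors to push the error probability down to 2^(-rounds):

```python
import random

def freivalds(A, B, C, rounds=20):
    """Probabilistically check whether C == A*B for n-by-n matrices
    given as lists of lists."""
    n = len(A)

    def matvec(M, v):
        # One O(n^2) matrix-vector product.
        return [sum(M[i][k] * v[k] for k in range(n)) for i in range(n)]

    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # a witness r was found: certainly C != AB
    return True  # C == AB with probability >= 1 - 2**(-rounds)
```

Each round costs three matrix-vector products, so the total time stays ${\displaystyle O(n^{2})}$ for any constant number of rounds.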

If ${\displaystyle AB=C}$ then ${\displaystyle A(Br)=Cr}$ for any ${\displaystyle r\in \{0,1\}^{n}}$, thus the algorithm always returns "yes". But if ${\displaystyle AB\neq C}$ then the algorithm will make a mistake if it happens to choose an ${\displaystyle r}$ for which ${\displaystyle ABr=Cr}$. This, however, is unlikely:

 Lemma If ${\displaystyle AB\neq C}$ then for a uniformly random ${\displaystyle r\in \{0,1\}^{n}}$, ${\displaystyle \Pr[ABr=Cr]\leq {\frac {1}{2}}}$.
Proof.
 Let ${\displaystyle D=AB-C}$. We will show that if ${\displaystyle D\neq {\boldsymbol {0}}}$, then ${\displaystyle \Pr[Dr={\boldsymbol {0}}]\leq {\frac {1}{2}}}$ for a uniformly random ${\displaystyle r\in \{0,1\}^{n}}$. Since ${\displaystyle D\neq {\boldsymbol {0}}}$, it must have at least one non-zero entry, say ${\displaystyle D(i,j)\neq 0}$. The ${\displaystyle i}$-th entry of ${\displaystyle Dr}$ is ${\displaystyle (Dr)_{i}=\sum _{k=1}^{n}D(i,k)r_{k}}$. If, to the contrary, ${\displaystyle Dr={\boldsymbol {0}}}$, then ${\displaystyle (Dr)_{i}=\sum _{k=1}^{n}D(i,k)r_{k}=0}$, which is equivalent to ${\displaystyle r_{j}=-{\frac {1}{D(i,j)}}\sum _{k\neq j}D(i,k)r_{k}}$, i.e. once the ${\displaystyle r_{k}}$ for ${\displaystyle k\neq j}$ have been chosen, there is at most one value of ${\displaystyle r_{j}}$ that would give us a zero ${\displaystyle Dr}$. However, there are two equally probable values ${\displaystyle \{0,1\}}$ for ${\displaystyle r_{j}}$, so with probability at least ${\displaystyle {\frac {1}{2}}}$ the choice of ${\displaystyle r}$ fails to give us a zero ${\displaystyle Dr}$.
${\displaystyle \square }$

### Example: Checking polynomial identities

Consider the following problem:

• Given as the input two multivariate polynomials ${\displaystyle P_{1}(x_{1},\ldots ,x_{n})}$ and ${\displaystyle P_{2}(x_{1},\ldots ,x_{n})}$,
• check whether the two polynomials are identical, denoted ${\displaystyle P_{1}\equiv P_{2}}$.

Obviously, if ${\displaystyle P_{1},P_{2}}$ are written out explicitly, the question is trivially answered in linear time just by comparing their coefficients. But in practice they are usually given in very compact form (e.g., as determinants of matrices), so that we can evaluate them efficiently, but expanding them out and looking at their coefficients is out of the question.

 Example Consider the polynomial ${\displaystyle P(x_{1},\ldots ,x_{n})=\prod _{1\leq i<j\leq n}(x_{i}-x_{j})}$. Show that evaluating ${\displaystyle P}$ at any given point can be done efficiently, but that expanding out ${\displaystyle P}$ to find all its coefficients is computationally infeasible even for moderate values of ${\displaystyle n}$.

Here is a very simple randomized algorithm, due to Schwartz and Zippel. Testing ${\displaystyle P_{1}\equiv P_{2}}$ is equivalent to testing ${\displaystyle P\equiv 0}$, where ${\displaystyle P=P_{1}-P_{2}}$.

 Algorithm (Schwartz-Zippel)
 pick ${\displaystyle r_{1},\ldots ,r_{n}}$ independently and uniformly at random from a set ${\displaystyle S}$;
 if ${\displaystyle P_{1}(r_{1},\ldots ,r_{n})=P_{2}(r_{1},\ldots ,r_{n})}$ then return "yes"
 else return "no";

This algorithm requires only evaluating the polynomials at a single point. If ${\displaystyle P\equiv 0}$, it is always correct.

In the Theorem below, we'll see that if ${\displaystyle P\not \equiv 0}$ then the algorithm is incorrect with probability at most ${\displaystyle {\frac {d}{|S|}}}$, where ${\displaystyle d}$ is the degree of the polynomial ${\displaystyle P}$.
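As an illustration, here is a sketch of the test in Python (our own code, not from the lecture; the helper name `sz_identical` and the `epsilon` parameter are ours). Choosing ${\displaystyle |S|=d/\epsilon }$ makes the error probability at most ${\displaystyle \epsilon }$:

```python
import random

def sz_identical(f, g, nvars, degree, epsilon=2**-20):
    """One-sided identity test: if f and g are identical it always
    answers True; otherwise it errs with probability <= degree/|S|."""
    S = range(int(degree / epsilon))  # |S| = degree/epsilon
    r = [random.choice(S) for _ in range(nvars)]
    return f(*r) == g(*r)

# Polynomials given in compact (black-box) form:
p1 = lambda x, y: (x + y) ** 2
p2 = lambda x, y: x * x + 2 * x * y + y * y   # identical to p1
p3 = lambda x, y: (x + y) ** 2 + 1            # not identical to p1
```

Note that `p3` differs from `p1` by the constant 1 at every point, so here the "no" answer happens to be deterministic; in general the test is only probabilistically correct on non-identical inputs.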

 Theorem (Schwartz-Zippel) Let ${\displaystyle Q(x_{1},\ldots ,x_{n})}$ be a multivariate polynomial of degree ${\displaystyle d}$ defined over a field ${\displaystyle \mathbb {F} }$. Fix any finite set ${\displaystyle S\subset \mathbb {F} }$, and let ${\displaystyle r_{1},\ldots ,r_{n}}$ be chosen independently and uniformly at random from ${\displaystyle S}$. Then ${\displaystyle \Pr[Q(r_{1},\ldots ,r_{n})=0\mid Q\not \equiv 0]\leq {\frac {d}{|S|}}.}$
Proof.
 The theorem holds if ${\displaystyle Q}$ is a single-variate polynomial, because a single-variate polynomial ${\displaystyle Q}$ of degree ${\displaystyle d}$ has at most ${\displaystyle d}$ roots, i.e. there are at most ${\displaystyle d}$ choices of ${\displaystyle r}$ with ${\displaystyle Q(r)=0}$, so the theorem follows immediately. For multivariate ${\displaystyle Q}$, we prove the theorem by induction on the number of variables ${\displaystyle n}$. Write ${\displaystyle Q(x_{1},\ldots ,x_{n})}$ as ${\displaystyle Q(x_{1},\ldots ,x_{n})=\sum _{i=0}^{k}x_{n}^{i}Q_{i}(x_{1},\ldots ,x_{n-1})}$ where ${\displaystyle k}$ is the largest exponent of ${\displaystyle x_{n}}$ in ${\displaystyle Q(x_{1},\ldots ,x_{n})}$. So ${\displaystyle Q_{k}(x_{1},\ldots ,x_{n-1})\not \equiv 0}$ by our definition of ${\displaystyle k}$, and its degree is at most ${\displaystyle d-k}$. Thus by the induction hypothesis we have that ${\displaystyle \Pr[Q_{k}(r_{1},\ldots ,r_{n-1})=0]\leq {\frac {d-k}{|S|}}}$. Conditioning on the event ${\displaystyle Q_{k}(r_{1},\ldots ,r_{n-1})\neq 0}$, the single-variate polynomial ${\displaystyle Q'(x_{n})=Q(r_{1},\ldots ,r_{n-1},x_{n})=\sum _{i=0}^{k}x_{n}^{i}Q_{i}(r_{1},\ldots ,r_{n-1})}$ has degree ${\displaystyle k}$ and ${\displaystyle Q'(x_{n})\not \equiv 0}$, thus ${\displaystyle {\begin{aligned}&\quad \,\Pr[Q(r_{1},\ldots ,r_{n})=0\mid Q_{k}(r_{1},\ldots ,r_{n-1})\neq 0]\\&=\Pr[Q'(r_{n})=0\mid Q_{k}(r_{1},\ldots ,r_{n-1})\neq 0]\\&\leq {\frac {k}{|S|}}.\end{aligned}}}$
Therefore, due to the law of total probability, ${\displaystyle {\begin{aligned}&\quad \,\Pr[Q(r_{1},\ldots ,r_{n})=0]\\&=\Pr[Q(r_{1},\ldots ,r_{n})=0\mid Q_{k}(r_{1},\ldots ,r_{n-1})\neq 0]\Pr[Q_{k}(r_{1},\ldots ,r_{n-1})\neq 0]\\&\quad \,\,+\Pr[Q(r_{1},\ldots ,r_{n})=0\mid Q_{k}(r_{1},\ldots ,r_{n-1})=0]\Pr[Q_{k}(r_{1},\ldots ,r_{n-1})=0]\\&\leq \Pr[Q(r_{1},\ldots ,r_{n})=0\mid Q_{k}(r_{1},\ldots ,r_{n-1})\neq 0]+\Pr[Q_{k}(r_{1},\ldots ,r_{n-1})=0]\\&\leq {\frac {k}{|S|}}+{\frac {d-k}{|S|}}\\&={\frac {d}{|S|}}.\end{aligned}}}$
${\displaystyle \square }$

### The idea of fingerprinting

Suppose we want to compare two items ${\displaystyle Z_{1}}$ and ${\displaystyle Z_{2}}$. Instead of comparing them directly, we compute random fingerprints ${\displaystyle \mathrm {FING} (Z_{1})}$ and ${\displaystyle \mathrm {FING} (Z_{2})}$ and compare these. The fingerprint function has the following properties:

• ${\displaystyle \mathrm {FING} (\cdot )}$ is a function, which means that if ${\displaystyle Z_{1}=Z_{2}}$ then ${\displaystyle \mathrm {FING} (Z_{1})=\mathrm {FING} (Z_{2})}$.
• If ${\displaystyle Z_{1}\neq Z_{2}}$ then ${\displaystyle \Pr[\mathrm {FING} (Z_{1})=\mathrm {FING} (Z_{2})]}$ is small.
• It is much more efficient to compute and compare the fingerprints than to compare ${\displaystyle Z_{1}}$ and ${\displaystyle Z_{2}}$ directly.

For Freivalds' algorithm, the items to compare are two ${\displaystyle n\times n}$ matrices ${\displaystyle AB}$ and ${\displaystyle C}$, and given an ${\displaystyle n\times n}$ matrix ${\displaystyle M}$, its random fingerprint is computed as ${\displaystyle \mathrm {FING} (M)=Mr}$ for a uniformly random ${\displaystyle r\in \{0,1\}^{n}}$.

For the Schwartz-Zippel algorithm, the items to compare are two polynomials ${\displaystyle P_{1}(x_{1},\ldots ,x_{n})}$ and ${\displaystyle P_{2}(x_{1},\ldots ,x_{n})}$, and given a polynomial ${\displaystyle Q(x_{1},\ldots ,x_{n})}$, its random fingerprint is computed as ${\displaystyle \mathrm {FING} (Q)=Q(r_{1},\ldots ,r_{n})}$ for ${\displaystyle r_{i}}$ chosen independently and uniformly at random from some fixed set ${\displaystyle S}$.

For different problems, we may have different definitions of ${\displaystyle \mathrm {FING} (\cdot )}$.

## Communication complexity

Alice and Bob are two entities. Alice has a private input ${\displaystyle x}$ and Bob has a private input ${\displaystyle y}$. Together they want to compute a function ${\displaystyle f(x,y)}$ by communicating with each other. This is the model of communication complexity introduced by Yao in 1979.

In the communication complexity model, the local computational costs are ignored. The complexity of algorithms (also called communication protocols here) are measured by the number of bits communicated between Alice and Bob.

A basic function is EQ, defined as

${\displaystyle \mathrm {EQ} (x,y)={\begin{cases}1&{\mbox{if }}x=y,\\0&{\mbox{otherwise.}}\end{cases}}}$

This function corresponds to the problem that two far apart entities Alice and Bob, each has a copy of a database (Alice's copy is ${\displaystyle x}$, and Bob's copy is ${\displaystyle y}$), and they want to compare whether their copies of the database are identical.

A trivial way to solve EQ is to let Bob send ${\displaystyle y}$ to Alice. Supposing that ${\displaystyle x,y\in \{0,1\}^{n}}$, this costs ${\displaystyle n}$ bits of communication.

It is known that for deterministic communication protocols, this is the best we can get for computing EQ.

 Theorem (Yao 1979) Any deterministic communication protocol computing EQ on two ${\displaystyle n}$-bit strings costs ${\displaystyle n}$ bits of communication in the worst-case.

This theorem is much harder to prove than it looks, because Alice and Bob are allowed to interact with each other in arbitrary ways. How to prove such lower bounds is not today's topic.

If randomness is allowed, we can use the idea of fingerprinting to solve this problem with significantly less communication. The general framework for the algorithm is as follows:

• Alice chooses a random fingerprint function ${\displaystyle \mathrm {FING} (\cdot )}$ and computes the fingerprint of her input, ${\displaystyle \mathrm {FING} (x)}$;
• Alice sends both the description of ${\displaystyle \mathrm {FING} (\cdot )}$ and the value of ${\displaystyle \mathrm {FING} (x)}$ to Bob;
• Bob computes ${\displaystyle \mathrm {FING} (y)}$ and checks whether ${\displaystyle \mathrm {FING} (x)=\mathrm {FING} (y)}$.

So the question is, how to design this random fingerprint function ${\displaystyle \mathrm {FING} (\cdot )}$ to guarantee:

1. A random ${\displaystyle \mathrm {FING} (\cdot )}$ can be described succinctly.
2. The range of ${\displaystyle \mathrm {FING} (\cdot )}$ is small, so the fingerprints are succinct.
3. If ${\displaystyle x\neq y}$, the probability ${\displaystyle \Pr[\mathrm {FING} (x)=\mathrm {FING} (y)]}$ is small.

The fingerprint function we choose is as follows: treating the input string ${\displaystyle x\in \{0,1\}^{n}}$ as the binary representation of a number, let ${\displaystyle \mathrm {FING} (x)=x{\bmod {p}}}$ for some random prime ${\displaystyle p}$. The prime ${\displaystyle p}$ uniquely specifies a random fingerprint function ${\displaystyle \mathrm {FING} (\cdot )}$, so it can be used as the description of the function. Also, the range of the fingerprints is ${\displaystyle [p]}$, so we want the prime ${\displaystyle p}$ to be reasonably small, yet still have a good chance of distinguishing different ${\displaystyle x}$ and ${\displaystyle y}$ after taking them modulo ${\displaystyle p}$.

 A randomized protocol for EQ
 Alice does:
 for some parameter ${\displaystyle k}$ (to be specified), choose uniformly at random a prime ${\displaystyle p\in [k]}$;
 send ${\displaystyle p}$ and ${\displaystyle x{\bmod {p}}}$ to Bob;
 Upon receiving ${\displaystyle p}$ and ${\displaystyle x{\bmod {p}}}$, Bob does:
 check whether ${\displaystyle x{\bmod {p}}=y{\bmod {p}}}$.
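A sketch of this protocol in Python (our own code, not from the lecture; the naive trial-division prime sampler is an assumption, adequate because ${\displaystyle k}$ is only polynomial in ${\displaystyle n}$):

```python
import math
import random

def is_prime(m):
    """Primality by trial division; fine for the small k used here."""
    return m >= 2 and all(m % d != 0 for d in range(2, math.isqrt(m) + 1))

def random_prime(k):
    """A uniformly random prime in [k], by rejection sampling."""
    while True:
        p = random.randint(2, k)
        if is_prime(p):
            return p

def eq_protocol(x, y, n, t=4):
    """Alice sends (p, x mod p); Bob answers whether x mod p == y mod p.
    x and y are n-bit numbers; k = t*n*ln(t*n) as in the analysis."""
    k = max(3, int(t * n * math.log(t * n)))
    p = random_prime(k)       # Alice's random fingerprint function
    return x % p == y % p     # Bob's comparison
```

Only ${\displaystyle p}$ and ${\displaystyle x{\bmod {p}}}$ cross the channel, which is ${\displaystyle O(\log k)}$ bits in total.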

The number of bits to be communicated is ${\displaystyle O(\log k)}$. We then bound the probability of error ${\displaystyle \Pr[x{\bmod {p}}=y{\bmod {p}}]}$ for ${\displaystyle x\neq y}$, in terms of ${\displaystyle k}$.

Suppose without loss of generality that ${\displaystyle x>y}$. Let ${\displaystyle z=x-y}$. Then ${\displaystyle z<2^{n}}$ since ${\displaystyle x,y\in [2^{n}]}$, and ${\displaystyle z\neq 0}$ for ${\displaystyle x\neq y}$. It holds that ${\displaystyle x{\bmod {p}}=y{\bmod {p}}}$ if and only if ${\displaystyle z}$ is divisible by ${\displaystyle p}$. So we only need to bound the probability

${\displaystyle \Pr[z{\bmod {p}}=0]}$ for ${\displaystyle 0<z<2^{n}}$, where ${\displaystyle p}$ is a random prime chosen from ${\displaystyle [k]}$.

The probability ${\displaystyle \Pr[z{\bmod {p}}=0]}$ is computed directly as

${\displaystyle \Pr[z{\bmod {p}}=0]\leq {\frac {{\mbox{the number of prime divisors of }}z}{{\mbox{the number of primes in }}[k]}}}$.

For the numerator, we have the following lemma.

 Lemma The number of distinct prime divisors of any natural number less than ${\displaystyle 2^{n}}$ is at most ${\displaystyle n}$.
Proof.
 Each prime number is ${\displaystyle \geq 2}$, so if a number ${\displaystyle N>0}$ has more than ${\displaystyle n}$ distinct prime divisors, then ${\displaystyle N\geq 2^{n}}$.
${\displaystyle \square }$

Due to this lemma, ${\displaystyle z}$ has at most ${\displaystyle n}$ prime divisors.

We then lower bound the number of primes in ${\displaystyle [k]}$. This is given by the celebrated Prime Number Theorem (PNT).

 Prime Number Theorem Let ${\displaystyle \pi (k)}$ denote the number of primes less than ${\displaystyle k}$. Then ${\displaystyle \pi (k)\sim {\frac {k}{\ln k}}}$ as ${\displaystyle k\rightarrow \infty }$.

Therefore, by choosing ${\displaystyle k=tn\ln tn}$ for some ${\displaystyle t}$, we have that for any ${\displaystyle 0<z<2^{n}}$ and a random prime ${\displaystyle p\in [k]}$,

${\displaystyle \Pr[z{\bmod {p}}=0]\leq {\frac {n}{\pi (k)}}\sim {\frac {1}{t}}}$.

We can make this error probability polynomially small by choosing ${\displaystyle t=\mathrm {poly} (n)}$, while the number of bits to be communicated is still ${\displaystyle O(\log k)=O(\log n)}$.

### Application: Randomized pattern matching

Consider the following problem of pattern matching, which has nothing to do with communication complexity.

• Input: a string ${\displaystyle x\in \{0,1\}^{n}}$ and a "pattern" ${\displaystyle y\in \{0,1\}^{m}}$.
• Determine whether the pattern ${\displaystyle y}$ is a contiguous substring of ${\displaystyle x}$. Usually, we are also asked to find the location of the substring.

A naive algorithm trying every possible match runs in ${\displaystyle O(nm)}$ time. The more sophisticated KMP algorithm inspired by automaton theory runs in ${\displaystyle O(n+m)}$ time.

A simple randomized algorithm, due to Karp and Rabin, uses the idea of fingerprinting and also runs in ${\displaystyle O(n+m)}$ time.

Let ${\displaystyle X(j)=x_{j}x_{j+1}\cdots x_{j+m-1}}$ denote the substring of ${\displaystyle x}$ of length ${\displaystyle m}$ starting at position ${\displaystyle j}$.

 Algorithm (Karp-Rabin)
 pick a random prime ${\displaystyle p\in [k]}$;
 for ${\displaystyle j=1}$ to ${\displaystyle n-m+1}$ do
  if ${\displaystyle X(j){\bmod {p}}=y{\bmod {p}}}$ then report a match;
 return "no match";

So the algorithm just compares the ${\displaystyle \mathrm {FING} (X(j))}$ and ${\displaystyle \mathrm {FING} (y)}$ for every ${\displaystyle j}$, with the same definition of fingerprint function ${\displaystyle \mathrm {FING} (\cdot )}$ as in the communication protocol for EQ.

By the same analysis, by choosing ${\displaystyle k=n^{2}m\ln(n^{2}m)}$, the probability of a single false match is

${\displaystyle \Pr[X(j){\bmod {p}}=y{\bmod {p}}\mid X(j)\neq y]=O\left({\frac {1}{n^{2}}}\right)}$.

By the union bound, the probability that a false match occurs is ${\displaystyle O\left({\frac {1}{n}}\right)}$.

The algorithm runs in linear time if we assume that we can compute ${\displaystyle X(j){\bmod {p}}}$ for each ${\displaystyle j}$ in constant time. This outrageous assumption can be made realistic by the following observation.

 Lemma Let ${\displaystyle \mathrm {FING} (a)=a{\bmod {p}}}$. ${\displaystyle \mathrm {FING} (X(j+1))\equiv 2(\mathrm {FING} (X(j))-2^{m-1}x_{j})+x_{j+m}{\pmod {p}}\,}$.
Proof.
 It holds that ${\displaystyle X(j+1)=2(X(j)-2^{m-1}x_{j})+x_{j+m}\,}$. So the equation holds on the finite field modulo ${\displaystyle p}$.
${\displaystyle \square }$

Due to this lemma, each fingerprint ${\displaystyle \mathrm {FING} (X(j))}$ can be computed in an incremental way, each in constant time. The running time of the algorithm is ${\displaystyle O(n+m)}$.
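The whole algorithm can be sketched in Python as follows (our own code; as an addition beyond the lecture's version, each fingerprint match is verified against the actual substring, so reported matches are never false):

```python
def karp_rabin(x, y, p):
    """Return the first 0-based index where pattern y occurs in x, or -1.
    x and y are bit strings; p is the random prime."""
    n, m = len(x), len(y)
    if m > n:
        return -1
    lead = pow(2, m - 1, p)  # 2^(m-1) mod p, weight of the leading bit
    fy = fx = 0
    for i in range(m):       # fingerprints of y and of X(1)
        fy = (2 * fy + int(y[i])) % p
        fx = (2 * fx + int(x[i])) % p
    for j in range(n - m + 1):
        if fx == fy and x[j:j + m] == y:  # verify to rule out false matches
            return j
        if j + m < n:
            # FING(X(j+1)) = 2*(FING(X(j)) - 2^(m-1)*x_j) + x_{j+m}  (mod p)
            fx = (2 * (fx - lead * int(x[j])) + int(x[j + m])) % p
    return -1
```

Each rolling update costs constant time, matching the incremental computation described above.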

## Checking distinctness

Consider the following problem:

• Given a sequence ${\displaystyle x_{1},x_{2},\ldots ,x_{n}\in \{1,2,\ldots ,n\}}$, check whether every member of ${\displaystyle \{1,2,\ldots ,n\}}$ appears exactly once.

This problem is called duplicate detection or checking distinctness. It can be solved in linear time and linear space by a straightforward algorithm.

For many real applications, ${\displaystyle n}$ is enormously large, and we would like to have an algorithm using very limited extra space.

### Fingerprinting multisets

A randomized algorithm due to Lipton checks distinctness by solving a more general problem: fingerprinting multisets.

Given a multiset (each member may appear more than once) ${\displaystyle M=\{x_{1},x_{2},\ldots ,x_{n}\}}$, its fingerprint is defined as

${\displaystyle \mathrm {FING} (M)=\prod _{i=1}^{n}(r-x_{i}){\bmod {p}},}$

where ${\displaystyle p}$ is a random prime chosen from the interval ${\displaystyle [(n\log n)^{2},2(n\log n)^{2}]}$, and ${\displaystyle r}$ is chosen uniformly at random from ${\displaystyle [p]}$.

We first see that the space required to compute ${\displaystyle \mathrm {FING} (M)}$ is only ${\displaystyle O(\log n)}$ bits. We do not need to compute the value of the product ${\displaystyle \prod _{i=1}^{n}(r-x_{i})}$ over the integers. Instead, all the computation can be done in the finite field ${\displaystyle \mathbb {Z} _{p}}$ (you need some knowledge in algebra to understand this), reducing modulo ${\displaystyle p}$ after each multiplication. So the space requirement is only ${\displaystyle O(\log p)=O(\log n)}$.
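A sketch of the fingerprint computation in Python (our own code, with ${\displaystyle p}$ and ${\displaystyle r}$ passed in explicitly); reducing modulo ${\displaystyle p}$ after every multiplication keeps the accumulator within ${\displaystyle O(\log p)}$ bits:

```python
def multiset_fingerprint(M, p, r):
    """FING(M) = prod_i (r - x_i) mod p, accumulated entirely in Z_p."""
    f = 1
    for x in M:
        f = f * (r - x) % p  # never store the full integer product
    return f
```

The fingerprint depends only on the multiset, not on the order in which its members are listed.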

It is easy to see that the above fingerprint function is invariant under permutations of the members, thus one multiset has exactly one fingerprint. The next theorem, due to Lipton, states that the probability that two distinct multisets have the same fingerprint is small.

 Theorem (Lipton 1989) Let ${\displaystyle M_{1}=\{x_{1},x_{2},\ldots ,x_{n}\}}$ and ${\displaystyle M_{2}=\{y_{1},y_{2},\ldots ,y_{n}\}}$ be two multisets whose members are from ${\displaystyle \{1,2,\ldots ,n\}}$. If ${\displaystyle M_{1}\neq M_{2}}$, then ${\displaystyle \Pr[\mathrm {FING} (M_{1})=\mathrm {FING} (M_{2})]=O\left({\frac {1}{n}}\right)}$.
Proof.
 Let ${\displaystyle P_{1}(u)=\prod _{i=1}^{n}(u-x_{i})}$ and ${\displaystyle P_{2}(u)=\prod _{i=1}^{n}(u-y_{i})}$, and let ${\displaystyle Q(u)=P_{1}(u)-P_{2}(u)}$. If ${\displaystyle M_{1}\neq M_{2}}$, then the polynomials ${\displaystyle P_{1}}$ and ${\displaystyle P_{2}}$ are not identical, so ${\displaystyle Q\not \equiv 0}$. We only need to show that ${\displaystyle \Pr[Q(r){\bmod {p}}=0]}$ is small for our choice of random ${\displaystyle p}$ and ${\displaystyle r}$. We first show that the probability that ${\displaystyle Q\equiv 0{\pmod {p}}}$ is small, and then apply the Schwartz-Zippel technique to show that, conditioning on ${\displaystyle Q\not \equiv 0{\pmod {p}}}$, the probability that ${\displaystyle Q(r){\bmod {p}}=0}$ is small. Expand ${\displaystyle Q(u)}$ in the form ${\displaystyle Q(u)=\sum _{k=0}^{n}a_{k}u^{k}}$. Since the members are from ${\displaystyle \{1,2,\ldots ,n\}}$, every coefficient satisfies ${\displaystyle |a_{k}|\leq {\binom {n}{k}}n^{k}\leq (2n)^{n}}$. Since ${\displaystyle Q\not \equiv 0}$, it has at least one nonzero coefficient ${\displaystyle c\neq 0}$, with ${\displaystyle |c|\leq (2n)^{n}}$, so ${\displaystyle c}$ has at most ${\displaystyle \log _{2}|c|\leq n\log _{2}(2n)}$ distinct prime divisors. Then with the same analysis as for EQ, by our choice of random ${\displaystyle p}$, ${\displaystyle \Pr[c{\bmod {p}}=0]=O\left({\frac {1}{n}}\right)}$. Now condition on ${\displaystyle Q\not \equiv 0{\pmod {p}}}$: since the degree of ${\displaystyle Q}$ is at most ${\displaystyle n}$, ${\displaystyle Q}$ has at most ${\displaystyle n}$ roots modulo ${\displaystyle p}$. Since ${\displaystyle p\in [(n\log n)^{2},2(n\log n)^{2}]}$, ${\displaystyle r}$ is uniformly distributed over a set of size at least ${\displaystyle (n\log n)^{2}}$. Therefore, the probability that ${\displaystyle Q(r){\bmod {p}}=0}$ conditioning on ${\displaystyle Q\not \equiv 0{\pmod {p}}}$ is at most ${\displaystyle {\frac {n}{(n\log n)^{2}}}={\frac {1}{n(\log n)^{2}}}}$.
By the union bound, ${\displaystyle \Pr[\mathrm {FING} (M_{1})=\mathrm {FING} (M_{2})]=O\left({\frac {1}{n}}\right)}$.
${\displaystyle \square }$

### Checking distinctness by fingerprinting multisets

We now have a fingerprint function for multisets, which is useful for checking the identities of multisets. To solve the problem of checking distinctness, we just check whether the input multiset ${\displaystyle M}$ is identical to the set ${\displaystyle \{1,2,\ldots ,n\}}$.
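Putting the pieces together, a sketch of the whole distinctness checker in Python (our own code; the trial-division prime sampler is an assumption made for simplicity):

```python
import math
import random

def check_distinctness(xs):
    """Check whether xs is a permutation of {1,...,n} by comparing the
    multiset fingerprints of xs and of (1,...,n), in O(log n) extra space.
    One-sided error: a true permutation is always accepted."""
    n = len(xs)
    lo = max(5, int((n * math.log(n + 1)) ** 2))
    # random prime p in [lo, 2*lo] by rejection sampling (trial division)
    while True:
        p = random.randint(lo, 2 * lo)
        if all(p % d != 0 for d in range(2, math.isqrt(p) + 1)):
            break
    r = random.randrange(p)
    f_input = f_ideal = 1
    for i, x in enumerate(xs, start=1):
        f_input = f_input * (r - x) % p  # fingerprint of the input multiset
        f_ideal = f_ideal * (r - i) % p  # fingerprint of {1, ..., n}
    return f_input == f_ideal
```

A sequence containing a duplicate is rejected with probability ${\displaystyle 1-O\left({\frac {1}{n}}\right)}$, by Lipton's theorem above.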