Let $X_0$ be the unit disc, and consider the process of "cutting out circles", where to construct $X_n$ you select a uniform random point $x \in X_{n-1}$, and cut out the largest circle with center $x$. To illustrate this process, we have the following graphic:

[figure: cutting out circles]

where the graphs are respectively showing one sample of $X_1,X_2,X_3,X_{100}$ (the orange parts have been cut out).

Can we prove that we eventually cut everything out? Formally, is the following true: $$\lim_{n \to \infty} \mathbb{E}[\text{Area}(X_n)] = 0,$$

where $\mathbb{E}$ denotes expectation. Simulations suggest this is true; in fact $\mathbb{E}[\text{Area}(X_n)]$ seems to decay with some power law, but after 4 years I still don't really know how to prove this :(. The main thing you need to rule out, it seems, is that $X_n$ gets too skinny too quickly.
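Since every removed circle is inscribed in the remaining region, the removed disks are pairwise disjoint, and the largest circle centered at $x$ is limited only by the outer boundary and the disks removed so far. That makes the process easy to simulate; here is a minimal Monte Carlo sketch (function names and sample sizes are my own choices):

```python
import math
import random

def largest_radius(x, y, disks):
    # Largest circle centered at (x, y) that stays inside the unit disc
    # and avoids every previously removed (pairwise disjoint) disk.
    r = 1.0 - math.hypot(x, y)
    for cx, cy, cr in disks:
        r = min(r, math.hypot(x - cx, y - cy) - cr)
    return r

def remaining_area(n_cuts, rng):
    # One sample of Area(X_n): cut out n_cuts inscribed circles.
    disks = []
    while len(disks) < n_cuts:
        x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        if x * x + y * y >= 1.0:
            continue                      # outside the unit disc
        r = largest_radius(x, y, disks)
        if r <= 0.0:
            continue                      # landed in an already-removed disk
        disks.append((x, y, r))
    return math.pi - sum(math.pi * cr * cr for _, _, cr in disks)

rng = random.Random(0)
samples = [remaining_area(100, rng) for _ in range(20)]
print(sum(samples) / len(samples))        # crude estimate of E[Area(X_100)]
```

With more steps and more runs, plotting the average against $n$ on log-log axes is how one would eyeball the conjectured power law.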

Joshua Lin
  • I think the visualization is not what you intend because it has overlapping circles? – alphacapture Jan 16 '17 at 03:40
  • It might be useful to first consider the 1-dimensional case, i.e. nested intervals. – Adrien Vakili Jan 16 '17 at 05:27
  • In the 1-d case, it is possible (but unlikely) to fill a given gap by selecting the exact midpoint. In the case of filling with circles, you pick a point that lies in an unfilled section. That section is concave, and you put a circle into it. It is impossible for the new circle to completely fill the region you put it into (except in the trivial case that your first circle is placed dead centre). If after placement of a new circle there is guaranteed to be unfilled space, there will always be unfilled space, so your limit is 0. – Iadams Jan 16 '17 at 08:54
  • @Iadams: The OP says: 'Repeat'. I assume they mean 'create a countable sequence of centers of circles that satisfy the criteria'. It is certainly possible to create such a countable sequence that does not cover the entire circle (imagine all centers fall on a diameter). So the question is not 'can I always find a new center?' – Chas Brown Jan 16 '17 at 10:12
  • On the other hand, at each step $n$ we can certainly make $A_{n+1}>A_n$. So the question is not *equivalent* to 'can I always find a new center?' – Chas Brown Jan 16 '17 at 10:18
  • Suppose we allow the randomly selected circle centers to be inside old circles, but if such a center is chosen in a specific step, don't add a circle. This is clearly equivalent (in fact, if we make it add the largest circle contained in the smallest circle it is contained in, we get the visualization he gave). Then the sequence of points is almost surely dense. Is it possible to have a dense sequence of points such that this fails? – alphacapture Jan 19 '17 at 00:14
  • @alphacapture: That seems like a fruitful approach. Proceed! – Chas Brown Jan 19 '17 at 08:16
  • This reminds me of [Apollonian gaskets](https://en.wikipedia.org/wiki/Apollonian_gasket), which are special cases of what you describe. There has been some research on the areas of these, see also [this question/answer here on Math.SE](https://math.stackexchange.com/a/565018/16881) including a link to a related question on MathOverflow. Not an answer, but it might help! – Ailurus Jul 28 '21 at 22:58
  • If you keep track of the distance from any point to the removed area with random variables, you find that this distance tends to zero for every point with probability $1$, so that every point is either removed, or in the closure of a removed disk, and with probability $1$ all the area is removed in the limit. See my answer below for details (it got trapped beneath an even longer answer, hence my comment). – Rivers McForge Aug 13 '21 at 07:03

5 Answers


This proof is incomplete, as noted in the comments and at the end of this answer

Apologies for the length. I tried to break it up into sections so it's easier to follow, and I tried to make all implications really clear. Happy to revise as needed.

I'll start with some definitions to keep things clear.


  • Let the area of a set $S \subset \mathbb{R}^2$ be its 2-Lebesgue measure, $A(S) := \lambda^*_2(S)$
  • Let $p_n$ be the point selected from $X_{n-1}$, so that $P(p_n \in Q) = \frac{A(Q)}{A({X_{n-1}})} \; \forall Q\in \mathcal{B}(X_{n-1})$
  • Let $C_n(p)$ be the maximal circle centered at $p \in X_{n-1}$ that fits in $X_{n-1}$: $C_n(p) = \max_r \{\textrm{Circle}(p,r):\textrm{Circle}(p,r) \subseteq X_{n-1}\}$
  • Let $A_n = A(C_n(p_n))$ be the area of the circle drawn around $p_n$ (i.e., $X_n = X_{n-1}\setminus C_n(p_n)$)

We know that $0 \leq A_n \leq 1$ (normalizing so that $A(X_0)=1$). By your definition of the generating process we can also make a stronger statement: since you're using a uniform probability measure over (well-behaved) subsets of $X_{n-1}$ as the distribution of $p_n$, we have $P(p_n \in B) := \frac{A(B)}{A(X_{n-1})}\;\;\forall B\in \sigma\left(X_{n-1}\right) \implies P(p_1 \in S) = A(S) \;\;\forall S \in \sigma(X_0)$.

Lemma 1: $P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1$

Proof: We'll show this by proving

  1. $P(A_n>0)=1\;\forall n$
  2. $(1) \implies P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$
  3. $(2) \implies P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1$

$A_n = 0$ can only happen if $p_n$ falls directly on the boundary of $X_{n-1}$ (i.e., $p_n \in \partial X_{n-1} \subset \mathbb{R}^2$). However, since each $\partial X_{n-1}$ is the union of a finite number of smooth curves (circular arcs) in $\mathbb{R}^2$, we have $A(\partial X_{n-1})=0 \;\forall n \implies P(p_n \in \partial X_{n-1})=0\;\;\forall n \implies P(A_n>0)=1\;\forall n$.

If $P(A_n>0)=1\;\forall n$ then, since $A(X_i) = A(X_{i-1}) - A_i\;\forall i$, we have that $A(X_{i-1}) - A(X_i) = A_i\;\forall i$

Therefore, $P(A(X_{i-1}) - A(X_i) > 0\;\forall i) = P(A_i>0\;\forall i)=1\implies P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$

If $P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$ then $(A(X_{i}))_{i\in \mathbb{N}}$ is a monotonic decreasing sequence almost surely.

Since $A(X_i)\geq 0\;\;\forall i$, the sequence $(A(X_{i}))_{i\in \mathbb{N}}$ is bounded from below, so the monotone convergence theorem for sequences implies $P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1\;\;\square$

As you've stated, what we want to show is that eventually we've cut away all the area. There are two senses in which this can be true:

  1. Almost all sequences $\left(A(X_i)\right)_1^{\infty}$ converge to $0$: $P\left(\lim \limits_{n\to\infty}A(X_n) = 0\right) = 1$
  2. $\left(A(X_i)\right)_1^{\infty}$ converges in mean to $0$: $\lim \limits_{n\to \infty} \mathbb{E}[A(X_n)] = 0$

In general, these two senses of convergence do not imply each other. However, with a couple of additional conditions we can show that almost sure convergence implies convergence in mean. Your question is about (2), and we will get there by proving (1) plus a sufficient condition for $(1)\implies (2)$.

I'll proceed as follows:

  1. Show $A(X_n) \overset{a.s.}{\to} 0$ using Borel-Cantelli Lemma
  2. Use the fact that $0<A(X_n)\leq 1$ to apply the Dominated Convergence Theorem to show $\mathbb{E}[A(X_n)] \to 0$

Step 1: $A(X_n) \overset{a.s.}{\to} 0$

If $\lim_{n\to \infty} A(X_n) = A_R > 0$, then there is some set $R$ with positive area $A(R)=A_R >0$ that is a subset of all $X_n$ (i.e., $\exists R \subset X_0: A(R)>0\;\textrm{and}\;R \subset X_i\;\;\forall i> 0$).

Let's call a set $S\subset X_0$ with $A(S)>0$ and $S \subset X_i\;\;\forall i> 0$ a reserved set, since we are "setting it aside". In the rest of this proof, the letter $R$ will refer to a reserved set.

Let's define the set $Y_n = X_n \setminus R$, and the event $T_n:=p_n \in Y_{n-1}$ then

Lemma 2: $P\left(\bigcap_1^n T_i \right) \leq A(Y_0)^n = (1 - A_R)^n\;\;\forall n>0$

Proof: We'll prove this by induction. Note that $P(T_1) = A(Y_0)$ and $P(T_1\cap T_2) = P(T_2|T_1)P(T_1)$. We know that if $T_1$ has happened, then Lemma 1 implies that $A(Y_{1}) < A(Y_0)$. Therefore

$$P(T_2|T_1)<P(T_1)=A(Y_0)\implies P\left(T_1 \bigcap T_2\right)\leq A(Y_0)^2$$

If $P(\bigcap_{i=1}^n T_i) \leq A(Y_0)^n$ then by a similar argument we have

$$P\left(\bigcap_{i=1}^{n+1} T_i\right) = P\left( T_{n+1} \left| \;\bigcap_{i=1}^n T_i\right. \right)P\left(\bigcap_{i=1}^n T_i\right)\leq A(Y_0)A(Y_0)^n = A(Y_0)^{n+1}\;\;\square$$

However, for $R$ to persist, we must ensure not only that $T_n$ occurs for all $n>0$ but also that each $p_n$ doesn't fall in a certain neighborhood $\mathcal{N}_n(R)$ of $R$:

$$\mathcal{N}_n(R):= \mathcal{R}_n\setminus R$$ $$\textrm{where}\; \mathcal{R}_n:=\{p \in X_{n-1}: A(C_n(p)\cap R)>0\}\supseteq R$$

Let's define the event $T'_n:=p_n \in X_{n-1}\setminus \mathcal{R}_n$ to capture the above requirement for a particular point $p_n$. We then have the following.

Lemma 3: $A(X_n) \overset{a.s.}{\to} A_R \implies P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1$

Proof: Assume $A(X_n) \overset{a.s.}{\to} A_R$. If $P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)<1$ then $P\left(\exists k>0:p_k \in \mathcal{R}_k\right)>0$. By the definition of $\mathcal{R}_k$, $A(C_k(p_k)\cap R) > 0$, which means that $X_{k}\cap R \subsetneq R \implies A(X_{k}\cap R) < A_R$. By Lemma 1, $(X_i)_{i \in \mathbb{N}}$ is a decreasing sequence of sets, so $A(X_{j}\cap R) < A_R \;\;\forall j>k$; therefore, $\exists \epsilon > 0: P\left(A(X_n) \to A_R - \epsilon\right)>0$. However, this contradicts our assumption $A(X_n) \overset{a.s.}{\to} A_R$. Therefore, $P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)<1$ is false, which implies $P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1\;\square$

Corollary 1: $P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1$ is a necessary condition for $A(X_n) \overset{a.s.}{\to} A_R$

Proof: This follows immediately from Lemma 3 by contraposition: $X \implies Y \iff (\neg Y \implies \neg X)$; an implication is logically equivalent to its contrapositive. $\square$

We can express Corollary 1 as an event $\mathcal{T}$ in a probability space $\left(X_0^{\mathbb{N}},\mathcal{F},\mathbb{P}\right)$ constructed from the sample space of infinite sequences of points $p_n \in X_0$ where:

  • $X_0^{\mathbb{N}}:=\prod_{i\in\mathbb{N}}X_0$ is the set of all sequences of points in the unit disk $X_0 \subset \mathbb{R^2}$

  • $\mathcal{F}$ is the product Borel $\sigma$-algebra generated by the product topology of all open sets in $X_0^{\mathbb{N}}$

  • $\mathbb{P}$ is a probability measure defined on $\mathcal{F}$

With this space defined, we can define our event $\mathcal{T}$ as the intersection of a non-increasing sequence of cylinder sets in $\mathcal{F}$:

$$\mathcal{T}:=\bigcap_{i=1}^{\infty}\mathcal{T}_i \;\;\;\textrm{where } \mathcal{T}_i:=\bigcap_{j=1}^{i} T'_j = \text{Cyl}_{\mathcal{F}}(T'_1,..,T'_i)$$

Lemma 4: $\mathbb{P}(\mathcal{T}_n) = \mathbb{P}(\bigcap_1^n T'_i)\leq \mathbb{P}\left(\bigcap_1^n T_i\right)\leq (1-A_R)^n$

Proof: $\mathbb{P}(\mathcal{T}_n) = \mathbb{P}(\bigcap_1^n T'_i)$ follows from the definition of $\mathcal{T}_n$. $\mathbb{P}(\bigcap_1^n T'_i)\leq \mathbb{P}\left(\bigcap_1^n T_i\right)$ follows immediately from $R\subseteq \mathcal{R}_n\;\;\forall n\;\square$

Lemma 5: $\mathcal{T} \subseteq \limsup \limits_{n\to \infty} \mathcal{T}_n$

Proof: By definition $\mathcal{T} \subset \mathcal{T}_i \;\forall i>0$. Since $\left(\mathcal{T}_i\right)_{i \in \mathbb{N}}$ is nonincreasing, we have $\limsup \limits_{i\to \infty} \mathcal{T}_i = \liminf \limits_{i\to \infty}\mathcal{T}_i = \lim \limits_{i\to \infty}\mathcal{T}_i = \mathcal{T}\;\;\square$

Lemma 6: $\mathbb{P}\left(\limsup \limits_{i\to \infty} \mathcal{T}_i\right) = 0\;\;\forall A_R \in (0,1]$

Proof: From Lemma 4 $$\sum \limits_{i=1}^{\infty} \mathbb{P}\left(\mathcal{T}_i\right) \leq \sum \limits_{i=1}^{\infty} (1-A_R)^i = \sum \limits_{i=0}^{\infty} \left[(1-A_R) \cdot (1-A_R)^i\right] =$$ $$ \frac{1-A_R}{1-(1-A_R)} = \frac{1-A_R}{A_R}=\frac{1}{A_R}-1 < \infty \;\; \forall A_R \in (0,1]\implies$$ $$ \mathbb{P}\left(\limsup \limits_{i\to \infty} \mathcal{T}_i\right) = 0 \;\; \forall A_R \in (0,1]\textrm{ (Borel-Cantelli) }\;\square$$
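As a quick numerical sanity check on the geometric series above (the value $A_R = 0.3$ is an arbitrary choice of mine):

```python
# Partial sums of sum_{i >= 1} (1 - A_R)^i converge to 1/A_R - 1.
A_R = 0.3  # arbitrary reserved-area fraction in (0, 1]
partial = sum((1 - A_R) ** i for i in range(1, 200))
closed_form = 1 / A_R - 1
print(partial, closed_form)  # the two agree to many decimal places
```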

Lemma 6 implies that, with probability 1, only finitely many of the events $\mathcal{T}_i$ occur. Specifically, for almost every sequence $\omega \in X_0^{\mathbb{N}}$ there $\exists n_{\omega}<\infty$ such that $p_{n_{\omega}} \in \mathcal{R}_{n_{\omega}}$.

We can define a stopping time for each sequence $\omega \in X_0^{\mathbb{N}}$ as follows:

$$\tau(\omega) := \max \limits_{n \in \mathbb{N}} \{n:\omega \in \mathcal{T}_n\}$$

Corollary 2: $\mathbb{P}(\tau < \infty) = 1$

Proof: This follows immediately from Lemma 6 and the definition of $\tau$. $\square$

Lemma 7: $P(\mathcal{T}) = 0\;\;\forall R:A(R)>0$

Proof: This follows from Lemma 5 and Lemma 6. $\square$

This is where I'm missing a step. For Theorem 1 below to work, Lemma 7 + Corollary 1 are not sufficient.

Just because every fixed subset $R$ of positive area has probability zero of persisting doesn't imply that the event that *some* subset of positive area persists has probability zero. An analogous situation arises with continuous random variables: every individual point has probability zero, yet when we draw from the distribution we nonetheless get a point.

What I don't know are the sufficient conditions for the following:

$P(\omega)=0 \;\forall \omega\in \Omega: A(\omega)=R \implies P(\{\omega: A(\omega)=R\})=0$

Theorem 1: $A(X_n) \overset{a.s.}{\to} 0$

Proof: Lemma 7 and Corollary 1 imply that $A(X_n)$ does not converge to $A_R$ almost surely, which implies $P(A(X_n) \to A_R) < 1 \;\forall A_R > 0$. Corollary 2 makes the stronger statement that $P(A(X_n) \to A_R)=0\;\forall A_R>0$ (i.e., almost never), since we know that the sequence of circle centers $p_n$, viewed as a stochastic process, will almost surely hit $R$ (again, since we've defined $R$ such that $A(R)>0$). $P(A(X_n) \to A_R) = 0 \;\forall A_R>0$ together with Lemma 1 implies that $P(A(X_n) \to 0) = 1$. Therefore, $A(X_n) \overset{a.s.}{\to} 0\;\square$

Step 2: $\mathbb{E}[A(X_n)] \to 0$

We will appeal to the Dominated Convergence Theorem to prove this result.

Theorem 2: $\mathbb{E}[A(X_n)] \to 0$

Proof: From Theorem 1 we have $A(X_n) \overset{a.s.}{\to} 0$. Take an almost surely constant random variable $Z\overset{a.s.}{=}c$ with $c>1$; then $|A(X_n)| \leq Z\;\forall n$. In addition, $\mathbb{E}[Z]=c<\infty$, so $Z$ is $\mathbb{P}$-integrable. Therefore, $\mathbb{E}[A(X_n)] \to 0$ by the Dominated Convergence Theorem. $\square$

  • Your Lemma 1 seems excessively complicated - we know by definition that each new points removes area (or is on a boundary and does nothing), so $A(X_i)\geq A(X_{i+1}) \geq 0$. Therefore $A(X_i)$ is a monotonically decreasing bounded sequence and so always converges to some nonnegative number $L$. – Eric Aug 13 '21 at 04:12
  • @Eric didn’t you just restate the three components of the proof of Lemma 1? Can you point out where I added extra complexity? Perhaps I just proved something that seems obvious to you? – Bey Aug 13 '21 at 04:17
  • I’m a little confused at your definition of $R$. $R$ is a random Area which depends on which points are chosen. Therefore it doesn’t make sense to talk about the probability of your points not landing in $R$. Maybe you could do something sensible by talking about the conditional probability given the limit sequence or you could choose some fixed $R$ and then talk about probability relative to that, but you seem to do neither. Am I missing something obvious? – Eric Aug 13 '21 at 04:17
  • @Eric R is fixed, as per your second suggestion – Bey Aug 13 '21 at 04:18
  • Thanks for clarifying R. For Lemma 1, I think you proved some unnecessary stuff. For example, you don’t need to show that $A_n$ is a.s. positive. And the inside of (2) is just plain true (that removing stuff doesn’t add area), so you don’t need to use any probability language. If the rest of the proof is correct (it’s a bit overly notation dense for me to check the details) it’s pretty neat. – Eric Aug 13 '21 at 04:28
  • @Eric it's very likely that I proved something obvious -- but I don't know the background of the person asking and proving the obvious doesn't hurt my argument (apart from making it unnecessarily verbose) – Bey Aug 13 '21 at 04:31
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/128527/discussion-between-eric-and-bey). – Eric Aug 13 '21 at 13:05
  • It seems to me that thus proof is wrong. The problem, as pointed out by Eric in the chat, is that there are uncountably many possible choices of $R$, so ruling one of them out does not rule out that some set of positive measure survives. – Yuval Peres Aug 21 '21 at 02:58
  • @YuvalPeres correct — the missing piece is going from Lemma 7, which holds for any fixed region of positive area M, to the measure of all regions of positive area M — analogous to needing to show the density is a point mass at 0 — not sure the missing element though — some measure over all possible subsets of the unit disk of area M – Bey Aug 21 '21 at 03:12
  • @YuvalPeres what I need to find is sufficient conditions to show that $P(\omega)=0 \;\forall \omega\in \Omega: A(\omega)=R \implies P(\{\omega: A(\omega)=R\})=0$ – Bey Aug 21 '21 at 03:39

New to this, so not sure about the rigor, but here goes.

Let $A_k$ be the $k$th circle. Assume the area of $\bigcup_{k=1}^n A_k$ does not approach the total area of the circle $A_T$ as $n$ tends towards infinity. Then there must be some area $K$ which is not covered yet cannot harbor a new circle. Let $C = \bigcup_{k=1}^\infty A_k$. Consider a point $P$ such that $d(P,K)=0$ and $d(P,C)>0$. If no such point exists, then $K \subset C$, as $C$ is clearly a closed set of points. If such a point does exist, then another circle with center $P$ and nonzero area can be made to cover part of $K$, and the same logic applies to all possible $K$. Therefore there is no area $K$ which cannot contain a new circle, and by consequence $$\lim_{n\to\infty}\Bigg[\bigcup_{k=1}^n A_k\Bigg] = \big[A_T\big]$$ Since the size of circles is continuous, there must be a set of circles $\{A_k\}_{k=1}^\infty$ such that $\big[A_k\big]=E(\big[A_k\big])$ for each $k \in \mathbb{N}$, and therefore $$\lim_{n\to\infty} E(\big[A_k\big]) = \big[A_k\big]$$

EDIT: This proof is wrong because I'm bad at probability; working on a new one.

  • You may use [`\limits` in MathJax](http://meta.math.stackexchange.com/a/12850/290189) to show $\bigcup\limits_{k=1}^n A_k$ – GNUSupporter 8964民主女神 地下教會 Jan 17 '17 at 23:16
  • I know, I prefer the inline version when in paragraph form like that. – Vedvart1 Jan 17 '17 at 23:23
  • How is $K$ being defined? Or, what is the difference in definition between $K$ and $C$? – alphacapture Jan 18 '17 at 00:32
  • C is the union of all colored circles, while K is an arbitrary non-covered area in which a new circle can't be made. – Vedvart1 Jan 18 '17 at 02:18
  • The existence of a specific sequence that covers the area doesn't imply that the *expectation* of the sequence does it. I think you jumped the most important step there. –  Jan 18 '17 at 02:26
  • Shouldn't this prove it for all sequences though, given the generality of $A_k$? Since if you had any specific sequence of circles, you could substitue that in for $\{A_k\}_{k=1}^\infty$. – Vedvart1 Jan 18 '17 at 02:32
  • @Vedvart1 clearly there exist sequences that don't fill the region. The problem is to show that you will tend to avoid these in expectation. Check Chas Brows's comment in OP. –  Jan 18 '17 at 02:36
  • Are there sequences of nonzero probability that don't fill the region? Also, the points don't have to be chosen, since the proof simply relies on there not being any possibility for the theorem to be false for ANY points, random OR chosen. – Vedvart1 Jan 18 '17 at 02:42
  • @Vedvart1 there are infinitely many of sequences that don't fill the region. Any sequence where each new radius is smaller than $1/(\pi n)$ for example. It will converge to a area smaller than $\pi$. Just because each of them has probability 0 doesn't imply convergence of expectation. –  Jan 18 '17 at 02:50
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/51973/discussion-between-vedvart1-and-slug-pue). – Vedvart1 Jan 18 '17 at 02:52

This is not a complete answer but is too lengthy for a comment. This problem is similar to the problems of statistical geometry. There, it is proposed that you can fill an arbitrary region on the plane with another shape (or shapes) that decrease in size according to a power law. The foundations of statistical geometry are laid out in a book by its developer, John Shier: Fractalize That! : A Visual Essay On Statistical Geometry, World Scientific, 2018 (available at Amazon).

I, myself, have made many such calculations. Several examples can be found at Pinterest.

Now, the catch is that this cannot be proven (so far) other than for the simple case of circles within a circular disk. This was published by Christopher Evens, "(Always) Room for One More," Math Horizons, February 2016. This might be just what you need.

Cye Waldman

This process will generate a random instance of an Apollonian packing.

This paper shows that the Hausdorff dimension of the residual set (the leftover points) of an Apollonian packing is less than 2 (more like 1.3). Therefore the 2-Lebesgue measure of the residual set is zero.

So, all such packings result in a sparse residual set of zero area. Whether that counts as cutting out "all" the area depends, I guess, on whether you want it cut out almost everywhere or strictly everywhere.

  • That was a fast downvote! What is the issue here? I thought this would address the posters question in the affirmative. – Bey Aug 23 '21 at 01:31
  • @Eric — took your advice and went geometric — note that the Apollonian gasket is one example of an Apollonian packing; it looks like the max-radius/tangency requirement makes the residual area behave well enough that we don't get sets of positive area with empty interior – Bey Aug 23 '21 at 02:09
  • @JoshuaLin (had the mention wrong before) — have you seen the paper linked to in my post? It seems to say that for Apollonian circle packings (i.e., maximal radii) you are correct. – Bey Aug 23 '21 at 15:17

Suppose our sample space is the open unit disk $\Bbb{D}$ with the usual uniform (Lebesgue) probability measure. To show that it's practically certain that the non-removed area tends to zero, we begin by constructing a sequence of random variables $R_n$, $n \geq 0$, on $\Bbb{D}$ as follows. (The following argument also works for any open set of finite area.)

Before we start slicing out our random disks, we initialize our random variables $R_0: \Bbb{D} \to [0, 1]$ so that, for any point $x \in \Bbb{D}$, $R_0(x)$ is the distance from $x$ to the edge of $\Bbb{D}$, i.e. $R_0(x)$ is the radius of the largest possible disk contained in $\Bbb{D}$ centered at $x$.

At each stage $n$, $n \geq 1$, we then select a point $x \in \Bbb{D}$ uniformly at random, and remove the disk of radius $R_{n-1}(x)$ centered at $x$. $R_n(y)$ is then defined to be $0$ for points $y$ in the removed disk, and updated to be the infimum distance from $y$ to any previously removed point (or the edge of $\Bbb{D}$) otherwise. If for some $y$, all the newly removed points are more than $R_{n-1}(y)$ away from $y$, then we set $R_n(y) = R_{n-1}(y)$.
Note that because our selections take place uniformly at random on $\Bbb{D}$, we may at some stage $n$ select a point $x$ for which $R_{n-1}(x) = 0$ (i.e. $x$ was already removed). In this case, no new disks are removed, $R_n(x)$ is defined to equal $R_{n-1}(x)$ for all points $x \in \Bbb{D}$, and we pick again.
These steps where "nothing happens" have no effect on the end result that all the area of $\Bbb{D}$ is eventually removed, although they grow increasingly frequent as the removed area takes up a larger and larger percentage of $\Bbb{D}$. The convenience of including these "do-nothing" steps is that we don't have to restrict our sample space at each new stage to only select from the non-removed points where $R_{n-1} > 0$.
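The construction of $R_0$ and the update rule for $R_n$ can be sketched numerically on a coarse grid of sample points; this is only a discretized illustration (grid spacing, step count, and names are my own choices):

```python
import math
import random

rng = random.Random(0)

# Grid sample of the open unit disk D (spacing 0.1; purely illustrative).
step = 0.1
grid = [(i * step, j * step)
        for i in range(-9, 10) for j in range(-9, 10)
        if math.hypot(i * step, j * step) < 1.0]

# R_0(x) = distance from x to the edge of D.
R = {p: 1.0 - math.hypot(*p) for p in grid}
disks = []  # removed disks as (cx, cy, radius)

for _ in range(50):
    # Select a point of D uniformly at random.
    while True:
        x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        if x * x + y * y < 1.0:
            break
    # Exact radius at the selected center: distance to the boundary
    # and to every disk removed so far.
    r = 1.0 - math.hypot(x, y)
    for cx, cy, cr in disks:
        r = min(r, math.hypot(x - cx, y - cy) - cr)
    if r <= 0.0:
        continue  # "do-nothing" step: the center was already removed
    disks.append((x, y, r))
    # Pointwise update: R_n(p) is the smaller of the old value and the
    # distance from p to the newly removed disk (0 if p is inside it).
    for p in grid:
        R[p] = min(R[p], max(0.0, math.hypot(x - p[0], y - p[1]) - r))

print(max(R.values()))  # sup of R_n over the grid; it shrinks as n grows
```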

We then have, for all $x \in \Bbb{D}$, $$R_0(x) \geq R_1(x) \geq R_2(x) \geq ... \geq 0.$$ The area of the remaining circle centered at $x$ at stage $n$ is $\pi R_n(x)^2$. The removed region at any stage $n$ is connected and closed; the non-removed region at stage $n$ is open. Let $C_0 = \emptyset$ and $C_n$ be the removed region at stage $n$; let $U_0 = \Bbb{D}$ and $U_n = \Bbb{D} \setminus C_n$ be the non-removed region. Since the area of $C_n$ is $\pi \Bbb{P}(C_n)$ and the area of $U_n$ is $\pi \Bbb{P}(U_n)$, it suffices to show that $\Bbb{P}(U_n) \to 0$ with probability $1$. If the points chosen in $\Bbb{D}$ at each stage are $X_1, X_2, X_3, ...$ then we have

\begin{align*} \Bbb{P}(C_n) &= R_0(X_1)^2 + \dots + R_{n-1}(X_n)^2 \\ \Bbb{P}(U_n) &= 1 - \left(R_0(X_1)^2 + \dots + R_{n-1}(X_n)^2\right) \end{align*}

We first note that for any point $p \in U_{n-1}$, $$\Bbb{P}\left(R_n(p) \leq \tfrac{1}{2} R_{n-1}(p) \,\middle|\, X_n \in B(p, R_{n-1}(p))\right) \geq 9/16,$$ because if $X_n$ lands within distance $\frac{3}{4}R_{n-1}(p)$ of $p$ (which, conditionally, happens with probability $(3/4)^2 = 9/16$), the removed disk has radius $R_{n-1}(X_n) \geq R_{n-1}(p) - |X_n - p|$, so $R_n(p) \leq 2|X_n - p| - R_{n-1}(p) \leq \frac{1}{2}R_{n-1}(p)$ (draw a picture to see this). Since $\Bbb{D}$ and $U_n$ are both separable, we can cover them with a countable union of open balls $B(p_n, R_{n-1}(p_n))$. The infinite monkey theorem then tells us that almost surely, the area of any of the countably many open balls in $U_i$ is at some stage $n \geq i$ reduced by a factor of at least $1 - (1/4)^2 = 15/16$. Since a countable intersection of almost sure events is almost sure, this happens for every open ball in $U_i$, and hence the area of $U_i$ is in the long term multiplied by a factor of at most $15/16$ with probability 1. In other words, if we let $U_\infty := \cap_n U_n$, then $\Bbb{P}(U_\infty) \leq \frac{15}{16} \Bbb{P}(U_n)$ with probability $1$ for all $n$.
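The conditional probability above is just the chance that a uniform point of a disk of radius $R$ lands within $\frac{3}{4}R$ of the center, which is $(3/4)^2 = 9/16$; a quick Monte Carlo check (sample size is arbitrary):

```python
import math
import random

rng = random.Random(1)
R = 1.0  # radius of the ball B(p, R), with p placed at the origin
hits = total = 0
while total < 200_000:
    # Uniform point in B(p, R), by rejection from the bounding square.
    x, y = rng.uniform(-R, R), rng.uniform(-R, R)
    if x * x + y * y > R * R:
        continue
    total += 1
    if math.hypot(x, y) <= 0.75 * R:
        hits += 1
print(hits / total)  # close to (3/4)^2 = 9/16 = 0.5625
```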

We're close to the end here. Let $\pi L$ be the limiting area of $U_\infty$, so $L$ is the limiting probability of $U_\infty$. This means for any $\epsilon > 0$, there exists $n$ so that the area of $U_n$, which we call $\pi L_n$, is at most $\epsilon$ greater than the area of $U_\infty$: $\pi L_n \leq \pi L + \epsilon$. But we know that with probability $1$, the area of $U_n$ will be reduced by a factor of at least $15/16$ in the limit, which means $\frac{15}{16} \pi L_n \geq \pi L$. Therefore $\frac{15}{16} \pi L + \frac{15}{16} \epsilon \geq \pi L$ for every $\epsilon > 0$ with probability $1$, which (since $L$ is nonnegative) is only possible if $L = 0$. $\blacksquare$

Rivers McForge
  • Why does $R_n(p)$ approaching $0$ imply that $p$ is removed? This only seems to imply that the points removed are dense, but that’s different than them having area approaching everything. – Eric Aug 13 '21 at 16:21
  • @Eric It’s not that every point gets removed, but that the set of points that were never removed has area zero with probability $1$. If the set of never-removed points $S$ had positive area, then the event of us picking a point in $S$ (i) had positive probability and (ii) never happened, which means (again by invoking infinite monkey theorem) that $S$ has positive area with probability zero. – Rivers McForge Aug 13 '21 at 17:17
  • I’m still confused - all you’ve shown is that any ball will a.s. have a point that’s removed. However, there are sets of positive measure with zero interior, so this isn’t sufficient to show that all the area is removed. (ex: https://en.m.wikipedia.org/wiki/Smith–Volterra-Cantor_set ). – Eric Aug 13 '21 at 20:24
  • @Eric Throughout the selection process, I’m picking points uniformly at random in $\Bbb{D}$. With probability $1$, I eventually pick a point from any set of positive area. For there to be a leftover set of positive area, I would have to “miss” that set of positive area infinitely often. That is a probability zero event. – Rivers McForge Aug 13 '21 at 22:01
  • For what it's worth - I am also confused about parts of this proof. It's amazing how much brain fog this problem has caused me. First question: the $R_n(x)$ are at first considered as real-valued random variables indexed by an $x$ in $D$, but then you swap to considering $R_n$ as a ($D \to \mathbb{R}$ function)-valued random variable, and claim $\lim_{n\to\infty} R_n$ is a well-defined ($D \to \mathbb{R}$ function)-valued random variable. What sigma-algebra is being taken on the space of real-valued functions over $D$? And what type of convergence is $R_n \to R_\infty$? (ideally, these details don't matter, but I'm confused in any case) – Joshua Lin Aug 13 '21 at 23:36
  • @JoshuaLin The real-valued functions/random variables under consideration are just measurable functions w/r/t our uniform probability measure, corresponding to Lebesgue measure and the Lebesgue $\sigma$-algebra on $\Bbb{D}$ (i.e. the Lebesgue $\sigma$-algebra on $\Bbb{R}^2$, restricted to $\Bbb{D}$.) It turns out the $R_n$ are continuous for $0 \leq n \leq \infty$. $R_n \to R_\infty$ uniformly pointwise. – Rivers McForge Aug 14 '21 at 00:12
  • By that logic removing a random countable set would 100% hit any fixed area and so also leave over a set of measure 0 (which is false since countable sets have measure 0). It’s true that for a fixed area that it’s probability 0 to not hit it, but that doesn’t mean that it’s 100% to cover everything but a 0 area since there are uncountable many such potential end sets. – Eric Aug 14 '21 at 00:33
  • @Eric But that’s true? If I pick a countable sequence of points uniformly at random in $\Bbb{D}$, it intersects any set of *positive* area with probability $1$. I’m also removing a countable number of *whole disks*, not just their center points, which is why the leftover area is zero with probability $1$. – Rivers McForge Aug 14 '21 at 00:42
  • The issue is that $S_\infty$ is random so you can’t apply the monkey theorem to it. More generally, your proof never depends on the fact that the radii are maximal, so it can’t work. If your proof worked it would also apply if you replaced the $n$th radius $r_n$ with $min(r_n,1/2^n)$. However, then the area is at most $\pi*(1/4+1/16+1/64...)=\pi/3$, so it can’t cover everything (same as the single point case). You’re left with a positive measure nowhere dense set. Therefore, your proof isn’t correct. – Eric Aug 14 '21 at 02:26
  • Maybe a good example to keep in mind (following @Eric 's comment) is that if you enumerate all the rational points in the disc, and take the union of balls around these rational points where the radii get smaller sufficiently quickly (e.g. r_n ~ 1/2^n), the measure of this set is smaller than the measure of the whole disc; and yet R_infty = 0 identically.. it seems? – Joshua Lin Aug 14 '21 at 13:29
  • @eric I think you’re misunderstanding how that part of the proof is used. What I’m saying is for any particular set that could play the role of $S_\infty$, the infinite monkey theorem guarantees that if it has positive area, it gets perforated to zero in the selection process. The maximality of the removed area is implicit in the argument. Your attempts to invalidate the argument all burden it with additional ad hoc hypotheses (only points are removed, the total area removed is forced to be less than the circle) that don’t hold in the original. – Rivers McForge Aug 14 '21 at 18:03
  • I’m trying to say that a random positive measure set can have probability 0 of containing any fixed positive measure set. My construction (remove countable many circles whose centers are random and a.s dense) shows this explicitly. The issue with your proof is that you prove that any fixed random set has prob 0 of being in $S_\infty$ (monkey theorem), and then conclude that $S_\infty$ is a.s. zero measure. That conclusion is not sound, and my constructions show how you can have a random positive measure set which a.s. does not contain any fixed positive measure sets. – Eric Aug 14 '21 at 21:54
  • I made some changes to more explicitly use the maximality of the removed set. I'm pretty sure this argument should be satisfactory and explicate some of the steps that were left implicit in the original answer. – Rivers McForge Aug 15 '21 at 23:58