Short answer: if $\frac 1 \pi$ is a normal number in base $2$, then the series converges in measure (though not necessarily in the usual sense). However, the normality of $\pi$ and of $\frac 1 \pi$ has not been proved (and it is not known whether it can be). I did not attempt to prove the converse, that convergence in measure implies normality of $\frac 1 \pi$.

*Disclaimer*: I would be glad if someone with a good knowledge of measure theory checked, and perhaps helped to make more rigorous, the part that justifies the introduction of the probability space.
$\DeclareMathOperator{\E}{\mathbb{E}}$
$\DeclareMathOperator{\Var}{Var}$
$\DeclareMathOperator{\Cov}{Cov}$

First, because sine has period $2\pi$, $\sin\left(2^n\right) = \sin\left(2\pi\left\{\frac{2^n}{2\pi}\right\}\right) =
\sin\left(2\pi \left\{2^{n-1}\frac{1}{\pi}\right\}\right),$ where $\{\cdot\}$ denotes the fractional part of a number.
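This reduction can be sanity-checked numerically, at least for small $n$ (double precision loses roughly one bit of accuracy in $\left\{2^{n-1}\frac 1 \pi\right\}$ each time the power of two doubles, so the check below stops early):

```python
import math

# Check sin(2^n) == sin(2*pi*{2^(n-1)/pi}) for small n; {.} is the fractional part.
# Double precision limits this to small n: the fractional part loses roughly one
# bit of accuracy per doubling of 2^(n-1).
for n in range(1, 16):
    frac = (2 ** (n - 1) / math.pi) % 1.0
    assert abs(math.sin(2 ** n) - math.sin(2 * math.pi * frac)) < 1e-9
print("identity verified for n = 1..15")
```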

I define a number $c$ to be normal in base $2$ if $$\left (\forall(a,b)\in\left\{(a,b):a\in(0,1)\wedge\ b\in (a,1)\right\}:\lim_{N\to\infty}\left(\frac 1 N\sum_{n=1}^{N}I\left[\left\{2^n c\right\}\in(a,b)\right] \right)=b-a,\right)\wedge\\\left(\forall a\in[0,1]: \lim_{N\to\infty}\left(\frac 1 N \sum_{n=1}^{N}I\left[\left\{2^nc\right\}=a\right]\right)=0\right).$$
This definition can be found, e.g., here, page 127.
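The first condition of the definition can be illustrated numerically. A number whose binary digits are i.i.d. fair coin flips is almost surely normal in base $2$, so in this sketch random bits stand in for a normal number (an assumption for illustration only):

```python
import random

# For c with i.i.d. fair-coin binary digits (almost surely normal in base 2),
# the fraction of n <= N with {2^n c} in (a, b) should approach b - a.
random.seed(0)
digits = [random.randint(0, 1) for _ in range(4000)]  # stand-in for a normal number

def frac_shift(n, precision=53):
    """Approximate {2^n c} from the digit string to `precision` bits."""
    return sum(d / 2.0 ** (i + 1) for i, d in enumerate(digits[n:n + precision]))

a, b, N = 0.25, 0.7, 3000
freq = sum(1 for n in range(1, N + 1) if a < frac_shift(n) < b) / N
print(freq)  # expected to be close to b - a = 0.45
```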

Below I assume that $\frac 1 \pi$ is a normal number in base $2.$

Let's introduce a probability space $(\Sigma, \mathcal{F}, P)$ satisfying Kolmogorov's axioms, taking $\Sigma = (0,1),$ $\mathcal{F}$ the Borel algebra on $(0,1),$ and a probability measure $P$ such that the measure of an open interval is equal to its length and the measure of a point is equal to zero. This is the probability space of a random variable uniformly distributed on $(0,1).$

There are two remarks here. First, the probability I'm talking about in this answer is frequentist probability, which draws consequences from infinite (but fixed) sequences of points with a known distribution they are sampled from. It is not Bayesian probability, which characterizes degrees of belief. Second, there is a theory of measure spaces that generalizes the concept of a probability space with less restrictive axioms. I don't use it because the more restrictive probability axioms are enough for this problem, and at the moment I'm more familiar with probability theory than with general measure theory.

Let's define a sequence of real numbers $\xi_n$ such that
$$\left (\forall(a,b)\in\left\{(a,b):a\in(0,1)\wedge\ b\in (a,1)\right\}:\lim_{N\to\infty}\left(\frac 1 N\sum_{n=1}^{N}I\left[\xi_n \in(a,b)\right] \right)=b-a,\right)\wedge\\\left(\forall a\in[0,1]: \lim_{N\to\infty}\left(\frac 1 N \sum_{n=1}^{N}I\left[\xi_n=a\right]\right)=0\right)\wedge\\
\left(
\forall n > 0: \left\{2\xi_{n} - \xi_{n+1}\right\} = 0
\right).
$$

Such a sequence can be drawn from our probability space, so in frequentist language it is a uniformly distributed sequence drawn from our probability space. At the same time, the sequence $\{2^n c\}$ for a normal $c$ satisfies all the conditions imposed on $\xi_n,$ so it is possible to work with $\{2^n c\}$ as with a particular fixed sequence drawn from our probability space.
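The third condition simply states $\xi_{n+1} = \{2\xi_n\},$ which $\{2^n c\}$ satisfies for any real $c.$ A sketch in exact rational arithmetic, with $c = 1/7$ as an arbitrary illustrative choice:

```python
import math
from fractions import Fraction

# The shift condition {2*xi_n - xi_{n+1}} = 0 holds for xi_n = {2^n c};
# c = 1/7 is an arbitrary illustrative choice, checked exactly with Fraction.
c = Fraction(1, 7)

def frac(x):
    return x - math.floor(x)

for n in range(1, 30):
    xi_n = frac(Fraction(2) ** n * c)
    xi_next = frac(Fraction(2) ** (n + 1) * c)
    assert frac(2 * xi_n - xi_next) == 0
print("shift condition verified for n = 1..29")
```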

I call a series $\sum_{n=1}^\infty x_n$ convergent in measure to a value $S$ if
$$\forall \varepsilon > 0: \lim\limits_{N \to \infty} \left(\frac 1 N \sum_{n=1}^{N} I \left[\left|S - \sum_{k=1}^{n} x_k \right| > \varepsilon \right] \right) = 0.$$

From this definition of convergence in measure it follows that the series converges iff
$$\forall \varepsilon > 0 \ \exists N > 0: \lim\limits_{M\to\infty}\left(\frac 1 M \sum_{m=1}^{M} I\left[\left|\sum_{n=N}^{N+m} x_n\right| > \varepsilon \right] \right) = 0.$$

This expression is what I'm aiming to prove for $x_n=\frac{\sin(2\pi \xi_n)}{n}.$

Because $\xi_n,$ as argued above, can be treated as a sample from the probability space defined above, the partial sums $\sum_{n=N}^{N+M} x_n$ and the indicators $I\left[\left|\sum_{n=N}^{N+M} x_n\right| > \varepsilon \right]$ become samples from their corresponding probability spaces too, and the properties of the spaces they are sampled from can be inferred.

Thus the criterion of convergence in measure defined above can be rephrased in terms of the corresponding probability space as
$$\forall \varepsilon > 0 \ \exists N > 0:
\lim\limits_{M\to\infty} P\left(\left|\sum_{n=N}^{N+M} x_n\right| > \varepsilon \right) = 0.$$
In probability theory this type of convergence is called convergence in probability, and it is weaker than convergence with probability $1.$

Let's define $\Delta_{N,M} = \sum_{n=N}^{N+M} x_n.$ Then $\E\left[\Delta_{N,M}\right] = 0,$ because the sequence $2\pi \xi_n$ is uniform on $(0, 2\pi)$ and sine has zero mean over a period.

From Chebyshev's inequality, $\forall \varepsilon > 0:\ P\left(\left|\Delta_{N,M}\right| > \varepsilon\right) \leq \frac{\E\left[\Delta_{N,M}^2\right]}{\varepsilon^2}.$ Thus to show convergence in probability it is enough to show that $\lim\limits_{N\to\infty}\lim\limits_{M\to\infty} \E\left[\Delta_{N,M}^2\right] = 0.$
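The inequality can be checked empirically for a single term $\sin(2\pi U)$ with $U$ uniform on $(0,1)$ (a sketch; here $\E\left[\sin^2\right] = \frac 1 2$):

```python
import math
import random

# Empirical check of Chebyshev's inequality P(|X| > eps) <= E[X^2] / eps^2
# for X = sin(2*pi*U), U uniform on (0,1); E[X] = 0 and E[X^2] = 1/2.
random.seed(1)
xs = [math.sin(2 * math.pi * random.random()) for _ in range(100_000)]
m2 = sum(x * x for x in xs) / len(xs)  # empirical second moment, close to 0.5
for eps in (0.5, 0.9, 0.99):
    p = sum(1 for x in xs if abs(x) > eps) / len(xs)
    assert p <= m2 / eps ** 2
print("Chebyshev bound holds empirically")
```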

Let's show that $\lim\limits_{N\to\infty}\lim\limits_{M\to\infty} \E\left[\Delta_{N,M}^2\right] = 0$ using the idea from this question.

The variance of $\Delta_{N,M}$ (which equals $\E\left[\Delta_{N,M}^2\right],$ since the mean is zero) can be bounded as

$$\E\left[\Delta_{N,M}^2\right] =
\E\left[\left(\sum\limits_{n=N}^{N+M} x_n\right)^2\right] =
\E\left[\sum\limits_{n=N}^{N+M} \sum\limits_{k=N}^{N+M} x_n x_k \right] =
\sum\limits_{n=N}^{N+M} \sum\limits_{k=N}^{N+M} \E\left[ x_n x_k \right] \leq\\
\sum\limits_{n=N}^{N+M} \sum\limits_{k=N}^{N+M} \left|\E\left[ x_n x_k \right]\right| \leq
2\sum\limits_{n=N}^{N+M} \sum\limits_{k=0}^{N+M-n} \left|\E\left[ x_n x_{n+k} \right]\right|,$$

where the last inequality uses the symmetry $\E\left[x_n x_k\right] = \E\left[x_k x_n\right]$ (the diagonal terms are counted twice, which only increases the bound).

Substituting the definition of $x_n,$
$$\left|\E\left[ x_n x_{n+k} \right]\right| = \left|\E\left[ \frac{\sin\left(2\pi \xi_n\right) \sin\left(2\pi \xi_{n+k}\right)}{n(n + k)}\right]\right|,$$
and as it is shown in Appendix 1,
$$\left|\E\left[ \frac{\sin\left(2\pi \xi_n\right) \sin\left(2\pi \xi_{n+k}\right)}{n(n + k)}\right]\right| \leq \frac {C\, 2^{-k}}{n(n+k)},$$
where $C$ is a constant independent of $n$ and $k.$

So $$\E\left[\Delta_{N,M}^2\right] \leq 2C \sum\limits_{n=N}^{N+M} \sum\limits_{k=0}^{N+M - n} \frac{2^{-k}}{n(n+k)}.$$

As it is shown in Appendix 2, $$\lim_{N \rightarrow \infty} \lim_{M \rightarrow \infty} \sum\limits_{n=N}^{N+M} \sum\limits_{k=0}^{N+M - n} \frac{2^{-k}}{n(n+k)}=0,$$

and from this it follows that $\Delta_{N,M}$ converges in probability to zero as $M \rightarrow \infty$ and $N \rightarrow \infty.$

So the series converges in probability, i.e. in the measure introduced by the uniformly distributed (assuming normality of $\frac 1 \pi$) sequence of numbers $\xi_n = \left\{2^n \frac 1 \pi\right\}.$
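As an end-to-end sketch, one can simulate the tail sums $\Delta_{N,M}$ for $\xi_n = \{2^n c\}$ with $c$ given by random bits (an assumed stand-in for $\frac 1 \pi,$ whose normality is unproven) and observe that they concentrate near zero:

```python
import math
import random

# Monte Carlo sketch of Delta_{N,M} = sum_{n=N}^{N+M} sin(2*pi*xi_n)/n, where
# xi_n = {2^n c} and c has i.i.d. random binary digits (a stand-in for 1/pi,
# whose normality is assumed in the answer but not proved).
random.seed(3)

def tail(N, M, bits):
    def xi(n):  # {2^n c} from the digit string, to 53 bits
        return sum(b / 2.0 ** (i + 1) for i, b in enumerate(bits[n:n + 53]))
    return sum(math.sin(2 * math.pi * xi(n)) / n for n in range(N, N + M + 1))

trials = [tail(100, 400, [random.randint(0, 1) for _ in range(600)])
          for _ in range(50)]
print(sum(abs(t) for t in trials) / len(trials))  # mean |Delta|, expected small
```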

## Appendix 1

Let $\xi_n = \left\{2^n a\right\},$ where $a$ is a normal number, be written as a binary fraction $\xi_n = 0.b_{n,1}b_{n,2}b_{n,3}\ldots = \sum\limits_{m=1}^\infty b_{n,m}2^{-m},$ where each digit $b_{n,m}$ is either $0$ or $1.$ Then $\xi_{n+k} = \left\{2^k \xi_n\right\} = \sum\limits_{m=1}^\infty b_{n,m}2^{-m+k}I\left[m > k\right] =
\sum\limits_{m=1}^\infty b_{n,m+k}2^{-m}$ and $\xi_n = \sum\limits_{m=1}^{k}b_{n,m} 2^{-m} + 2^{-k}\xi_{n+k}.$

Using the same probability measure as in the main part of the answer, which treats the $\xi_n$ as uniformly distributed random variables on $(0,1),$ it is possible to treat $b_{n,k} = \lfloor 2^k \xi_n \rfloor \bmod 2$ as random variables too. For each $n,$ $b_{n,1}$ and $b_{n,2}$ must be independent, i.e. all possible combinations of values of $b_{n,1}$ and $b_{n,2}$ must be equiprobable; otherwise the probabilities of $\xi_n$ falling in the subsets $\left(0,\frac 1 4\right],$ $\left(\frac 1 4, \frac 1 2\right],$ $\left(\frac 1 2 , \frac 3 4 \right],$ and $\left(\frac 3 4, 1 \right )$ would not be equal, which contradicts the assumption that $\xi_n$ is uniformly distributed.

The independence of $B_{n,k} = \sum\limits_{m=1}^{k}b_{n,m} 2^{-m}$ and $b_{n,k+1}$ can be shown by induction on $k$ using the same argument about the uniform distribution of $\xi_n$ from the previous paragraph. From this independence follows the independence of $B_{n,k}$ and $\sum\limits_{m=k+1}^{\infty}b_{n,m} 2^{-m} = 2^{-k}\xi_{n+k},$ which is equivalent to the independence of $B_{n,k}$ and $\xi_{n+k}.$
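The decomposition $\xi_n = B_{n,k} + 2^{-k}\xi_{n+k}$ can be checked in exact dyadic arithmetic (a sketch on a random finite digit string):

```python
import random
from fractions import Fraction

# Exact check of xi_n = B_{n,k} + 2^-k * xi_{n+k}, where B_{n,k} collects the
# first k binary digits of xi_n and xi_{n+k} = {2^k xi_n}.
random.seed(2)
bits = [random.randint(0, 1) for _ in range(60)]
xi = sum(Fraction(b, 2 ** (m + 1)) for m, b in enumerate(bits))  # xi_n

for k in range(1, 10):
    B = sum(Fraction(bits[m], 2 ** (m + 1)) for m in range(k))   # B_{n,k}
    xi_shift = (2 ** k * xi) % 1                                 # xi_{n+k} = {2^k xi_n}
    assert xi == B + xi_shift / 2 ** k
print("decomposition verified for k = 1..9")
```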

Using the obtained results, let's estimate the absolute value of the covariance of $\sin \zeta_n$ and $\sin \zeta_{n+k},$ where $\zeta_n = 2\pi \xi_n:$

$$\E\left[\sin \zeta_n \sin \zeta_{n+k}\right] =
\E\left[\sin\left(2\pi B_{n,k} + \zeta_{n+k}2^{-k}\right) \sin \zeta_{n+k}\right].$$

Because $\sin\left(\alpha+\beta\right) = \sin\alpha\cos\beta + \cos\alpha\sin\beta,$ $$\sin\left(2\pi B_{n,k} + \zeta_{n+k}2^{-k}\right) =
\sin\left(2\pi B_{n,k}\right) \cos\left(\zeta_{n+k}2^{-k}\right) +
\cos\left(2\pi B_{n,k}\right) \sin\left(\zeta_{n+k}2^{-k}\right) =
\sin\left(2\pi B_{n,k}\right) + 2^{-k} \zeta_{n+k} \cos\left(2\pi B_{n,k}\right) + o(2^{-k}),$$
and
$$\E\left[\sin \zeta_n \sin \zeta_{n+k}\right] =
\E\left[\sin\left(2\pi B_{n,k}\right) \sin \zeta_{n+k}\right] +
\E\left[2^{-k} \zeta_{n+k} \cos\left(2\pi B_{n,k}\right) \sin \zeta_{n+k}\right] + o(2^{-k}).$$

From the independence of $B_{n,k}$ and $\xi_{n+k}$ it follows that $\E\left[\sin\left(2\pi B_{n,k}\right) \sin \zeta_{n+k}\right] = \E\left[\sin\left(2\pi B_{n,k}\right)\right]\E\left[\sin \zeta_{n+k}\right] = 0,$ since $\E\left[\sin \zeta_{n+k}\right] = 0.$

The absolute value of $\E\left[\cos\left(2\pi B_{n,k}\right)\right] = \frac{1}{2^{k}}\sum\limits_{j=0}^{2^{k}-1}\cos\left(\frac{2\pi j}{2^{k}}\right)$ is bounded by $1,$ and $\E\left[ \zeta_{n+k}\sin \zeta_{n+k} \right] = \frac{1}{2\pi}\int_0^{2\pi} t \sin t \, dt = -1,$ so the absolute value of $\E\left[\sin\zeta_{n} \sin\zeta_{n+k}\right]$ is bounded by $\frac C {2^k},$ where $C$ is some constant independent of $n.$
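A numerical sketch of the bound: estimating $\E\left[\sin\zeta_n \sin\zeta_{n+k}\right]$ by a midpoint rule over $\xi_n \in (0,1)$ with $\xi_{n+k} = \left\{2^k\xi_n\right\}$ shows the correlation sitting comfortably inside the $2^{-k}$ decay:

```python
import math

# Midpoint-rule estimate of E[sin(2*pi*x) * sin(2*pi*{2^k x})] for x uniform
# on (0,1); the result should lie within the 2^-k bound derived above.
def corr(k, m=100_000):
    total = 0.0
    for j in range(m):
        x = (j + 0.5) / m
        total += math.sin(2 * math.pi * x) * math.sin(2 * math.pi * ((2 ** k * x) % 1.0))
    return total / m

for k in range(1, 9):
    assert abs(corr(k)) <= 2.0 ** -k
print("correlation bound holds for k = 1..8")
```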

## Appendix 2

Let's prove that for the double limit

$$\lim_{N \rightarrow \infty} \lim_{M \rightarrow \infty} \sum\limits_{n=N}^{N+M} \sum\limits_{k=0}^{N+M - n} \frac{2^{-k}}{n(n+k)}$$

the inner limit exists and the outer limit exists and is equal to zero.

The sum $\sum\limits_{k=0}^{N+M - n} \frac{2^{-k}}{n(n+k)}$ is bounded from above by $I_n = \frac 1 n \sum\limits_{k=0}^{\infty} \frac{2^{-k}}{n+k} = \frac 1 n \Phi\left(\frac 1 2, 1, n\right)$ for every $n,$ where $\Phi\left(z, s, a\right)$ is the Lerch transcendent. Using property 25.14.5 from this list, it is possible to rewrite $I_n$ as $\frac 2 n \int\limits_0^\infty \frac{e^{-nx}}{2-e^{-x}}dx.$ The integrand is bounded from above by $e^{-nx}$ (since $2 - e^{-x} \geq 1$), so $I_n$ is bounded from above by $\frac 2 n \int\limits_0^\infty e^{-nx} dx = \frac 2 {n^2}.$
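A quick numerical sketch confirming the bound $I_n \leq \frac 2 {n^2}$ (the series is truncated at $200$ terms, far beyond where $2^{-k}$ matters in double precision):

```python
# Numerical check that I_n = (1/n) * sum_{k>=0} 2^-k / (n+k) is at most 2/n^2.
def I(n, terms=200):
    return sum(2.0 ** -k / (n + k) for k in range(terms)) / n

for n in range(1, 51):
    assert I(n) <= 2.0 / n ** 2
print("I_n <= 2/n^2 verified for n = 1..50")
```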

So

$$0 \leq \sum\limits_{n=N}^{N+M} \sum\limits_{k=0}^{N+M - n} \frac{2^{-k}}{n(n+k)} \leq 2 \sum\limits_{n=N}^{N+M} \frac {1}{n^2}.$$

The series $\sum\limits_{n=1}^\infty \frac{1}{n^2}$ converges, as can be shown using the Maclaurin–Cauchy integral test. For fixed $N$ the inner limit exists because the partial sums are increasing and bounded; and since the tail $\sum\limits_{n=N}^\infty \frac{1}{n^2}$ tends to zero as $N \to \infty,$ the squeeze theorem applied to the bound above shows that the outer limit exists and is equal to zero.