A statistic $T(X)$ is called a **complete** statistic for a parameter $\theta$ if $E_{\theta}\,g(T) = 0$ for all $\theta$ implies $P_{\theta}(g(T) = 0) = 1$ for all $\theta$.

I interpret $P_{\theta}(g(T) = 0) = 1 \;\forall \theta$ as

$g(t) = 0$ for almost every $t$ in the range of $T$, $\;\forall \theta$ (for continuous distributions)

$g(t) = 0$ for all $t$ in the range of $T$, $\;\forall \theta$ (for discrete distributions)

Note: for the continuous case below (uniform distribution), I've abused this notation a bit and written '$\forall t$' instead of 'for almost every $t$'.

In Example 6.2.23 of *Statistical Inference* (2nd ed.) by Casella and Berger, the goal is to prove that $T(X) = \max_i X_i$ is a complete statistic for a random sample $X_1, X_2, \ldots, X_n$ from the uniform distribution $f(x;\theta) = 1/\theta$, $\;0 \leq x \leq \theta$.

We assume a function $g(t)$ satisfying $E_{\theta}\,g(T) = 0 \;\forall \theta$ and eventually arrive at the condition $g(\theta) = 0 \;\forall \theta$. I've understood the proof up to this point, but I couldn't understand how we can conclude from this condition that $T$ is a complete statistic.
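For reference, the computation in that example can be sketched symbolically with sympy. This is my own illustration, not the book's notation: I write in the density of $T = \max_i X_i$ explicitly and reproduce the differentiation step that yields the condition $g(\theta) = 0$.

```python
import sympy as sp

t, theta = sp.symbols('t theta', positive=True)
n = sp.Symbol('n', positive=True, integer=True)
g = sp.Function('g')

# Density of T = max_i X_i for X_1,...,X_n iid Uniform(0, theta):
#   f_T(t) = n * t**(n-1) / theta**n   on 0 <= t <= theta.
# E_theta g(T) = 0 for all theta means (after multiplying by theta**n)
#   integral_0^theta g(t) * n * t**(n-1) dt = 0   for all theta.
# Differentiating both sides with respect to theta (fundamental
# theorem of calculus) produces the condition the book arrives at:
#   g(theta) * n * theta**(n-1) = 0.
lhs = sp.Integral(g(t) * n * t**(n - 1), (t, 0, theta))
deriv = lhs.diff(theta)
print(deriv)  # should equal n * theta**(n-1) * g(theta)
```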

Shouldn't we need to show that $g(t) = 0 \;\forall t \;\forall \theta$ in order to conclude that $T$ is a complete statistic? For example, if we consider the function $g(t) = t - \theta$, then $g(\theta) = \theta - \theta = 0 \;\forall \theta$, but that need not mean that $g(t) = 0 \;\forall t \;\forall \theta$.
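As a trivial numeric sanity check of that observation (plain Python, with a value of $\theta$ I picked arbitrarily):

```python
# g(t) = t - theta vanishes at t = theta, but is not identically zero in t.
theta = 2.0

def g(t):
    return t - theta

assert g(theta) == 0.0   # g(theta) = 0 for this theta
assert g(1.0) == -1.0    # yet g(t) != 0 for t != theta
```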

I saw a similar proof here but couldn't see how it follows that $g = 0$. I think there, too, the condition obtained is $g(\theta) = 0 \;\forall \theta$ and not $g(t) = 0 \;\forall t \;\forall \theta$.

I couldn't work out where I'm going wrong, or whether my interpretation above is mistaken. Please help.