Your core question is to determine $E[X_j \mid X_{(1)}, X_{(n)}]$, where $X_1, \ldots, X_n$ are i.i.d. $\mathrm{Uniform}(\theta, 2\theta)$, $X_{(1)} = \min_{1 \leq i \leq n}X_i$, and $X_{(n)} = \max_{1 \leq i \leq n}X_i$.
There is an intuitive argument as follows (cf. Example 34.4 of Billingsley's Probability and Measure, 3rd edition): having observed $X_{(1)}$ and $X_{(n)}$, for any $j \in \{1, \ldots, n\}$, $X_j$ equals $X_{(1)}$ with probability $\frac{1}{n}$, equals $X_{(n)}$ with probability $\frac{1}{n}$, and with probability $1 - \frac{2}{n}$ is uniformly distributed on the interval $[X_{(1)}, X_{(n)}]$; therefore
\begin{align*}
E[X_j|X_{(1)}, X_{(n)}] = \frac{1}{n}X_{(1)} + \frac{1}{n}X_{(n)} +
\left(1 - \frac{2}{n}\right)\times \frac{X_{(1)} + X_{(n)}}{2} =
\frac{X_{(1)} + X_{(n)}}{2}. \tag{1}\label{1}
\end{align*}
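As a quick sanity check of $\eqref{1}$, here is a minimal Monte Carlo sketch (the values $\theta = 1$, $n = 5$ and the probe function $h$ below are arbitrary choices of mine, not from the original argument): by the defining property of conditional expectation, $E[X_1\,h(X_{(1)}, X_{(n)})]$ should match $E\left[\frac{X_{(1)} + X_{(n)}}{2}\,h(X_{(1)}, X_{(n)})\right]$ for any bounded measurable $h$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.0, 5, 1_000_000   # hypothetical parameters

# Rows are independent samples of (X_1, ..., X_n), X_i ~ Uniform(theta, 2*theta).
X = rng.uniform(theta, 2 * theta, size=(reps, n))
lo, hi = X.min(axis=1), X.max(axis=1)

# Probe with an arbitrary bounded function h of (X_(1), X_(n)).
h = np.sin(lo) * hi
print(np.mean(X[:, 0] * h))            # E[X_1 h(X_(1), X_(n))]
print(np.mean((lo + hi) / 2 * h))      # E[(X_(1)+X_(n))/2 * h]; should agree
```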
For a rigorous proof of $\eqref{1}$, note that for $s, t$ such that $\theta < s < t < 2\theta$ (using $E[Y\mathbf{1}_A] = \int_0^\infty P[\{Y \geq u\} \cap A]\,du$ for $Y \geq 0$, valid here since $X_j \geq \theta > 0$), we have
\begin{align*}
& \int_{X_{(1)} > s, X_{(n)} \leq t}X_jdP_\theta \\
=& \int_0^\infty P_\theta\left[X_j \geq u, X_{(1)} > s, X_{(n)} \leq t\right]du \\
=& \int_0^s\left(\frac{t - s}{\theta}\right)^ndu + \int_s^t \frac{t - u}{\theta}\left(\frac{t - s}{\theta}\right)^{n - 1}du \\
=& s\frac{(t - s)^{n}}{\theta^n} + \frac{(t - s)^{n + 1}}{2\theta^n}. \tag{2}\label{2}
\end{align*}
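If you want to double-check $\eqref{2}$ numerically, the following sketch (again with arbitrary choices $\theta = 1$, $n = 5$, $s = 1.2$, $t = 1.7$, and $j = 1$) compares a Monte Carlo estimate of the left-hand side against the closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 1.0, 5, 2_000_000   # hypothetical parameters
s, t = 1.2, 1.7                      # any theta < s < t < 2*theta

X = rng.uniform(theta, 2 * theta, size=(reps, n))
lo, hi = X.min(axis=1), X.max(axis=1)
ind = (lo > s) & (hi <= t)           # indicator of {X_(1) > s, X_(n) <= t}

mc = np.mean(X[:, 0] * ind)          # Monte Carlo estimate of the integral, j = 1
closed = (s * (t - s)**n + (t - s)**(n + 1) / 2) / theta**n   # right side of (2)
print(mc, closed)                    # should agree to Monte Carlo error
```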
On the other hand, since $P_\theta[X_{(1)} > x, X_{(n)} \leq y] = \left(\frac{y - x}{\theta}\right)^n$ for $\theta < x < y < 2\theta$, differentiating in $x$ and $y$ shows that the joint density of $(X_{(1)}, X_{(n)})$ is
\begin{align*}
f_{1n}(x, y) = \begin{cases}
n(n - 1)\left(\frac{y - x}{\theta}\right)^{n - 2}\frac{1}{\theta^2}, & \theta < x < y < 2\theta; \\
0, & \text{otherwise},
\end{cases}
\end{align*}
it follows that
\begin{align*}
& \int_{X_{(1)} > s, X_{(n)} \leq t}(X_{(1)} + X_{(n)})dP_\theta \\
=& \iint_{(x, y): s < x < y \leq t}(x + y)n(n - 1)\left(\frac{y - x}{\theta}\right)^{n - 2}\frac{1}{\theta^2} dxdy \\
=& \frac{(t - s)^{n + 1}}{\theta^n} + \frac{2s(t - s)^{n}}{\theta^n}. \tag{3}\label{3}
\end{align*}
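The same kind of check works for $\eqref{3}$ (same arbitrary parameter choices as above); note in passing that the closed form is exactly twice that of $\eqref{2}$, which is the key observation for the next step.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 1.0, 5, 2_000_000   # hypothetical parameters
s, t = 1.2, 1.7

X = rng.uniform(theta, 2 * theta, size=(reps, n))
lo, hi = X.min(axis=1), X.max(axis=1)
ind = (lo > s) & (hi <= t)

mc = np.mean((lo + hi) * ind)        # Monte Carlo estimate of the integral in (3)
closed = ((t - s)**(n + 1) + 2 * s * (t - s)**n) / theta**n   # right side of (3)
print(mc, closed)                    # should agree; note closed is twice (2)
```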
Since the right-hand side of $\eqref{3}$ is exactly twice that of $\eqref{2}$, the equality $\int_G X_j\,dP_\theta = \int_G\frac{X_{(1)} + X_{(n)}}{2}\,dP_\theta$ holds for every $G$ of the form $\{X_{(1)} > s, X_{(n)} \leq t\}$; these sets form a $\pi$-system generating $\sigma(X_{(1)}, X_{(n)})$, so a standard $\pi$-$\lambda$ argument extends the equality to all $G \in \sigma(X_{(1)}, X_{(n)})$, hence $\eqref{1}$ holds.
Substituting $s = \theta$ and $t = 2\theta$ in $\eqref{3}$ (or letting $s \downarrow \theta$, $t \uparrow 2\theta$) yields $E[X_{(1)} + X_{(n)}] = 3\theta$, so $\frac{1}{3}(X_{(1)} + X_{(n)})$ is an unbiased estimator of $\theta$; since $E[X_j] = \frac{3\theta}{2}$, so is $\frac{2}{3}X_j$. By $\eqref{1}$ and the Rao–Blackwell theorem, $\frac{1}{3}(X_{(1)} + X_{(n)}) = E\left[\frac{2}{3}X_j \mid X_{(1)}, X_{(n)}\right]$ has smaller variance than the pre-conditioning unbiased estimator $\frac{2}{3}X_j$.
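To see the variance reduction concretely, a small simulation (again assuming the arbitrary values $\theta = 1$, $n = 5$) compares the two unbiased estimators:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 1.0, 5, 1_000_000   # hypothetical parameters

X = rng.uniform(theta, 2 * theta, size=(reps, n))
crude = (2 / 3) * X[:, 0]                       # unbiased, uses one observation
rb = (X.min(axis=1) + X.max(axis=1)) / 3        # Rao-Blackwellized estimator

print(crude.mean(), rb.mean())   # both should be close to theta = 1
print(crude.var(), rb.var())     # the second should be noticeably smaller
```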