# Inverse Problems Course Notes — The X-Ray Transform for Distributions

These notes are based on Gunther Uhlmann’s lectures for MATH 581 taught at the University of Washington in Autumn 2009.

An index to all of the notes is available here.

We have studied the X-Ray transform on very restricted domains — ${C_0^\infty}$ and ${\mathcal{S}}$ — and found an inversion formula there. Now we want to move to our next basic question: is the inversion stable?

As with the Radon transform, we will see that it is stable: small errors in the X-Ray transform lead to small errors in the reconstructed function, and vice versa, when the errors are measured in appropriate norms.

Sobolev spaces provide one family of norms that will work, but to use these we need to extend the domain of ${X}$ to distributions, and prove the inversion formula on the broader domain.

1. The X-Ray Transform of Distributions

As usual, we define ${X}$ on distributions by duality, and most of the results are tautologies.

Definition 1 For ${u\in \mathcal{E}^{\prime}(\mathbb{R}^n)}$, define ${Xu}$ by its action on test functions ${g \in C^{\infty}(T)}$:

$\displaystyle \langle Xu, g\rangle := \langle u, X^tg\rangle$

Proposition 2 ${X: \mathcal{E}^\prime(\mathbb{R}^n) \rightarrow \mathcal{E}^\prime(T)}$ is linear and continuous.

Proof: Exercise.$\Box$
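The definition by transpose is the usual extension by duality: when ${u}$ is actually a smooth compactly supported function, a Fubini computation (sketched below, with the measure on ${T}$ suppressed) shows the distributional definition agrees with the classical one.

```latex
% For u \in C_0^\infty(\mathbb{R}^n) and g \in C^\infty(T):
\langle Xu, g \rangle
  = \int_T (Xu)(\ell)\, g(\ell)\, d\ell
  = \int_T \Big( \int_\ell u \, ds \Big) g(\ell)\, d\ell
  = \int_{\mathbb{R}^n} u(x)\, (X^t g)(x)\, dx
  = \langle u, X^t g \rangle
```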

We define ${X^t}$ similarly.

Definition 3 For ${v\in \mathcal{D}^{\prime}(T)}$, define ${X^tv}$ by its action on test functions ${\varphi \in C_0^{\infty}(\mathbb{R}^n)}$:

$\displaystyle \langle X^tv, \varphi\rangle := \langle v, X\varphi\rangle$

Proposition 4 ${X^t: \mathcal{D}^\prime(T) \rightarrow \mathcal{D}^\prime(\mathbb{R}^n)}$ is linear and continuous.

We could also define ${X}$ and ${X^t}$ on tempered distributions, but the domains above are more natural for the problem at hand: we want to work on domains where we can use our inversion formula, and that formula involves a fractional power of the Laplacian, which does not act on tempered distributions.

2. Powers of the Laplacian, Sobolev Spaces

For ${u \in \mathcal{E}^\prime(\mathbb{R}^n)}$ define powers of the negative Laplacian using the Fourier transform as follows:

$\displaystyle \widehat{(-\bigtriangleup)^{\frac{\alpha}{2}}u}(\xi) = |\xi|^\alpha \hat{u}(\xi)$

for ${\alpha > -n}$, so that the right hand side is locally integrable.
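To make the Fourier definition concrete, here is a minimal numerical sketch in one dimension (my own illustration, not from the lecture): apply ${(-\bigtriangleup)^{\frac{\alpha}{2}}}$ to a Gaussian by multiplying its FFT by ${|\xi|^\alpha}$, and for ${\alpha = 2}$ compare against the analytic value ${-u'' = (1-x^2)e^{-x^2/2}}$.

```python
import numpy as np

# Grid and a Gaussian test function u(x) = exp(-x^2/2).
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
u = np.exp(-x**2 / 2)

# Angular frequencies xi matching the continuous Fourier transform.
xi = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

def frac_laplacian(u, xi, alpha):
    """Apply (-Laplacian)^(alpha/2) via multiplication by |xi|^alpha."""
    return np.fft.ifft(np.abs(xi) ** alpha * np.fft.fft(u)).real

# Sanity check at alpha = 2: (-Laplacian)u = -u'' = (1 - x^2) exp(-x^2/2).
err = np.max(np.abs(frac_laplacian(u, xi, 2.0) - (1 - x**2) * u))
```

The same `frac_laplacian` call with `alpha = 1.0` gives a numerical ${(-\bigtriangleup)^{\frac{1}{2}}}$, the operator appearing in the inversion formula.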

Proposition 5 ${(-\bigtriangleup)^{\frac{\alpha}{2}}: H^{s}(K) \rightarrow H^{s-\alpha}(\mathbb{R}^n)}$ for ${K \Subset \mathbb{R}^n}$ and ${\alpha > -\frac{n}{2}}$.

Proof:

$\displaystyle \begin{array}{rcl} \|(-\bigtriangleup)^{\frac{\alpha}{2}}u\|^2_{H^{s-\alpha}(\mathbb{R}^n)} &=& \int_{\mathbb{R}^n}|\xi|^{2\alpha}|\hat{u}(\xi)|^2(1 + |\xi|^2)^{s-\alpha} d\xi \\ \end{array}$

If ${\alpha \geq 0}$ then ${|\xi|^{2\alpha}(1 + |\xi|^2)^{s-\alpha} \leq (1 + |\xi|^2)^{s}}$, so

$\displaystyle \|(-\bigtriangleup)^{\frac{\alpha}{2}}u\|^2_{H^{s-\alpha}(\mathbb{R}^n)} \leq \int_{\mathbb{R}^n}(1 + |\xi|^2)^{s}|\hat{u}(\xi)|^2 d\xi = \|u\|^2_{H^s(\mathbb{R}^n)}, \quad (\alpha \geq 0)$

proving the result. Now we need to consider the case ${-\frac{n}{2} < \alpha < 0}$. We will proceed by splitting the integral into a high-frequency and a low-frequency part.

$\displaystyle \begin{array}{rcl} \|(-\bigtriangleup)^{\frac{\alpha}{2}}u\|^2_{H^{s-\alpha}(\mathbb{R}^n)} &=& \int_{|\xi| \leq 1}|\xi|^{2\alpha}|\hat{u}(\xi)|^2(1 + |\xi|^2)^{s-\alpha} d\xi \\ & & \quad + \int_{|\xi| > 1}|\xi|^{2\alpha}|\hat{u}(\xi)|^2(1 + |\xi|^2)^{s-\alpha} d\xi \\ &\leq& \int_{|\xi| \leq 1}|\xi|^{2\alpha}|\hat{u}(\xi)|^2(1 + |\xi|^2)^{s-\alpha} d\xi + C\|u\|^2_{H^s(\mathbb{R}^n)} \\ &\leq& C\sup_{|\xi| \leq 1} |\hat{u}(\xi)|^2\int_{|\xi| \leq 1} |\xi|^{2\alpha} d\xi + C\|u\|^2_{H^s(\mathbb{R}^n)} \end{array}$

Here the high-frequency part was bounded using ${|\xi|^{2\alpha} \leq 2^{-\alpha}(1 + |\xi|^2)^{\alpha}}$ for ${|\xi| > 1}$ (valid since ${\alpha < 0}$), and on the low-frequency part ${(1 + |\xi|^2)^{s-\alpha}}$ is bounded by a constant.

But

$\displaystyle \hat{u}(\xi) = \int_{K} e^{-ix\cdot\xi}u(x)dx = \int_{\mathbb{R}^n}e^{-ix\cdot\xi}\varphi(x)u(x)dx$

where ${\varphi}$ is some smooth compactly supported function — your choice — that is identically 1 on a neighborhood of ${K}$. But then

$\displaystyle |\hat{u}(\xi)| \leq \|e^{-ix\cdot\xi}\varphi\|_{H^{-s}}\|u\|_{H^{s}}$

Since the ${H^{-s}}$ norm of ${e^{-ix\cdot\xi}\varphi}$ depends continuously on ${\xi}$, taking the supremum over ${|\xi| \leq 1}$ shows that ${\sup_{|\xi| \leq 1}|\hat{u}(\xi)| \leq C(K)\|u\|_{H^s}}$, and thus (with a slightly larger constant)

$\displaystyle \|(-\bigtriangleup)^{\frac{\alpha}{2}}u\|_{H^{s-\alpha}(\mathbb{R}^n)} \leq C(K)\|u\|_{H^s}$
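The low-frequency factor is finite precisely because of the hypothesis ${\alpha > -\frac{n}{2}}$: switching to polar coordinates,

```latex
\int_{|\xi| \leq 1} |\xi|^{2\alpha}\, d\xi
  = |S^{n-1}| \int_0^1 r^{\,2\alpha + n - 1}\, dr
  = \frac{|S^{n-1}|}{2\alpha + n} < \infty
  \quad \text{since } 2\alpha + n > 0 .
```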

completing the proof.$\Box$

So now we have defined ${X}$ on ${\mathcal{E}^\prime(\mathbb{R}^n)}$ and defined ${X^t}$ on ${\mathcal{D}^\prime(T)}$. You might wonder why we did not define these on tempered distributions; the definitions are straightforward after all.

The answer is that these operators appear alongside ${(-\bigtriangleup)^{\frac{1}{2}}}$ in the inversion formula, and this operator does not act on tempered distributions. For some ${u \in \mathcal{S}^\prime}$, try applying ${(-\bigtriangleup)^{\frac{1}{2}}u}$ to a test function ${g \in \mathcal{S}(\mathbb{R}^n)}$:

$\displaystyle \begin{array}{rcl} \langle(-\bigtriangleup)^{\frac{1}{2}}u, g\rangle &=& \langle |\xi|\hat{u}, \hat{g}\rangle \\ &=& \langle \hat{u}, |\xi|\hat{g} \rangle \end{array}$

But this is not well defined because ${|\xi|\hat{g} \notin \mathcal{S}}$: the symbol ${|\xi|}$ is not smooth at the origin, so multiplying by it destroys the Schwartz property. For now, we will restrict our attention to domains where the inversion formula makes sense.

We’ll close this post with the obvious theorem. The proof is left as an exercise.

Theorem 6 (X-Ray Inversion I and II For Distributions) For ${u \in \mathcal{E}^{\prime}(\mathbb{R}^n)}$, the following identities hold:

$\displaystyle \begin{array}{rcl} (-\bigtriangleup)^{\frac{1}{2}}X^tXu &=& c_nu \\ X^t(-\bigtriangleup_{\theta^\perp})^{\frac{1}{2}}Xu &=& c_nu \end{array}$
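On the Fourier side the first identity is easy to see (a sketch, with all constants absorbed into ${c_n}$): the normal operator ${X^tX}$ has Fourier multiplier ${c_n|\xi|^{-1}}$, so

```latex
\widehat{X^t X u}(\xi) = c_n\, |\xi|^{-1}\, \hat{u}(\xi)
\quad\Longrightarrow\quad
\widehat{(-\bigtriangleup)^{\frac{1}{2}} X^t X u}(\xi)
  = |\xi| \cdot c_n\, |\xi|^{-1}\, \hat{u}(\xi)
  = c_n\, \hat{u}(\xi) ,
```

which is the first identity after inverting the Fourier transform.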

(A pdf version of these notes is available here.)


## 2 thoughts on “Inverse Problems Course Notes — The X-Ray Transform for Distributions”

1. This is excellent stuff – not that I understand any of it – but I’d like to ask: any chance of seeing a pop math/pop sci post in the future? 😛

2. If you just remember one thing, make it this: a lot of problems are translation invariant (CT scan reconstruction doesn’t change when you move the machine to the other side of the room), and the Fourier Transform turns translation invariant operations into multiplication without messing up addition. All sorts of good things follow from this.
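   The point this comment makes is the convolution theorem, and it is easy to check numerically. A quick sketch (my own example, using circular convolution on 64 samples): the FFT of a convolution equals the pointwise product of the FFTs.

   ```python
   import numpy as np

   rng = np.random.default_rng(0)
   f = rng.standard_normal(64)
   g = rng.standard_normal(64)

   # Circular convolution computed directly from the definition...
   direct = np.array([sum(f[j] * g[(k - j) % 64] for j in range(64))
                      for k in range(64)])
   # ...and via the convolution theorem: multiply in frequency, transform back.
   via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

   err = np.max(np.abs(direct - via_fft))
   ```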

Maybe once the quarter is over and fellowship applications are in I’ll write some lighter posts. Any topic you’d like to hear about? Lately, when not thinking about tomography, I’ve mostly been thinking about computational biology and databases.

I have been meaning to continue this series of posts about $\zeta$ and the prime numbers, but haven’t been able to find a way to state the theorems I want in a simple enough way. Maybe you can help?