An Alternative Proof of the Radon Transform Support Theorem for Radial Functions

A key technical step in the proof of the Radon Transform support theorem is proving that the result holds for radial functions.  The proof presented in class was elementary but technical and, in my opinion, bewildering.  Not the sort of thing I’d come up with.

Yes, I know, I need to work on my skills.

Instead of doing that, I came up with another proof that I find more satisfying.  It is fairly simple, and uses nothing more than the Paley-Wiener theorem and a bit of Calculus.

Theorem If Rf = g, where g \in \mathcal{S}_H(\mathbb{R}\times S^{n-1}) \cap C_0(\mathbb{R}\times S^{n-1}), g(s,\omega) = 0 for s > A, and f(x) = F(|x|^2) is radial, then f has compact support and f(x) = 0 for |x| > A.

The rest of this post will be the proof.  Here is a quick sketch:

  1. \mathfrak{F}_sg(\rho, \omega) is analytic of exponential type in \rho, by the Paley-Wiener Theorem.
  2. \frac{\partial^n\hat{f}}{\partial\xi_i^n} \Big |_{\xi = 0} = \frac{\partial^n}{\partial\rho^n}\mathfrak{F}_sg(\rho, \omega)\Big |_{\rho = 0} (take \omega = e_i in the Fourier Slice Theorem), so the exponential type bounds we got for \mathfrak{F}_sg in (1) transfer to bounds on the pure partial derivatives of \hat{f}
  3. If f(x) = F(|x|^2) is radial, then at the origin
    \partial^{2\alpha}_xf(0) = F^{(|\alpha|)}(0)\prod_i \frac{(2\alpha_i)!}{\alpha_i!}
    for all multi-indices \alpha (a quick check of this formula appears after the sketch)
  4. Since \hat{f} is also radial, (3) shows that the mixed partials of \hat{f} at the origin are dominated by the pure partials, so the exponential type estimates in (2) hold for all partials
  5. So \hat{f} is analytic of exponential type. Thus f has compact support.
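Two of these steps can be checked directly (this is my own verification, not the argument from lecture). For step 2, the Fourier Slice Theorem gives \hat{f}(\rho\omega) = \mathfrak{F}_sg(\rho, \omega); fixing \omega = e_i and differentiating n times in \rho yields the stated identity at \rho = 0. For step 3, expand F in a Taylor series and use the multinomial theorem:

F(t) = \sum_{k \leq N}\frac{F^{(k)}(0)}{k!}t^k + o(t^N), \qquad |x|^{2k} = \Big(\sum_i x_i^2\Big)^k = \sum_{|\alpha| = k}\frac{k!}{\alpha!}x^{2\alpha},

so for |\alpha| \leq N the coefficient of x^{2\alpha} in the Taylor expansion of f at the origin is F^{(|\alpha|)}(0)/\alpha!, and therefore

\partial^{2\alpha}_xf(0) = \frac{(2\alpha)!}{\alpha!}F^{(|\alpha|)}(0) = F^{(|\alpha|)}(0)\prod_i \frac{(2\alpha_i)!}{\alpha_i!}.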

Inverse Problems Course Notes — Proof of Helgason-Ludwig’s Range Characterization

In the last post we stated and motivated a theorem characterizing the image of the Schwartz space under the Radon transform:

Theorem [Helgason-Ludwig] The Radon transform is a bijective linear map from \mathcal{S}(\mathbb{R}^n) onto \mathcal{S}_H(\mathbb{R}\times S^{n-1}), where \phi \in \mathcal{S}_H(\mathbb{R}\times S^{n-1}) if and only if

  1. \phi \in \mathcal{S}(\mathbb{R}\times S^{n-1}), i.e. it is smooth and all of its derivatives decay faster than any polynomial.
  2. It is even: \phi(-s, -\omega) = \phi(s,\omega)
  3. It satisfies the “moment conditions”: \mu_k\phi(\omega) = \int_{\mathbb{R}}\phi(s, \omega)s^kds is a homogeneous polynomial of degree k in \omega.
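As a quick sanity check before the proof (a standard computation, written out here mostly for my own benefit), the moment conditions are easy to see when \phi = Rf for some f \in \mathcal{S}(\mathbb{R}^n): by Fubini,

\mu_k(Rf)(\omega) = \int_{\mathbb{R}}\Big(\int_{x\cdot\omega = s}f(x)\,dm(x)\Big)s^k\,ds = \int_{\mathbb{R}^n}f(x)(x\cdot\omega)^k\,dx = \sum_{|\alpha| = k}\frac{k!}{\alpha!}\Big(\int_{\mathbb{R}^n}x^{\alpha}f(x)\,dx\Big)\omega^{\alpha},

which is indeed a homogeneous polynomial of degree k in \omega (here dm denotes surface measure on the hyperplane x\cdot\omega = s).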

Proof We need to prove the following:

  1. R is linear.  This is trivial.
  2. R is injective.  This is a direct corollary of the Fourier Slice Theorem.
  3. f \in \mathcal{S}(\mathbb{R}^n) \implies Rf \in \mathcal{S}_H(\mathbb{R}\times S^{n-1}).  This was largely done in earlier posts, but we will consolidate the argument below.
  4. g \in \mathcal{S}_H(\mathbb{R}\times S^{n-1}) \implies \exists f \in \mathcal{S}(\mathbb{R}^n), Rf = g, i.e. the map is surjective.
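For item 2, recall the Fourier Slice Theorem (with the Fourier conventions fixed earlier in these notes):

\mathfrak{F}_s(Rf)(\rho, \omega) = \hat{f}(\rho\omega).

If Rf = 0 then \hat{f} vanishes on every line through the origin, hence everywhere, so f = 0 because the Fourier transform is injective on \mathcal{S}(\mathbb{R}^n).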

Inverse Problems Course Notes — The Range of the Radon Transform

Now we want to study the range of the Radon transform.  This is important for our inverse problem: when we want to reconstruct a function from its Radon transform, all we really have to work with is a finite number of error-prone samples of the Radon transform.  We need to analyze this statistically to find the function “most likely” to have produced the observed data.  We can only do this if we know what sort of functions can be Radon transforms of other functions.

In other words, we can only do this if we know the range of R.

Inverse Problems Course Notes — More About Stability

We have seen that Radon inversion is stable — if Rf_1 \approx Rf_2 then f_1 \approx f_2.  Now we want to look at the other direction, and study the stability of the “forward” Radon transform.  That is, we want to know: if f_1 \approx f_2, is Rf_1 \approx Rf_2?

Definition Let K \Subset \mathbb{R}^n (K is compactly contained in \mathbb{R}^n).  Then define

H^s(K) \equiv \{ u \in H^s(\mathbb{R}^n) \ | \ \text{supp}\, u \subset K \}
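Here H^s(\mathbb{R}^n) is the usual Sobolev space from earlier posts; as a reminder (up to the choice of Fourier normalization), its norm is

\|u\|^2_{H^s(\mathbb{R}^n)} = \int_{\mathbb{R}^n}(1 + |\xi|^2)^s\,|\hat{u}(\xi)|^2\,d\xi.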

This lets us state

Proposition R: H^s(K) \rightarrow H^{s + \frac{n-1}{2}}(\mathbb{R}\times S^{n-1}) is bounded for every s and every K \Subset \mathbb{R}^n, with a norm depending on K.  In other words,

\|Ru\|_{H^{s + \frac{n-1}{2}}} \leq C(K)\|u\|_{H^s(K)} for some C(K) > 0.

Unfortunately this is sharp — there is no estimate independent of K.

Inverse Problems Course Notes — Stability of Radon Inversion

With the tools of Sobolev spaces at our disposal, we are ready to go back to one of our basic questions: is our solution to the Radon transform inverse problem stable?  In other words, if

Rf_1 \approx Rf_2, do we know that f_1 \approx f_2?

If we don’t know this, we can’t trust our reconstruction — even tiny rounding errors could be disastrous.

A Formal Look

Let’s consider the case when n is odd (so that n-1 is even).  If u \in C_0^{\infty}(\mathbb{R}^n),  we know that u = c_nR^t\partial_s^{n-1}Ru, so

\langle u, u\rangle = \langle c_nR^t\partial_s^{n-1}Ru, u\rangle = c_n\langle \partial_s^{n-1}Ru, Ru\rangle

= c\langle \partial_s^{\frac{n-1}{2}}Ru, \partial_s^{\frac{n-1}{2}}Ru \rangle = c\lVert \partial_s^{\frac{n-1}{2}}Ru \rVert^2_{L^2(\mathbb{R}\times S^{n-1})},

where we used \langle R^tv, u\rangle = \langle v, Ru\rangle and then integrated by parts \frac{n-1}{2} times in s, absorbing the resulting sign into the constant c.

This suggests that Ru \in H^{\frac{n-1}{2}}(\mathbb{R}\times S^{n-1}).  To prove it, we need to define that space.
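One standard choice for that space (I am anticipating here; the definition and normalization used later in the notes may differ) measures Sobolev regularity in the s variable only, via the Fourier transform \mathfrak{F}_s in s:

\|g\|^2_{H^t(\mathbb{R}\times S^{n-1})} = \int_{S^{n-1}}\int_{\mathbb{R}}(1 + \sigma^2)^t\,|\mathfrak{F}_sg(\sigma, \omega)|^2\,d\sigma\,d\omega,

where \sigma is the Fourier variable dual to s and t is the regularity index.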

Inverse Problems Course Notes — Extending the Domain of R to Distributions

The Radon inversion formula developed in the last section is nice, but there is a problem.  It only applies to smooth functions, and the functions we are interested in are not smooth.  Heart tissue does not smoothly change into lung tissue; body parts have sharp boundaries.  If we want to use this for applications like medical imaging, we must extend the results to a larger domain.
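One standard route for such an extension (sketched here for orientation only, not as a statement of how the course proceeds) is duality: for a compactly supported distribution f, define Rf by pairing against test functions \phi on \mathbb{R}\times S^{n-1} through the formal adjoint,

\langle Rf, \phi\rangle = \langle f, R^t\phi\rangle, \qquad R^t\phi(x) = \int_{S^{n-1}}\phi(x\cdot\omega, \omega)\,d\omega,

which agrees with the usual definition when f is a nice function, by Fubini.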

Inverse Problems Course Notes: The Radon Transform and its Inversion Formula

These notes are based on Gunther Uhlmann’s lectures for MATH 581 taught at the University of Washington in Autumn 2009.

The Radon Transform

We are concerned with the set of all hyperplanes in \mathbb{R}^n; in \mathbb{R}^2 this is just the set of lines, as we discussed before.  We parameterize this set by s \in \mathbb{R}, the (signed) distance of the hyperplane from the origin, and \omega \in S^{n-1}, the unit normal vector to the hyperplane:

L_{s,\omega} = \{ x\in \mathbb{R}^n | x \cdot \omega = s \}
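For reference, the Radon transform of a (sufficiently nice) function f is its integral over these hyperplanes:

Rf(s, \omega) = \int_{L_{s,\omega}}f(x)\,dm(x) = \int_{\omega^{\perp}}f(s\omega + y)\,dy,

where dm is (n-1)-dimensional surface measure on L_{s,\omega} and \omega^{\perp} is the hyperplane through the origin orthogonal to \omega.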

Before going on, let’s specify some notation we’ll need.

Inverse Problems Course Notes — Introduction and Overview

These notes are based on Gunther Uhlmann’s lectures for MATH 581 taught at the University of Washington in Autumn 2009.

An index to all of the notes is available here.

This course is about solving inverse problems. What are inverse problems? It is difficult to define them precisely (let’s just say “I’ll know one when I see one”), but we can give a rough idea.

We have a medium we’re trying to understand — say a patient’s internal organs — and we probe it with waves.

We can measure the outgoing waves. Then the “inverse problem” is this:

Problem 1 (General Inverse Problem) Can you recover the internal parameters of the medium by measuring, outside the medium, its response to incident waves?

Remember that none of these terms are defined precisely. What is a wave?

1. 1st Quarter

In this quarter we will consider the case when the waves are X-rays that we can send in different directions and measure the attenuation in intensity as the X-rays pass through the medium. X-rays are high energy electromagnetic waves that travel (almost) in a straight line through media like human tissue. This is the mathematical setup behind the CT scan.
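To make the connection with line integrals explicit (this is the standard Beer-Lambert attenuation model; the notation here is mine, not from lecture): let \mu(x) \geq 0 be the attenuation coefficient of the medium and let I(t) be the X-ray intensity after traveling a length t along a line L. Then

\frac{dI}{dt} = -\mu(x(t))\,I(t) \quad\Longrightarrow\quad \log\frac{I_{\text{in}}}{I_{\text{out}}} = \int_L \mu\,ds,

so each measurement gives the integral of \mu over a line, and in \mathbb{R}^2 the collection of all such line integrals is exactly the Radon transform of \mu.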
