Comments on Wick Rotation

In this article I’ll discuss a few of the ways we can “evaluate” integral expressions for the free propagator.

1. The Propagator

The propagator is the (unique tempered) fundamental solution for the Klein-Gordon equation. That is, it is a function {D: {\mathbb R}^{4} \rightarrow {\mathbb C}} that satisfies

\displaystyle -(\partial^2 + m^2)D(x) = \delta^{(4)}(x)

Where

  • {\partial^2} is the “Minkowski space Laplacian”: {\partial^2 f = \frac{\partial^2 f}{\partial x_{0}^{2}} - \frac{\partial^2 f}{\partial x_{1}^{2}} - \frac{\partial^2 f}{\partial x_{2}^{2}} - \frac{\partial^2 f}{\partial x_{3}^{2}} }
  • {\delta^{(4)}(x)} is the Dirac delta distribution on 4-dimensional Minkowski space.

{D(x)} is the amplitude for a particle to propagate from {0} to the point {x} in space-time.

We can take the Fourier Transform of both sides to turn this into an algebraic equation

\displaystyle -(-k^2 + m^2)\hat{D}(k) = 1

Where we define the Fourier transform as

\displaystyle \hat{f}(k) = \int_{{\mathbb R}^4} d^4 x e^{-i <k,x>} f(x)

And the inner products and “squares” are all using the Minkowski metric:

\displaystyle \begin{array}{rcl} <k,x> &=& k_0 x_0 - k_1 x_1 - k_2 x_2 - k_3 x_3 \\ k^2 &=& <k,k> \end{array}

This lets us write

\displaystyle \hat{D}(k) = \frac{1}{(k^2 - m^2)} \ \ \ \ \ (1)

Inverting the Fourier Transform, we now find

\displaystyle D(x) =\frac{1}{(2\pi)^{4}} \int_{{\mathbb R}^4} d^4 k\ \frac{ e^{i<k,x>}}{k^2 - m^2} \ \ \ \ \ (2)

When computing the effective mass of a particle in {\phi^4} theory, we have to sum over all chains of self-interactions. Each of these self-interactions will involve a particle propagating from a point in space time back to itself — propagating from 0 to 0 — and will add a factor of {D(0)} to a term in our sum.

So we need to compute {D(0)}.

2. Propagating to Zero

We want to compute the amplitude of propagating to zero:

\displaystyle D(0) = \frac{1}{(2\pi)^{4}} \int_{{\mathbb R}^4} \frac{d^4k}{k^2 - m^2} \ \ \ \ \ (3)

We will do this in three ways, one not terribly rigorous and two that are solid:

  1. Integrate over sheets where {k^2} is constant.
  2. Use calculus of residues to evaluate the integral in the time variable first.
  3. Use a Wick Rotation to convert this to a Euclidean integral.

2.1. Contour Selection and the meaning of that {i\epsilon}

Before proceeding, note that for all fixed values of the space variables {k_1, k_2, k_3} the integrand in the time variable {k_0} has two poles, one at each of {\pm\omega_k} where {\omega_k = \sqrt{k_1^2 + k_2^2 + k_3^2 + m^2}}. To define the integral we must specify whether we go around these poles to the left or to the right. There are, of course, four possible choices. We will follow Feynman and go below the negative pole and above the positive one:

In the literature, you will often see the integral formula for the propagator expressed as

\displaystyle D(x) =\frac{1}{(2\pi)^{4}} \int_{{\mathbb R}^4} d^4 k\ \frac{ e^{i<k,x>}}{k^2 - m^2 + i\epsilon} \ \ \ \ \ (4)

The addition of {i\epsilon} moves the positive pole down a little bit and moves the negative pole up a little bit, allowing us to avoid the poles with a simple contour, like this:

As {\epsilon \rightarrow 0} this is equivalent to our first choice of contour. All the “{i\epsilon} term” is doing is telling us which of the four contours to pick when evaluating Equation 2.
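We can check this numerically. The sketch below (the cutoff, the value of {\epsilon}, and the tolerance are all arbitrary choices of mine) integrates {1/(k_0^2 - \omega_k^2 + i\epsilon)} over the real {k_0} axis for fixed spatial momentum and compares it with the value {-i\pi/\omega_k} that we will derive by residues in Approach 2 below:

```python
import numpy as np
from scipy.integrate import quad

# Integrate 1/(k0^2 - omega^2 + i*eps) over the real k0 axis and compare
# with -i*pi/omega, the value picked out by the Feynman contour.
# omega, eps, the cutoff, and the tolerance are all arbitrary choices.
omega, eps = 1.0, 1e-3

def re_part(k):  # real part of 1/(k^2 - omega^2 + i*eps)
    d = k**2 - omega**2
    return d / (d**2 + eps**2)

def im_part(k):  # imaginary part
    d = k**2 - omega**2
    return -eps / (d**2 + eps**2)

re, _ = quad(re_part, -1000, 1000, points=[-omega, omega], limit=500)
im, _ = quad(im_part, -1000, 1000, points=[-omega, omega], limit=500)

result = re + 1j * im
expected = -1j * np.pi / omega
```

The real part is a principal-value-type integral that nearly cancels; the imaginary part carries the {-i\pi/\omega_k}.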

2.2. Approach 1: a first attempt at symmetry

For our first attempt at evaluating Equation 3, notice that the integrand is left unchanged by Minkowski rotations, so it is constant on Minkowski spheres. Say we could compute the surface area of the Minkowski sphere of radius {r}, call it {S(r)}. Then we can use our symmetry to change our 4D integral into a 1D integral in {r}. On the {r}-sphere, the integrand takes the value {\frac{1}{r^2 - m^2}}, so our integral will now be

\displaystyle D(0) = \frac{1}{(2\pi)^{4}} \int_{{\mathbb R}^4} \frac{d^4k}{k^2 - m^2} = \frac{1}{(2\pi)^{4}} \int_{{\mathbb R}} \frac{S(r)dr}{r^2 - m^2}

But the surface area of the sphere in 4D space scales like {r^3}, so this is

\displaystyle D(0)= \frac{1}{(2\pi)^{4}} \int_{{\mathbb R}} \frac{S(1)r^3 dr}{r^2 - m^2}

And it appears that we are integrating a function that grows linearly in {r}. Yes, cancellation may help us, we would need to calculate residues, and so on, but none of that really matters because {S(1)} is infinite. Using this symmetry was a good idea, but it needs some work.

2.3. Approach 2: Integrate in time first

Going for the throat with the symmetry didn’t work, so let’s try straightforward calculation. Looking at the contour in the diagram above, we can imagine forming it into a closed loop by adding an arc along the semicircle of radius {R} in the upper half plane. For {R} large, the integrand will be {O(R^{-2})} along this arc and the arc has length {O(R)}, so the contribution of the arc to the integral is {O(R^{-1})}; it vanishes.

Thus our integral will just be the limit of our closed contour integrals as we let {R \rightarrow \infty}. What is the contour integral? There is only one pole inside the contour, at {-\omega_k}, and it has residue {-\frac{1}{2\omega_k}}. So our integral is {\frac{-i\pi}{\omega_k}}.

To evaluate {D(0)} we now integrate this in the space variables used to define {\omega_k}.

\displaystyle D(0) = \frac{-i\pi}{(2\pi)^{4}} \int_{{\mathbb R}^3} \frac{d^3k}{\sqrt{k^2 + m^2}}

Where now {k^2} represents the square of the ordinary Euclidean norm in {{\mathbb R}^3}. We still have rotational symmetry here so the integrand is constant on spheres. The surface area of the {r}-sphere is {4\pi r^2} so

\displaystyle D(0) = \frac{-i}{4\pi^2} \int_0^\infty \frac{r^2 dr}{\sqrt{r^2 + m^2}} \ \ \ \ \ (5)

Of course this integral still diverges; this is QFT, after all. Notice that the integrand grows linearly, so we have a quadratic divergence. We could perform this full integration in {k_0}, then employ cutoffs for renormalization in the space variables. I expect we would get the same results, but that is not what is done. Instead, we use the Wick Rotation.

2.4. Approach 3: Wick Rotation

Instead of integrating along the real {k_0} axis, consider the integral along the following contour:

This contour encloses no poles, so the integral along it is zero.  The integrand is {O(R^{-2})} on the two large arcs, each of which has length {O(R)}, so their contributions vanish as the radius grows, just as in Approach 2.  Thus the integrals along the two “straight lines” must cancel perfectly. This gives the equation

\displaystyle D(0) = \frac{1}{(2\pi)^{4}} \int_{{\mathbb R}^4} \frac{d^4k}{k^2 - m^2} = \frac{i}{(2\pi)^{4}} \int_{{\mathbb R}^4} \frac{d^4x}{-\|x\|^2 - m^2}

where {x_0 = ik_0, x_1 = k_1, x_2 = k_2, x_3 = k_3}, {\|x\|} is the ordinary Euclidean norm on {{\mathbb R}^4}, and the factor of {i} comes from the change of measure {d^4k = i\, d^4x}. Now we’ve changed our Minkowski rotational symmetry into a Euclidean rotational symmetry, and unlike the Minkowski sphere, the Euclidean sphere has finite surface area. This means we can play those games we tried in our first approach. Integrating over the radius of the sphere (which has 3-area {S(1) = 2\pi^2}), we get

\displaystyle D(0) = \frac{-iS(1)}{(2\pi)^{4}} \int_0^\infty \frac{r^3 dr}{r^2 + m^2} \ \ \ \ \ (6)

No poles, the integrand grows linearly in {r}, and we have quadratic divergence. When we apply cutoffs in QFT, this is the integral we cut off. Notice that this is a slightly different cutoff from one in the original momentum variables, but in the end we still get the quadratic divergence we observed with Approach 2.
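We can make the quadratic divergence concrete by putting a cutoff {\Lambda} on the radial integrals from Equations 5 and 6 and watching both grow like {\Lambda^2/2}. A quick numerical sketch (the cutoff values and tolerances are arbitrary choices of mine):

```python
import numpy as np
from scipy.integrate import quad

m = 1.0  # mass; an arbitrary choice

def I5(L):  # radial integrand from Equation 5: r^2 / sqrt(r^2 + m^2)
    return quad(lambda r: r**2 / np.sqrt(r**2 + m**2), 0, L)[0]

def I6(L):  # radial integrand from Equation 6: r^3 / (r^2 + m^2)
    return quad(lambda r: r**3 / (r**2 + m**2), 0, L)[0]

# Both cutoff integrals behave like L^2/2 for large L: quadratic divergence.
ratios5 = [I5(L) / (L**2 / 2) for L in (10.0, 100.0, 1000.0)]
ratios6 = [I6(L) / (L**2 / 2) for L in (10.0, 100.0, 1000.0)]
```

The subleading corrections differ (a log term in one case, an arcsinh in the other), which is the sense in which the two cutoff schemes are "a bit different" while sharing the leading quadratic growth.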

Inverse Problems Course Notes — Numerical Methods for X-Ray Transform Reconstruction

These notes are based on Gunther Uhlmann’s lectures for MATH 581 taught at the University of Washington in Autumn 2009.

An index to all of the notes is available here.

The X-Ray transform is particularly interesting because we can use it for tomography. We can send X-Rays through a patient, over many different lines, and measure the accumulated attenuation. Once this is done, we know the X-Ray transform of our object, so we can simply pass it to the inversion formula and reconstruct a three-dimensional model. There are two substantial problems with this:

  • We cannot collect data for every line, we can only sample a finite number of lines.
  • Our measurements will have error.

There are in fact even more complications — beam hardening comes to mind — but these two must be dealt with first. Loosely speaking, the solution to the first problem relies on the stability estimates for the X-Ray Transform, and the solution to the second problem depends on the range characterization. In this post we will discuss the first problem, finite sample size, in detail. The second problem is manageable too: the range characterization of the X-Ray Transform lets us find the image function most likely to have produced the error-filled data. It turns out that the filtered backprojection algorithm we introduce below is an important step in this process. The rest of this post is organized as follows: we will start by showing that for an infinite, discrete set of samples we have unique reconstruction. Then we will consider finite sets of samples and see that reconstruction is hopeless without a priori assumptions. From there we will introduce the Filtered Backprojection algorithm and state Natterer’s theorem about filtered backprojection error estimates.

Inverse Problems Course Notes — The Range of the X-Ray Transform


Now we are prepared to study the range of the X-Ray Transform. We will look at the image of {\mathcal{S}({\mathbb R}^n)} in detail — once we understand this we can easily use our understanding of the Radon transform to characterize the image of {X} for smooth, compactly supported functions.

So let {f \in \mathcal{S}({\mathbb R}^n)}. Recall that

\displaystyle  Xf(x,\theta) = \int_{\mathbb R} f(x + t\theta) dt

It will be convenient to extend {Xf} to all of {{\mathbb R}^n\times({\mathbb R}^n - \{0\})} as follows

\displaystyle  \begin{array}{rcl}  Xf(x,\xi) &=& \int_{\mathbb R} f(x + t\xi) dt, \quad \xi \in {\mathbb R}^n - \{0\} \\ \\ &=& \frac{1}{|\xi|}\int_{{\mathbb R}}f(x + t\frac{\xi}{|\xi|}) dt \\ \\ &=& \frac{1}{|\xi|} \int_{\mathbb R} f(x - \langle x, \frac{\xi}{|\xi|}\rangle\frac{\xi}{|\xi|} + t\frac{\xi}{|\xi|}) dt \\ \\ &=& \frac{1}{|\xi|}Xf(x-\langle x, \frac{\xi}{|\xi|}\rangle\frac{\xi}{|\xi|}, \frac{\xi}{|\xi|}) \end{array}

So {Xf} is positively homogeneous of degree {-1} in {\xi}.
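This is easy to check numerically for a concrete function; here is a quick sketch with a Gaussian on {{\mathbb R}^3} (the test function, points, and scale factor are arbitrary choices of mine):

```python
import numpy as np
from scipy.integrate import quad

# f is a Gaussian on R^3; its X-ray transform along a line is a 1-D integral
f = lambda y: np.exp(-np.dot(y, y))

def X(x, xi):
    return quad(lambda t: f(x + t * xi), -np.inf, np.inf)[0]

x = np.array([0.3, -0.2, 0.5])
xi = np.array([1.0, 2.0, -1.0])
c = 3.7

# degree -1 homogeneity: Xf(x, c*xi) = Xf(x, xi)/c for c > 0
hom_lhs, hom_rhs = X(x, c * xi), X(x, xi) / c

# the full identity: project x onto the hyperplane orthogonal to xi
xihat = xi / np.linalg.norm(xi)
proj = x - np.dot(x, xihat) * xihat
full_lhs = X(x, xi)
full_rhs = X(proj, xihat) / np.linalg.norm(xi)
```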

Claim 1 For {x \in {\mathbb R}^n} and {\xi \in {\mathbb R}^n - \{0\}}, the partial differential equations

\displaystyle  \frac{\partial^2}{\partial x_i\partial \xi_j}Xf - \frac{\partial^2}{\partial x_j\partial \xi_i}Xf = 0

hold for all {1 \leq i, j \leq n}.

To show this, check the following

\displaystyle  \begin{array}{rcl}  \frac{\partial^2}{\partial x_i\partial \xi_j}Xf &=& \int_{\mathbb R} \frac{\partial^2}{\partial x_i\partial \xi_j} f(x + t\xi) dt \\ &=& \int_{\mathbb R} t\,\partial^2_{ij}f(x + t\xi) dt \end{array}

which is symmetric in {i} and {j}.

These equations are called John’s Equations, named for Fritz John, who was the first to point this fact out. The main result is that these are the only conditions we need to characterize the range of {X}.
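We can sanity-check John’s Equations numerically. For the Gaussian {f(y) = e^{-|y|^2}}, completing the square in the line integral gives the closed form {Xf(x,\xi) = \sqrt{\pi}\,|\xi|^{-1}\exp(-|x|^2 + \langle x,\xi\rangle^2/|\xi|^2)} (my computation, for illustration), and finite differences of it should satisfy the equations up to discretization error:

```python
import numpy as np

# Closed form of the extended X-ray transform of f(y) = exp(-|y|^2),
# obtained by completing the square in the line integral.
def Xf(x, xi):
    n2 = np.dot(xi, xi)
    return np.sqrt(np.pi / n2) * np.exp(-np.dot(x, x) + np.dot(x, xi)**2 / n2)

def john_residual(x, xi, i, j, h=1e-3):
    """Central finite differences for
    d^2 Xf/(dx_i dxi_j) - d^2 Xf/(dx_j dxi_i)."""
    def mixed(a, b):
        ea, eb = np.eye(3)[a], np.eye(3)[b]
        return (Xf(x + h*ea, xi + h*eb) - Xf(x + h*ea, xi - h*eb)
                - Xf(x - h*ea, xi + h*eb) + Xf(x - h*ea, xi - h*eb)) / (4*h*h)
    return mixed(i, j) - mixed(j, i)

x = np.array([0.4, -0.1, 0.7])
xi = np.array([1.2, 0.3, -0.5])
res = max(abs(john_residual(x, xi, i, j)) for i in range(3) for j in range(3))
```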

Inverse Problems Course Notes — Stability Estimates for the X-Ray Transform


With an inversion formula in hand, we are ready to state and prove the basic stability estimates for the X-Ray Transform. They are almost identical to the estimates for the Radon transform, the only difference coming from the fact that {X} smooths by {1/2} of a derivative, whereas {R} smooths by {(n-1)/2} derivatives.

Theorem 1 For all compact {K \subset {\mathbb R}^n} and {f \in H^s(K)}, the following estimates hold

\displaystyle  \|f\|^2_{H^s({\mathbb R}^n)} \leq C_s\|Xf\|^2_{H^{s + 1/2}(T)} \leq C_s(K)\|f\|^2_{H^s({\mathbb R}^n)}

The first inequality holds even when {f} does not have compact support.

In other words X-Ray inversion is stable, and so is the X-Ray transform itself, as long as you measure errors with the right norms.

Proof: Step 1. {\|f\|_{H^s} \leq c\|Xf\|_{H^{s + 1/2}}}. Let’s start by proving the first inequality — stability, or boundedness, of X-Ray inversion — in the {L^2} case ({s = 0}). The argument goes as follows: use the inversion formula to rewrite the formula for the norm of {f}, then use Plancherel’s Theorem and the Fourier Slice Theorem to move everything to the frequency domain. Here are the details.

\displaystyle  \begin{array}{rcl}  \|f\|^2_{L^2({\mathbb R}^n)} = \langle f,f\rangle_{L^2({\mathbb R}^n)} &=& \langle {(-\bigtriangleup)^{1/2}} X^tXf,f\rangle_{L^2({\mathbb R}^n)} \\ &=& \langle Xf, X{(-\bigtriangleup)^{1/2}} f\rangle_{L^2(T)} \\ &=& \langle \mathfrak{F}_{{\theta^\perp}}Xf, \mathfrak{F}_{{\theta^\perp}}X{(-\bigtriangleup)^{1/2}} f\rangle_{L^2(T)} \\ &=& \langle \hat{f}, |\eta|\hat{f}\rangle_{L^2({\mathbb R}^n)} \\ &=& \int_{{\mathbb R}^n}|\eta||\hat{f}(\eta)|^2 d\eta \\ &\leq& \int_{{\mathbb R}^n}(1 + |\eta|^2)^{1/2}|\hat{f}(\eta)|^2 d\eta \\ &=& \|f\|^2_{H^{1/2}({\mathbb R}^n)} \end{array}

Now we can extend this to general {s} by replacing {f} with {(I-\bigtriangleup)^{s/2}f}. Then

\displaystyle  \begin{array}{rcl}  \|f\|_{H^s} = \|(I-\bigtriangleup)^{s/2}f\|_{L^2} &\leq& c_n\|X(I-\bigtriangleup)^{s/2}f\|_{H^{1/2}} \\ &=& c_n\|(I-\bigtriangleup_{{\theta^\perp}})^{s/2}Xf\|_{H^{1/2}} = c_n\|Xf\|_{H^{s+1/2}} \end{array}

This proves the first inequality. Here we relied on the following result, an easy generalization of the intertwining results for {X}, {\bigtriangleup} and {\mathfrak{F}} we used before.

Claim 1

\displaystyle  X(I-\bigtriangleup)^{s/2}f = (I-\bigtriangleup_{{\theta^\perp}})^{s/2}Xf

The proof is left as an exercise.

Step 2. {\|Xf\|_{H^{s + 1/2}} \leq C_s(K)\|f\|_{H^s}} Again, we will start with the {L^2} case.

\displaystyle  \begin{array}{rcl}  \|Xf\|^2_{H^{1/2}(T)} &=& \int_{S^{n-1}}\int_{{\theta^\perp}} |\mathfrak{F}_{{\theta^\perp}}Xf|^2(\eta,\theta)(1 + |\eta|^2)^{1/2} dH_{{\theta^\perp}}d\theta \\ &=& \int_{S^{n-1}}\int_{{\theta^\perp}} |\hat{f}(\eta)|^2(1 + |\eta|^2)^{1/2} dH_{{\theta^\perp}}d\theta \end{array}

To go forward we need a result that relates the measure {dH_{{\theta^\perp}}d\theta} to standard Lebesgue measure. As {\theta} varies over {S^{n-1}}, the hyperplanes {{\theta^\perp}} clearly cover all of {{\mathbb R}^n}, but they cover it more than once, and points close to the origin are covered “more densely” than points far away. The following claim makes this intuition precise; the proof can be found in the appendix of Natterer’s book.

Lemma 2

\displaystyle  \begin{array}{rcl}  \int_{S^{n-1}}\int_{{\theta^\perp}} g(\eta) dH_{{\theta^\perp}}d\theta &=& \text{Vol}(S^{n-2})\int_{{\mathbb R}^n}\frac{g(y)}{|y|}dy \end{array}

This lets us continue, writing

\displaystyle  \|Xf\|^2_{H^{1/2}(T)} = c_n\int_{{\mathbb R}^n}\frac{|\hat{f}(\eta)|^2}{|\eta|}(1 + |\eta|^{2})^{1/2} d\eta

We have not used the compact support of {f} yet, but now it appears in a move that should seem familiar. We will split the integral into a sum of two integrals, one over the low frequencies, and the other over the high frequencies. Define

\displaystyle  \begin{array}{rcl}  \text{I} &=& c_n\int_{|\eta| \geq 1}\frac{|\hat{f}(\eta)|^2}{|\eta|}(1 + |\eta|^2)^{1/2} d\eta \\ \text{II} &=& c_n\int_{|\eta| \leq 1}\frac{|\hat{f}(\eta)|^2}{|\eta|}(1 + |\eta|^2)^{1/2} d\eta \end{array}

Then

\displaystyle  \|Xf\|^2_{H^{1/2}(T)} = {\text{I}} + {\text{II}}

But

\displaystyle  \text{I}\quad \leq \quad c\int_{|\eta| \geq 1} |\hat{f}(\eta)|^2d\eta \leq c\|f\|^2_{L^2}

And

\displaystyle  \text{II}\quad \leq \quad \sup_{|\eta| \leq 1}|\hat{f}(\eta)|^2 \cdot \int_{|\eta| \leq 1}\frac{(1 + |\eta|^2)^{1/2}}{|\eta|} d\eta \leq C\,\text{Vol}(K)\|f\|^2_{L^2}

We can repeat this argument for any {s} with the usual modifications (pick smooth compactly supported {\varphi \equiv 1} on {K} and use {\langle \varphi, f\rangle_{L^2} \leq \|\varphi\|_{H^{-s}}\|f\|_{H^s}}). \Box
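The measure identity in Lemma 2 is easy to test numerically in {n = 2}, where {{\theta^\perp}} is a line through the origin and {\text{Vol}(S^0) = 2}. Here is a quick check (the test function is an arbitrary rapidly decaying choice of mine):

```python
import numpy as np
from scipy.integrate import dblquad

# An arbitrary, rapidly decaying test function on R^2
g = lambda y1, y2: np.exp(-(y1**2 + y2**2)) * (1 + y1**2)

# Left side: integrate g over the line theta-perp, then over theta in S^1
lhs, _ = dblquad(lambda t, th: g(-t * np.sin(th), t * np.cos(th)),
                 0, 2 * np.pi, -np.inf, np.inf)

# Right side: Vol(S^0) = 2 times the integral of g(y)/|y| dy, which in
# polar coordinates is just 2 * integral of g(r cos, r sin) dr d(phi).
rhs, _ = dblquad(lambda r, th: g(r * np.cos(th), r * np.sin(th)),
                 0, 2 * np.pi, 0, np.inf)
rhs *= 2.0
```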


Inverse Problems Course Notes — The X-Ray Transform for Distributions


We have studied the X-Ray transform on very restricted domains — {C_0^\infty} and {\mathcal{S}} — and found an inversion formula there. Now we want to move to our next basic question: is the inversion stable?

As with the Radon transform, we will see that it is stable: small errors in the X-Ray transform lead to small errors in the reconstructed function, and vice versa, when errors are measured in appropriate norms.

Sobolev spaces provide one family of norms that will work, but to use these we need to extend the domain of {X} to distributions, and prove the inversion formula on the broader domain.

Inverse Problems Course Notes — An Alternative Development of the X-Ray Transform


Now that we’ve sketched the basic facts about the X-Ray transform, let’s look at it from a different perspective. When we developed the theory of the Radon transform, we made extensive use of the Fourier Slice Theorem — something we did not mention at all for the X-Ray transform. We also noticed that our inversion formula looked different from the one we developed for {R}.

In this post we will see that these differences are superficial. We will find the analog for the Fourier Slice Theorem and use it to redevelop our X-Ray transform results, mimicking our theory of the Radon transform.

The Geometry of the Slice Theorems

When studying the Radon transform, we saw that we could reconstruct the Fourier transform of a function from its Radon Transform:

  • The Fourier Transform integrates the product of a function with “waves” that are constant on hyperplanes.
  • The Radon Transform computes the integral of a function over these hyperplanes.
  • So the Fourier Transform of a function is a sort of phase-weighted sum of integrals over hyperplanes, i.e. a phase-weighted sum of the Radon transform.

When evaluating the Fourier Transform of a function f at a point \rho\omega, the hyperplane x\cdot\omega = s appears with “weight” e^{-i\rho x\cdot\omega} = e^{-i\rho s}. So we can write the Fourier transform of f in terms of the Radon Transform as follows:

\hat{f}(\rho\omega) = \int_{-\infty}^{\infty}e^{-i\rho s}Rf(s, \omega) ds = \mathfrak{F}_sRf(\rho, \omega)

The same arguments apply to the X-Ray transform.  In fact, we can compute the Radon transform from the X-Ray transform at a hyperplane by integrating Xf over a set of parallel lines that covers the hyperplane (there is more than one way to do this!).

To compute the Fourier Transform directly from the X-Ray transform, consider the following.  Say we want to compute the Fourier Transform of f at a frequency vector \eta.  We can pick any \theta orthogonal to \eta, so that \eta \in \theta^\perp.  Now the “wave” with frequency \eta will be constant in direction \theta, so to compute the Fourier Transform of f at \eta we just need to add up the integrals of f along the lines in direction \theta — that is, Xf(y, \theta) — weighted by the value of the wave on these lines.  In other words

\hat{f}(\eta) = \int_{\theta^\perp}e^{-iy\cdot\eta}Xf(y,\theta)dH_{\theta^\perp}(y) = \mathfrak{F}_{\theta^\perp}Xf(\eta, \theta)

for \eta \in \theta^\perp.  Notice that our choice of \theta was arbitrary, so this really gives us a continuum of formulas, one for each \theta \in \eta^\perp.
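Here is a numerical check of this formula for the Gaussian f(x) = e^{-|x|^2} on \mathbb{R}^3, taking \theta = e_3 so that \theta^\perp is the (y_1, y_2)-plane (the frequency vector below is an arbitrary choice of mine):

```python
import numpy as np
from scipy.integrate import dblquad

# For f(x) = exp(-|x|^2) on R^3 and theta = e3, the X-ray transform over
# theta-perp = span(e1, e2) is Xf(y, e3) = sqrt(pi) * exp(-|y|^2).
Xf = lambda y1, y2: np.sqrt(np.pi) * np.exp(-(y1**2 + y2**2))

eta = np.array([0.8, -0.5])  # an arbitrary frequency vector in theta-perp

# 2-D Fourier transform of Xf at eta; the imaginary part of the integrand
# is odd and integrates to zero, so only the cosine term survives.
slice_val, _ = dblquad(
    lambda y2, y1: np.cos(y1 * eta[0] + y2 * eta[1]) * Xf(y1, y2),
    -np.inf, np.inf, -np.inf, np.inf)

# Closed-form 3-D Fourier transform of f, evaluated at (eta, 0).
fhat = np.pi**1.5 * np.exp(-np.dot(eta, eta) / 4)
```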

Inverse Problems Course Notes — The X-Ray Transform

We motivated the study of the Radon transform with a tomographic problem: given the change in intensity of X-rays along all lines through a region, can we reconstruct the attenuation (think of this as density) in the region?

For our purposes in this problem, light travels along straight lines, and in two dimensions those lines are hyperplanes.  So inverting the Radon transform — which sums functions over hyperplanes — solves our problem in 2-D.  But in higher dimensions this is not what we need.  We need to integrate along lines, not hyperplanes.

So we introduce the X-ray transform.

Definition Given a function f: \mathbb{R}^n \rightarrow \mathbb{C}, the X-ray transform of f is defined as

Xf(x, \theta) = \int_{\mathbb{R}} f(x + t\theta) dt

where x\in \mathbb{R}^n, \theta \in S^{n-1}.

The first thing to notice is that there is some redundancy here.
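The redundancy in question is easy to see numerically: sliding the base point x along \theta does not change the line of integration, so it cannot change Xf. A quick check with a Gaussian (the points and shift are arbitrary choices of mine):

```python
import numpy as np
from scipy.integrate import quad

# Sliding x along theta does not change the line, hence not the integral.
f = lambda y: np.exp(-np.dot(y, y))

def X(x, theta):
    return quad(lambda t: f(x + t * theta), -np.inf, np.inf)[0]

theta = np.array([1.0, 2.0, 2.0]) / 3.0  # a unit vector
x = np.array([0.5, -0.3, 0.1])

a = X(x, theta)
b = X(x + 4.2 * theta, theta)  # same line, shifted base point
```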

Inverse Problems Course Notes — The Paley-Wiener Theorem

Having seen the support theorem for the Radon transform, and seeing how much we rely on the Fourier transform as a tool, it is natural to ask an analogous question for the Fourier transform.

The Easy Pieces: L^2 and \mathcal{S}

For some important spaces, the Fourier transform is very well behaved.  In particular

\mathfrak{F}: \mathcal{S}(\mathbb{R}^n) \rightarrow \mathcal{S}(\mathbb{R}^n)

\mathfrak{F}: \mathcal{S}^{\prime}(\mathbb{R}^n) \rightarrow \mathcal{S}^{\prime}(\mathbb{R}^n)

\mathfrak{F}: L^2(\mathbb{R}^n) \rightarrow L^2(\mathbb{R}^n)

and it is an isomorphism in each case.  But what about C_0^{\infty}(\mathbb{R}^n) and \mathcal{E}^{\prime}(\mathbb{R}^n)?

A Harder Piece: f \in C_0^{\infty}(\mathbb{R}^n)

Say f \in C_0^{\infty}(\mathbb{R}^n), and, specifically, say f(x) = 0 whenever |x| \geq A.  Then we can write

\hat{f}(\xi) = \int_{|x| < A} e^{-ix\cdot\xi}f(x)dx

for all \xi \in \mathbb{R}^n.  This formula extends to \mathbb{C}^n:

\hat{f}(z) = \int_{|x| < A} e^{-ix\cdot z}f(x)dx
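Before going further, we can observe the expected growth numerically in one dimension: for a bump supported in |x| < A the extension satisfies |\hat{f}(z)| \leq e^{A|\text{Im}\, z|}\|f\|_{L^1}. A quick check with A = 1 (the bump and sample points are arbitrary choices of mine):

```python
import numpy as np
from scipy.integrate import quad

# A smooth bump supported in |x| < 1 (so A = 1 in the bound)
def f(x):
    return np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0

L1, _ = quad(f, -1, 1)  # ||f||_{L^1}

def fhat(z):
    # complex extension: integrate e^{-i x z} f(x) over the support
    re, _ = quad(lambda x: np.real(np.exp(-1j * x * z)) * f(x), -1, 1)
    im, _ = quad(lambda x: np.imag(np.exp(-1j * x * z)) * f(x), -1, 1)
    return re + 1j * im

# |fhat(z)| <= e^{|Im z|} * ||f||_{L^1} at a few points off the real axis
vals = {z: abs(fhat(z)) for z in (2j, -3j, 1 + 2.5j)}
```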

Inverse Problems Course Notes — Motivating the Range Characterization for the Radon Transform

With the proofs of the range characterization and support theorems for the Radon transform complete, let’s take a step back and look at the intuition behind these results again.  In particular, let’s try to understand the moment conditions (it might be good to read this post before reading the full proofs).  It was easy to directly verify that the moment conditions were necessary, so

R(\mathcal{S}(\mathbb{R}^n)) \subset \mathcal{S}_H(\mathbb{R}\times S^{n-1})

We need to show that they are sufficient — that \mathcal{S}_H(\mathbb{R}\times S^{n-1}) \subset R(\mathcal{S}(\mathbb{R}^n)).  Take g \in \mathcal{S}_H(\mathbb{R}\times S^{n-1}); we need to find f \in \mathcal{S}(\mathbb{R}^n) with Rf = g.