What's new | Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao

This set of notes discusses aspects of one of the oldest questions in Fourier analysis, namely the nature of convergence of Fourier series.

If {f: {\bf R}/{\bf Z} \rightarrow {\bf C}} is an absolutely integrable function, its Fourier coefficients {\hat f: {\bf Z} \rightarrow {\bf C}} are defined by the formula

\displaystyle  \hat f(n) := \int_{{\bf R}/{\bf Z}} f(x) e^{-2\pi i nx}\ dx.

If {f} is smooth, then the Fourier coefficients {\hat f} are absolutely summable, and we have the Fourier inversion formula

\displaystyle  f(x) = \sum_{n \in {\bf Z}} \hat f(n) e^{2\pi i nx}

where the series here is uniformly convergent. In particular, if we define the partial summation operators

\displaystyle  S_N f(x) := \sum_{|n| \leq N} \hat f(n) e^{2\pi i nx}

then {S_N f} converges uniformly to {f} when {f} is smooth.
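This uniform convergence can be watched numerically for a smooth function whose Fourier coefficients are known in closed form. The sketch below (an editorial aside, not from the notes) uses {f(x) = 3/(5 - 4\cos(2\pi x))}, whose coefficients are {\hat f(n) = 2^{-|n|}}, so the sup-norm error of {S_N f} is controlled by the coefficient tail {\sum_{|n| > N} 2^{-|n|} = 2^{1-N}}.

```python
import numpy as np

# Smooth test function on R/Z with known Fourier coefficients:
# f(x) = sum_n 2^{-|n|} e^{2 pi i n x} = 3 / (5 - 4 cos(2 pi x)).
def f(x):
    return 3.0 / (5.0 - 4.0 * np.cos(2 * np.pi * x))

def partial_sum(x, N):
    # S_N f(x) = sum_{|n| <= N} 2^{-|n|} e^{2 pi i n x}
    n = np.arange(-N, N + 1)
    coeffs = 0.5 ** np.abs(n)
    return np.real(np.exp(2j * np.pi * np.outer(x, n)) @ coeffs)

x = np.linspace(0, 1, 1000, endpoint=False)
# sup-norm errors for increasing N; bounded by the tail 2^{1-N}
sup_errors = [np.max(np.abs(partial_sum(x, N) - f(x))) for N in (2, 5, 10, 20)]
```

The errors decay geometrically, matching the tail bound, which is the quantitative content of uniform convergence for this example.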

What if {f} is not smooth, but merely lies in an {L^p({\bf R}/{\bf Z})} class for some {1 \leq p \leq \infty}? The Fourier coefficients {\hat f} remain well-defined, as do the partial summation operators {S_N}. The question of convergence in norm is relatively easy to settle:

Exercise 1

  • (i) If {1 < p < \infty} and {f \in L^p({\bf R}/{\bf Z})}, show that {S_N f} converges in {L^p({\bf R}/{\bf Z})} norm to {f}. (Hint: first use the boundedness of the Hilbert transform to show that {S_N} is bounded in {L^p({\bf R}/{\bf Z})} uniformly in {N}.)
  • (ii) If {p=1} or {p=\infty}, show that there exists {f \in L^p({\bf R}/{\bf Z})} such that the sequence {S_N f} is unbounded in {L^p({\bf R}/{\bf Z})} (so in particular it certainly does not converge in {L^p({\bf R}/{\bf Z})} norm to {f}). (Hint: first show that {S_N} is not bounded in {L^p({\bf R}/{\bf Z})} uniformly in {N}, then apply the uniform boundedness principle in the contrapositive.)

The question of pointwise almost everywhere convergence turned out to be a significantly harder problem:

Theorem 2 (Pointwise almost everywhere convergence)

  • (i) (Kolmogorov, 1923) There exists {f \in L^1({\bf R}/{\bf Z})} such that {S_N f(x)} is unbounded in {N} for almost every {x}.
  • (ii) (Carleson, 1966; conjectured by Lusin, 1913) For every {f \in L^2({\bf R}/{\bf Z})}, {S_N f(x)} converges to {f(x)} as {N \rightarrow \infty} for almost every {x}.
  • (iii) (Hunt, 1967) For every {1 < p \leq \infty} and {f \in L^p({\bf R}/{\bf Z})}, {S_N f(x)} converges to {f(x)} as {N \rightarrow \infty} for almost every {x}.

Note from Hölder’s inequality that {L^2({\bf R}/{\bf Z})} contains {L^p({\bf R}/{\bf Z})} for all {p\geq 2}, so Carleson’s theorem covers the {p \geq 2} case of Hunt’s theorem. We remark that the precise threshold near {L^1} between Kolmogorov-type divergence results and Carleson-Hunt pointwise convergence results, in the category of Orlicz spaces, is still an active area of research; see this paper of Lie for further discussion.

Carleson’s theorem in particular was a surprisingly difficult result, lying just out of reach of classical methods (as we shall see later, the result is much easier if we smooth either the function {f} or the summation method {S_N} by a tiny bit). Nowadays we realise that the reason for this is that Carleson’s theorem essentially contains a frequency modulation symmetry in addition to the more familiar translation symmetry and dilation symmetry. This basically rules out the possibility of attacking Carleson’s theorem with tools such as Calderón-Zygmund theory or Littlewood-Paley theory, which respect the latter two symmetries but not the former. Instead, tools from “time-frequency analysis” that essentially respect all three symmetries should be employed. We will illustrate this by giving a relatively short proof of Carleson’s theorem due to Lacey and Thiele. (There are other proofs of Carleson’s theorem, including Carleson’s original proof, its modification by Hunt, and a later time-frequency proof by Fefferman; see Remark 18 below.)


In contrast to previous notes, in this set of notes we shall focus exclusively on Fourier analysis in the one-dimensional setting {d=1} for simplicity of notation, although all of the results here have natural extensions to higher dimensions. Depending on the physical context, one can view the physical domain {{\bf R}} as representing either space or time; we will mostly think in terms of the former interpretation, even though the standard terminology of “time-frequency analysis”, which we will make more prominent use of in later notes, clearly originates from the latter.

In previous notes we have often performed various localisations in either physical space or Fourier space {{\bf R}}, for instance in order to take advantage of the uncertainty principle. One can formalise these operations in terms of the functional calculus of two basic operations on Schwartz functions {{\mathcal S}({\bf R})}, the position operator {X: {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} defined by

\displaystyle  (Xf)(x) := x f(x)

and the momentum operator {D: {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})}, defined by

\displaystyle  (Df)(x) := \frac{1}{2\pi i} \frac{d}{dx} f(x). \ \ \ \ \ (1)

(The terminology comes from quantum mechanics, where it is customary to also insert a small constant {h} on the right-hand side of (1) in accordance with de Broglie’s law. Such a normalisation is also used in several branches of mathematics, most notably semiclassical analysis and microlocal analysis, where it becomes profitable to consider the semiclassical limit {h \rightarrow 0}, but we will not emphasise this perspective here.) The momentum operator can be viewed as the counterpart to the position operator, but in frequency space instead of physical space, since we have the standard identity

\displaystyle  \widehat{Df}(\xi) = \xi \hat f(\xi)

for any {\xi \in {\bf R}} and {f \in {\mathcal S}({\bf R})}. We observe that both operators {X,D} are formally self-adjoint in the sense that

\displaystyle  \langle Xf, g \rangle = \langle f, Xg \rangle; \quad \langle Df, g \rangle = \langle f, Dg \rangle

for all {f,g \in {\mathcal S}({\bf R})}, where we use the {L^2({\bf R})} Hermitian inner product

\displaystyle  \langle f, g\rangle := \int_{\bf R} f(x) \overline{g(x)}\ dx.

Clearly, for any polynomial {P(x)} of one real variable {x} (with complex coefficients), the operator {P(X): {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} is given by the spatial multiplier operator

\displaystyle  (P(X) f)(x) = P(x) f(x)

and similarly the operator {P(D): {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} is given by the Fourier multiplier operator

\displaystyle  \widehat{P(D) f}(\xi) = P(\xi) \hat f(\xi).

Inspired by this, if {m: {\bf R} \rightarrow {\bf C}} is any smooth function that obeys the derivative bounds

\displaystyle  \frac{d^j}{dx^j} m(x) \lesssim_{m,j} \langle x \rangle^{O_{m,j}(1)} \ \ \ \ \ (2)

for all {j \geq 0} and {x \in {\bf R}} (that is to say, all derivatives of {m} grow at most polynomially), then we can define the spatial multiplier operator {m(X): {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} by the formula

\displaystyle  (m(X) f)(x) := m(x) f(x);

one can easily verify from several applications of the Leibniz rule that {m(X)} maps Schwartz functions to Schwartz functions. We refer to {m(x)} as the symbol of this spatial multiplier operator. In a similar fashion, we define the Fourier multiplier operator {m(D)} associated to the symbol {m(\xi)} by the formula

\displaystyle  \widehat{m(D) f}(\xi) := m(\xi) \hat f(\xi).

For instance, any constant coefficient linear differential operator {\sum_{k=0}^n c_k \frac{d^k}{dx^k}} can be written in this notation as

\displaystyle \sum_{k=0}^n c_k \frac{d^k}{dx^k} =\sum_{k=0}^n c_k (2\pi i D)^k;

however there are many Fourier multiplier operators that are not of this form, such as the fractional derivative operators {\langle D \rangle^s = (1- \frac{1}{4\pi^2} \frac{d^2}{dx^2})^{s/2}} for non-integer values of {s}, which are Fourier multiplier operators with symbol {\langle \xi \rangle^s}. It is also very common to use spatial cutoffs {\psi(X)} and Fourier cutoffs {\psi(D)} for various bump functions {\psi} to localise functions in either space or frequency; we have seen several examples of such cutoffs in action in previous notes (often in the higher dimensional setting {d>1}).
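Numerically, a Fourier multiplier {m(D)} is easy to realise with the FFT: sample {f}, multiply its discrete Fourier transform by the symbol, and invert. The sketch below (an illustrative aside with ad hoc grid parameters) checks this for the symbol {m(\xi) = 2\pi i \xi}, which quantises {\frac{d}{dx} = 2\pi i D}, against the exact derivative of a Gaussian. Since multiplier operators commute with translations, applying them through the FFT on a shifted grid is legitimate as long as {f} decays rapidly.

```python
import numpy as np

# Discretise a Fourier multiplier m(D): sample f on a grid, multiply
# the FFT by m(xi), and invert.  In the convention
# f^(xi) = int f(x) e^{-2 pi i x xi} dx, the symbol of d/dx is 2 pi i xi.
n, L = 1024, 40.0
dx = L / n
x = -L / 2 + dx * np.arange(n)
xi = np.fft.fftfreq(n, d=dx)       # frequencies matching the e^{-2 pi i x xi} convention

def apply_multiplier(m, f_vals):
    return np.fft.ifft(m(xi) * np.fft.fft(f_vals))

f = np.exp(-np.pi * x**2)          # Gaussian, effectively supported in [-L/2, L/2)
df = apply_multiplier(lambda xi: 2j * np.pi * xi, f).real
exact = -2 * np.pi * x * f         # exact derivative of e^{-pi x^2}
err = np.max(np.abs(df - exact))
```

For a Schwartz-like sample such as the Gaussian, the spectral approximation is accurate to nearly machine precision.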

We observe that the maps {m \mapsto m(X)} and {m \mapsto m(D)} are ring homomorphisms, thus for instance

\displaystyle  (m_1 + m_2)(D) = m_1(D) + m_2(D)

and

\displaystyle  (m_1 m_2)(D) = m_1(D) m_2(D)

for any {m_1,m_2} obeying the derivative bounds (2); also {m(D)} is formally adjoint to {\overline{m}(D)} in the sense that

\displaystyle  \langle m(D) f, g \rangle = \langle f, \overline{m}(D) g \rangle

for {f,g \in {\mathcal S}({\bf R})}, and similarly for {m(X)} and {\overline{m}(X)}. One can interpret these facts as part of the functional calculus of the operators {X,D}, which can be interpreted as densely defined self-adjoint operators on {L^2({\bf R})}. However, in this set of notes we will not develop the spectral theory necessary in order to fully set out this functional calculus rigorously.

In the field of PDE and ODE, it is also very common to study variable coefficient linear differential operators

\displaystyle  \sum_{k=0}^n c_k(x) \frac{d^k}{dx^k} \ \ \ \ \ (3)

where the {c_0,\dots,c_n} are now functions of the spatial variable {x} obeying the derivative bounds (2). A simple example is the quantum harmonic oscillator Hamiltonian {-\frac{d^2}{dx^2} + x^2}. One can rewrite this operator in our notation as

\displaystyle  \sum_{k=0}^n c_k(X) (2\pi i D)^k

and so it is natural to interpret this operator as a combination {a(X,D)} of both the position operator {X} and the momentum operator {D}, where the symbol {a: {\bf R} \times {\bf R} \rightarrow {\bf C}} of this operator is the function

\displaystyle  a(x,\xi) := \sum_{k=0}^n c_k(x) (2\pi i \xi)^k. \ \ \ \ \ (4)

Indeed, from the Fourier inversion formula

\displaystyle  f(x) = \int_{\bf R} \hat f(\xi) e^{2\pi i x \xi}\ d\xi

for any {f \in {\mathcal S}({\bf R})} we have

\displaystyle  (2\pi i D)^k f(x) = \int_{\bf R} (2\pi i \xi)^k \hat f(\xi) e^{2\pi i x \xi}\ d\xi

and hence on multiplying by {c_k(x)} and summing we have

\displaystyle (\sum_{k=0}^n c_k(X) (2\pi i D)^k) f(x) = \int_{\bf R} a(x,\xi) \hat f(\xi) e^{2\pi i x \xi}\ d\xi.

Inspired by this, we can introduce the Kohn-Nirenberg quantisation by defining the operator {a(X,D) = a_{KN}(X,D): {\mathcal S}({\bf R}) \rightarrow {\mathcal S}({\bf R})} by the formula

\displaystyle  a(X,D) f(x) = \int_{\bf R} a(x,\xi) \hat f(\xi) e^{2\pi i x \xi}\ d\xi \ \ \ \ \ (5)

whenever {f \in {\mathcal S}({\bf R})} and {a: {\bf R} \times {\bf R} \rightarrow {\bf C}} is any smooth function obeying the derivative bounds

\displaystyle  \frac{\partial^j}{\partial x^j} \frac{\partial^l}{\partial \xi^l} a(x,\xi) \lesssim_{a,j,l} \langle x \rangle^{O_{a,j}(1)} \langle \xi \rangle^{O_{a,j,l}(1)} \ \ \ \ \ (6)

for all {j,l \geq 0} and {x \in {\bf R}} (note carefully that the exponent in {x} on the right-hand side is required to be uniform in {l}). This quantisation clearly generalises both the spatial multiplier operators {m(X)} and the Fourier multiplier operators {m(D)} defined earlier, which correspond to the cases when the symbol {a(x,\xi)} is a function of {x} only or {\xi} only respectively. Thus we have combined the physical space {{\bf R} = \{ x: x \in {\bf R}\}} and the frequency space {{\bf R} = \{ \xi: \xi \in {\bf R}\}} into a single domain, known as phase space {{\bf R} \times {\bf R} = \{ (x,\xi): x,\xi \in {\bf R} \}}. The term “time-frequency analysis” encompasses analysis based on decompositions and other manipulations of phase space, in much the same way that “Fourier analysis” encompasses analysis based on decompositions and other manipulations of frequency space. We remark that the Kohn-Nirenberg quantization is not the only choice of quantization one could use; see Remark 19 below.
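For a symbol of the form (4), the quantisation (5) is a finite sum of pointwise multiplications by {c_k(x)} and Fourier multipliers, so it discretises just as easily as the pure multiplier operators. As a sanity check (again an editorial aside with ad hoc grid parameters), the sketch below applies the harmonic oscillator symbol {a(x,\xi) = x^2 + 4\pi^2 \xi^2}, i.e. {a(X,D) = -\frac{d^2}{dx^2} + x^2}, to its ground state {e^{-x^2/2}}, which should be returned multiplied by the eigenvalue {1}.

```python
import numpy as np

# Kohn-Nirenberg quantisation of a(x, xi) = x^2 + 4 pi^2 xi^2,
# i.e. a(X, D) = -d^2/dx^2 + x^2.  For symbols of the form (4) the
# operator splits into pointwise multiplications and Fourier multipliers.
n, L = 1024, 30.0
dx = L / n
x = -L / 2 + dx * np.arange(n)
xi = np.fft.fftfreq(n, d=dx)

def harmonic_oscillator(f_vals):
    # the x^2 piece acts pointwise; the 4 pi^2 xi^2 piece acts on the Fourier side
    kinetic = np.fft.ifft(4 * np.pi**2 * xi**2 * np.fft.fft(f_vals)).real
    return x**2 * f_vals + kinetic

ground = np.exp(-x**2 / 2)          # ground state of -d^2/dx^2 + x^2, eigenvalue 1
residual = np.max(np.abs(harmonic_oscillator(ground) - ground))
```

The residual is negligible, confirming that the discretised quantisation reproduces the eigenvalue equation.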

Exercise 1

  • (i) Show that for {a} obeying (6), {a(X,D)} does indeed map {{\mathcal S}({\bf R})} to {{\mathcal S}({\bf R})}.
  • (ii) Show that the symbol {a} is uniquely determined by the operator {a(X,D)}. That is to say, if {a,b} are two functions obeying (6) with {a(X,D) f = b(X,D) f} for all {f \in {\mathcal S}({\bf R})}, then {a=b}. (Hint: apply {a(X,D)-b(X,D)} to a suitable truncation of a plane wave {x \mapsto e^{2\pi i x \xi}} and then take limits.)

In principle, the quantisations {a(X,D)} are potentially very useful for such tasks as inverting variable coefficient linear operators, or localising a function simultaneously in physical and Fourier space. However, a fundamental difficulty arises: the map from symbols {a} to operators {a(X,D)} is no longer a ring homomorphism; in particular

\displaystyle  (a_1 a_2)(X,D) \neq a_1(X,D) a_2(X,D) \ \ \ \ \ (7)

in general. Fundamentally, this is due to the fact that pointwise multiplication of symbols is a commutative operation, whereas the composition of operators such as {X} and {D} does not necessarily commute. This lack of commutativity can be measured by introducing the commutator

\displaystyle  [A,B] := AB - BA

of two operators {A,B}, and noting from the product rule that

\displaystyle  [X,D] = -\frac{1}{2\pi i} \neq 0.

(In the language of Lie groups and Lie algebras, this tells us that {X,D} are (up to complex constants) the standard Lie algebra generators of the Heisenberg group.) From a quantum mechanical perspective, this lack of commutativity is the root cause of the uncertainty principle that prevents one from simultaneously localizing in both position and momentum past a certain point. Here is one basic way of formalising this principle:
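The product-rule computation behind this commutator identity can be verified symbolically; here is a quick check using sympy (the use of sympy is an editorial choice, not part of the notes).

```python
from sympy import Function, I, pi, symbols, diff, simplify

x = symbols('x', real=True)
f = Function('f')

# (X f)(x) = x f(x),  (D f)(x) = (1/(2 pi i)) f'(x)
Xf = lambda g: x * g
Df = lambda g: diff(g, x) / (2 * pi * I)

# [X, D] f = X(D f) - D(X f); by the product rule the x f' terms cancel,
# leaving -(1/(2 pi i)) f, i.e. [X, D] = -1/(2 pi i) as an operator.
commutator = Xf(Df(f(x))) - Df(Xf(f(x)))
result = simplify(commutator + f(x) / (2 * pi * I))   # should vanish identically
```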

Exercise 2 (Heisenberg uncertainty principle) For any {x_0, \xi_0 \in {\bf R}} and {f \in \mathcal{S}({\bf R})}, show that

\displaystyle  \| (X-x_0) f \|_{L^2({\bf R})} \| (D-\xi_0) f\|_{L^2({\bf R})} \geq \frac{1}{4\pi} \|f\|_{L^2({\bf R})}^2.

(Hint: evaluate the expression {\langle [X-x_0, D - \xi_0] f, f \rangle} in two different ways and apply the Cauchy-Schwarz inequality.) Informally, this exercise asserts that the spatial uncertainty {\Delta x} and the frequency uncertainty {\Delta \xi} of a function obey the Heisenberg uncertainty relation {\Delta x \Delta \xi \gtrsim 1}.
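The constant {\frac{1}{4\pi}} here is sharp: for the Gaussian {f(x) = e^{-\pi x^2}} (with {x_0 = \xi_0 = 0}) one has {f'(x) = -2\pi x f(x)}, hence {Df = ixf} and both factors on the left equal {\|xf\|_{L^2({\bf R})}}, giving equality. A numerical confirmation of this (assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad

# For f(x) = exp(-pi x^2) one has f'(x) = -2 pi x f(x), so
# (D f)(x) = (1/(2 pi i)) f'(x) = i x f(x) and |D f| = |x| f.
# The uncertainty product ||X f|| ||D f|| should equal (1/(4 pi)) ||f||^2.
f = lambda x: np.exp(-np.pi * x**2)

norm_f_sq = quad(lambda x: f(x)**2, -np.inf, np.inf)[0]
norm_xf = np.sqrt(quad(lambda x: (x * f(x))**2, -np.inf, np.inf)[0])

lhs = norm_xf * norm_xf            # ||X f|| ||D f||; both factors coincide here
rhs = norm_f_sq / (4 * np.pi)
gap = abs(lhs - rhs)
```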

Nevertheless, one still has the correspondence principle, which asserts that in certain regimes (which, with our choice of normalisations, correspond to the high-frequency regime), quantum mechanics continues to behave like a commutative theory, and one can sometimes proceed as if the operators {X,D} (and the various operators {a(X,D)} constructed from them) commute up to “lower order” errors. This can be formalised using the pseudodifferential calculus, which we give below the fold, in which we restrict the symbol {a} to certain “symbol classes” of various orders (which then restricts {a(X,D)} to be pseudodifferential operators of various orders), and one obtains approximate identities such as

\displaystyle  (a_1 a_2)(X,D) \approx a_1(X,D) a_2(X,D)

where the error between the left and right-hand sides is of “lower order” and in fact enjoys a useful asymptotic expansion. As a first approximation to this calculus, one can think of functions {f \in {\mathcal S}({\bf R})} as having some sort of “phase space portrait” {\tilde f(x,\xi)} which somehow combines the physical space representation {x \mapsto f(x)} with its Fourier representation {\xi \mapsto \hat f(\xi)}, and pseudodifferential operators {a(X,D)} behave approximately like “phase space multiplier operators” in this representation in the sense that

\displaystyle  \widetilde{a(X,D) f}(x,\xi) \approx a(x,\xi) \tilde f(x,\xi).

Unfortunately the uncertainty principle (or the non-commutativity of {X} and {D}) prevents us from making these approximations perfectly precise, and it is not always clear how to even define a phase space portrait {\tilde f} of a function {f} precisely (although there are certain popular candidates for such a portrait, such as the FBI transform (also known as the Gabor transform in signal processing literature), or the Wigner quasiprobability distribution, each of which has some advantages and disadvantages). Nevertheless even if the concept of a phase space portrait is somewhat fuzzy, it is of great conceptual benefit both within mathematics and outside of it. For instance, the musical score one assigns a piece of music can be viewed as a phase space portrait of the sound waves generated by that music.

To complement the pseudodifferential calculus we have the basic Calderón-Vaillancourt theorem, which asserts that pseudodifferential operators of order zero are Calderón-Zygmund operators and thus bounded on {L^p({\bf R})} for {1 < p < \infty}. The standard proof of this theorem is a classic application of one of the basic techniques in harmonic analysis, namely the exploitation of almost orthogonality; the proof we will give here will achieve this through the elegant device of the Cotlar-Stein lemma.

Pseudodifferential operators (especially when generalised to higher dimensions {d \geq 1}) are a fundamental tool in the theory of linear PDE, as well as related fields such as semiclassical analysis, microlocal analysis, and geometric quantisation. There is an even wider class of operators that is also of interest, namely the Fourier integral operators, which roughly speaking not only approximately multiply the phase space portrait {\tilde f(x,\xi)} of a function by some multiplier {a(x,\xi)}, but also move the portrait around by a canonical transformation. However, the development of the theory of these operators is beyond the scope of these notes; see for instance the texts of Hormander or Eskin.

This set of notes is only the briefest introduction to the theory of pseudodifferential operators. Many texts are available that cover the theory in more detail, for instance this text of Taylor.


The square root cancellation heuristic, briefly mentioned in the preceding set of notes, predicts that if a collection {z_1,\dots,z_n} of complex numbers have phases that are sufficiently “independent” of each other, then

\displaystyle  |\sum_{j=1}^n z_j| \approx (\sum_{j=1}^n |z_j|^2)^{1/2};

similarly, if {f_1,\dots,f_n} are a collection of functions in a Lebesgue space {L^p(X,\mu)} that oscillate “independently” of each other, then we expect

\displaystyle  \| \sum_{j=1}^n f_j \|_{L^p(X,\mu)} \approx \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p(X,\mu)}.

We have already seen one instance in which this heuristic can be made precise, namely when the phases of {z_j,f_j} are randomised by a random sign, so that Khintchine’s inequality (Lemma 4 from Notes 1) can be applied. There are other contexts in which a square function estimate

\displaystyle  \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p(X,\mu)} \lesssim \| \sum_{j=1}^n f_j \|_{L^p(X,\mu)}

or a reverse square function estimate

\displaystyle  \| \sum_{j=1}^n f_j \|_{L^p(X,\mu)} \lesssim \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p(X,\mu)}

(or both) are known or conjectured to hold. For instance, the useful Littlewood-Paley inequality implies (among other things) that for any {1 < p < \infty}, we have the reverse square function estimate

\displaystyle  \| \sum_{j=1}^n f_j \|_{L^p({\bf R}^d)} \lesssim_{p,d} \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)}, \ \ \ \ \ (1)

whenever the Fourier transforms {\hat f_j} of the {f_j} are supported on disjoint annuli {\{ \xi \in {\bf R}^d: 2^{k_j} \leq |\xi| < 2^{k_j+1} \}}, and we also have the matching square function estimate

\displaystyle  \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)} \lesssim_{p,d} \| \sum_{j=1}^n f_j \|_{L^p({\bf R}^d)}

if there is some separation between the annuli (for instance if the {k_j} are {2}-separated). We recall the proofs of these facts below the fold. In the {p=2} case, we of course have Pythagoras’ theorem, which tells us that if the {f_j} are all orthogonal elements of {L^2(X,\mu)}, then

\displaystyle  \| \sum_{j=1}^n f_j \|_{L^2(X,\mu)} = (\sum_{j=1}^n \| f_j \|_{L^2(X,\mu)}^2)^{1/2} = \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^2(X,\mu)}.

In particular, this identity holds if the {f_j \in L^2({\bf R}^d)} have disjoint Fourier supports in the sense that their Fourier transforms {\hat f_j} are supported on disjoint sets. For {p=4}, the technique of bi-orthogonality can also give square function and reverse square function estimates in some cases, as we shall also see below the fold.
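This identity can be seen very concretely with trigonometric polynomials supported on disjoint frequency sets, where a sufficiently fine grid computes the {L^2({\bf R}/{\bf Z})} norms exactly; the frequencies and helper names in the sketch below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(256) / 256.0          # grid on R/Z; exact Parseval for frequencies below 256

def trig_poly(freqs):
    # random trigonometric polynomial supported on the given frequency set
    coeffs = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
    return sum(c * np.exp(2j * np.pi * n * x) for c, n in zip(coeffs, freqs))

# disjoint frequency supports (here, three separated blocks)
f1, f2, f3 = trig_poly([1, 2, 3]), trig_poly([8, 9, 10]), trig_poly([40, 50])

l2_sq = lambda g: np.mean(np.abs(g)**2)     # ||g||_{L^2(R/Z)}^2 via the grid mean
total = l2_sq(f1 + f2 + f3)                 # ||f1 + f2 + f3||^2
pieces = l2_sq(f1) + l2_sq(f2) + l2_sq(f3)  # sum of ||f_j||^2; equal by Pythagoras
```

The cross terms average to zero exactly on the grid, since distinct frequencies below the grid size remain orthogonal after sampling.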

In recent years, it has begun to be realised that in the regime {p > 2}, a variant of reverse square function estimates such as (1) is also useful, namely decoupling estimates such as

\displaystyle  \| \sum_{j=1}^n f_j \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{j=1}^n \|f_j\|_{L^p({\bf R}^d)}^2)^{1/2} \ \ \ \ \ (2)

(actually in practice we often permit small losses such as {n^\varepsilon} on the right-hand side). An estimate such as (2) is weaker than (1) when {p\geq 2} (or equal when {p=2}), as can be seen by starting with the triangle inequality

\displaystyle  \| \sum_{j=1}^n |f_j|^2 \|_{L^{p/2}({\bf R}^d)} \leq \sum_{j=1}^n \| |f_j|^2 \|_{L^{p/2}({\bf R}^d)},

and taking the square root of both sides to conclude that

\displaystyle  \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)} \leq (\sum_{j=1}^n \|f_j\|_{L^p({\bf R}^d)}^2)^{1/2}. \ \ \ \ \ (3)

However, the flip side of this weakness is that (2) can be easier to prove. One key reason for this is the ability to iterate decoupling estimates such as (2), in a way that does not seem to be possible with reverse square function estimates such as (1). For instance, suppose that one has a decoupling inequality such as (2), and furthermore each {f_j} can be split further into components {f_j= \sum_{k=1}^m f_{j,k}} for which one has the decoupling inequalities

\displaystyle  \| \sum_{k=1}^m f_{j,k} \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{k=1}^m \|f_{j,k}\|_{L^p({\bf R}^d)}^2)^{1/2}.

Then by inserting these bounds back into (2) we see that we have the combined decoupling inequality

\displaystyle  \| \sum_{j=1}^n\sum_{k=1}^m f_{j,k} \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{j=1}^n \sum_{k=1}^m \|f_{j,k}\|_{L^p({\bf R}^d)}^2)^{1/2}.
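Spelled out, the combined inequality follows from the two-step chain (using (2) for the first inequality, and inserting the inner decoupling bounds inside the {\ell^2} sum for the second):

```latex
\| \sum_{j=1}^n \sum_{k=1}^m f_{j,k} \|_{L^p({\bf R}^d)}
  = \Big\| \sum_{j=1}^n f_j \Big\|_{L^p({\bf R}^d)}
  \lesssim_{p,d} \Big( \sum_{j=1}^n \| f_j \|_{L^p({\bf R}^d)}^2 \Big)^{1/2}
  = \Big( \sum_{j=1}^n \Big\| \sum_{k=1}^m f_{j,k} \Big\|_{L^p({\bf R}^d)}^2 \Big)^{1/2}
  \lesssim_{p,d} \Big( \sum_{j=1}^n \sum_{k=1}^m \| f_{j,k} \|_{L^p({\bf R}^d)}^2 \Big)^{1/2}.
```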

This iterative feature of decoupling inequalities means that such inequalities work well with the method of induction on scales, which we introduced in the previous set of notes.

In fact, decoupling estimates share many features with restriction theorems; in addition to induction on scales, there are several other techniques that first emerged in the restriction theory literature, such as wave packet decompositions, rescaling, and bilinear or multilinear reductions, that turned out to also be well suited to proving decoupling estimates. As with restriction, the curvature or transversality of the different Fourier supports of the {f_j} will be crucial in obtaining non-trivial estimates.

Strikingly, in many important model cases, the optimal decoupling inequalities (except possibly for epsilon losses in the exponents) are now known. These estimates have in turn had a number of important applications, such as establishing certain discrete analogues of the restriction conjecture, or the first proof of the main conjecture for Vinogradov mean value theorems in analytic number theory.

These notes only serve as a brief introduction to decoupling. A systematic exploration of this topic can be found in this recent text of Demeter.

I was greatly saddened to learn that John Conway died yesterday from COVID-19, aged 82.

My own mathematical areas of expertise are somewhat far from Conway’s; I have played for instance with finite simple groups on occasion, but have not studied his work on moonshine and the monster group. But I have certainly encountered his results every so often in surprising contexts; most recently, when working on the Collatz conjecture, I looked into Conway’s wonderfully preposterous FRACTRAN language, which can encode any Turing machine as an iteration of a Collatz-type map, showing in particular that there are generalisations of the Collatz conjecture that are undecidable in axiomatic frameworks such as ZFC. [EDIT: also, my belief that the Navier-Stokes equations admit solutions that blow up in finite time is also highly influenced by the ability of Conway’s game of life to generate self-replicating “von Neumann machines“.]

I first met John as an incoming graduate student in Princeton in 1992; indeed, a talk he gave, on “Extreme proofs” (proofs that are in some sense “extreme points” in the “convex hull” of all proofs of a given result), may well have been the first research-level talk I ever attended, and one that set a high standard for all the subsequent talks I went to, with Conway’s ability to tease out deep and interesting mathematics from seemingly frivolous questions making a particular impact on me. (Some version of this talk eventually became this paper of Conway and Shipman many years later.)

Conway was fond of hanging out in the Princeton graduate lounge at the time of my studies there, often tinkering with some game or device, and often enlisting any nearby graduate students to assist him with some experiment or other. I have a vague memory of being drafted into holding various lengths of cloth with several other students in order to compute some element of a braid group; on another occasion he challenged me to a board game he recently invented (now known as “Phutball“) with Elwyn Berlekamp and Richard Guy (who, by sad coincidence, both also passed away in the last 12 months). I still remember being repeatedly obliterated in that game, which was a healthy and needed lesson in humility for me (and several of my fellow graduate students) at the time. I also recall Conway spending several weeks trying to construct a strange periscope-type device to try to help him visualize four-dimensional objects by giving his eyes vertical parallax in addition to the usual horizontal parallax, although he later told me that the only thing the device made him experience was a headache.

About ten years ago we ran into each other at some large mathematics conference, and lacking any other plans, we had a pleasant dinner together at the conference hotel. We talked a little bit of math, but mostly the conversation was philosophical. I regrettably do not remember precisely what we discussed, but it was very refreshing and stimulating to have an extremely frank and heartfelt interaction with someone with Conway’s level of insight and intellectual clarity.

Conway was arguably an extreme point in the convex hull of all mathematicians. He will very much be missed.

My student, Jaume de Dios, has set up a web site to collect upcoming mathematics seminars from any institution that are open online. (For instance, it has a talk that I will be giving in an hour.) There is a form for adding further talks to the site; please feel free to contribute (or make other suggestions) in order to make the seminar list more useful.

UPDATE: Here are some other lists of mathematical seminars online:

Perhaps further links of this type could be added in the comments. It would perhaps make sense to somehow unify these lists into a single one that can be updated through crowdsourcing.

EDIT: See also IPAM’s advice page on running virtual seminars.

This set of notes focuses on the restriction problem in Fourier analysis. Introduced by Elias Stein in the 1970s, the restriction problem is a key model problem for understanding more general oscillatory integral operators, and has turned out to be connected to many questions in geometric measure theory, harmonic analysis, combinatorics, number theory, and PDE. Only partial results on the problem are known, but these partial results have already proven to be very useful or influential in many applications.

We work in a Euclidean space {{\bf R}^d}. Recall that {L^p({\bf R}^d)} is the space of {p^{th}}-power integrable functions {f: {\bf R}^d \rightarrow {\bf C}}, quotiented out by almost everywhere equivalence, with the usual modifications when {p=\infty}. If {f \in L^1({\bf R}^d)} then the Fourier transform {\hat f: {\bf R}^d \rightarrow {\bf C}} will be defined in this course by the formula

\displaystyle  \hat f(\xi) := \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx. \ \ \ \ \ (1)

From the dominated convergence theorem we see that {\hat f} is a continuous function; from the Riemann-Lebesgue lemma we see that it goes to zero at infinity. Thus {\hat f} lies in the space {C_0({\bf R}^d)} of continuous functions that go to zero at infinity, which is a subspace of {L^\infty({\bf R}^d)}. Indeed, from the triangle inequality it is obvious that

\displaystyle  \|\hat f\|_{L^\infty({\bf R}^d)} \leq \|f\|_{L^1({\bf R}^d)}. \ \ \ \ \ (2)

If {f \in L^1({\bf R}^d) \cap L^2({\bf R}^d)}, then Plancherel’s theorem tells us that we have the identity

\displaystyle  \|\hat f\|_{L^2({\bf R}^d)} = \|f\|_{L^2({\bf R}^d)}. \ \ \ \ \ (3)

Because of this, there is a unique way to extend the Fourier transform {f \mapsto \hat f} from {L^1({\bf R}^d) \cap L^2({\bf R}^d)} to {L^2({\bf R}^d)}, in such a way that it becomes a unitary map from {L^2({\bf R}^d)} to itself. By abuse of notation we continue to denote this extension of the Fourier transform by {f \mapsto \hat f}. Strictly speaking, this extension is no longer defined in a pointwise sense by the formula (1) (indeed, the integral on the RHS ceases to be absolutely integrable once {f} leaves {L^1({\bf R}^d)}); we will return to the (surprisingly difficult) question of whether pointwise convergence continues to hold (at least in an almost everywhere sense) later in this course, when we discuss Carleson’s theorem. On the other hand, the formula (1) remains valid in the sense of distributions, and in practice most of the identities and inequalities one can show about the Fourier transform of “nice” functions (e.g., functions in {L^1({\bf R}^d) \cap L^2({\bf R}^d)}, or in the Schwartz class {{\mathcal S}({\bf R}^d)}, or test function class {C^\infty_c({\bf R}^d)}) can be extended to functions in “rough” function spaces such as {L^2({\bf R}^d)} by standard limiting arguments.

By (2), (3), and the Riesz-Thorin interpolation theorem, we also obtain the Hausdorff-Young inequality

\displaystyle  \|\hat f\|_{L^{p'}({\bf R}^d)} \leq \|f\|_{L^p({\bf R}^d)} \ \ \ \ \ (4)

for all {1 \leq p \leq 2} and {f \in L^1({\bf R}^d) \cap L^2({\bf R}^d)}, where {2 \leq p' \leq \infty} is the dual exponent to {p}, defined by the usual formula {\frac{1}{p} + \frac{1}{p'} = 1}. (One can improve this inequality by a constant factor, with the optimal constant worked out by Beckner, but the focus in these notes will not be on optimal constants.) As a consequence, the Fourier transform can also be uniquely extended as a continuous linear map from {L^p({\bf R}^d) \rightarrow L^{p'}({\bf R}^d)}. (The situation with {p>2} is much worse; see below the fold.)
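Since the Gaussian {e^{-\pi x^2}} is its own Fourier transform in dimension {d=1}, both sides of (4) can be computed for it by quadrature, giving a quick consistency check of the inequality (a numerical aside assuming scipy; for this {f} one has {\|f\|_{L^p} = p^{-1/(2p)}}).

```python
import numpy as np
from scipy.integrate import quad

# f(x) = exp(-pi x^2) equals its own Fourier transform in d = 1, so
# Hausdorff-Young (4) reduces to ||f||_{p'} <= ||f||_p for 1 <= p <= 2.
f = lambda x: np.exp(-np.pi * x**2)

def lp_norm(p):
    return quad(lambda x: f(x)**p, -np.inf, np.inf)[0] ** (1.0 / p)

checks = []
for p in (1.0, 4.0 / 3.0, 1.5, 2.0):
    p_dual = None if p == 1.0 else p / (p - 1.0)
    lhs = 1.0 if p == 1.0 else lp_norm(p_dual)   # ||f||_infty = f(0) = 1 when p = 1
    checks.append(lhs <= lp_norm(p) + 1e-12)     # equality occurs at p = 2
```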

The restriction problem asks, for a given exponent {1 \leq p \leq 2} and a subset {S} of {{\bf R}^d}, whether it is possible to meaningfully restrict the Fourier transform {\hat f} of a function {f \in L^p({\bf R}^d)} to the set {S}. If the set {S} has positive Lebesgue measure, then the answer is yes, since {\hat f} lies in {L^{p'}({\bf R}^d)} and therefore has a meaningful restriction to {S} even though functions in {L^{p'}} are only defined up to sets of measure zero. But what if {S} has measure zero? If {p=1}, then {\hat f \in C_0({\bf R}^d)} is continuous and therefore can be meaningfully restricted to any set {S}. At the other extreme, if {p=2} and {f} is an arbitrary function in {L^2({\bf R}^d)}, then by Plancherel’s theorem, {\hat f} is also an arbitrary function in {L^2({\bf R}^d)}, and thus has no well-defined restriction to any set {S} of measure zero.
It was observed by Stein (as reported in the Ph.D. thesis of Charlie Fefferman) that for certain measure zero subsets {S} of {{\bf R}^d}, such as the sphere {S^{d-1} := \{ \xi \in {\bf R}^d: |\xi| = 1\}}, one can obtain meaningful restrictions of the Fourier transforms of functions {f \in L^p({\bf R}^d)} for certain {p} between {1} and {2}, thus demonstrating that the Fourier transform of such functions retains more structure than a typical element of {L^{p'}({\bf R}^d)}:

Theorem 1 (Preliminary {L^2} restriction theorem) If {d \geq 2} and {1 \leq p < \frac{4d}{3d+1}}, then one has the estimate

\displaystyle  \| \hat f \|_{L^2(S^{d-1}, d\sigma)} \lesssim_{d,p} \|f\|_{L^p({\bf R}^d)}

for all Schwartz functions {f \in {\mathcal S}({\bf R}^d)}, where {d\sigma} denotes surface measure on the sphere {S^{d-1}}. In particular, the restriction {\hat f|_S} can be meaningfully defined by continuous linear extension to an element of {L^2(S^{d-1},d\sigma)}.

Proof: Fix {d,p,f}. We expand out

\displaystyle  \| \hat f \|_{L^2(S^{d-1}, d\sigma)}^2 = \int_{S^{d-1}} |\hat f(\xi)|^2\ d\sigma(\xi).

From (1) and Fubini’s theorem, the right-hand side may be expanded as

\displaystyle  \int_{{\bf R}^d} \int_{{\bf R}^d} f(x) \overline{f}(y) (d\sigma)^\vee(y-x)\ dx dy

where the inverse Fourier transform {(d\sigma)^\vee} of the measure {d\sigma} is defined by the formula

\displaystyle  (d\sigma)^\vee(x) := \int_{S^{d-1}} e^{2\pi i x \cdot \xi}\ d\sigma(\xi).

In other words, we have the identity

\displaystyle  \| \hat f \|_{L^2(S^{d-1}, d\sigma)}^2 = \langle f, f * (d\sigma)^\vee \rangle_{L^2({\bf R}^d)}, \ \ \ \ \ (5)

using the Hermitian inner product {\langle f, g\rangle_{L^2({\bf R}^d)} := \int_{{\bf R}^d} \overline{f(x)} g(x)\ dx}. Since the sphere {S^{d-1}} has finite measure, we have from the triangle inequality that

\displaystyle  (d\sigma)^\vee(x) \lesssim_d 1. \ \ \ \ \ (6)

Also, from the method of stationary phase (as covered in the previous class 247A), or Bessel function asymptotics, we have the decay

\displaystyle  (d\sigma)^\vee(x) \lesssim_d |x|^{-(d-1)/2} \ \ \ \ \ (7)

for any {x \in {\bf R}^d} (note that the bound already follows from (6) unless {|x| \geq 1}). We remark that the exponent {-\frac{d-1}{2}} here can be seen geometrically from the following considerations. For {|x|>1}, the phase {e^{2\pi i x \cdot \xi}} on the sphere is stationary at the two antipodal points {x/|x|, -x/|x|} of the sphere, and constant on the tangent hyperplanes to the sphere at these points. The wavelength of this phase is proportional to {1/|x|}, so the phase would be approximately constant on a cap formed by intersecting the sphere with a {\sim 1/|x|} neighbourhood of the tangent hyperplane to one of the stationary points. As the sphere deviates from its tangent hyperplane only to second order at these points, this cap will have diameter {\sim 1/|x|^{1/2}} in the directions of the {d-1}-dimensional tangent space, so the cap will have surface measure {\sim |x|^{-(d-1)/2}}, which leads to the prediction (7). We combine (6), (7) into the unified estimate

\displaystyle  (d\sigma)^\vee(x) \lesssim_d \langle x\rangle^{-(d-1)/2}, \ \ \ \ \ (8)

where the “Japanese bracket” {\langle x\rangle} is defined as {\langle x \rangle := (1+|x|^2)^{1/2}}. Since {\langle x \rangle^{-\alpha}} lies in {L^p({\bf R}^d)} precisely when {p > \frac{d}{\alpha}}, we conclude that

\displaystyle  (d\sigma)^\vee \in L^q({\bf R}^d) \hbox{ whenever } q > \frac{2d}{d-1}.

Applying Young’s convolution inequality, we conclude (after some arithmetic) that

\displaystyle  \| f * (d\sigma)^\vee \|_{L^{p'}({\bf R}^d)} \lesssim_{p,d} \|f\|_{L^p({\bf R}^d)}

whenever {1 \leq p < \frac{4d}{3d+1}}, and the claim now follows from (5) and Hölder’s inequality. \Box
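The “some arithmetic” at the end of the proof can be double-checked with exact rational arithmetic. The sketch below (with the hypothetical helper name `young_condition_holds`, and the `fractions` module from the Python standard library) verifies that the exponent condition forced by Young’s inequality, namely {\frac{1}{q} = 2 - \frac{2}{p}} together with {q > \frac{2d}{d-1}}, holds precisely when {p < \frac{4d}{3d+1}}:

```python
from fractions import Fraction

def young_condition_holds(p, d):
    # Young's inequality ||f * g||_{p'} <= ||f||_p ||g||_q forces
    # 1 + 1/p' = 1/p + 1/q, i.e. 1/q = 2 - 2/p (using 1/p' = 1 - 1/p);
    # we then need q > 2d/(d-1), i.e. 1/q < (d-1)/(2d).
    inv_q = 2 - Fraction(2) / p
    return inv_q < Fraction(d - 1, 2 * d)

# The condition should hold exactly when p < 4d/(3d+1), for 1 <= p <= 2.
for d in range(2, 8):
    threshold = Fraction(4 * d, 3 * d + 1)
    for num in range(1, 121):
        for den in range(1, 61):
            p = Fraction(num, den)
            if 1 <= p <= 2:
                assert young_condition_holds(p, d) == (p < threshold)
```

For instance, in dimension {d=2} the threshold is {8/7}: the condition passes at {p = 9/8} and fails at the endpoint {p = 8/7}, matching the strict inequality in Theorem 1.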

Remark 2 By using the Hardy-Littlewood-Sobolev inequality in place of Young’s convolution inequality, one can also establish this result for {p = \frac{4d}{3d+1}}.
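One can also sanity-check the decay (7) numerically in the case {d=2}, where by rotation invariance {(d\sigma)^\vee(x)} depends only on {r = |x|} and reduces to an integral over the angular variable. The following rough sketch (assuming numpy; `dsigma_vee` is a name invented for this illustration) approximates that integral by a Riemann sum:

```python
import numpy as np

def dsigma_vee(r, M=20000):
    # Riemann sum for (d sigma)^vee on the unit circle S^1 in R^2:
    #   (d sigma)^vee(x) = int_0^{2 pi} exp(2 pi i r cos(theta)) d theta,
    # where r = |x|; by rotation invariance the integral depends only
    # on |x|.
    theta = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
    return 2.0 * np.pi * np.mean(np.exp(2j * np.pi * r * np.cos(theta)))

# The stationary phase bound (7) predicts decay ~ r^{-1/2} when d = 2,
# so r^{1/2} |(d sigma)^vee| should stay bounded as r grows.
vals = [abs(dsigma_vee(r)) * np.sqrt(r) for r in [5.0, 10.0, 20.0, 40.0, 80.0]]
assert max(vals) < 5.0  # consistent with the exponent -(d-1)/2 = -1/2
```

(For {d=2} one has {(d\sigma)^\vee(x) = 2\pi J_0(2\pi |x|)}, so the boundedness of the rescaled values is also consistent with the classical Bessel function asymptotics mentioned above.)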

Motivated by this result, given any Radon measure {\mu} on {{\bf R}^d} and any exponents {1 \leq p,q \leq \infty}, we use {R_\mu(p \rightarrow q)} to denote the claim that the restriction estimate

\displaystyle  \| \hat f \|_{L^q({\bf R}^d, \mu)} \lesssim_{d,p,q,\mu} \|f\|_{L^p({\bf R}^d)} \ \ \ \ \ (9)

holds for all Schwartz functions {f}; if {S} is a {k}-dimensional submanifold of {{\bf R}^d} (possibly with boundary), we write {R_S(p \rightarrow q)} for {R_\mu(p \rightarrow q)} where {\mu} is the {k}-dimensional surface measure on {S}. Thus, for instance, we trivially always have {R_S(1 \rightarrow \infty)}, while Theorem 1 asserts that {R_{S^{d-1}}(p \rightarrow 2)} holds whenever {1 \leq p < \frac{4d}{3d+1}}. We will not give a comprehensive survey of restriction theory in these notes, but instead focus on some model results that showcase some of the basic techniques in the field. (I have a more detailed survey on this topic from 2003, but it is somewhat out of date.)

After some discussion with the applied math research groups here at UCLA (in particular the groups led by Andrea Bertozzi and Deanna Needell), one of the members of these groups, Chris Strohmeier, has produced a proposal for a Polymath project to crowdsource in a single repository (a) a collection of public data sets relating to the COVID-19 pandemic, (b) requests for such data sets, (c) requests for data cleaning of such sets, and (d) submissions of cleaned data sets. (The proposal can be viewed as a PDF, and is also available on Overleaf). As mentioned in the proposal, this database would be slightly different in focus than existing data sets such as the COVID-19 data sets hosted on Kaggle, with a focus on producing high quality cleaned data sets. (Another relevant data set that I am aware of is the SafeGraph aggregated foot traffic data, although this data set, while open, is not quite public, as it requires executing a non-commercial agreement. Feel free to mention further relevant data sets in the comments.)

This seems like a very interesting and timely proposal to me, and I would like to open it up for discussion, for instance by proposing some seed requests for data and data cleaning, and by discussing possible platforms that such a repository could be built on. In the spirit of “building the plane while flying it”, one could begin by creating a basic GitHub repository as a prototype and use the comments in this blog post to handle requests, and then migrate to a higher quality platform once it becomes clear what direction this project might move in. (For instance one might eventually move beyond data cleaning to more sophisticated types of data analysis.)

UPDATE, Mar 25: a prototype page for such a clearinghouse is now up at this wiki page.

UPDATE, Mar 27: the data cleaning aspect of this project largely duplicates the existing efforts at the United against COVID-19 project, so we are redirecting requests of this type to that project (and specifically to their data discourse page). The polymath proposal will now refocus on crowdsourcing a list of public data sets relating to the COVID-19 pandemic.

 

At the most recent MSRI board of trustees meeting on Mar 7 (conducted online, naturally), Nicholas Jewell (a Professor of Biostatistics and Statistics at Berkeley, also affiliated with the Berkeley School of Public Health and the London School of Hygiene and Tropical Medicine), gave a presentation on the current coronavirus epidemic entitled “2019-2020 Novel Coronavirus outbreak: mathematics of epidemics, and what it can and cannot tell us”. The presentation (updated with Mar 18 data), hosted by David Eisenbud (the director of MSRI), together with a question and answer session, is now on YouTube:

(I am on this board, but could not make it to this particular meeting; I caught up on the presentation later, and thought it would be of interest to several readers of this blog.) While there is some mathematics in the presentation, it is relatively non-technical.

Just a short post to note that this year’s Abel prize has been awarded jointly to Hillel Furstenberg and Grigory Margulis “for pioneering the use of methods from probability and dynamics in group theory, number theory and combinatorics”. I was not involved in the decision making process of the Abel committee this year, but I certainly feel that the contributions of both mathematicians are worthy of the prize. Certainly both mathematicians have influenced my own work (for instance, Furstenberg’s proof of Szemerédi’s theorem ended up being a key influence in my result with Ben Green that the primes contain arbitrarily long arithmetic progressions); see for instance these blog posts mentioning Furstenberg, and these blog posts mentioning Margulis.

Next quarter, starting March 30, I will be teaching “Math 247B: Classical Fourier Analysis” here at UCLA. (The course should more accurately be named “Modern real-variable harmonic analysis”, but we have not gotten around to implementing such a name change.) This class (a continuation of Math 247A from previous quarter, taught by my colleague, Monica Visan) will cover the following topics:

  • Restriction theory and Strichartz estimates
  • Decoupling estimates and applications
  • Paraproducts; time-frequency analysis; Carleson's theorem

As usual, lecture notes will be made available on this blog.

Unlike previous courses, this one will be given online as part of UCLA’s social distancing efforts. In particular, the course will be open to anyone with an internet connection (no UCLA affiliation is required), though non-UCLA participants will not have full access to all aspects of the course, and there is the possibility that some restrictions on participation may be imposed if there are significant disruptions to class activity. For more information, see the course description. UPDATE: due to time limitations, I will not be able to respond to personal email inquiries about this class from non-UCLA participants in the course. Please use the comment thread to this blog post for such inquiries. I will also update the course description throughout the course to reflect the latest information about the course, both for UCLA students enrolled in the course and for non-UCLA participants.
