Exercise 2.22: Concentration for Spin Glasses
(a)
Since \(F_d\) is twice differentiable, it is sufficient to show that its Hessian is everywhere positive semidefinite. As the argument of each exponent in \(F_d\) is linear in \(\theta\), and convexity is preserved under composition with affine maps, it suffices to show that \begin{align} y \mapsto \log \sum_{i=1}^n e^{y_i} \, , \end{align} with \(n = 2^d\), is convex. The Hessian of this map is \begin{align} H_y = \frac{1}{\sum_i e^{y_i}} \diag (e^y) - \frac{1}{\left(\sum_i e^{y_i}\right)^2} e^y (e^y)^\top \, , \end{align} where \(e^{y} \in \R^n\) is applied elementwise. Then for any \(v \in \R^n\) \begin{align} v^\top H_y v = \frac{ \left( \sum_i v_i^2 e^{y_i} \right) \left( \sum_i e^{y_i} \right) - \left( \sum_i v_i e^{y_i} \right)^2 }{ \left( \sum_i e^{y_i} \right)^2 } \, . \end{align} The numerator is nonnegative by the Cauchy-Schwarz inequality \begin{align} \biggl( \sum_i v_i \sqrt{e^{y_i}} \sqrt{e^{y_i}} \biggr)^2 \leq \biggl( \sum_i v_i^2 e^{y_i} \biggr) \biggl( \sum_i e^{y_i} \biggr) \, , \end{align} which concludes the proof.
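As a quick numerical sanity check (not part of the exercise), the sketch below, assuming NumPy is available, evaluates the Hessian \(H_y\) at random points and confirms that its smallest eigenvalue is nonnegative up to floating-point error; the helper name `logsumexp_hessian` is ours.

```python
import numpy as np

def logsumexp_hessian(y):
    """Hessian of y -> log(sum(exp(y))), matching the formula above."""
    w = np.exp(y - y.max())        # shift by max(y) for numerical stability
    p = w / w.sum()                # softmax weights e^{y_i} / sum_i e^{y_i}
    return np.diag(p) - np.outer(p, p)

rng = np.random.default_rng(0)
for _ in range(100):
    y = rng.normal(scale=3.0, size=8)
    eigvals = np.linalg.eigvalsh(logsumexp_hessian(y))
    assert eigvals.min() > -1e-12  # PSD up to floating-point error
```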
(a) Alternative Solution
We can also verify convexity directly using Hölder’s inequality. Let \(\lambda \in (0, 1)\) and \(x, y \in \R^n\). Then \begin{equation} \log \sum_{i=1}^n e^{\lambda x_i + (1 - \lambda) y_i} = \log \sum_{i=1}^n (e^{x_i})^\lambda (e^{y_i})^{1 - \lambda}. \end{equation} Note that \(p = 1/\lambda > 1\) and \(q = 1/(1 - \lambda) > 1\) are conjugate exponents. Hence, using Hölder’s inequality, \begin{equation} \log \sum_{i=1}^n e^{\lambda x_i + (1 - \lambda) y_i} \le \log\sbrac{\parens{\sum_{i=1}^n e^{x_i}}^\lambda \parens{\sum_{i=1}^n e^{y_i}}^{1 - \lambda}} = \lambda \log \sum_{i=1}^n e^{x_i} + (1 - \lambda) \log \sum_{i=1}^n e^{y_i}. \end{equation}
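The same conclusion can be spot-checked numerically. A minimal sketch (again assuming NumPy; the convexity weight is written as `lam`) verifies the displayed inequality at random points:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    lam = rng.uniform(0.05, 0.95)
    x, y = rng.normal(size=(2, 16))
    lhs = np.log(np.sum(np.exp(lam * x + (1 - lam) * y)))
    rhs = lam * np.log(np.sum(np.exp(x))) + (1 - lam) * np.log(np.sum(np.exp(y)))
    assert lhs <= rhs + 1e-12      # convexity, i.e. the Hölder bound above
```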
(b)
By Taylor’s theorem, \(| F_d(\theta) - F_d(\theta') | \leq \| \nabla F_d (\xi) \|_2 \| \theta - \theta' \|_2\) for some \(\xi \in \R^{ m }\) on the segment between \(\theta\) and \(\theta'\), where \(m = {d \choose 2}\). Denoting \begin{align} f_{\theta} (x) = \frac{1}{\sqrt{d}} \sum_{j < k} \theta_{jk} x_j x_k \, , \end{align} and \(Z_\theta = e^{F_d(\theta)}\), we can bound \begin{align} \| \nabla F_d (\xi) \|_2^2 &= \frac{1}{Z_\xi^2} \sum_{j < k} \biggl( \sum_{x \in \lbrace -1 , +1 \rbrace^d} e^{f_\xi (x)} \nabla_{\xi_{jk}} f_\xi (x) \biggr)^2 \newline &\overset{\text{(i)}}{\leq} \frac{1}{Z_\xi^2} \sum_{j < k} \frac{1}{d} \biggl( \sum_{x \in \lbrace -1 , +1 \rbrace^d} e^{f_\xi (x)} \biggr)^2 \newline &\overset{\text{(ii)}}{=} \frac{1}{d} {d \choose 2} = \frac{d - 1}{2} \leq d \, , \end{align} where (i) follows by combining the triangle inequality, the monotonicity of \(z \mapsto z^2\) on \([0, \infty)\), \(\nabla_{\xi_{jk}} f_\xi (x) = d^{-1/2} x_j x_k\), and \(|x_j x_k | = 1\) for all \(x \in \lbrace -1 , +1 \rbrace^d\), and (ii) is by the definition of \(Z_\xi\). Hence \(F_d\) is \(\sqrt{d}\)-Lipschitz with respect to the Euclidean norm.
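For small \(d\) the Lipschitz bound can be checked by brute force. Below is a minimal sketch assuming NumPy; the helper `F_d` is written directly from the definition, and the choice \(d = 8\) is an arbitrary illustrative one.

```python
import itertools
import numpy as np

d = 8
m = d * (d - 1) // 2

# enumerate all x in {-1,+1}^d and precompute the pair products x_j * x_k, j < k
X = np.array(list(itertools.product([-1, 1], repeat=d)))                 # (2^d, d)
pairs = list(itertools.combinations(range(d), 2))
prods = np.stack([X[:, j] * X[:, k] for j, k in pairs], axis=1)          # (2^d, m)

def F_d(theta):
    """F_d(theta) = log sum_x exp( d^{-1/2} sum_{j<k} theta_{jk} x_j x_k )."""
    vals = prods @ theta / np.sqrt(d)
    v_max = vals.max()
    return v_max + np.log(np.exp(vals - v_max).sum())

rng = np.random.default_rng(2)
for _ in range(50):
    theta, theta_p = rng.normal(size=(2, m))
    lhs = abs(F_d(theta) - F_d(theta_p))
    rhs = np.sqrt(d) * np.linalg.norm(theta - theta_p)
    assert lhs <= rhs + 1e-9       # the sqrt(d)-Lipschitz bound from (b)
```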
(c)
We start by bounding \(\E [F_d (\theta)]\) \begin{align} \E [F_d (\theta)] &\overset{\text{(i)}}{\leq} \log \sum_{x \in \lbrace -1, +1 \rbrace^d} \E [ e^{f_\theta (x)} ] \newline &\overset{\text{(ii)}}{=} \log \biggl[ 2^d \exp \biggl\lbrace \frac{\beta^2}{2d} {d \choose 2} \biggr\rbrace \biggr] \newline &= d \log 2 + \frac{\beta^2 (d - 1)}{4} \leq d \log 2 + d \frac{\beta^2}{4} \, , \end{align} where (i) is by Jensen’s inequality, and (ii) is by the Gaussian MGF: for each fixed \(x \in \lbrace -1 , +1 \rbrace^d\), \(f_\theta (x)\) is a zero-mean Gaussian with variance \(\frac{\beta^2}{d} {d \choose 2}\) since \(x_j^2 x_k^2 = 1\).
Observing that \(\theta_{jk} \overset{\text{iid}}{\sim} \gauss (0 , \beta^2 )\) and \(\beta \e_{jk}\) are equal in distribution for \(\e_{jk} \overset{\text{iid}}{\sim} \gauss (0, 1)\), we can view \(F_d\) as a \((\beta\sqrt{d})\)-Lipschitz function of \(\e\), and combine Theorem 2.26 with the above bound for \(\E [F_d (\theta)]\) to obtain \begin{align} \P \left (F_d (\theta) \geq d \log 2 + d \frac{\beta^2}{4} + d t \right) &\leq \P \left(F_d (\theta) \geq \E [F_d (\theta)] + dt \right) \newline &\leq 2 \exp \left\lbrace - \frac{d t^2}{2 \beta^2} \right\rbrace \, . \end{align}
The above bound is valid but does not match the required dependence on \(\beta\)!
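The bounds from this part (and the Notes below) can also be eyeballed by Monte Carlo for a small instance. In the sketch below (assuming NumPy), \(d = 8\), \(\beta = 0.5\), \(t = 0.1\), and the sample size are arbitrary illustrative choices, and `F_d` repeats the brute-force helper from the sketch in (b).

```python
import itertools
import numpy as np

d, beta, t = 8, 0.5, 0.1
m = d * (d - 1) // 2

# precompute the pair products x_j * x_k over all 2^d configurations
X = np.array(list(itertools.product([-1, 1], repeat=d)))                 # (2^d, d)
pairs = list(itertools.combinations(range(d), 2))
prods = np.stack([X[:, j] * X[:, k] for j, k in pairs], axis=1)          # (2^d, m)

def F_d(theta):
    """Brute-force evaluation of F_d(theta), as in the sketch for (b)."""
    vals = prods @ theta / np.sqrt(d)
    v_max = vals.max()
    return v_max + np.log(np.exp(vals - v_max).sum())

rng = np.random.default_rng(3)
samples = np.array([F_d(beta * rng.normal(size=m)) for _ in range(2000)])

# d*log(2) <= E[F_d(theta)] <= d*log(2) + d*beta^2/4  (lower bound: see the Notes)
print(d * np.log(2), samples.mean(), d * np.log(2) + d * beta ** 2 / 4)

# empirical upper tail vs. the (here rather loose) bound 2*exp(-d*t^2/(2*beta^2))
emp_tail = np.mean(samples >= d * np.log(2) + d * beta ** 2 / 4 + d * t)
print(emp_tail, 2 * np.exp(-d * t ** 2 / (2 * beta ** 2)))
```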
Notes
- When viewed as a function of \(y\) as in (a), \(F_d\) is also known as the LogSumExp function.
- The convexity of \(F_d\) can be combined with Jensen’s inequality to derive the lower bound \(\E [F_d (\theta)] \geq F_d(\E [\theta]) = F_d (0) = d \log 2\). Together with the upper bound from (c), this implies \begin{align} d \log 2 \leq \E [F_d (\theta)] \leq d \log 2 + d \frac{\beta^2}{4} \, , \end{align} showing that the upper bound becomes tight for small \(\beta\).