Posting a list of questions and practice-final solution sketches here. Will add more as time goes on.
Problems from https://ywfan-math.github.io/104s21_final_practice.pdf.
We only need to consider the interval $[0,1]$: $\cos(x)$ is positive on $[-1,0]$ (so $\cos(x) - x > 0$ there), and everything outside $[-1,1]$ is outside the range of $\cos(x)$. We consider $f(x) = \cos(x) - x$ on $[0,1]$. $f$ is strictly decreasing on this interval, since $f'(x) = -\sin(x) - 1$ is always negative. Then use the IVT on $f(0) = 1 > 0$ and $f(1) = \cos(1) - 1 < 0$ to argue that $f$ has a zero, and strict monotonicity to argue that it is the only one.
The sequence is clearly bounded, since $\cos(x)$ is bounded. Then show that $x_{n+1} \geq x_n$ from the results shown above. Since the sequence is bounded and monotone, it must be convergent.
To show divergence of the series, note that $\lim x_n > 0$ from the result in part (b). A series $\sum x_n$ can only converge if $\lim x_n = 0$, so the series must be divergent.
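As a numeric sanity check (not part of the proof), here is a Python sketch, assuming the recursion is $x_{n+1} = \cos(x_n)$ as in the problem: the iterates settle at the unique fixed point of $\cos$ found in part (a).

```python
import math

def iterate_cos(x0, n):
    """Iterate x_{k+1} = cos(x_k), n times, starting from x0."""
    x = x0
    for _ in range(n):
        x = math.cos(x)
    return x

# Starting anywhere in [0, 1], the iterates approach the unique fixed point
# of cos (the Dottie number, roughly 0.739085).
p = iterate_cos(0.0, 200)
residual = abs(math.cos(p) - p)  # should be ~0 at the fixed point
```

Since $|\cos'| < 1$ near the fixed point, the iteration is a contraction there and 200 steps reach machine precision.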
A step function consists of a finite number of constant pieces, so it is bounded, say by $M$. So, if we replace $f$ with a step function, the absolute value of the integral is at most $\left|\lim_{n \to \infty} \int_a^b M\sin(nx)\,dx\right| = \lim_{n \to \infty} \left|\frac{M}{n}(\cos(na) - \cos(nb))\right| = 0$. This proves the first part of the hint.
The second part of the hint follows from the definition of Riemann (Darboux) integration. By definition, we know there is some partition $P$ such that $\int_a^b f(x)\,dx - L(f, P) < \epsilon$. We define our step function $S$ off of $L(f, P)$, taking the constant value $\inf f$ on each piece of the partition. We note that $S(x) \leq f(x)$, since we defined it off the lower sums.
For the final part: $$0 \leq \int_a^b (f(x) - S(x))\,dx < \epsilon$$ Since $f - S$ is nonnegative and $|\sin(nx)| \leq 1$: $$\left|\int_a^b (f(x) - S(x))\sin(nx)\,dx\right| < \epsilon$$ Take the limit $n \to \infty$: $$\lim_{n \to \infty} \left|\int_a^b (f(x) - S(x))\sin(nx)\,dx\right| \leq \epsilon$$ Using the result from the first hint ($\int_a^b S(x)\sin(nx)\,dx \to 0$): $$\lim_{n \to \infty} \left|\int_a^b f(x)\sin(nx)\,dx\right| \leq \epsilon$$ This holds for all $\epsilon > 0$, so from this we get our desired result.
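The limit being proved, $\int_a^b f(x)\sin(nx)\,dx \to 0$, can be sanity-checked numerically. A sketch with a midpoint Riemann sum; the test function $f(x)=x^2$ is my arbitrary choice, not from the problem:

```python
import math

def integral_f_sin(f, a, b, n, steps=100000):
    """Midpoint-rule approximation of the integral of f(x) * sin(n x) over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) * math.sin(n * (a + (i + 0.5) * h)) * h
               for i in range(steps))

def f(x):
    return x * x  # an arbitrary continuous test function

small_n = abs(integral_f_sin(f, 0.0, 1.0, 1))    # noticeably nonzero
large_n = abs(integral_f_sin(f, 0.0, 1.0, 200))  # already close to 0
```

As $n$ grows, the oscillations of $\sin(nx)$ cancel against the slowly varying $f$, exactly as the step-function argument predicts.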
Compute the determinant, and then conclude that the resulting function is continuous/differentiable due to multiplication/addition properties associated with continuity/differentiability.
Show that $F(a) = F(b) = 0$, then use Rolle's Theorem or MVT to show the desired result.
Use $h(x) = 1$ and part (b) to get the result.
We show that Cauchy condition on $B(X)$ implies that the sequence converges (uniformly) to a function $f(x)$, which can be found through pointwise convergence. We can then use the uniform convergence to show that $f(x)$ is bounded (and obviously, defined on every point in $X$), so it is an element of $B(X)$. So, every Cauchy sequence converges in $B(X)$, so the metric space is complete.
Consider the complement of the set: the discontinuous functions. We focus on one discontinuity, meaning that for some $x$, there is some $\epsilon > 0$ such that there is no $\delta > 0$ that satisfies $x' \in (x - \delta, x + \delta) \implies f(x') \in (f(x) - \epsilon, f(x) + \epsilon)$. We then take an open ball (in the sup metric) of radius $\epsilon/3$ around $f$, so for all functions $g$ in the open ball, $g(x') \in (f(x') - \epsilon/3, f(x') + \epsilon/3)$ for every $x'$. Combining this with the discontinuity of $f$, we can show for $\epsilon' = \epsilon/3$ that $g$ is discontinuous at $x$. So, an open ball around a discontinuous function is still completely contained in the set of discontinuous functions. The set of discontinuous functions is open.
Therefore, the set of continuous functions is closed.
Say we have a Cauchy sequence in the closed subset. Since we are in a complete metric space, the Cauchy sequence converges to something inside the metric space. Since a closed set contains all its limit points, it converges to something inside the closed set. Therefore, a closed subset of a complete metric space is complete. See https://math.stackexchange.com/questions/244661/showing-that-if-a-subset-of-a-complete-metric-space-is-closed-it-is-also-comple.
Going off of the hint, we can assume our covering is finite since $[a,b]$ is closed and bounded, therefore compact. We can use the definition of compactness to argue this.
Then, we can take a set of $n$ open intervals and reduce it to $n-1$ intervals by replacing two overlapping intervals with their union, since the union of two overlapping open intervals is an open interval (and since the intervals cover the connected set $[a,b]$, some pair must overlap).
Using an inductive argument, we get 1 open interval that covers the entire space, which gives us a contradiction.
Set $\epsilon = 1$. By def. of uniform continuity, we can choose a $\delta$ that satisfies $|x-y| < \delta \implies |f(x)-f(y)| < \epsilon$.
Now, consider some $x \geq 1$, and let $d = \left \lfloor{\frac{x-1}{\delta/2}+1}\right \rfloor$. Intuitively, $d$ is the minimum number of intervals of length $\delta/2$ that would cover the interval $[1, x]$. Through the above inequality and the triangle inequality, we show that $|f(x) - f(1)| < d \leq \frac{x-1}{\delta/2} + 1$.
Using reverse triangle inequality (https://math.stackexchange.com/questions/214067/triangle-inequality-for-subtraction/214074), $|f(x)| < \frac{x-1}{\delta/2} + 1 + |f(1)|$.
We can rewrite the above form into $|f(x)| < ax + b$, and we can show that there exists some $M$ such that $Mx \geq ax + b$ for $x \geq 1$. So, we have $|f(x)| < Mx \implies |f(x)|/x < M$, as desired.
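As an illustration (the function $\sqrt{x}$ is my example, not from the problem): $\sqrt{x}$ is uniformly continuous on $[1,\infty)$, and the ratio $|f(x)|/x$ indeed stays bounded, as the result promises.

```python
import math

# f(x) = sqrt(x) is uniformly continuous on [1, inf).  The result says
# |f(x)| / x is bounded there; indeed sqrt(x)/x = 1/sqrt(x) <= 1 for x >= 1.
max_ratio = max(math.sqrt(x) / x for x in range(1, 10001))
```

The maximum of the ratio is attained at $x = 1$, where it equals $1$.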
Let the term inside the series be $s_n$. Note that $|s_n| \leq e^{-nx}$.
For $x \in (0, \infty)$, use the comparison test and sum of geometric series to show that $x \in E$.
For $x \in (-\infty, 0]$, show that $s_n \not\to 0$ (e.g. at $x = 0$ every term equals $1$), so the series does not converge, and $x \not\in E$.
So, $E = (0, \infty)$.
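As a numeric sanity check of $E = (0, \infty)$, using the series $\sum_{k \geq 1} e^{-kx}\cos(kx)$ from the problem:

```python
import math

def partial_sum(x, n):
    """Partial sum of sum_{k=1}^{n} e^{-kx} cos(kx)."""
    return sum(math.exp(-k * x) * math.cos(k * x) for k in range(1, n + 1))

# For x > 0 the tail is dominated by a geometric series (ratio e^{-x}),
# so successive partial sums settle down quickly.
gap = abs(partial_sum(1.0, 200) - partial_sum(1.0, 100))

# At x = 0 every term is e^0 * cos(0) = 1, so the terms do not tend to 0
# and the series cannot converge there.
term_at_zero = math.exp(-50 * 0.0) * math.cos(50 * 0.0)
```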
It does not converge uniformly. Let $f_n(x) = \sum_{k=1}^n e^{-kx}\cos(kx)$. $$\lim_{n \to \infty} \sup_x |f_n(x) - f(x)| = \lim_{n \to \infty} \sup_x \left|\sum_{k=n+1}^\infty e^{-kx}\cos(kx)\right|$$ For each fixed $n$, the term inside the sup blows up as $x \to 0^+$: the full sum $f(x)$ diverges to $\infty$ as $x \to 0^+$, while $f_n(x) \to n$, so the sup is infinite for every $n$ and the convergence cannot be uniform.
Rudin 4.29. The intuition (for a right-hand limit, with $f$ monotonically increasing) is that we show the right-hand limit at $x$ is $\inf_{t \in (x, x+\delta)} f(t)$. We build up a sequence of $x_n \in (x, x+\delta)$ that converges to $x$ from the right so that $f(x_n)$ converges to $\inf_{t \in (x, x+\delta)} f(t)$.
Rudin 4.30. The intuition is that, for a monotonic function, we can sandwich a unique rational number between $f(x-)$ and $f(x+)$ at every discontinuity, creating an injection from the discontinuities into the rational numbers; therefore, the set of discontinuities is countable.
It is easy to show that for a fixed $x$, the function is continuous at any $y \neq 0$ by division of continuous functions. Then, we just show that for any fixed $x$, $\lim_{y \to 0} f(x, y) = 0$, so our function is continuous. Do this by using L'Hopital's for $x=0$ or evaluating the limit directly.
Take the limit along the line $x = y$.
We can show through an inductive argument that $a_n = \sqrt{1 + \sum_{k=1}^{n-1} 1/2^k}$. From this, it is clear that when we take the limit as $n \to \infty$, $a_n \to \sqrt{2}$.
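A quick numeric check of the closed form above:

```python
import math

def a(n):
    """Closed form from the inductive argument: a_n = sqrt(1 + sum_{k=1}^{n-1} 1/2^k)."""
    return math.sqrt(1.0 + sum(2.0 ** -k for k in range(1, n)))

# The geometric tail is ~2^{-59} here, so a(60) is sqrt(2) to machine precision.
approx = a(60)
```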
Say $P(x)$ has distinct roots $r_1 < r_2 < \cdots < r_j$, each $r_i$ with multiplicity $m_i$. We must have $\sum m_i = n$.
We can factor $P(x) = \prod_i (x-r_i)^{m_i}$. By the product rule, for any $i$ with $m_i > 1$, $P'(x)$ has the root $r_i$ with multiplicity $m_i - 1$. So far, we have found $\sum_i (m_i - 1) = n - j$ roots of $P'$; since $\deg P' = n - 1$, this leaves us with $j-1$ roots left to find.
Now we consider the intervals $(r_i, r_{i+1})$ for $i = 1, \ldots, j-1$. Since $P(r_i) = P(r_{i+1}) = 0$, by Rolle's Theorem (after checking the hypotheses), there is at least one point $r'$ in $(r_i, r_{i+1})$ that satisfies $P'(r') = 0$. Since we have $j-1$ such intervals, and at most $j-1$ roots left to find, there is exactly one root in each of these intervals.
We have found all the roots of $P'$, and they are all real numbers, so we're done.
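A concrete instance (the polynomial is my illustrative choice): $P(x) = (x-1)(x-2)(x-3)$ has all real, simple roots, and both roots of $P'$ land in the gaps between consecutive roots of $P$, as the Rolle argument predicts.

```python
def P_prime(x):
    # Derivative of P(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6.
    return 3 * x * x - 12 * x + 11

# Rolle's theorem predicts a root of P' in (1, 2) and another in (2, 3);
# a sign change of P' on each interval confirms a real root lies inside.
sign_change_12 = P_prime(1.0) * P_prime(2.0) < 0
sign_change_23 = P_prime(2.0) * P_prime(3.0) < 0
```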
We want to show $$\lim_{n \to \infty} \sup_x |\sum_{k=n+1}^\infty (-1)^kf_k(x)| = 0$$
The conditions for the Alternating Series Test are met, so the remainder satisfies $|\sum_{k=n+1}^\infty (-1)^kf_k(x)| \leq f_{n+1}(x)$ for all $x$. So, we can bound the limit: $$\lim_{n \to \infty} \sup_x \left|\sum_{k=n+1}^\infty (-1)^kf_k(x)\right| \leq \lim_{n \to \infty} \sup_x |f_{n+1}(x)| = 0$$ $$\implies \lim_{n \to \infty} \sup_x \left|\sum_{k=n+1}^\infty (-1)^kf_k(x)\right| = 0$$ This is what we wanted, so the series of functions converges uniformly.
We set $\epsilon > 0$. By continuity, there is a $\delta > 0$ such that $|x-y| < \delta \implies |f(x)-f(y)| < \epsilon$. We set $n$ large enough such that $1/n < \delta$.

Let $g(y) = f(\psi(y))$, with $\psi$ strictly increasing. If we take $\alpha = x$ and $\beta = \psi$, we get $\int_{A}^{B} g \, d\psi = \int_{a}^{b} f \, dx$. After applying Rudin 6.17, we have $\int_{A}^{B} g(y) \psi'(y) \, dy = \int_{a}^{b} f \, dx$, which is the same form as u-substitution. However, we do have the requirement that $\psi$ be strictly increasing.
34. What is the difference between the Riemann and Darboux integral?
The Darboux integral takes the supremum of lower sums and the infimum of upper sums over all possible partitions. The Riemann integral, on the other hand, considers all possible Riemann sums for partitions with mesh size less than $\delta$: for each $\epsilon > 0$, there is a $\delta$ such that every Riemann sum over a partition with mesh less than $\delta$ differs from the value of the integral by at most $\epsilon$. By Theorem 32.9, Darboux integration and Riemann integration are equivalent.
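A numeric sketch of the Darboux picture, with the illustrative choice $f(x) = x^2$ on $[0,1]$ (monotone there, so the per-piece inf/sup sit at the endpoints): the lower and upper sums bracket the integral and squeeze together as the partition refines.

```python
def darboux_sums(steps):
    """Lower and upper Darboux sums of f(x) = x^2 on [0, 1] over a uniform
    partition; since f is increasing there, the inf/sup on each piece are
    the left/right endpoint values."""
    h = 1.0 / steps
    lower = sum((i * h) ** 2 * h for i in range(steps))
    upper = sum(((i + 1) * h) ** 2 * h for i in range(steps))
    return lower, upper

lower, upper = darboux_sums(1000)
# Both bracket the true value 1/3, and upper - lower = (f(1) - f(0)) * h.
```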
35. What are the conditions necessary for the mean value theorem?
For the mean value theorem to be valid on an interval $[a,b]$, the function $f$ must be continuous on $[a,b]$ and differentiable on $(a,b)$.
36. What are the conditions necessary to use L'Hopital's rule?
For L'Hopital's rule to be valid, we need the limit $\lim_{x \to s} \frac{f'(x)}{g'(x)} = L$ to exist, and for $f$ and $g$ to be differentiable on a neighborhood of $s$ (with $g'(x) \neq 0$ there). We also need the quotient $\frac{f(x)}{g(x)}$ to be of the indeterminate form $\frac{0}{0}$, or for $|g(x)|$ to go to $\infty$.
37. Conditions necessary for Taylor's Theorem?
If our remainder term involves the $n$th derivative, we just need the function $f$ to be differentiable $n$ times on the interval $(a,b)$. Note that for $n = 1$, we just get the Mean Value Theorem.
38. What is the Weierstrass M-Test?
The Weierstrass M-Test states that if we have a convergent series $\sum M_k$, with $M_k \geq 0$, and a sequence of functions such that $|g_k(x)| \leq M_k$ for all $x$ in a set $S$, then $\sum g_k$ converges uniformly on $S$. We can use this theorem to prove an “annoying” series of functions converges uniformly, such as something like $f_n(x) = \sum_{k=0}^{n} 4^{-k}\sin(kx)$.
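As a sketch, we can spot-check the M-Test hypotheses numerically for the family $g_k(x) = 4^{-k}\sin(kx)$ with $M_k = 4^{-k}$ (a cleaned-up version of the example above):

```python
import math

# For g_k(x) = 4^{-k} sin(kx), take M_k = 4^{-k}: then |g_k(x)| <= M_k for
# every x, and sum_{k>=1} M_k = 1/3 converges (geometric, ratio 1/4).
M_total = sum(4.0 ** -k for k in range(1, 60))

# Spot-check the bound |g_k(x)| <= M_k on a grid of x values.
bound_holds = all(abs(4.0 ** -k * math.sin(k * x)) <= 4.0 ** -k
                  for k in range(1, 20)
                  for x in (0.1 * i for i in range(-50, 51)))
```

Both hypotheses hold, so the M-Test gives uniform convergence on all of $\mathbb{R}$.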
39. Why does L'Hopital's rule work when $|g(x)|$ goes to $\infty$, even when we don't have an indeterminate form?
Even though we don't have an indeterminate form, this works out in the proof, shown in Ross 30.2. Note that for something like $\lim_{x \to \infty} \frac{\sin{x}}{x}$, the limit $\lim_{x \to \infty} \frac{f'(x)}{g'(x)} = \lim_{x \to \infty} \cos{x}$ doesn't even exist, so we cannot use L'Hopital's rule in the first place.
40. How do we find the radius of convergence?
For a power series $\sum_{n} c_n z^n$, we find the radius of convergence as follows. Let $\alpha = \limsup_{n \to \infty} |c_n|^{1/n}$, and the radius of convergence will be given by $R = 1/\alpha$. For any $z$ satisfying $|z| < R$, the power series will converge; note that if $|z|=R$, we don't know what will happen and we will have to employ another test, such as the alternating series test. Also note that $R=0$ if $\alpha = \infty$, and $R=\infty$ if $\alpha = 0$. For a proof, we can just use the root test for absolute convergence.
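A small numeric sketch of the formula, with the illustrative choice $c_n = 2^n$, so $|c_n|^{1/n} = 2$ for every $n$ and $R = 1/2$:

```python
# For c_n = 2^n we have |c_n|^{1/n} = 2 for every n, so
# alpha = limsup |c_n|^{1/n} = 2 and R = 1/alpha = 1/2.
n = 50
alpha = (2.0 ** n) ** (1.0 / n)
R = 1.0 / alpha
```

And indeed $\sum 2^n z^n$ is the geometric series in $2z$, which converges exactly when $|2z| < 1$, i.e. $|z| < 1/2$.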
41. Why do we have $a+b \geq 2\sqrt{ab}$?
This is the AM-GM inequality for two terms. Note that we have equality when $a=b$. We prove this as follows. Trivially, $(\sqrt{a}-\sqrt{b})^2 \geq 0$. Expanding this and rearranging terms gives $a + b \geq 2\sqrt{ab}$, which is our desired inequality. Putting $a=b$ makes the original inequality $0 \geq 0$, which shows why we have equality.
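A quick numeric spot-check of the inequality (the grid of sample points is arbitrary; the tiny tolerance only guards against floating-point rounding when $a \approx b$):

```python
import math

# Check a + b >= 2*sqrt(ab) on a grid of nonnegative pairs.
pairs = [(0.1 * i, 0.1 * j) for i in range(1, 30) for j in range(1, 30)]
amgm_holds = all(a + b >= 2.0 * math.sqrt(a * b) - 1e-12 for a, b in pairs)

# Equality holds exactly when a = b (here a = b = 3, and sqrt(9) = 3 exactly).
equality_gap = (3.0 + 3.0) - 2.0 * math.sqrt(3.0 * 3.0)
```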
42. What is an example of a power series where the radius of convergence is 0?
We want to find $c_n$ such that $\alpha = \limsup_{n \to \infty} |c_n|^{1/n}$ diverges. We can simply do this by choosing $c_n = n^n$, in which case $\alpha = \limsup_{n \to \infty} n = \infty$, so $R=0$. So, one example of this power series is $\sum (nx)^n$.
43. What is an example of a smooth function whose Taylor series does not converge to that function?
We can consider the smooth function $f(x) = e^{-1/x^2}$ for $x > 0$ and $f(x)=0$ elsewhere. We showed in class that this was continuous, and it is also smooth, since every one-sided derivative at $x = 0$ from the right goes to $0$: $e^{-1/x^2}$ shrinks faster than any polynomial grows. As such, its Taylor series about $x=0$ is identically $P(x) = 0$, which definitely is not $f(x)$ for $x > 0$.
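A numeric sketch of this behavior (the underflow to exact $0.0$ below is a floating-point artifact, but it mirrors how fast $e^{-1/x^2}$ vanishes near $0$):

```python
import math

def f(x):
    """f(x) = e^{-1/x^2} for x > 0, and 0 otherwise."""
    return math.exp(-1.0 / (x * x)) if x > 0 else 0.0

# One-sided difference quotient at 0: e^{-10^6} / 10^{-3}, which underflows
# to exactly 0.0 in double precision -- consistent with f'(0) = 0 (and
# likewise for all higher derivatives).
quotient = f(1e-3) / 1e-3

# Yet f is strictly positive to the right of 0, so the Taylor series at 0
# (identically zero) does not converge to f there.
value = f(0.5)  # e^{-4}
```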
44. What does “zero measure” mean when talking about the Lebesgue criterion for Riemann integration?
By definition, a set $S$ has zero measure when, for all $\epsilon > 0$, $S$ can be contained in the union of open balls $U_1, U_2, \ldots$, such that $\sum \operatorname{vol}(U_k) < \epsilon$. Here, $\operatorname{vol}(U_k)$ is the volume of the open ball $U_k$. Any countable set has zero measure, so for a function with a finite number of discontinuities, its set of discontinuities has zero measure. See http://math.uchicago.edu/~may/REU2013/MeasureZero.pdf for more info.
45. How can a continuous function map a bounded set to an unbounded set?
We consider the function $f(x) = 1/x$ on the interval $(0, \infty)$, noting that $f$ is continuous on this interval. $f$ maps the interval $(0,1)$ to $(1, \infty)$, so this is an example of how it can map a bounded set to an unbounded set. Note that the bounded set cannot be closed; if it were closed (and bounded in $\mathbb{R}$), it would be compact, and continuous functions map compact sets to compact (hence bounded) sets.
46. Why is the set $[0,1] \cap \mathbb{Q}$ not compact while $[0,1]$ is? (MT2, Q1, (4))
$[0,1]$ is compact since it is closed and bounded. However, $[0,1] \cap \mathbb{Q}$ is bounded, but not closed. To see why, consider its complement, which includes the irrationals in $[0,1]$. If we take any open ball around an irrational, it will contain some rationals by density of the rationals on $\mathbb{R}$. Therefore, the complement of $[0,1] \cap \mathbb{Q}$ is not open, so $[0,1] \cap \mathbb{Q}$ is not closed. Since $[0,1] \cap \mathbb{Q}$ is not closed, it cannot be compact.
47. Why is the set $\{0\} \cup \{1/n | n \in \mathbb{N}\}$ compact? (MT2, Q1, (5))
First of all, we can see clearly that it is bounded, so we just need to show closure. We consider its complement. For any point in $\mathbb{R} - (\{0\} \cup \{1/n | n \in \mathbb{N}\})$, we can find an open ball around it that doesn't intersect with $\{0\} \cup \{1/n | n \in \mathbb{N}\}$. Therefore, the set's complement is open, and the set itself is closed. Since it is closed and bounded in $\mathbb{R}$, it is compact. Also note that compactness is intrinsic: since this set is compact as a subset of $\mathbb{R}$, it is compact as a subset of any metric space containing it.
48. What are the conditions necessary to use the alternating series test?
We must have a series of the following form: $a_n \geq 0$ is monotone decreasing, with $\lim a_n = 0$. Then, we will have that $\sum (-1)^{n+1} a_n$ converges.
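For instance (illustrative example): $a_n = 1/n$ is nonnegative, decreasing, and tends to $0$, so the alternating harmonic series converges (to $\ln 2$), with error bounded by the first omitted term.

```python
import math

def alternating_harmonic(n):
    """Partial sum of sum_{k=1}^{n} (-1)^{k+1} / k, which converges to ln 2."""
    return sum((-1.0) ** (k + 1) / k for k in range(1, n + 1))

s = alternating_harmonic(100000)
# The alternating series test bounds the error by a_{n+1} = 1/100001.
err = abs(s - math.log(2.0))
```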
49. When should we use the root or ratio test when determining when a series converges/diverges?
Usually when $a_n$ is in the form of $\alpha^n$, it is most obvious to use the root test since this exponent of $n$ will cancel out most easily. Whenever we have factorials or if the denominator and numerator are not raised to the power of $n$, it is usually a bit simpler to use the ratio test instead. If the results are inconclusive from either of these tests, we should consider the comparison test.
50. How do we know when a function $f$ is Riemann integrable with respect to $\alpha$?
If $f$ is continuous, it is automatically integrable. For a continuous $\alpha$, any monotone increasing or decreasing $f$ is integrable. Also, for any $f$ with a countable number of discontinuities, with $\alpha$ continuous where $f$ has discontinuities, $f$ is integrable. For more info on this last case, see http://www.math.ncku.edu.tw/~rchen/Advanced%20Calculus/Lebesgue%20Criterion%20for%20Riemann%20Integrability.pdf, which shows that a function we encountered in HW is Riemann integrable.
51. What is a step function?
In short, it is a piecewise function that has a finite number of constant pieces. See https://en.wikipedia.org/wiki/Step_function for more info.
52. How do we prove that a metric space is complete?
By the definition, we must consider a Cauchy sequence in that metric space. We then show that the Cauchy sequence converges to some point (not necessarily in the metric space), using a proof similar to the proof that all Cauchy sequences of real numbers are convergent (Ross 10.11). We then show that the convergent result is contained in the metric space.
53. How do we prove a one-sided limit $f(x+)$?
To make things simple, we talk about the right-hand limit. From the definition, we show that for all $\epsilon > 0$, there is some $\delta > 0$ such that $y \in (x, x + \delta) \implies |f(y) - f(x+)| < \epsilon$.