math104-s21:s:ryotainagaki:problems

$$|\frac{x + y}{x^2 + y^2} - 1| = |\frac{x+y-x^2-y^2}{x^2+y^2}| = |\frac{x(1- x) + y(1-y)}{x^2 + y^2}|$$
  
We intend to prove the continuity through the $\epsilon$-$\delta$ definition. So we want to find $\delta > 0$ such that whenever $d((x, y), (1, 1)) < \delta$, we have $|f(x, y) - f(1, 1)| < \epsilon$.
  
Using $d((x, y), (1, 1)) < \delta$, we can say that $|y-1| < \delta$ and $|x-1| < \delta$.
  
Using clever inequalities and upper bounds, we can say... (assuming that our delta is going to be smaller than one,
That is a positive number on the right hand side (you may want to prove that as a VERY CHALLENGING exercise).
  
In other words, we know $\forall \epsilon > 0, \exists \delta > 0, \forall (x, y) \in \mathbb{R}^2, (\sqrt{(x - 1)^2 + (y - 1)^2} < \delta) \to (|f(x, y) - f(1, 1)| < \epsilon)$.
  
Thus by the $\epsilon$-$\delta$ definition of continuity, we know that $f(x, y)$ is continuous at $(1, 1)$.
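Not part of the proof, but as a quick numeric sanity check (the function and the point $(1,1)$ are from the problem; the sample radius and sample count are my own choices), one can verify that $|f(x, y) - 1|$ stays small on a small ball around $(1, 1)$:

```python
import math
import random

def f(x, y):
    # the function from the problem: f(x, y) = (x + y) / (x^2 + y^2), with f(1, 1) = 1
    return (x + y) / (x**2 + y**2)

random.seed(0)
delta = 1e-3  # sample radius (arbitrary choice for this check)
worst = 0.0
for _ in range(10_000):
    x = 1 + random.uniform(-delta, delta)
    y = 1 + random.uniform(-delta, delta)
    if math.hypot(x - 1, y - 1) < delta:
        worst = max(worst, abs(f(x, y) - 1.0))
print(worst)  # the largest observed deviation from f(1, 1) = 1
```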
  
  
** 32. (Credits to Midterm 2) In less than 3 sentences and in less than 3 minutes, formally prove or disprove: given that $A \subseteq B$ is bounded and $f: B \to Y$ is a continuous map, $f(A)$ is bounded. **
  
**Answer: ** On the spot in a timed exam, this may seem like a hard and deceptive problem. We know that the given statement is false. To show that, consider the metric space $(S, d(x, y) = |x - y|)$ with $S = (0, \infty)$, take $U = (0, 1)$, and let $f(x) = \ln(x)$. We know that $U$ is bounded, but $f(U) = (-\infty, 0)$ by the properties of the natural log, and that is NOT bounded. Thus, just because $A$ is bounded doesn't mean that $f(A)$ is bounded.
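A quick numeric illustration of the counterexample (the specific sample points are my own choice): $U = (0, 1)$ is bounded, yet $\ln$ maps points of $U$ near $0$ to arbitrarily negative values.

```python
import math

# U = (0, 1) is bounded, but f(x) = ln(x) sends points near 0 to values
# that are arbitrarily negative, so f(U) = (-inf, 0) is unbounded below.
samples = [10.0**(-k) for k in range(1, 12)]   # points of U approaching 0
images = [math.log(x) for x in samples]

assert all(0 < x < 1 for x in samples)  # the inputs stay in the bounded set U
print(min(images))                      # drops without bound as k grows
```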
  
** 33. Suppose that $(a_n)_n$ is a sequence in a metric space $(M, d)$ which converges to a limit $a \in M$. Prove that $K = \{a_n : n \in \mathbb{N}\} \cup \{a\}$ is compact. **
  
** Answer: **
Firstly, consider an arbitrary open cover $\{C_{\alpha}\}$ of $K$. Since $\{C_{\alpha}\}$ is an open cover of $K$, we know that there exists an open set $S$ in the open cover that contains $a$. Since $S$ contains $a$ and since $S$ is open, we know that $\exists r > 0: \{x \in M: d(x, a) < r\} \subseteq S$. Because $a_n \to a$, we know that $\exists N \in \mathbb{N}: \forall n > N, d(a_n, a) < r$. This means that $\{a_n: n > N\} \subseteq \{x \in M: d(x, a) < r\} \subseteq S$. Now consider the first $N$ points of the sequence. In order for the open cover $\{C_{\alpha}\}$ to cover the finite set $\{a_n: n \in \{1, 2, ..., N\}\}$, one needs at most $N$ sets from $\{C_{\alpha}\}$ whose union contains $a_1, ..., a_N$. Therefore, finitely many sets cover the union of a) $\{a\} \cup \{a_n: n > N\}$ (covered by $S$ alone) and b) the finite set $\{a_n: n \in \{1, ..., N\}\}$. Thus, there exists a finite subcover of $\{C_{\alpha}\}$. By definition of compactness, $K$ is compact.
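A small numeric illustration of the engine of this proof, using the concrete sequence $a_n = 1/n \to 0$ in $\mathbb{R}$ (my own choice of example): for any $r > 0$, only finitely many terms of the sequence fall outside the ball of radius $r$ around the limit.

```python
# In (R, |.|), take a_n = 1/n -> a = 0. For r = 0.01, the tail {a_n : n > N}
# lies inside the ball {x : |x - a| < r}, leaving only finitely many terms outside.
r = 0.01
outside = [n for n in range(1, 10_001) if abs(1.0 / n - 0.0) >= r]
print(len(outside))  # only the finitely many indices with 1/n >= r
```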
  
** 34. Problem 5 on Practice Midterm on [[https://ywfan-math.github.io/104s21_midterm2_practice.pdf]] **
  
  
** 37. (From Ian Charlesworth's fall 2020 Final) A check on definitions: Show that $S = \mathbb{R}^2 \setminus \{0\}$ is connected. **
  
** Answer: ** I don't think we have encountered too much on connectedness, so I believe this should be used as a teaching moment. The cleanest route is through path-connectedness, since a path-connected set is connected. Given any two points $p, q \in S$, if the straight segment from $p$ to $q$ does not pass through the origin, it is itself a path in $S$. If it does pass through the origin, pick any point $w \in \mathbb{R}^2$ not on the line through $p$ and $q$; then the two segments $p \to w$ and $w \to q$ both avoid the origin, and their concatenation is a path in $S$ from $p$ to $q$. Hence any two points of $S$ can be joined by a path lying entirely in $S$.

Therefore, $S$ is path-connected, and $\mathbb{R}^2 \setminus \{0\}$ is a connected set.
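To make the two-segment detour around the origin concrete, here is a small numeric sketch (my own illustration; the points $p$, $q$, $w$ are arbitrary choices): joining $p = (-1, 0)$ to $q = (1, 0)$ directly would pass through the origin, but routing through $w = (0, 1)$ stays safely away from it.

```python
import math

# Detour construction: the straight segment from p to q passes through the
# origin, so route through w instead; both legs stay inside S = R^2 \ {0}.
def segment(p, q, t):
    # point at parameter t in [0, 1] along the segment from p to q
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

p, q = (-1.0, 0.0), (1.0, 0.0)   # direct segment hits the origin at t = 1/2
w = (0.0, 1.0)                   # any point off the line through p and q works
ts = [i / 1000 for i in range(1001)]
path = [segment(p, w, t) for t in ts] + [segment(w, q, t) for t in ts]
closest = min(math.hypot(x, y) for x, y in path)
print(closest)  # minimum distance of the detour path from the origin
```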
  
** 38. Prove from ONLY first principles and definitions that $f_n(x) = \frac{1}{nx}$ converges uniformly on $(1, \infty)$. **
**Answer: ** We know that since $K_1, K_2, ..., K_n$ are all compact, they are all bounded and hence we know that $K$ is bounded. Thus for every sequence of points in $K$, there always exists a subsequence that converges. Now, since $K$ is closed (because $K_1, K_2, ..., K_n$ are closed and $\bigcap_{i=1}^{n}K_i$ is therefore closed), we know that such subsequences that converge must converge to a point in $K$. Therefore, for every sequence of points in $K$, there always exists a subsequence that converges to a point in $K$. By definition of sequential compactness, $K$ is sequentially compact and is therefore compact.
  
** 42. (From Dr. Ian Charlesworth's Fall 2020 Final): Create an open cover of the open ball $B_5(0, 1) = \{(x, y) \in \mathbb{R}^2 : x^2 + (y - 1)^2 < 5^2\}$ that does not have a finite subcover. **
  
** Answer: ** The purpose of this problem is a bit more illustrative than challenging: it exemplifies, in a perhaps visualizable manner, the idea of the finite subcover. Consider the open cover $\{C_n\}_{n \geq 2}$ consisting of the nested open balls $C_{n} = \{(x, y) \in \mathbb{R}^2 : x^2 + (y-1)^2 < (5 - \frac{5}{n})^2\}$. For this problem, I took some inspiration from the bad open cover of $(0, 1)$. This really is a cover: every point of $B_5(0, 1)$ lies at some distance $r < 5$ from $(0, 1)$, and it belongs to $C_n$ as soon as $5 - \frac{5}{n} > r$.

One way to prove that there is no finite subcover of $\{C_n\}$ is to show that the union of any finite subcollection misses points of $B_5(0, 1)$.

Now, suppose that $\{C_n\}$ does have a finite subcover, say $C_{n_1}, ..., C_{n_m}$ of $m$ elements. Without loss of generality, and to make this problem more manageable, we shall take $n_1 < ... < n_m$. Since the balls are nested, the union of the subcover is just the largest ball $C_{n_m}$. Now consider the points in $\{(x, y) \in \mathbb{R}^2 : (5 - \frac{5}{n_m})^2 \leq x^2 + (y-1)^2 < 5^2\}$; these points belong to $B_5(0, 1)$ but to none of $C_{n_1}, ..., C_{n_m}$. Therefore, the supposed finite subcover does not contain all points in $B_5(0, 1)$. Contradiction has been reached.

Therefore, I have given an example of an open cover of $B_5(0, 1)$ that does NOT have a finite subcover. This ultimately shows us that the open ball $B_5(0, 1)$ is not compact.
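As a numeric companion, here is a check of the nested-ball cover $C_n = \{(x, y) : x^2 + (y-1)^2 < (5 - \frac{5}{n})^2\}$, one standard "bad" cover of the open ball (the particular test point and indices are my own choices, and the construction may differ in details from the one above): a point close enough to the rim is missed by any fixed finite batch of the $C_n$, yet is caught by some later one.

```python
import math

# Nested-ball cover of the open ball of radius 5 about (0, 1):
# C_n = {(x, y) : x^2 + (y - 1)^2 < (5 - 5/n)^2}, n >= 2.
def in_C(n, x, y):
    return math.hypot(x, y - 1) < 5 - 5 / n

p = (0.0, 5.999)   # distance 4.999 from (0, 1): inside the open ball of radius 5
m = 1000           # a would-be finite subcover C_2, ..., C_m

missed = not any(in_C(n, *p) for n in range(2, m + 1))  # subcover misses p
caught_later = in_C(10_000, *p)                          # but some C_n contains p
print(missed, caught_later)
```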

** 43. Prove that the function $f(x) = x^2 + 4$ is integrable on $[0, 4]$ with respect to $\alpha(x) = \lfloor 2^x \rfloor$. Then find the integral. **
  
** Answer: ** We can prove that $f(x)$ is integrable with respect to $\alpha$ on $[0, 4]$ since $f$, being a polynomial function, is continuous on $[0, 4]$ and $\alpha$ is monotonically increasing. (Refer to Theorem 6.8.)
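Since $\alpha(x) = \lfloor 2^x \rfloor$ increases only by unit jumps, at $x = \log_2 k$ for $k = 2, \dots, 16$, the integral collapses to $\sum_{k=2}^{16} f(\log_2 k)$ (a standard fact about step-function integrators; the numeric cross-check below is my own):

```python
import math

def f(x):
    return x**2 + 4

def alpha(x):
    return math.floor(2**x)

# alpha = floor(2^x) has unit jumps exactly at x = log2(k), k = 2..16, so the
# Riemann-Stieltjes integral over [0, 4] collapses to a sum over the jumps.
jump_sum = sum(f(math.log2(k)) for k in range(2, 17))

# Cross-check with a direct Riemann-Stieltjes sum on a fine uniform partition.
N = 100_000
xs = [4.0 * i / N for i in range(N + 1)]
rs_sum = sum(f(xs[i]) * (alpha(xs[i]) - alpha(xs[i - 1])) for i in range(1, N + 1))
print(jump_sum, rs_sum)  # the two computations should agree closely
```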
** 44. Does $f_N(x) = \sum_{n=0}^{N} \frac{1}{x^2n^2 + 1}\sqrt{\frac{1}{x}}$ converge uniformly on $(1, \infty)$? Support your claim with a proof. **
  
** Answer: ** Consider that for all $x \in (1, \infty)$ and $n \geq 1$, $|\frac{1}{x^2n^2 + 1}\sqrt{\frac{1}{x}}| \leq \frac{1}{n^2 + 1} \leq \frac{1}{n^2}$. (The $n = 0$ term is a single function bounded by $1$, so it does not affect whether the series converges uniformly.)
  
We know that $\sum_{n=1}^{\infty}\frac{1}{n^2}$ converges by the p-series test. Thus, by the Weierstrass M-test, we know that $f_N(x) = \sum_{n=0}^{N} \frac{1}{x^2n^2 + 1}\sqrt{\frac{1}{x}}$ converges uniformly on $(1, \infty)$ to some function $f$.
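A quick numeric check of the uniform tail bound (the sample points and the indices $N = 50$, $M = 100$ are my own choices): for every sampled $x > 1$, the difference $|f_{100}(x) - f_{50}(x)|$ is dominated by $\sum_{n=51}^{100} \frac{1}{n^2}$, independent of $x$.

```python
import math

def term(n, x):
    return 1.0 / (x**2 * n**2 + 1) * math.sqrt(1.0 / x)

def f_N(N, x):
    # partial sum f_N(x) = sum_{n=0}^{N} term(n, x)
    return sum(term(n, x) for n in range(0, N + 1))

xs = [1.0001, 1.5, 2.0, 10.0, 1000.0]   # sample points of (1, infinity)
N, M = 50, 100
tail_bound = sum(1.0 / n**2 for n in range(N + 1, M + 1))  # M-test majorant
worst = max(abs(f_N(M, x) - f_N(N, x)) for x in xs)
print(worst, tail_bound)  # worst observed tail vs. the x-independent bound
```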
  
** 45. (From Dr. Ian Charlesworth's Fall 2020 Midterm) Given $A, B \subset \mathbb{R}$, let $A$ be bounded from above and $B$ be bounded from below. Let $A - B = \{s - t: s \in A, t \in B\}$. Prove that $\sup{(A - B)} = \sup A - \inf B$. **
  
  
** 49. (From Ian Charlesworth's Fall 2020 Final, an illustrative question) Without invoking change of variables, show that $h(x) = f(\frac{x}{C})$ is integrable on $[C, 2C]$ and find $\int_{x=C}^{x=2C}f(\frac{x}{C})dx$. We are given that $\int_{x=1}^{x=2}f(x)dx$ is a valid integral. (Let $h(x) = f(x/C)$ for $x \in [C, 2C]$ and note that $C > 0$.) **

**Answer: ** This really tests our understanding of the definition of integrability.
We know that $f$ being integrable on $[1, 2]$ means that $\forall \epsilon > 0, \exists P: U(f, P) - L(f, P) < \epsilon$. Recall that $$U(f, P) = \sum_{i=1}^{n}\sup_{x \in [x_{i-1}, x_i]} f(x) (x_i - x_{i-1})$$ and $$L(f, P) = \sum_{i=1}^{n}\inf_{x \in [x_{i-1}, x_i]} f(x) (x_i - x_{i-1}).$$ Now consider that for each partition $P = \{1 = x_0 < x_1 < x_2 < ... < x_n = 2\}$ of $[1, 2]$ we can create a partition of $[C, 2C]$ by setting $P^* = \{C = Cx_0 < Cx_1 < ... < Cx_n = 2C\}$. Now consider $$U(h, P^*) = \sum_{i=1}^{n}\sup_{x \in [Cx_{i-1}, Cx_i]} h(x) (Cx_i - Cx_{i-1}) = \sum_{i=1}^{n}\sup_{x \in [Cx_{i-1}, Cx_i]} f(x/C) (Cx_i - Cx_{i-1})$$ and $$L(h, P^*) = \sum_{i=1}^{n}\inf_{x \in [Cx_{i-1}, Cx_i]} h(x) (Cx_i - Cx_{i-1}) = \sum_{i=1}^{n}\inf_{x \in [Cx_{i-1}, Cx_i]} f(x/C) (Cx_i - Cx_{i-1}).$$

Notice that because of the $x/C$ used in $h(x)$, we can say that $\sup_{x \in [Cx_{i-1}, Cx_{i}]}f(x/C) = \sup_{x \in [x_{i-1}, x_{i}]}f(x)$, and likewise for the infimum. Therefore, $$U(h, P^*) = \sum_{i=1}^{n}\sup_{x \in [x_{i-1}, x_i]} f(x) \cdot C(x_i - x_{i-1}) = C\, U(f, P)$$ and $$L(h, P^*) = \sum_{i=1}^{n}\inf_{x \in [x_{i-1}, x_i]} f(x) \cdot C(x_i - x_{i-1}) = C\, L(f, P).$$

Therefore, $$U(h, P^*) - L(h, P^*) = C(U(f, P) - L(f, P)).$$

In order to make $U(h, P^*) - L(h, P^*) < \epsilon$, we can first find $P = \{1 = x_0 < x_1 < ... < x_n = 2\}$ such that $U(f, P) - L(f, P) < \epsilon/C$, and then form $P^* = \{C = Cx_0 < Cx_1 < ... < Cx_n = 2C\}$; that way $U(h, P^*) - L(h, P^*) = C(U(f, P) - L(f, P)) < C \frac{\epsilon}{C} = \epsilon$.

Thus, $\forall \epsilon > 0, \exists P^*$ (a partition of $[C, 2C]$) such that $U(h, P^*) - L(h, P^*) < \epsilon$. Thus $h$ is integrable on $[C, 2C]$.

We have shown that $U(h, P^*) = C\, U(f, P)$ and $L(h, P^*) = C\, L(f, P)$. Note also that every partition $Q$ of $[C, 2C]$ arises as $P^*$ for exactly one partition $P$ of $[1, 2]$ (namely $P = \frac{1}{C}Q$), so taking an infimum or supremum over all $P^*$ is the same as taking it over all partitions of $[C, 2C]$. Since $\inf_P U(f, P) = U(f)$, I know $U(h) = \inf_{P^*} U(h, P^*) = \inf_P C\, U(f, P) = C\, U(f)$. Likewise, since $\sup_P L(f, P) = L(f)$, I know $L(h) = \sup_{P^*} L(h, P^*) = \sup_P C\, L(f, P) = C\, L(f)$. Since $L(f) = U(f) = \int_{x=1}^{x=2}f(x)dx$ by definition of integrability, we conclude that $L(h) = U(h) = C \int_{x=1}^{x=2}f(x)dx$, i.e., $\int_{x=C}^{x=2C} f(\frac{x}{C})dx = C\int_{x=1}^{x=2}f(x)dx$.
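A numeric sanity check of the conclusion $\int_C^{2C} f(x/C)\,dx = C\int_1^2 f(x)\,dx$, with a sample integrand and a sample $C$ of my own choosing:

```python
# Check int_C^{2C} f(x/C) dx = C * int_1^2 f(x) dx using midpoint Riemann sums.
def f(x):
    return x**3 + 1.0   # sample integrand (my choice; any continuous f works)

def riemann_midpoint(g, a, b, n=200_000):
    # midpoint-rule approximation of int_a^b g(x) dx on a uniform partition
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

C = 3.0
lhs = riemann_midpoint(lambda x: f(x / C), C, 2 * C)
rhs = C * riemann_midpoint(f, 1.0, 2.0)
print(lhs, rhs)  # the two sides should agree to high precision
```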
  
** 50. Give an example of a function that is Stieltjes Integrable but not Riemann Integrable. **
  
However, we cannot say that $f$ is Riemann integrable over $[-1, 1]$. This is due to the behaviour of $f$: when we consider the interval $[-1, 0]$, we see that $f$ is not integrable there, because the upper integral over that interval is 1 whereas the lower integral is 0.

** 51. (From MIT OpenCourseWare Math 18.100B Course) True or False? If $f_n : [a, b] \to \mathbb{R}$ is a sequence of almost everywhere continuous functions, and $f_n \to f$ converges uniformly, then the limit $f$ is almost everywhere continuous. (I wanted an explanation with the specific theorem numbers from Rudin) **

**Answer: ** I kind of do not like how the term "almost everywhere continuous" is used in this problem; it just sounds imprecise. It turns out that the problem means each $f_n$ has only a finite number of discontinuities. The cleanest route is a local version of ** Theorem 7.12 ** in Rudin (the uniform limit of continuous functions is continuous): if every $f_n$ is continuous at a point $x_0$, then the uniform limit $f$ is also continuous at $x_0$, by the usual $\epsilon/3$ argument. Hence the set of discontinuities of $f$ is contained in $\bigcup_n D(f_n)$, where $D(f_n)$ is the finite set of discontinuities of $f_n$. A countable union of finite sets is countable and has measure zero, so $f$ is almost everywhere continuous, and the statement is True. (Caution: $\bigcup_n D(f_n)$ may be countably infinite, so $f$ need not have only finitely many discontinuities; the measure-zero sense of "almost everywhere" is the right one here.)

One thing to be careful about: it is NOT true that $f$ is Riemann integrable if and only if $f$ has only a finite number of discontinuities. One can puncture that claim with $f(x) = \frac{1}{x^2}$: it is continuous at every point except $x = 0$, where $\lim_{x \to 0} f(x) = \infty$. So $f$ has only a finite number (1) of discontinuities on $[-a, b]$ (let $a, b$ be any positive numbers of your choosing), yet we cannot integrate $\int_{x=-a}^{x=b}f(x)dx$: $f$ is unbounded, and even the improper integral diverges to infinity (not a real number)!

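To back up the $f(x) = \frac{1}{x^2}$ example numerically (using the exact antiderivative $-1/x$, so no quadrature is needed; the sample endpoints are my own): the integral from $\epsilon$ to $b$ grows without bound as $\epsilon \to 0$.

```python
# integral_eps^b x^(-2) dx = 1/eps - 1/b, which blows up as eps -> 0, so 1/x^2
# is not (even improperly) integrable across its single discontinuity at 0.
def tail_integral(eps, b=1.0):
    # exact value of the integral of 1/x^2 from eps to b, via -1/x
    return 1.0 / eps - 1.0 / b

values = [tail_integral(10.0**(-k)) for k in range(1, 8)]
print(values)  # strictly increasing, with no upper bound
```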
Sources for the Questions and Inspiration: Past midterms and/or finals from Dr. Peng Zhou, Ian Charlesworth, Sebastian Eterovic, Yu-Wei Fan, Dr. Charles Rycroft, MIT OpenCourseWare (MIT's Math 18.100 courses), summer 2018 Math 104, spring 2017 Math 104, //Principles of Mathematical Analysis// by Rudin, //A Problem Book in Real Analysis//, and //Elementary Analysis// by Ross.