math104-s21:s:ryotainagaki:problems [2026/02/21 14:41] (current)
$$\left|\frac{x + y}{x^2 + y^2} - 1\right| = \left|\frac{x+y-x^2-y^2}{x^2+y^2}\right| = \left|\frac{x(1- x) + y(1-y)}{x^2 + y^2}\right|$$

We intend to prove the continuity through the $\epsilon$-$\delta$ definition. So we want to find $\delta > 0$ such that whenever $d((x, y), (1, 1)) < \delta$, we have $|f(x, y) - f(1, 1)| < \epsilon$.

Using $d((x, y), (1, 1)) < \delta$, we can say that $|x-1| < \delta$ and $|y-1| < \delta$.

Using clever inequalities and upper bounds, we can say... (assuming that our delta is going to be smaller than one,
That is a positive number on the right hand side (you may want to prove that as a VERY CHALLENGING exercise).
In other words, we know $\forall \epsilon > 0, \exists \delta > 0$; here it can be that $$\delta < \min\left(\frac{2}{1 + 2\sqrt{2}},\ \frac{-(1 + 2\epsilon\sqrt{2})\,\ldots}{\ldots}\right)$$
Thus by the $\epsilon$-$\delta$ definition of continuity, we know that $f(x, y)$ is continuous at $(1, 1)$.
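The $\epsilon$-$\delta$ claim can also be sanity-checked numerically. The sketch below assumes, as the algebra above suggests, that $f(x, y) = \frac{x+y}{x^2+y^2}$; the pairing $\epsilon = 5\delta$ is an illustrative assumption, not the bound derived above. It samples points within $\delta$ of $(1, 1)$ and confirms $|f(x, y) - 1|$ stays below $\epsilon$:

```python
import math
import random

def f(x, y):
    # assumed form of f, consistent with the algebra in the proof above
    return (x + y) / (x**2 + y**2)

delta = 0.01
eps = 5 * delta  # illustrative pairing, not the derived bound
random.seed(0)
worst = 0.0
for _ in range(10_000):
    # sample uniformly from a disc of radius delta around (1, 1)
    r = delta * math.sqrt(random.random())
    theta = 2 * math.pi * random.random()
    x, y = 1 + r * math.cos(theta), 1 + r * math.sin(theta)
    worst = max(worst, abs(f(x, y) - 1))
print(worst < eps)  # True
```

This is evidence, not a proof; the inequality chain above is what actually establishes continuity.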
** 32. (Credits to Midterm 2) In less than 3 sentences, prove or disprove: if $f$ is continuous and $A$ is bounded, then $f(A)$ is bounded. **

**Answer: ** On the spot, in a timed, fast exam, this may seem like a hard and deceptive problem. We know that the given statement is false. To show that, consider the metric space $(S, d(x, y) = |x - y|)$, $U = (0, 1)$, and $f(x) = \ln(x)$. We know that $U$ is bounded, but $f(U) = (-\infty, 0)$ by the properties of the natural log, and it is NOT bounded. Thus, just because $A$ is bounded doesn't mean that $f(A)$ is bounded.
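A quick numerical illustration of this counterexample (a sketch; the sample points and the bound $M$ are arbitrary choices):

```python
import math

# U = (0, 1) is bounded, yet f(x) = ln(x) is unbounded below on U,
# so f(U) = (-inf, 0): ln(x) takes arbitrarily large negative values
# as x -> 0+.
for x in [1e-1, 1e-5, 1e-100, 1e-300]:
    print(f"x = {x:.0e}, ln(x) = {math.log(x):.1f}")

# Given any bound M < 0, the point x = exp(M - 1) lies in (0, 1) and
# satisfies ln(x) = M - 1 < M, so no bound can work.
M = -700.0
x = math.exp(M - 1)
print(0 < x < 1, math.log(x) < M)  # True True
```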
** 33. Suppose that $(a_n)_n$ is a sequence in a metric space $(M, d)$, which converges to a limit
** 44. Does $f_N(x) = \sum_{n=0}^{N} \frac{1}{x^2n^2 + 1}\sqrt{\frac{1}{x}}$ converge uniformly on $(1, \infty)$? Support your claim with a proof. **
** Answer: ** Consider that $\forall x \in (1, \infty)$ and $n \geq 1$, $$\frac{1}{x^2n^2 + 1}\sqrt{\frac{1}{x}} \leq \frac{1}{x^2 n^2 + 1} \leq \frac{1}{n^2},$$ since $x > 1$ gives $\sqrt{1/x} < 1$ and $x^2 n^2 + 1 > n^2$.

We know that $\sum_{n=1}^{\infty}\frac{1}{n^2}$ converges, so its partial sums are Cauchy. Because the bound above holds for every $x \in (1, \infty)$ at once, for $N > M$ we get $$\sup_{x \in (1, \infty)} |f_N(x) - f_M(x)| \leq \sum_{n=M+1}^{N} \frac{1}{n^2},$$ which can be made smaller than any $\epsilon > 0$ by taking $M$ large.

Thus $(f_N)$ is uniformly Cauchy on $(1, \infty)$. By the Cauchy criterion for uniform convergence, $f_N$ converges uniformly on $(1, \infty)$.
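The uniform bound by $\sum 1/n^2$ can be checked numerically; in this sketch the sample grid and the indices $M$, $N$ are arbitrary choices. It verifies that $\sup_x |f_N(x) - f_M(x)|$ over the sampled $x$ never exceeds the tail sum:

```python
# Each summand is dominated by 1/n^2 uniformly in x on (1, inf),
# so |f_N(x) - f_M(x)| is bounded by the tail of sum(1/n^2).
def term(x, n):
    return (1.0 / (x * x * n * n + 1.0)) * (1.0 / x) ** 0.5

def f(x, N):
    return sum(term(x, n) for n in range(N + 1))

M, N = 10, 1000
tail = sum(1.0 / (n * n) for n in range(M + 1, N + 1))
xs = [1.0 + 0.1 * k for k in range(1, 500)]  # sample points in (1, 50)
sup_diff = max(abs(f(x, N) - f(x, M)) for x in xs)
print(sup_diff <= tail)  # True
```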
** 45. (From Dr. Ian Charlesworth's ...
** 49. From Ian Charlesworth's notes: Suppose $f$ is integrable on $[1, 2]$ with $\int_{x=1}^{x=2} f(x)dx = V$. For $C > 0$, define $h(x) = f(x/C)$ on $[C, 2C]$. Show that $h$ is integrable on $[C, 2C]$ and find $\int_{x=C}^{x=2C} h(x)dx$. **
**Answer: ** This really tests our understanding of the definition of integrability.
We know that since $\int_{x=1}^{x=2}f(x)dx = V$, $f$ is integrable on $[1, 2]$, which means that $\forall \epsilon > 0, \exists P: U(f, P) - L(f, P) < \epsilon$. Recall that $$U(f, P) = \sum_{i=1}^{n}\sup_{x \in [x_{i-1}, x_i]} f(x) (x_i - x_{i-1})$$ and $$L(f, P) = \sum_{i=1}^{n}\inf_{x \in [x_{i-1}, x_i]} f(x) (x_i - x_{i-1}).$$ Now consider that for each partition $P = \{1 = x_0 < x_1 < x_2 < ... < x_n = 2\}$ of $[1, 2]$ we can create a partition of $[C, 2C]$ by taking $P* = \{C = Cx_0 < Cx_1 < ... < Cx_n = 2C\}$. Now consider $$U(h, P*) = \sum_{i=1}^{n}\sup_{x \in [Cx_{i-1}, Cx_i]} h(x) (Cx_i - Cx_{i-1}) = \sum_{i=1}^{n}\sup_{x \in [Cx_{i-1}, Cx_i]} f(x/C) (Cx_i - Cx_{i-1})$$ and $$L(h, P*) = \sum_{i=1}^{n}\inf_{x \in [Cx_{i-1}, Cx_i]} h(x) (Cx_i - Cx_{i-1}) = \sum_{i=1}^{n}\inf_{x \in [Cx_{i-1}, Cx_i]} f(x/C) (Cx_i - Cx_{i-1}).$$

Notice that because of the $x/C$ used in $h(x)$, as $x$ ranges over $[Cx_{i-1}, Cx_i]$, $x/C$ ranges over $[x_{i-1}, x_i]$, so $\sup_{x \in [Cx_{i-1}, Cx_{i}]}f(x/C) = \sup_{u \in [x_{i-1}, x_i]} f(u)$ and $\inf_{x \in [Cx_{i-1}, Cx_{i}]}f(x/C) = \inf_{u \in [x_{i-1}, x_i]} f(u)$. Since $Cx_i - Cx_{i-1} = C(x_i - x_{i-1})$, it follows that $U(h, P*) = C\,U(f, P)$ and $L(h, P*) = C\,L(f, P)$.

Therefore, $$U(h, P*) - L(h, P*) = C\,(U(f, P) - L(f, P)) < C\epsilon.$$

In order to make $U(h, P*) - L(h, P*) < \epsilon$, we can first find $P = \{1 = x_0 < x_1 < ... < x_n = 2\}$ such that $U(f, P) - L(f, P) < \epsilon/C$, and then take $P* = \{C = Cx_0 < Cx_1 < ... < Cx_n = 2C\}$; that way $U(h, P*) - L(h, P*) = C\,(U(f, P) - L(f, P)) < C \cdot \frac{\epsilon}{C} = \epsilon$.

Thus, $\forall \epsilon > 0, \exists P*$ (a partition of $[C, 2C]$) such that $U(h, P*) - L(h, P*) < \epsilon$. Thus $h$ is integrable on $[C, 2C]$.

We have shown that $U(h, P*) = C\,U(f, P)$ and $L(h, P*) = C\,L(f, P)$. We know that $\inf_P U(f, P) = U(f)$. Since $U(h, P*) = C\,U(f, P)$ and every partition of $[C, 2C]$ has the form $P* = \{Cx_0 < Cx_1 < ... < Cx_n\}$ for some partition $P$ of $[1, 2]$, I know $U(h) = \inf_{P*} U(h, P*) = \inf_P C\,U(f, P) = C\,U(f)$. Likewise, we know that $\sup_P L(f, P) = L(f)$, so $L(h) = \sup_{P*} L(h, P*) = \sup_P C\,L(f, P) = C\,L(f)$. Since $L(f) = U(f) = \int_{x=1}^{x=2}f(x)dx = V$ by the definition of integrability, we conclude $\int_{x=C}^{x=2C}h(x)dx = U(h) = L(h) = CV$.
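The scaling identities $U(h, P*) = C\,U(f, P)$ and $L(h, P*) = C\,L(f, P)$ can be checked numerically. The sketch below uses a hypothetical test function $f(x) = x^2$ (increasing on $[1, 2]$, so the sup and inf on each cell sit at the endpoints) and a uniform partition; the argument in the text works for any integrable $f$.

```python
def f(x):
    # hypothetical test function, increasing on [1, 2]
    return x * x

C = 3.0

def h(x):
    return f(x / C)

n = 100
P = [1 + i / n for i in range(n + 1)]  # partition of [1, 2]
Pstar = [C * x for x in P]             # induced partition of [C, 2C]

def upper(g, pts):
    # g increasing: sup on each cell is at the right endpoint
    return sum(g(pts[i]) * (pts[i] - pts[i - 1]) for i in range(1, len(pts)))

def lower(g, pts):
    # g increasing: inf on each cell is at the left endpoint
    return sum(g(pts[i - 1]) * (pts[i] - pts[i - 1]) for i in range(1, len(pts)))

U_f, L_f = upper(f, P), lower(f, P)
U_h, L_h = upper(h, Pstar), lower(h, Pstar)
print(abs(U_h - C * U_f) < 1e-9, abs(L_h - C * L_f) < 1e-9)  # True True
```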
** 50. Give an example of a function that is Stieltjes Integrable but not Riemann Integrable. **
However, we cannot say that $f$ is Riemann integrable over $[-1, 1]$, due to the behaviour of $f$: when we consider the interval $[-1, 0]$, we see that $f$ is not integrable there, because the upper integral over that interval is 1 whereas the lower integral over that interval is 0.

** 51. (From MIT OpenCourseWare Math 18.100B Course) True or False? If $f_n : [a, b] \to \mathbb{R}$ is a sequence of almost everywhere continuous functions, and $f_n \to f$ converges uniformly, then the limit $f$ is almost everywhere continuous. (I wanted an explanation with the specific theorem numbers from Rudin.) **

**Answer: ** I kind of do not like how the term "almost everywhere" is used here ...

One thing that I am still rightfully confused about is whether it is true that $f$ is Riemann integrable if and only if $f$ has only a finite number of discontinuities. I may be able to puncture this argument by considering $f(x) = \frac{1}{x^2}$. We know that $f(x)$ is continuous at every point except $x = 0$, where $\lim_{x \to 0} f(x) = \infty$. Now, $f$ has only a finite number (1) of discontinuities on $[-a, b]$ (let $a$, $b$ be any positive numbers of your choosing), but we cannot integrate $\int_{x=-a}^{x=b}f(x)dx$. It will diverge to infinity (not a real number)!
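The divergence can be seen numerically; in this sketch the midpoint-sum helper and the sequence of cutoffs are illustrative choices. It approximates $\int_{\epsilon}^{1} x^{-2}\,dx$ and shows it growing like $1/\epsilon$ as $\epsilon \to 0$:

```python
# f(x) = 1/x^2 has a single bad point on [-a, b] (at x = 0), yet the
# integral near 0 diverges: the exact value on [eps, 1] is 1/eps - 1.
def midpoint_integral(f, lo, hi, n=100_000):
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: 1.0 / (x * x)
cutoffs = (1e-1, 1e-2, 1e-3, 1e-4)
vals = [midpoint_integral(f, eps, 1.0) for eps in cutoffs]
for eps, v in zip(cutoffs, vals):
    print(f"eps = {eps:.0e}: integral over [eps, 1] is roughly {v:.1f}")
```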

Sources for the Questions and Inspiration: