This is a summary of Honors Calculus (117 + 118), or, as some people call it, "Abstract Calculus". This course focuses on set theory, number theory, and logic.
* note this won't display properly on GitHub; I use a program called Typora with inline math enabled
- Shorthand
- Sets
- Logic
- What is a number? (It's not what you think it is)
- Induction
- Absolute values and Binomial Thm
- Intervals and bounds
- Real numbers
- Sequences and limits
- Functions (and notation)
- Continuity
- Inverse functions
- 1-sided limits
- Intermediate value theorem (IVT)
- Differentiation
- Extrema
- Rolle's theorem
- Mean value theorem (MVT aka Meme value theorem)
- First and second derivative test
- L'Hopital's Rule (Le hospital)
- Convex and concave
- Exponentials and logs
- Logarithmic differentiation
- Series
- Geometric Series
- Ratio, Root and Comparison Tests
- Dirichlet's and Leibniz Rule
- Rearranging Series
- Double Seq.
- Power Series
- Taylor Series
- Integration
- The Riemann Integral
- Misc Notes
- Resources
- Ex. $\to$ example
- Thm $\to$ theorem
- Def $\to$ definition
- s.t. $\to$ such that (in set-builder notation also written as `,` or `;` or `:`)
- wLOG $\to$ without loss of generality (i.e. similar to previous case, but I'm too lazy to do it)
- seq. $\to$ sequence
- diff. $\to$ differentiable
- RHS $\to$ right-hand side
- LHS $\to$ left-hand side
a set is a collection of elements belonging to the same group (it can have 0 elements, but then it would be the trivial or stupid set)
Ex.
- {1,2,3} = {2,1,3}
- {a,b}
- {even, odd}
- $\N$ = {1,2,3,4,5...}
- {} (trivial set)
If an element 'x' is part of a set 'A', it is described as $x\in A$

If 'x' is not in 'A', we write $x\notin A$
If elements of A are also present in B, then those shared elements 'C' are said to be the intersection: $C = A\cap B$
From Bowmen notes
* Most of the time, Russell's paradox will be introduced to scare any students still in the class.
==Optional Exercise: look into Russell's paradox because it is quite interesting==
There are 2 truth values: true (T, or tautology) and false (F, or contradiction)
In math, there are established rules for operations on truth values (the conventions were set arbitrarily, but they are important for consistency around the world)

Most of the time, a truth table is used to illustrate how operators affect the truth value of a statement
Some common ones are:
And ($\wedge$):

| $A\wedge B$ | A = T | A = F |
|---|---|---|
| B = T | T | F |
| B = F | F | F |
Or ($\vee$):

| $A\vee B$ | A = T | A = F |
|---|---|---|
| B = T | T | T |
| B = F | T | F |
Implication ($A\to B$):

| $A\to B$ | A = T | A = F |
|---|---|---|
| B = T | T | T |
| B = F | F | T |
* note the table below is a different way of displaying a truth table (important for chaining multiple operators with more than 2 variables)
| A | B | $A\iff B$ | $(A\to B)\wedge(B\to A)$ |
|---|---|---|---|
| T | T | T | T |
| T | F | F | F |
| F | T | F | F |
| F | F | T | T |
To chain multiple operators together, you can use an extended truth table that considers all the variables and all the possible cases (like the one above)

As seen from the table above, an iff is logically equivalent to (A $\to$ B) and (B $\to$ A); therefore, the easiest way to prove an iff is to prove both directions: use the premise A to derive B, then use the premise B to derive A
A very common proof technique is proof by contradiction: assume the opposite of the theorem (or of a fact known to be true) and find a contradiction in the reasoning
Ex.
$\sqrt{2}$ is not rational. Start of proof by contradiction:

Suppose for contradiction that $\sqrt{2}$ is rational, i.e. it can be expressed as $\frac{p}{q}$ in lowest terms, where $p\in\Z$ and $q\in\N$.

$$
\sqrt{2} = \frac{p}{q} \to 2 = \frac{p^2}{q^2} \to 2q^2=p^2
$$

* note: even numbers can be expressed as $2n$ and odd numbers as $2n+1$ for $n\in\Z$

Therefore $p$ must be even, because only an even number squared is even ($(2n)^2=4n^2$ is even and $(2n+1)^2 = 4n^2+4n+1$ is odd). Writing $p=2n$ gives $2q^2=4n^2$, i.e. $q^2=2n^2$, so $q$ is even too. Then $p$ and $q$ share the factor 2, contradicting lowest terms.
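* a quick numeric sanity check of the claim (my addition, not from the notes; `math.isqrt` returns the exact integer square root):

```python
import math

# p^2 = 2*q^2 has no integer solution p: for each q, 2*q^2 is never a
# perfect square, so no fraction p/q can square to exactly 2
for q in range(1, 1001):
    n = 2 * q * q
    assert math.isqrt(n) ** 2 != n
```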
For real numbers, the following must be true (assume $a,b,c\in\R$):
- Addition is associative
$a+(b+c)=(a+b)+c$
- Must have an additive identity
$a+0=0+a=a$
- Must have an additive inverse -a s.t.
$a+-a=-a+a=0$
- Follows additive commutativity
$a+b=b+a$
- Multiplication is associative
$a(bc)=(ab)c$
- There exists a multiplicative identity, which is not 0
$a\times 1= a$
- Multiplication distributes over addition
$a\times (b+c)=a\times b+a\times c$
- Every $a\ne 0$ has a multiplicative inverse $a^{-1}$, i.e.

$a\times a^{-1}=1$
- Follows multiplicative commutativity
$a\times b=b\times a$
- Trichotomy Law
For any a and b, one and only one of the following relations must hold:
$a<b,\ a=b,\ a>b$
- Closed under addition
$a>0\ and \ b>0\to a+b>0$
- Closed under multiplication
$a>0\ and \ b>0\to a\times b>0$
More rules to come
Also note the first lemma (a lemma is like a theorem, but arbitrarily defined to be a small theorem)
* lemmas or thm can be assumed
Steps
- Prove for the first element in the set (usually 1, or 0 if it is in the set)
- Suppose true for n
- Prove for n+1
==Notice==: This makes it so it holds true for the first tested element and for every subsequent element (like a domino effect)
Ex.
Gauss’ claim:

$$
1+2+...+n\equiv\sum_{i=1}^ni=\frac{n(n+1)}{2}
$$

Let $S$ be the set of $n$ for which the claim holds.

Step 1: Check $1\in S$: $1=\frac{1(1+1)}{2}=1$

Step 2: Suppose $k \in S$, $\therefore \sum_{i=1}^ki=\frac{k(k+1)}{2}$. Then prove $k+1\in S$:

$$
\sum_{i=1}^{k+1}i=1+2+3+...+k+(k+1) = \frac{k(k+1)}{2}+(k+1) =(k+1)(\frac{k}{2}+1) =\frac{(k+1)((k+1)+1)}{2}
$$

Hence $k+1\in S$, i.e. $k\in S \to k+1\in S$

* see Bowmen notes (starting from page 19) for more examples
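The base case and the inductive step can also be checked numerically (a sketch, not a proof; `gauss` is my own helper name):

```python
def gauss(n):
    # the closed form claimed above: 1 + 2 + ... + n = n(n+1)/2
    return n * (n + 1) // 2

assert gauss(1) == 1  # base case: 1 is in S
for k in range(1, 200):
    # inductive step: the formula at k plus (k+1) gives the formula at k+1
    assert gauss(k) + (k + 1) == gauss(k + 1)
    assert gauss(k) == sum(range(1, k + 1))
```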
Absolute value is defined to be the following:

$$
|x| = \begin{cases} x & x\ge 0 \\ -x & x<0 \end{cases}
$$
Properties of absolute values:
A1.
A2.
A3.
A4.
A5.
A6.
A7.
and
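* a sketch of the piecewise definition in Python, plus the triangle inequality (which is presumably among the properties A1-A7 above; `absolute` is my own helper, not from the notes):

```python
def absolute(x):
    # |x| = x if x >= 0, else -x
    return x if x >= 0 else -x

assert absolute(-3.5) == 3.5 and absolute(2) == 2
# triangle inequality: |a + b| <= |a| + |b|
for a, b in [(3, -7), (-2, -5), (4, 1), (0, -9)]:
    assert absolute(a + b) <= absolute(a) + absolute(b)
```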
Helpful Thm:
- $\binom{n}{k}=\binom{n}{n-k}$
- $\binom{n}{0}=\binom{n}{n}=1$
- $\binom{n}{1}=\binom{n}{n-1}=n$
- $\binom{n}{k-1}+\binom{n}{k}=\binom{n+1}{k}$
- $\sum^{n}_{k=0}\binom{n}{k}=2^n,\ \forall n\in W$ (can be proved via induction as an exercise)
Binomial Thm: $(a+b)^n=\sum_{k=0}^{n}\binom{n}{k}a^{n-k}b^k$
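These identities are easy to spot-check with Python's built-in `math.comb` (my addition, assuming the standard statement of the binomial theorem):

```python
from math import comb

n = 10
# Pascal's identity: C(n, k-1) + C(n, k) = C(n+1, k)
for k in range(1, n + 1):
    assert comb(n, k - 1) + comb(n, k) == comb(n + 1, k)
# row sum: the C(n, k) over all k add up to 2^n
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
# binomial theorem, checked at a = 3, b = 5
a, b = 3, 5
assert (a + b) ** n == sum(comb(n, k) * a ** (n - k) * b ** k for k in range(n + 1))
```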
Open and closed intervals are defined as the following, for $a,b\in\R$ and $a<b$:

$$
[a,b]=\{x:a\le x\le b\} \to closed\\
(a,b)=\{x:a< x< b\} \to open\\
[a,b)=\{x:a\le x< b\} \to relatively\ open\\
(a,b]=\{x:a< x\le b\} \to relatively\ open
$$

Infinite intervals:

$$
(-\infin , \infin) =\R \\
[a,\infin)=\{x:x\ge a\} \\
(-\infin, a)=\{x: x<a\}
$$
A real number b is an upper bound of a set $A\sub\R$ if $x\le b$ for every $x\in A$

If no such b exists, A is not bounded above

* Lower bound is similar (wLOG)
* Lower bound is similar (wLOG)
Supremum and Infimum (Sup and Inf)
b is the sup of A if it is the least upper bound: b is an upper bound of A, and $b\le c$ for any other upper bound c

If b is the sup of A and $b\in A$, then b is the max of A
* Infimum is similar (wLOG)
Max/min
Now, what the max and min functions are (they are self-explanatory; otherwise, look them up in the Python docs):
- max f $=\max\{f(x)|x\in A\}$ if the right-hand side exists
- min f $=\min\{f(x)|x\in A\}$ if the right-hand side exists
A real number is only defined if it satisfies all the rules laid out in chapter 3 and it follows the completeness axiom.
Completeness Axiom:
For every non-trivial subset of $\R$ that is bounded above, the sup exists (in $\R$)
- $\{\frac{p}{q}:p^2\le 2q^2,\ p\in \Z ,\ q\in \N \}$
- $[0, 1]$ has sup at 1
- $[0,1)$ has sup at 1
Lemma - Archimedean Property:
No real number is an upper bound for $\N$

Notes:

- $\N\sub \R$ (can be proved inductively)
A sequence of real numbers is a function (see chapter below) $a:\N\to\R$, written $(a_n)$

A sequence is improper if its limit is $\pm\infin$ (an improper limit)

* also note an extended real number system is $\R\cup\{\pm\infin\}$

**Rules on extended real number system**:
==Limits== (This is mega important)
Translation to English: the limit of the sequence as the index approaches infinity is L, if for every error ($\epsilon>0$) there is an index N s.t. all terms beyond N stay within $\epsilon$ of L
If that limit exists, it is convergent, otherwise, it is divergent (improper limit of the sequence)
If
Properties of limits
Let $a_n\to L$ and $b_n\to M$:

- If $c,d\in \R$ and $cL+dM$ is not of the undefined form $\infin-\infin$, then $ca_n +db_n \to cL+dM$
- If LM is not of the undefined form $0\times\pm\infin$, then $a_n b_n \to LM$
- If $M\ne 0$ and both L, M are finite, then $\frac{a_n}{b_n}\to\frac{L}{M}$ (defined for n large enough)
- If $L=\pm \infin$, then $\frac{1}{a_n}\to 0$ (defined for n large enough)
* look at page 36 of Bowmen notes to see proofs
Lemma
If $a_n \le b_n\ \forall\ n$ and $a_n\to L$, $b_n\to M$, then $L\le M$

Squeeze thm (or sandwich thm if and only if you are hungry)

If $a_n, c_n \to L$ and $a_n\le b_n\le c_n\ \forall n$, then $b_n \to L$

A sequence is bounded if $\exists M\in \R$ s.t. $|a_n|\le M\ \forall\ n$

Helpful example: $|\sin(x)|\le 1$. Why? Homework question
Thm:
A convergent sequence is bounded, but not the other way around:

(convergent $\to$ bounded)

(bounded $\not \to$ convergent)
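* a numeric illustration (my addition): $(-1)^n$ is bounded but never settles down, while $1/n$ is both convergent and, as the theorem says, bounded:

```python
# bounded but divergent: a_n = (-1)^n stays in [-1, 1] yet keeps oscillating
a = [(-1) ** n for n in range(1, 1001)]
assert all(abs(x) <= 1 for x in a)  # bounded by M = 1
assert a[-1] != a[-2]               # adjacent terms still differ: no limit

# convergent (to 0), and bounded: b_n = 1/n
b = [1 / n for n in range(1, 1001)]
assert all(abs(x) <= 1 for x in b)
assert abs(b[-1]) < 0.01            # terms are already close to the limit 0
```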
A sequence is increasing if $a_{n+1}\ge a_n$ for all n (decreasing if $a_{n+1}\le a_n$)

A sequence is monotone if it is either an increasing or a decreasing sequence (or both)

In monotone sequences, bounded implies convergent (to the sup if increasing, to the inf if decreasing)
Subsequences
Given a seq. $(a_n)$ and a strictly increasing sequence of natural numbers $n_1<n_2<...$, the seq. $(a_{n_k})$ is called a subsequence

Ex.

$(2k-1)^2=\{1,9,25,...\}$ is a subsequence of $n^2=\{1,4,9,16,...\}$

Thm.

convergent $\iff$ all subsequences convergent

Lemma:

a) $0\le c<1 \to c^n \le c< 1$
b) $c > 1 \to c^n \ge c > 1$, $\forall n\in \N$
c) $0\le c<1 \to c\le c^{1/n}<1$
d) $c>1 \to c\ge c^{1/n}>1$

Ratio test for seq.
$\lim_{n\to \infin} |\frac{a_{n+1}}{a_n}|=r$ where $r\in[0,1)$ means that $a_n$ is bounded and $\to 0$
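* e.g. for $a_n = 2^n/n!$ the ratios are $2/(n+1)\to 0<1$, so the terms tend to 0 (a numeric sketch, my addition):

```python
from math import factorial

# a_n = 2^n / n!; successive ratios are 2/(n+1), which drop below 1
a = [2 ** n / factorial(n) for n in range(1, 60)]
ratios = [a[i + 1] / a[i] for i in range(len(a) - 1)]
assert ratios[-1] < 1        # ratios eventually well below 1
assert abs(a[-1]) < 1e-10    # and the terms shrink toward 0
```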
Bolzano-Weierstrass Thm:
A bounded sequence has a convergent subsequence (see page 49 for proof)
Cauchy Criterion
A sequence $(a_n)$ is convergent $\iff$ for every $\epsilon>0$ there is $N\in\N$ s.t. $\forall n,m>N$: $|a_n-a_m|<\epsilon$
Revisit min/max
* This is not clear so watch this: https://www.youtube.com/watch?v=khypO8MQpdc
* Note:
$x_o$ is always a boundary point of$(x_o,\infin) \cap I\ and\ (-\infin, x_o)\cap I\ if\ x_o \in I^o$
A function $f: X\to Y$ is a rule that assigns to each element of X exactly one element of Y

The set X is called the domain of f, while Y is called the codomain

The range or image of f is the set $f(X)=\{f(x):x\in X\}\sub Y$
* think of a rule as a machine that takes X and through a defined process turns that X or all X to Y
==Note:==
$f: \R_{\ge0}\to \R$ defined by f(x) $=x^2$ is different from $g:\R\to\R$ also defined by g(x) $=x^2$ (the domains differ)
Trigonometry
opp = opposite
hyp = hypotenuse
adj = adjacent

$$
\sin x=\frac{opp}{hyp}\quad \cos x=\frac{adj}{hyp}\quad \tan x=\frac{opp}{adj}=\frac{\sin x}{\cos x}\\
\csc x=\frac{1}{\sin x}\quad \sec x=\frac{1}{\cos x}\quad \cot x=\frac{1}{\tan x}
$$

Pythagoras' Theorem:

$$
opp^2+adj^2=hyp^2 \to \sin^2x+\cos^2x=1\\
1+\cot^2x=\csc^2x\quad \tan^2x+1=\sec^2x
$$

Also, $|\sin x|\le1$ and $|\cos x|\le1$. This can be demonstrated with a unit circle
* Note, degrees will not be used, use radians
$\pi$ radians =$180^o$ ![figure7unit circle](pics/figure7unit circle.png)
Complementary Angle Identities:

$$
\cos \theta = \sin (\tfrac{\pi }{2} - \theta ),\quad \sin\theta= \cos(\tfrac{\pi }{2} - \theta )
$$

Supplementary Angle Identities:

$$
\sin(\pi-x)=\sin x\quad \cos(\pi-x)=-\cos x
$$

Symmetries:

$$
\sin(-x)=-\sin x\quad \cos(-x)=\cos x\\
\sin(x+2\pi)=\sin x\quad \cos(x+2\pi)=\cos x
$$

Special values:

$$
\sin(\pi/2)=\cos 0=1\\
\sin(\pi/4)=\cos(\pi/4)=1/\sqrt{2}\\
\sin(\pi/6)=\cos(\pi/3)=1/2\\
\sin(\pi/3)=\cos(\pi/6)=\sqrt{3}/2
$$

Angle subtraction:

$$
\cos(A-B)=\cos A\cos B+\sin A\sin B
$$

Double Angle Formulas:

$$
\sin2A=2\sin A\cos A\\
\cos2A=2\cos^2A-1=1-2\sin^2A\\
\tan2A=\frac{2\tan A}{1-\tan^2A}
$$

Also:

$$
\sin x\le x\le \tan x\ \forall x\in [0, \pi/2)\\
|\sin x|\le |x| \ \forall x\in\R
$$
Let I be an interval s.t. f is defined on it (except possibly at $x_o$ itself)

For any function with domain I, we say the real number L is the limit, $\lim_{x\to x_o}f(x)=L$, if for every $\epsilon>0$ there is $\delta>0$ s.t. $0<|x-x_o|<\delta \to |f(x)-L|<\epsilon$

$\lim_{x\to x_o}f(x)=\infin$ if for every M>0, there is $\delta>0$ s.t. $0<|x-x_o|<\delta \to f(x)>M$
Thm. Equivalence of Function and Sequence Limits:
$\lim_{x\to a }f(x)=L \iff f$ is defined near a and every sequence $x_n$ in the domain of f with $x_n\ne a$ but $\lim_{n\to \infin} x_n = a$ satisfies $\lim_{n\to \infin} f(x_n )= L$

* See page 68 for proof
Corollary:

Assume $\lim_{x\to a }f(x)=L$ and $\lim_{x\to a }g(x)=M$. Then:

- $\lim_{x\to a }(f(x)+g(x))=L+M$
- $\lim_{x\to a }f(x)g(x)=LM$
- $\lim_{x\to a }\frac{f(x)}{g(x)}=\frac{L}{M}$ if $M\ne0$

Cauchy Criterion for Functions
$\lim_{x\to a }f(x)$ exists$\iff$ for every$\epsilon >0, \exist \delta >0$ s.t. x, y $\in (a-\delta,a)\cup(a,a+\delta), $ their function values satisfy$|f(x)-f(y)|<\epsilon$
Simply, a function is continuous if when you draw it out, your pen/pencil does not leave the paper (sorry I have not defined what pen/pencil and paper is but that is for another course (possibly Bio 399))
Proper def:
Let $D \sub \R$. A point c is an interior point of D if it belongs to some open interval (a, b) entirely contained in D: c $\in$ (a, b) $\sub$ D

i.e. 1/10, 1/2, 3/4 are interior points of [0, 1], but 0 and 1 are not

However, all points in (0, 1) are interior points of (0, 1)
A function f is continuous at an interior point a of its domain if $$ \lim_{x\to a}f(x)=f(a) $$
f is continuous at a
$\iff$ for every$\epsilon >0 , \exist \delta>0\ s.t.\ |x-a|<\delta \to |f(x)-f(a)|<\epsilon$ Corollary:
a. If f and g are continuous at a, then f+g and fg are continuous at a and f/g continuous at a if
$g(a)\ne 0$ b. A rational function is continuous at all points of its domain
c. If g is continuous at a and f is continuous at g(a). Then f $\circ $ g (i.e. f(g)) is continuous at a.
Take $f:X\to Y$ bijective

Then f is invertible, and its inverse is written as $f^{-1}:Y\to X$ (so $f^{-1}(f(x))=x$)
Take
$f:X\to Y$
- f is injective (or 1-1 or one-one) if $f(x)=f(x_o)$ implies $x=x_o$
- f is called surjective (onto) if f(X)=Y (i.e. for every $y\in Y$ there is $x\in X\ s.t.\ f(x)=y$)
- f is bijective if it is both injective and surjective
Notes:
- f is injective, if for every $y\in Y$ the equation $f(x) = y$ has at most one solution (but may have none)
  - i.e. $\forall x, x_o:f(x_o)=f(x)\to x=x_o$
- f is surjective, if for every $y\in Y$ the equation $f(x) = y$ has at least one solution (which may not necessarily be unique)
  - i.e. $\forall y:\exist x:f(x)=y$
- f is bijective, if for every $y\in Y$ the equation $f(x) = y$ has a unique solution
  - i.e. $\forall y:\exist! x:f(x)=y$

Therefore, f is invertible $\iff$ f is bijective
Let f be continuous. If f is injective, f is strictly monotone on I; J = f(I) is an interval of the same type and $f^{-1}:J\to I $ is continuous
Likewise for limits from the left (wLOG)

If the limit above exists and equals the function value, f is said to be continuous from the right
If f is continuous on [a,b] and f(a)<0<f(b),

then there is $c\in (a,b)$ s.t. $f(c)=0$
Differentiation is a measure of the rate of change (or slope, but that's a bad word in AP cal) of a function
The idea is given by a rate of change formula:
$$
v(t) = \frac{\Delta x}{\Delta t}
$$
The exact velocity is when $\Delta t \to 0$
Proper def:
Let I be an interval s.t. f is defined on that interval (in particular f must be defined near $x_o$). f is differentiable at $x_o\in I$ if

$\lim_{x\to x_o}\frac{f(x)-f(x_o)}{x-x_o}=f'(x_o)$ is finite and exists

OR

$f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$ is finite and exists

OR

$\exist$ c=f'(x) s.t. $f(x+h)=f(x)+ch+r(h)$ and $\lim_{h\to 0}\frac{r(h)}{h}=0$

* notation can also be like the following: $\frac{df}{dx}(x_o)$ or $\frac{d}{dx}|_{x_o}$
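* the limit definition can be approximated numerically with a small h (a sketch, my addition; I use the symmetric difference quotient, a variant of the second definition above):

```python
def numeric_derivative(f, x, h=1e-6):
    # symmetric difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = x^3 has f'(x) = 3x^2, so f'(2) should be close to 12
assert abs(numeric_derivative(lambda x: x ** 3, 2.0) - 12.0) < 1e-5
```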
Useful:
- Any constant function is differentiable everywhere it is defined, and its derivative is 0
- Any linear function is differentiable everywhere: f(x)=mx+b, then f'($x_o$) = m
- f(x) = $x^n \to$ f'(x) $=nx^{n-1}$
- if f=|x|, then f' is defined everywhere except x=0
- In the previous function, if the domain is restricted to either $x\le0\ or\ x\ge0$, then f' is defined everywhere

Rules on diff.
- f+g diff at x $\to (f+g)'(x)=f'(x)+g'(x)$
- fg diff at x $\to (fg)'(x)=f'(x)g(x)+f(x)g'(x)$ (Product or Leibniz Rule)
- g(x) $\ne 0\ \forall x\in I \to \frac{f}{g}$ diff. and $(\frac{f}{g})'=\frac{f'g-fg'}{g^2}$ (Quotient Rule)
- f(g(x)) = h(x) $\to h'=f'(g)g'$ (h must be defined at the point, and f and g must be differentiable)

To see proof, go to page 85
* Note, any polynomial function is diff. everywhere
For any natural number n we say 𝑓 is n-times differentiable at x if:
- $f^{(n-1)}$ is defined on a relatively open interval containing x
- $f^{(n-1)}$ is differentiable at x
A function is called smooth if it is n times differentiable everywhere in its domain for every n
Therefore, know that all polynomial functions (and trig. functions) are smooth everywhere
Any rational function is also diff. everywhere in its domain
==Important Oversights==
If
Trig diffs:
- f(x)=sinx $\to f'(0)=\lim_{x\to 0}\frac{\sin x}{x}=1$ (see l'Hopital)
- f(x)=sinx $\to$ f' = cosx
- f(x)=cosx $\to$ f' = -sinx
- (tanx)' $=(\sec x)^2$
- (secx)' = secx tanx
- (cscx)' = -cscx cotx
- (cotx)' = $-(\csc x)^2$
Inverses
Chain Rule
Chain rule states that: $(f\circ g)'(x)=f'(g(x))g'(x)$

Let f be diff. on I. f has a:

- local max at x if for some $\delta>0$: $\forall x_o\in I\cap(x-\delta, x+\delta):f(x_o)\le f(x)$
- local min at x if for some $\delta>0$: $\forall x_o\in I\cap(x-\delta, x+\delta):f(x_o)\ge f(x)$
A local extremum exists at x if it is a local max or min at x
Global extremum is f(x)=sup{f(x)} or inf{f(x)}, therefore, global extremum may not be the same as local extremum
If f'(x)=0, then x is said to be a critical point (a local extremum can only occur at a critical point, but a critical point need not be an extremum)
if f is continuous on [a, b] and diff on at least (a,b), s.t. f(a)=f(b), then there is $c\in(a,b)$ s.t. $f'(c)=0$
This means that if the points a, b are equal on f, then in-between a and b, there is a least one point such that the derivative is 0
if f is continuous on [a, b] and diff on at least (a,b), then there is $c\in(a,b)$ s.t. $f'(c)=\frac{f(b)-f(a)}{b-a}$
To see the proof, go to page 11 of chpt 2 of the class notes or pg 94 of Bowmen
Corollary:
zero derivative means constant
Let f be continuous on I and diff on at least the interior of I
- f is monotone increasing $\iff$ f' $\ge 0$ on I
- f is monotone decreasing $\iff$ f' $\le 0$ on I
First Derivative Test
Let I be an interval, f continuous on I, and c an interior point of I

- If there is a relatively open subinterval $J \sub I$ with $f' \le 0$ on $J\cap (-\infin, c)$ and $f' \ge 0$ on $J \cap (c, \infin)$, then f has a local min at c
- Likewise for local max, but flip inequalities
Second Derivative Test
Let I be an interval, f continuous on I and twice diff at x, with f'(x) = 0:

- if f''(x) < 0, f has a local max at x
- if f''(x) > 0, f has a local min at x
- if f''(x) = 0, no conclusion (the test is inconclusive)
L'Hôpital's rule is useful for computing limits of quotients where the top and bottom are both indeterminate

Conditions for applying l'Hôpital's rule:

- If numerator and denominator both approach 0
- If numerator and denominator both approach $\pm \infin$
Thm:
If it passes the previous checks, you can
$\lim_{x\to a} \frac{f}{g}=L=\lim_{x\to a} \frac{f'}{g'}$
* See page 100 for proof
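* numeric sanity check (my addition) on the classic 0/0 case $\lim_{x\to0}\frac{\sin x}{x}$: the rule gives $\lim \frac{\cos x}{1}=\cos 0=1$:

```python
import math

# sin(x)/x approaches 1 as x -> 0, matching cos(0)/1 = 1 from l'Hopital
for x in (0.1, 0.01, 0.001):
    assert abs(math.sin(x) / x - 1.0) < x
assert math.cos(0.0) / 1.0 == 1.0
```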
Cauchy Mean Value Theorem
Let f, g be continuous on [a, b] and diff. on (a, b). There is $c\in (a,b)$ s.t. $(f(b)-f(a))g'(c)=(g(b)-g(a))f'(c)$
An inflection point occurs where f'' changes sign (in particular f'' = 0 there)

- f is convex $\iff$ f' is increasing on I (or f'' > 0) (e.g. $e^x$; prove as an exercise)
- f is concave $\iff$ f' is decreasing on I (or f'' < 0)
Notes:
Suppose f'(c) = 0 at some $c\in I$:

- f'' $\ge 0\ \forall x\in I$ then f has a global min at c
- f'' $\le 0\ \forall x\in I$ then f has a global max at c
Suppose f is continuous on I. Then f is one-to-one on I $\iff$ f is strictly monotone on I
More Inverses
The Horse Race Thm
Let I = [a, b] and f, g continuous on I, diff on (a, b)
If a. $f(a)\ge g(a)$ and b. $f' > g'$ on (a,b), then f(b) > g(b)
The unique function f with f' = f and f(0) = 1 is called the exponential function and often denoted exp. Its base is denoted e

**Note:** the exponential series E(x) converges for every x (and E(x) is absolutely convergent)

The inverse of exp is the natural logarithm function
Properties of logs
$\log(xy) = \log x + \log y$ $\log(x/y) = \log x - \log y$ $\log(x^r) = r \log x$
- ($e^x$)' = $e^x$
- (ln x)' = (log x)' = $\frac{1}{x}$
- ($x^x$)' = $x^x(\ln(x)+1)$
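* the $x^x$ formula (obtained by logarithmic differentiation) can be checked numerically (my addition; the difference-quotient helper is my own, not course notation):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # symmetric difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx x^x = x^x (ln x + 1), checked at x = 2
x = 2.0
expected = x ** x * (math.log(x) + 1)
assert abs(numeric_derivative(lambda t: t ** t, x) - expected) < 1e-4
```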
"A series is nothing but a very special form of sequence" - Kuttler
Take any sequence $(a_n)$; the associated series is the sequence of sums of its terms

**Note:

$$
\sum_{n=1}^{\infin}a_n := \lim_{n\to \infin} \sum_{i=1}^{n}a_i
$$

The series is convergent if the limit exists and is finite; otherwise, it is divergent

A partial sum $s_n = \sum_{i=1}^{n}a_i$ is the sum of the first n terms of the series

Remember, the series is defined as the limit of its partial sums

The harmonic series $\sum_{n=1}^{\infin}\frac{1}{n}$ is divergent
Def.
A series $\sum_{n=1}^{\infin}a_n$ is convergent $\iff$ for every $\epsilon>0$ there is $N_o \in \N$ s.t. $\forall n,m>N_o$: $|\sum_{k=m}^{n}a_k|<\epsilon$

A series $\sum_{n=1}^{\infin}a_n$ is absolutely convergent if the series $\sum_{n=1}^{\infin}|a_n|$ converges
If $a\ne 1$: $\sum_{n=0}^{N}a^n=\frac{a^{N+1}-1}{a-1}$

$\sum_{n=0}^{\infin}a^n$ converges $\iff$ $|a|<1$ (and then the limit is $\frac{1}{1-a}$)
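* a numeric check of the partial-sum formula and the limit $\frac{1}{1-a}$ for $|a|<1$ (my addition):

```python
# partial sums: sum_{n=0}^{N} a^n = (a^(N+1) - 1)/(a - 1) for a != 1
a, N = 0.5, 50
partial = sum(a ** n for n in range(N + 1))
assert abs(partial - (a ** (N + 1) - 1) / (a - 1)) < 1e-12
# for |a| < 1 the series converges to 1/(1 - a), here 1/(1 - 0.5) = 2
assert abs(partial - 2.0) < 1e-12
```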
Ratio Test
Take a series $\sum a_n$ with $a_n \ne 0$

Suppose A = $\lim_{n\to \infin} |\frac{a_{n+1}}{a_n}|$:

- $|A| <1$, the series converges (absolutely)
- $|A| =1$, no general statement
- $|A| >1$ or $A = \infin$, the series diverges
Root Test
Take a series $\sum a_n$ and let L = $\limsup_{n\to \infin} (|a_n|)^{1/n}$:

- L < 1, the series converges (absolutely)
- L > 1, the series diverges
- L = 1, no general statement can be made
Comparison Tests

Suppose $0\le a_n \le b_n\ \forall n$. If $\sum b_n$ converges, then $\sum a_n$ converges; if $\sum a_n$ diverges, then $\sum b_n$ diverges

**Note:

For a series with non-negative terms, if the partial sums are bounded, then the series is convergent
Dirichlet's Rule
Take a series $\sum a_n$ whose partial sums are bounded, and a sequence $b_n$ decreasing monotonically to 0. Then $\sum a_n b_n$ converges
Prove in chapter 3, page 6 of notes
Leibniz Rule
Let $b_n$ be a sequence decreasing monotonically to 0

Then the alternating series $\sum (-1)^n b_n$ converges

Proof: the partial sums of $\sum(-1)^n$ are bounded, so this follows from Dirichlet's Rule, QED.
Take S = $\sum_{n=1}^{\infin} a_n$. A rearrangement of S is the series $\sum a_{\sigma(n)}$ for a bijection $\sigma:\N\to\N$
Also:
If series S is absolutely convergent, then any rearrangement is as well. Also the limits will be the same.
If
Rearrangement Thm.
Take the same S as above and let it be convergent but not absolutely convergent. For any $x\in\R\cup\{\pm\infin\}$, there is a rearrangement of S converging to x
*See page 9 in chapter 3 review
A sum of two series is another series. A product of two series is an infinite series built from all pairwise products of their terms; this can be represented by a matrix

The product series converges absolutely if both original series converge absolutely (and its limit is the product of the limits of the two series)
A double seq. is a function $a: \N\times\N\to\R$, written $a_{m,n}$
As in the case of regular sequences, one can add double sequences and multiply them by constants, both in the obvious ways (so they form a vector space)
Let
Let
Lemma:
A monotone and bounded double sequence converges
See chapter 3, page 15 for proof
Lemma 2:
$\sum_{m,n=1}^{\infin}a_{m,n} = \sum_{m=1}^{\infin}\sum_{n=1}^{\infin}a_{m,n}$ (likewise if row and column change,$\therefore \ \sum_{n=1}^{\infin}\sum_{m=1}^{\infin}a_{m,n}= \sum_{m=1}^{\infin}\sum_{n=1}^{\infin}a_{m,n}$ )
A formal power series centered at c is a series of the form $\sum_{n=0}^{\infin}a_n(x-c)^n$
**Note that it is possible that a formal power series doesn’t converge anywhere except c.
Let $L:= \limsup_{n\to \infin} (|a_n|)^{1/n}$. Then the series converges absolutely when $|x-c|<1/L$ and diverges when $|x-c|>1/L$

For a formal power series f, its radius of convergence is defined as 1/L. It is $\infin$ when L = 0 and 0 when L = $\infin$
Examples

- radius of f(x) = exp(x) is $\infin$
- radius of the geometric series $\sum_{n=0}^{\infin}x^n$ is 1
- radius of the series $\sum_{n=0}^{\infin}n!x^n$ is 0

More examples on page 17 of notes
Fact:
For $a_n \ge 0$ and $k\in \N$:

$\limsup_{n\to \infin} (a_{n\pm k})^{1/n} = \limsup_{n\to \infin} (a_{n})^{1/n}$
Shifted series have the same radius of convergence
Also, for a formal power series centered at c with radius R > 0, f is continuous and diff. at every point of (c-R, c+R)
Lemma
If $f=\sum_{n=0}^{\infin}a_n x^n$, then $D(f):=\sum_{n=0}^{\infin}(n+1)a_{n+1} x^n$ is called the formal derivative of f

Also f is smooth on (c-R, c+R), $f^{(n)}$ is again a power series, namely $D^n(f)$, and $a_n=\frac{1}{n!}D^n(f)(c)$
Let I be an open interval, and f a function defined on I. We say that f is analytic at $x_o\in I$ if f agrees with a convergent power series centered at $x_o$ on some neighborhood of $x_o$
Let f, g be 2 power series centered at c, both convergent on the same nonempty interval I containing c. If f(x) = g(x) on I, then f = g (coefficient by coefficient)
Let f(x), g(x) be power series centered at c, convergent on interval I = (c-R, c+R). Let
A Taylor Series is like an approximation to the original. (Check out 3Blue1Brown)
General Rolle’s Theorem:
Take f n-times diff. on I. If f vanishes at n+1 distinct points of I, then $f^{(n)}$ vanishes at some point of I
Thm.
Suppose f is n times continuously diff. on I = [a,b] and $f^{(n+1)}$ exists on at least (a,b). For every $u\in [a,b)$ there is d strictly between u and c s.t.

$$
f(u)-P_{f,n,c}(u)=\frac{(u-c)^{n+1}}{(n+1)!}f^{(n+1)}(d)
$$

For n = 0, this reduces to the MVT
The error term ($\frac{(u-c)^{n+1}}{(n+1)!}f^{n+1}(d)$) is often called the Lagrange remainder
The remainder is small for u close to c
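* a numeric illustration (my addition, using exp centered at c = 0, whose standard Taylor coefficients are $1/k!$): the actual error sits under the Lagrange bound:

```python
from math import exp, factorial

def taylor_exp(u, n):
    # degree-n Taylor polynomial of exp centered at c = 0
    return sum(u ** k / factorial(k) for k in range(n + 1))

u, n = 1.0, 8
error = abs(exp(u) - taylor_exp(u, n))
# Lagrange remainder bound: |u|^(n+1)/(n+1)! times a bound on exp near u
bound = abs(u) ** (n + 1) / factorial(n + 1) * exp(abs(u))
assert error <= bound
assert error < 1e-5
```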
Take f in I, let
Also known as the antiderivative because it is the inverse of differentiation. It is the analogue of an infinite sum.
Def.
note $F' = f$ from now on

Take f in interval I. Then the function $F:I\to \R$ with $F'=f$ is called the antiderivative or indefinite integral of f

Take f, g on interval $I\sub \R$ and suppose F, G exist. For any a,b $\in \R$:

$\int (af+bg)=a\int f+b\int g$

Integration by Parts

$\int Fg=FG-\int fG$

Substitution Rule

Take function h with antiderivative H on J, s.t. $G(I)\sub J$. Then $\int (h\circ G)g=H\circ G$ (**Note:** g = G')
Partitions
A partition P of I = [a,b] is a finite ordered seq.

Partition P and its elements are denoted like the following: $x_1<x_2<...$. Partitions of I can be partially ordered by putting $P\le Q$ when every point of P is also a point of Q (Q refines P)

If P, Q are any two partitions of I, then $P\cup Q$ is a partition refining both
Riemann Sums
Def.
Let I = [a,b] and P = $x_1<x_2<...<x_n\in \Pi (I)$ be any partition of size $n\ge 0$. A tag vector for P is an element y = $(y_o, y_1,...,y_n)\in \R ^{n+1}$ s.t.

$$
a =x_o\le y_o \le x_1 \le y_1 \le ... \le x_n \le y_n \le x_{n+1} =b
$$

OR $y_i \in [x_i,x_{i+1}]$. We write T(P) for the set of all tag vectors

For $f: I\to \R$, $P\in \Pi(I)$, and $y\in T(P)$, we define the corresponding Riemann sum:

$$
S(P,y,f) := \sum_{i=0}^{|P|}f(y_i)(x_{i+1}-x_i)
$$
A Riemann seq. for f is a seq. of the form $S(P_n,y_n,f)$ where $\lim_{n\to \infin}m(P_n)=0$ (m(P) denotes the mesh of the partition, its largest gap)
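* a sketch (my addition): Riemann sums of $f(x)=x^2$ on $[0,1]$ with uniform partitions and midpoint tags approach the integral $\frac{1}{3}$ as the mesh shrinks:

```python
def riemann_sum(f, a, b, n):
    # uniform partition of [a, b] into n pieces with midpoint tags
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]            # partition points
    tags = [(xs[i] + xs[i + 1]) / 2 for i in range(n)]  # midpoint tag vector
    return sum(f(y) * h for y in tags)

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
assert abs(approx - 1 / 3) < 1e-6
```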
Riemann Integral
Take f:[a,b]$\to \R$ where a<b. f is called integrable if every Riemann seq. for f converges
Lemma:
Suppose f is integrable on [a,b]; then all Riemann seq. have the same limit
*see page 5 chapter 4 for proof
Def.
Suppose f is integrable on [a,b]. The Riemann integral of f on [a,b] is defined to be the common limit of its Riemann seq., written:

$$
\int_a^bf\ \ or\ \ \int_a^bf\ dx
$$
Fundamental Theorem of Calculus (Part I)
Let f, F be functions on [a,b] s.t.
- f is integrable or conti.
- f agrees with F' on at least (a,b)
Then $\int_a^b f = F(b)-F(a)$
Def.
Take I interval and f on I. f is called uniformly continuous if $\forall\ \epsilon >0$ there is $\delta>0$ s.t. $\forall x,y\in I$ with $|x-y|<\delta$: $|f(x)-f(y)|<\epsilon$

Uniformly continuous functions are obviously continuous. The converse is not always true, but it is true if the interval is closed and bounded
Thm.
Take I = [a,b], a closed bounded interval. If f is continuous on I, then f is uniformly conti.
**Note:
The theorem does not make any statement about cases where F' is not integrable. There are functions with non-integrable derivatives. Conversely, even if a function is integrable it need not have an antiderivative. This is an unlimited source of mistakes
If f is integrable on [a,b] and has antiderivative F, it can be written as $\int_a^b f = F(b)-F(a)$
Linearity of Integration
Suppose f,g:[a,b]$\to \R$ are integrable; then cf+dg is integrable for constants c, d:

$$
\int_a^b(cf(x)+dg(x))dx=c\int^b_af(x)dx+d\int_a^bf(x)dx
$$

Integrable functions are bounded:

$$
\mathscr{R}[a,b]\sub \mathscr{B}[a,b]
$$

i.e. every integrable function on [a,b] is bounded
See page 8 on chapter 4 class notes for proof
Useful def:
When in doubt, write "clearly...", even in multiple choice, and especially for T or F questions.
hand wavy stuff:

- $\frac{x}{\infin}\to0$ if $x\ne \pm \infin$
- $\frac{x}{0}\to\pm\infin$ if $x\ne 0$ (sign depending on x > 0 or x < 0)
- $\frac{\pm\infin}{x}\to\pm\infin$ if $x\ne \pm \infin$
$\lim_{x\to \infin} x^n = \lim_{x\to 0} x^{1/n}$
- school notes
- Bowmen notes
My school had an extraordinary honors calculus year. The main prof left by second semester and a German prof came in to sub, but, because of his Germanicness (he literally goes off of a German textbook), it was difficult to follow his sometimes rigorous and sometimes not rigorous (he called them trivial) proofs.
Also, at my university, CMPUT 272 is trivially similar to 117.
The rules for copy and distributing this project licence are outlined in the licence.txt file.
This project is under an MIT licence
Dedicated to Dr. Terry Gannon who definitely did not abandon the class
Contribute if and only if you know what you are doing (and optionally, it would help if you were Germanic or fluent in Germanics)
* If someone wants to convert this to LaTeX, go ahead and open a pull request (it must have an index or contents page with links). I tried with pandoc, but it was not as complete as I hoped, so if anyone wants to do cleanup, go ahead; I can also just export as PDF.