
Amaan M. answered 02/17/18
Hi Karen, there are actually a couple of ways to do this. The simplest, if you've seen lim sup and lim inf, is to note that for a bounded sequence, lim inf x_n and lim sup x_n are each the limit of some convergent subsequence; by hypothesis every such limit is I, so lim inf x_n = lim sup x_n = I, and therefore lim x_n = I.
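Written out, that argument is really one line: since lim inf and lim sup of a bounded sequence are subsequential limits, the hypothesis forces

```latex
\liminf_{n \to \infty} x_n \;=\; I \;=\; \limsup_{n \to \infty} x_n
\quad \Longrightarrow \quad
\lim_{n \to \infty} x_n = I .
```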
If you're not as familiar with these ideas, here's another approach. This isn't exactly a proof, but it sketches out what you need to show formally. In general, if you've got some set of sequences that all converge to the same point and are asked to prove that another sequence also converges to that point, it's usually easiest to argue by contradiction. So for the sake of contradiction, assume {x_n} doesn't converge to I. Then either it converges to some other number, or it doesn't converge at all.
In the first case, it's impossible for a sequence to converge to one point while a subsequence converges to another. Say the sequence converges to some x' ≠ I, and let d = |x' − I| > 0 be the distance between the two limits. Then there's some natural number N such that for all n > N, |x_n − x'| < d/3. Similarly, for the subsequence {x_{n_m}} there's some natural number M such that for all m > M, |x_{n_m} − I| < d/3. If you pick a subsequence index beyond both of these numbers, the triangle inequality shows this is impossible.
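Here's that triangle inequality step written out: take any subsequence index n_m with m > M and n_m > N; then

```latex
d \;=\; |x' - I|
\;\le\; |x' - x_{n_m}| + |x_{n_m} - I|
\;<\; \tfrac{d}{3} + \tfrac{d}{3}
\;=\; \tfrac{2d}{3},
```

which is impossible since d > 0.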
In the second case, suppose {x_n} is bounded but doesn't converge at all. Then it must have at least two convergent subsequences with different limits, which contradicts the assumption that every convergent subsequence goes to I. Here's the idea: since the sequence doesn't converge, its terms can't eventually all cluster within an arbitrarily small distance of a single point, so you can find some distance d1 > 0 and two infinite collections of terms that stay at least d1 apart from each other. Each collection is bounded, so by the Bolzano–Weierstrass theorem each contains a convergent subsequence, and the two limits end up at least d1 apart. Think of the peaks and troughs of a sine function (just an example, not really a proof): the peaks form a subsequence converging to 1 and the troughs a subsequence converging to −1, so the two limits are twice the amplitude apart, even though the sine itself doesn't converge.
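The sine example is easy to play with numerically. Here's a quick Python check (just an illustration of the idea, not part of the proof), using x_n = sin(nπ/2) so the peaks and troughs land exactly at 1 and −1:

```python
import math

# x_n = sin(n*pi/2) for n = 1..400: bounded, but it doesn't converge.
x = [math.sin(n * math.pi / 2) for n in range(1, 401)]

# Two convergent subsequences with different limits:
peaks   = x[0::4]   # terms with n = 1, 5, 9, ...  -> each is sin(pi/2)  = 1
troughs = x[2::4]   # terms with n = 3, 7, 11, ... -> each is sin(3pi/2) = -1

# Both subsequences sit (up to floating-point error) at their limits,
# which are 2 apart -- so the full sequence can't converge.
print(max(abs(p - 1) for p in peaks))    # prints a number near 0
print(max(abs(t + 1) for t in troughs))  # prints a number near 0
```

Any bounded sequence whose terms keep returning to two regions a fixed distance apart behaves the same way: Bolzano–Weierstrass hands you a convergent subsequence in each region.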
Hope this helps!