Bobosharif S. answered • 02/21/18


Let (x_n) ⊆ R², with x_n = (x_{1n}, x_{2n}) → (x_1, x_2) = x (that is, convergence coordinate by coordinate). Now Ax_n = (ax_{1n} + bx_{2n}, cx_{1n} + dx_{2n}) is again a vector in R², and we need to show that both of its coordinates converge. Since x_{1n} and x_{2n} converge and a, b are real numbers, ax_{1n} + bx_{2n} → ax_1 + bx_2, and likewise cx_{1n} + dx_{2n} → cx_1 + dx_2. Thus

Ax_n = (ax_{1n} + bx_{2n}, cx_{1n} + dx_{2n}) → (ax_1 + bx_2, cx_1 + dx_2) = Ax.

Bobosharif S.

tutor

1) You don't have to prove that x_{in} → x_i; it is an assumption here, and indeed you are using this fact.

2) Yes, indeed it is better to use Ax_n − Ax = A(x_n − x). As x_n → x, A(x_n − x) → 0, which is in principle the same as coordinate-wise convergence.

3) I'm not sure what you mean by k, k1 and k2.
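
The convergence A(x_n − x) → 0 in point 2) can be made fully explicit via the triangle inequality in the ℓ¹ norm. This estimate is a sketch added here, not part of the original comment:

```latex
\|A(x_n - x)\|_1
  = \bigl|a(x_{1n}-x_1) + b(x_{2n}-x_2)\bigr|
  + \bigl|c(x_{1n}-x_1) + d(x_{2n}-x_2)\bigr|
  \le (|a|+|c|)\,|x_{1n}-x_1| + (|b|+|d|)\,|x_{2n}-x_2|
  \;\longrightarrow\; 0.
```

Both terms on the right tend to 0 because x_{1n} → x_1 and x_{2n} → x_2 by assumption.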


02/21/18

Ashley H.

1. You may use x_i^{(n)} → x_i, i = 1, 2, without proof.

2. Realise that Ax_n − Ax = A(x_n − x).

3. It is easier to estimate ‖A(x_n − x)‖₁ than ‖A(x_n − x)‖₂. Look at the cover sheet for a relation between the two.

02/21/18
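
A small numerical sketch tying the pieces together: coordinates of Ax_n approach those of Ax, and the two norms of A(x_n − x) are comparable via the standard R² inequality ‖v‖₂ ≤ ‖v‖₁ ≤ √2·‖v‖₂ (presumably the cover-sheet relation hinted at above). The matrix entries, limit point, and sequence below are arbitrary illustrative choices, not from the original thread:

```python
import math

# Hypothetical 2x2 matrix A = [[a, b], [c, d]] and limit point x = (x1, x2).
a, b, c, d = 1.0, 2.0, 3.0, 4.0
x1, x2 = 0.5, -1.5

def A(u1, u2):
    """Apply the matrix A to the vector (u1, u2)."""
    return (a * u1 + b * u2, c * u1 + d * u2)

Ax = A(x1, x2)

for n in (10, 100, 1000):
    # x_n = x + (1/n, -1/n) converges to x coordinate-wise.
    x1n, x2n = x1 + 1.0 / n, x2 - 1.0 / n
    Axn = A(x1n, x2n)
    # v = A(x_n - x) = A x_n - A x shrinks to 0 in each coordinate ...
    v1, v2 = Axn[0] - Ax[0], Axn[1] - Ax[1]
    # ... and its two norms are comparable:
    n1 = abs(v1) + abs(v2)      # ||A(x_n - x)||_1
    n2 = math.hypot(v1, v2)     # ||A(x_n - x)||_2
    # On R^2: ||v||_2 <= ||v||_1 <= sqrt(2) * ||v||_2
    assert n2 <= n1 <= math.sqrt(2) * n2 + 1e-12
    print(n, n1, n2)
```

So bounding the easier ℓ¹ norm of A(x_n − x) also bounds its ℓ² norm, which is exactly why the hint suggests estimating ‖·‖₁.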