Linearly dependent and linearly independent vectors. Basis of vectors. Affine coordinate system

Let there be a collection of vectors a1, a2, …, am in an n-dimensional arithmetic space.

Definition 2.1. A set of vectors a1, a2, …, am is called a linearly independent system of vectors if an equality of the form

α1a1 + α2a2 + … + αmam = 0   (2.1)

holds only for zero values of the numeric parameters α1 = α2 = … = αm = 0.

If equality (2.1) can be satisfied with at least one of the coefficients different from zero, then such a system of vectors is called linearly dependent.

Example 2.1. Check the linear independence of the vectors a1, a2, …, am.

Solution. Compose an equality of the form (2.1):

α1a1 + α2a2 + … + αmam = 0.

The left side of this expression can become zero only if the condition α1 = α2 = … = αm = 0 is met, which means that the system is linearly independent.

Example 2.2. Are the vectors b1, b2, …, bm linearly independent?

Solution. It is easy to check that equality (2.1) holds for values of the coefficients that are not all zero. This means that this system of vectors is linearly dependent.

Theorem 2.1. If a system of vectors is linearly dependent, then at least one vector of this system can be represented as a linear combination (or superposition) of the remaining vectors of the system.

Proof. Assume that the system of vectors a1, a2, …, am is linearly dependent. Then, by definition, there is a set of numbers α1, α2, …, αm, among which at least one is different from zero, for which equality (2.1) holds:

α1a1 + α2a2 + … + αmam = 0.

Without loss of generality, assume that the non-zero coefficient is α1, that is, α1 ≠ 0. Then the last equality can be divided by α1 and the vector a1 expressed:

a1 = –(α2/α1)a2 – (α3/α1)a3 – … – (αm/α1)am.

Thus, the vector a1 is represented as a superposition of the vectors a2, …, am. Theorem 2.1 is proven.

Corollary. If a1, a2, …, am is a set of linearly independent vectors, then not a single vector from this set can be expressed in terms of the others.

Theorem 2.2. If the system of vectors contains a zero vector, then such a system will necessarily be linearly dependent.

Proof. Let one of the vectors, say a1, be the zero vector, that is, a1 = 0.

Choose the constants αj (j = 1, …, m) in the following way:

α1 = 1, αj = 0 (j = 2, …, m).

With this choice, equality (2.1) is satisfied. The first term on the left is equal to zero because a1 is the zero vector. The remaining terms become zero when multiplied by the zero constants. Thus,

α1a1 + α2a2 + … + αmam = 0

with α1 = 1 ≠ 0, which means the vectors are linearly dependent. Theorem 2.2 is proven.

The next question we have to answer is: what is the largest number of vectors that can form a linearly independent system in n-dimensional arithmetic space? In paragraph 2.1, the natural basis (1.4) was considered:

e1 = (1, 0, …, 0), e2 = (0, 1, …, 0), …, en = (0, 0, …, 1).

It was established that an arbitrary vector a of n-dimensional space is a linear combination of the natural basis vectors, that is, an arbitrary vector is expressed in the natural basis as

a = a1e1 + a2e2 + … + anen,   (2.2)

where a1, a2, …, an are the coordinates of the vector, which are some numbers. Then the equality

α1e1 + α2e2 + … + αnen = 0

is possible only for α1 = α2 = … = αn = 0, and therefore the vectors of the natural basis form a linearly independent system. If we add an arbitrary vector a to this system, then by the corollary of Theorem 2.1 the system will be dependent, since the vector a is expressed in terms of the vectors e1, …, en according to formula (2.2).

This example shows that in n-dimensional arithmetic space there exist systems consisting of n linearly independent vectors, and if we add at least one more vector to such a system, we obtain a system of linearly dependent vectors. Let us prove that if the number of vectors exceeds the dimension of the space, then they are linearly dependent.

Theorem 2.3. In n-dimensional arithmetic space there is no system consisting of more than n linearly independent vectors.

Proof. Consider arbitrary n-dimensional vectors a1, a2, …, am:

a1 = (a11, a12, …, a1n),
a2 = (a21, a22, …, a2n),
………………………
am = (am1, am2, …, amn).   (2.3)

Let m > n. Compose a linear combination of the vectors (2.3) and equate it to zero:

α1a1 + α2a2 + … + αmam = 0.   (2.4)

Vector equality (2.4) is equivalent to n scalar equalities for the coordinates of the vectors a1, …, am:

α1a11 + α2a21 + … + αmam1 = 0,
α1a12 + α2a22 + … + αmam2 = 0,
………………………
α1a1n + α2a2n + … + αmamn = 0.   (2.5)

These equalities form a system of n homogeneous equations with m unknowns α1, …, αm. Since the number of unknowns is greater than the number of equations (m > n), then by the corollary of Theorem 9.3 of Section 1 the homogeneous system (2.5) has a nonzero solution. Consequently, equality (2.4) holds for some values α1, …, αm, among which not all are equal to zero, which means that the system of vectors (2.3) is linearly dependent. Theorem 2.3 is proven.

Corollary. In n-dimensional space there exist systems consisting of n linearly independent vectors, and any system containing more than n vectors is linearly dependent.

Definition 2.2. A system of n linearly independent vectors is called a basis of the space if any vector of the space can be expressed as a linear combination of these linearly independent vectors.
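As a numerical illustration of Definitions 2.1 and 2.2, a system of vectors can be checked by stacking the vectors into a matrix and comparing its rank with the number of vectors. A minimal sketch in NumPy (the sample vectors are illustrative, not taken from the examples above):

```python
import numpy as np

def is_linearly_independent(vectors):
    """Vectors are independent iff the rank of the matrix
    they form equals the number of vectors."""
    m = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(m) == len(vectors)

# Illustrative vectors:
print(is_linearly_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True: natural basis
print(is_linearly_independent([[1, 2, 3], [2, 4, 6]]))             # False: second = 2 * first
```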



2.3. Linear vector transformation

Consider two vectors x and y of n-dimensional arithmetic space.

Definition 3.1. If to each vector x a vector y from the same space is assigned, then we say that a transformation of the n-dimensional arithmetic space is given.

We will denote this transformation by A and call the vector y the image of the vector x. We can write the equality

y = A(x).   (3.1)

Definition 3.2. Transformation (3.1) is called linear if it satisfies the following properties:

A(x1 + x2) = A(x1) + A(x2),   (3.2)

A(λx) = λA(x),   (3.3)

where λ is an arbitrary scalar (number).

Let us define transformation (3.1) in coordinate form. Let the coordinates of the vectors x and y be related by the dependence

y1 = a11x1 + a12x2 + … + a1nxn,
y2 = a21x1 + a22x2 + … + a2nxn,
………………………
yn = an1x1 + an2x2 + … + annxn.   (3.4)

Formulas (3.4) define transformation (3.1) in coordinate form. The coefficients aij (i, j = 1, …, n) of the system of equalities (3.4) can be arranged in a matrix

A = ( a11 a12 … a1n )
    ( a21 a22 … a2n )
    ( ……………………… )
    ( an1 an2 … ann ),

called the matrix of transformation (3.1).

Let us introduce the column vectors

X = (x1, x2, …, xn)ᵀ, Y = (y1, y2, …, yn)ᵀ,

whose elements are the coordinates of the vectors x and y, respectively. We will henceforth simply call these column vectors "vectors".

Then transformation (3.4) can be written in matrix form:

Y = AX.   (3.5)

Transformation (3.5) is linear due to the properties of arithmetic operations on matrices.

Let us consider a transformation whose image is the zero vector. In matrix form this transformation looks like

AX = 0,   (3.6)

and in coordinate form it represents a system of linear homogeneous equations:

a11x1 + a12x2 + … + a1nxn = 0,
………………………
an1x1 + an2x2 + … + annxn = 0.   (3.7)

Definition 3.3. A linear transformation is called non-degenerate (non-singular) if the determinant of its matrix is not equal to zero, that is, det A ≠ 0. If the determinant vanishes, the transformation is called degenerate.

It is known that system (3.7) has the trivial (obvious) zero solution. This solution is unique if the determinant of the matrix is not zero.

Non-zero solutions of system (3.7) can appear if the linear transformation is degenerate, that is, if the determinant of the matrix is ​​zero.

Definition 3.4. The rank of transformation (3.5) is the rank of the transformation matrix.

We can say that this number is also equal to the number of linearly independent rows of the matrix A.

Let us turn to the geometric interpretation of linear transformation (3.5).

Example 3.1. Let the linear transformation matrix be diagonal,

A = ( k 0 )
    ( 0 m ),  k ≠ 0, m ≠ 0.

Take an arbitrary vector X = (x1, x2)ᵀ and find its image:

Y = AX = (k·x1, m·x2)ᵀ.

If k ≠ m, then the vector will change both length and direction (Fig. 1).

If k = m, then we get the image

Y = (k·x1, k·x2)ᵀ = kX,

that is, the vector y = kx, which means that it will only change the length, but will not change the direction (Fig. 2).

Example 3.2. Let A = –E, where E is the identity matrix, and let X be an arbitrary vector. Let's find the image:

Y = AX = –X,

that is, y = –x.

As a result of the transformation the vector changed its direction to the opposite, while the length of the vector was preserved (Fig. 3).

Example 3.3. Consider the identity matrix E as the matrix of a linear transformation. It is easy to show that in this case the image of a vector completely coincides with the vector itself (Fig. 4). Indeed,

Y = EX = X.

We can say that a linear transformation generally changes the original vector both in length and in direction. However, in some cases there are matrices that change the vector only in direction (Example 3.2) or only in length (Example 3.1, the case k = m).
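The behaviour described in Examples 3.1 to 3.3 is easy to reproduce numerically. A short sketch, assuming an illustrative vector and the diagonal, negated-identity and identity matrices discussed above:

```python
import numpy as np

x = np.array([1.0, 2.0])                 # illustrative vector

A1 = np.diag([2.0, 3.0])                 # unequal scaling: length and direction change
A2 = -np.eye(2)                          # Example 3.2: direction reversed, length kept
A3 = np.eye(2)                           # Example 3.3: image coincides with the vector

for A in (A1, A2, A3):
    y = A @ x
    print(y, np.linalg.norm(y))          # image and its length
```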

It should be noted that all vectors lying on the same line through the origin form a system of linearly dependent vectors.

Let us return to linear transformation (3.5)

Y = AX

and consider the collection of vectors X whose image is the zero vector, so that AX = 0.

Definition 3.5. The set of vectors that are solutions of the equation AX = 0 forms a subspace of the n-dimensional arithmetic space and is called the kernel of the linear transformation.

Definition 3.6. The defect of a linear transformation is the dimension of the kernel of this transformation, that is, the largest number of linearly independent vectors satisfying the equation AX = 0.

Since the rank of a linear transformation is the rank of its matrix, we can state the following about the defect: the defect equals the difference n – r, where n is the order of the matrix and r is its rank.

If the rank of the matrix of linear transformation (3.5) is found by the Gaussian method, then the rank coincides with the number of non-zero elements on the main diagonal of the reduced matrix, and the defect is determined by the number of zero rows.

If the linear transformation is non-degenerate, that is, det A ≠ 0, then its defect is zero, since the kernel consists of the zero vector alone.

If the linear transformation is degenerate, det A = 0, then system (3.6) has solutions other than the zero one, and the defect in this case is different from zero.
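A sketch of how the rank, the defect and a basis of the kernel might be computed in NumPy. It uses the SVD rather than the Gaussian elimination described above, and the matrix is an illustrative assumption:

```python
import numpy as np

def kernel_basis(A, tol=1e-10):
    """Return rank, defect, and an orthonormal basis of {X : AX = 0}."""
    A = np.asarray(A, dtype=float)
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    defect = A.shape[1] - rank
    return rank, defect, vt[rank:].T   # rows of V^T beyond the rank span the kernel

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])             # degenerate: det A = 0
rank, defect, N = kernel_basis(A)
print(rank, defect)                    # 1 1
print(A @ N)                           # columns of N satisfy AX = 0 (up to round-off)
```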

Of particular interest are transformations that, while changing the length, do not change the direction of the vector. More precisely, they leave the vector on the line containing the original vector, provided that the line passes through the origin. Such transformations will be discussed in the next paragraph 2.4.

Vectors, their properties and actions with them

Vectors, actions with vectors, linear vector space.

A vector is an ordered collection of a finite number of real numbers.

Actions (a short code sketch follows this list):

1. Multiplication of a vector by a number: λx = (λx1, λx2, …, λxn). For example, 3·(3, 4, 0, 7) = (9, 12, 0, 21).

2. Addition of vectors (belonging to the same vector space): x + y = (x1 + y1, x2 + y2, …, xn + yn).

3. The zero vector 0 = (0, 0, …, 0) ∈ Eⁿ (the n-dimensional linear space) satisfies x + 0 = x.
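These three operations translate directly into code. A minimal sketch over plain Python tuples (the helper names `scale` and `add` are ours, not standard):

```python
def scale(lam, x):
    """Multiplication of a vector by a number."""
    return tuple(lam * xi for xi in x)

def add(x, y):
    """Component-wise addition of vectors from the same space."""
    return tuple(xi + yi for xi, yi in zip(x, y))

zero = (0, 0, 0, 0)                    # the zero vector of E^4
print(scale(3, (3, 4, 0, 7)))          # (9, 12, 0, 21)
print(add((3, 4, 0, 7), zero))         # (3, 4, 0, 7): x + 0 = x
```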

Theorem. For a system of vectors of an n-dimensional linear space to be linearly dependent, it is necessary and sufficient that one of the vectors be a linear combination of the others.

Theorem. Any set of n + 1 vectors of an n-dimensional linear space is linearly dependent.

Addition of vectors, multiplication of vectors by numbers. Subtraction of vectors.

The sum of two vectors a and b is the vector a + b directed from the beginning of the vector a to the end of the vector b, provided that the beginning of b coincides with the end of a. If vectors are given by their expansions in basis unit vectors, then when adding vectors their corresponding coordinates are added.

Let's consider this using the example of a Cartesian coordinate system. Let

a = ax·i + ay·j, b = bx·i + by·j.

Let us show that

a + b = (ax + bx)·i + (ay + by)·j.

From Figure 3 it is clear that this is exactly how the vectors add geometrically.

The sum of any finite number of vectors can be found using the polygon rule (Fig. 4): to construct the sum of a finite number of vectors, it is enough to combine the beginning of each subsequent vector with the end of the previous one and construct a vector connecting the beginning of the first vector with the end of the last.

Properties of the vector addition operation:

a + b = b + a, (a + b) + c = a + (b + c),
m(a + b) = ma + mb, (m + n)a = ma + na, m(na) = (mn)a.

In these expressions m, n are numbers.

The difference a – b of two vectors is the vector a + (–b). The second term, –b, is the vector opposite to b in direction but equal to it in length.

Thus, the operation of subtracting vectors is replaced by an addition operation:

a – b = a + (–b).

A vector whose beginning is at the origin and whose end is at the point A(x1, y1, z1) is called the radius vector of the point A and is denoted simply r. Since its coordinates coincide with the coordinates of the point A, its expansion in unit vectors has the form

r = x1·i + y1·j + z1·k.

A vector that starts at the point A(x1, y1, z1) and ends at the point B(x2, y2, z2) can be written as

AB = r2 – r1,

where r2 is the radius vector of the point B and r1 is the radius vector of the point A.

Therefore, the expansion of the vector AB in unit vectors has the form

AB = (x2 – x1)·i + (y2 – y1)·j + (z2 – z1)·k.

Its length is equal to the distance between the points A and B:

|AB| = √((x2 – x1)² + (y2 – y1)² + (z2 – z1)²).
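A short sketch of these formulas in NumPy, with illustrative point coordinates: the vector AB is the difference of the radius vectors, and its norm is the distance between the points.

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])   # radius vector r1 of point A
B = np.array([4.0, 6.0, 3.0])   # radius vector r2 of point B

AB = B - A                      # AB = r2 - r1
length = np.linalg.norm(AB)     # distance between A and B
print(AB, length)               # [3. 4. 0.] 5.0
```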

MULTIPLICATION

In the case of a plane problem, the product of the vector a = (ax; ay) by the number b is found by the formula

b·a = (ax·b; ay·b).

Example 1. Find the product of the vector a = (1; 2) by 3.

3a = (3·1; 3·2) = (3; 6).

In the case of a spatial problem, the product of the vector a = (ax; ay; az) by the number b is found by the formula

b·a = (ax·b; ay·b; az·b).

Example 2. Find the product of the vector a = (1; 2; –5) by 2.

2a = (2·1; 2·2; 2·(–5)) = (2; 4; –10).

The dot product of vectors a and b is

a·b = |a|·|b|·cos φ,

where φ is the angle between the vectors a and b; if a = 0 or b = 0, then a·b = 0.

From the definition of the scalar product it follows that

a·b = |a|·pr_a b = |b|·pr_b a,

where, for example, pr_a b is the magnitude of the projection of the vector b onto the direction of the vector a.

Scalar square of a vector:

a·a = a² = |a|².

Properties of the dot product:

a·b = b·a, (λa)·b = λ(a·b), a·(b + c) = a·b + a·c, a·a ≥ 0 (with a·a = 0 only for a = 0).

Dot product in coordinates:

if a = (ax; ay; az) and b = (bx; by; bz), then

a·b = ax·bx + ay·by + az·bz.

Angle between vectors

The angle between vectors is the angle between the directions of these vectors (the smallest angle). It is found from the dot product:

cos φ = a·b / (|a|·|b|).
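A sketch of the angle computation from the coordinate formula above (vectors are illustrative; the `clip` guards against round-off pushing cos φ slightly outside [−1, 1]):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

cos_phi = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
phi = np.degrees(np.arccos(np.clip(cos_phi, -1.0, 1.0)))
print(phi)  # 45.0 degrees
```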

The cross product (vector product of two vectors) is a pseudovector perpendicular to the plane spanned by the two factors; it is the result of the binary operation "vector multiplication" on vectors in three-dimensional Euclidean space. The product is neither commutative nor associative (it is anticommutative) and is different from the dot product of vectors. In many engineering and physics problems one needs to construct a vector perpendicular to two existing ones, and the vector product provides this possibility. The cross product is also useful for "measuring" the perpendicularity of vectors: the length of the cross product of two vectors equals the product of their lengths if they are perpendicular, and decreases to zero if the vectors are parallel or antiparallel.

The cross product is defined only in three-dimensional and seven-dimensional spaces. The result of a vector product, like a scalar product, depends on the metric of Euclidean space.

Unlike the formula for computing the scalar product from coordinates in a three-dimensional rectangular coordinate system, the formula for the cross product depends on the orientation of the rectangular coordinate system, in other words, on its "chirality".
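The stated properties, perpendicularity to both factors, anticommutativity, and vanishing for parallel vectors, can be checked numerically. A minimal sketch with illustrative vectors:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

c = np.cross(a, b)
print(c)                            # [0. 0. 1.]: perpendicular to a and b
print(np.dot(c, a), np.dot(c, b))   # 0.0 0.0
print(np.cross(b, a))               # [ 0.  0. -1.]: anticommutative
print(np.cross(a, 2 * a))           # [0. 0. 0.]: parallel vectors give zero
```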

Collinearity of vectors.

Two non-zero vectors are called collinear if they lie on parallel lines or on the same line. An acceptable, but not recommended, synonym is "parallel" vectors. Collinear vectors can be identically directed ("codirectional") or oppositely directed (in the latter case they are sometimes called "anticollinear" or "antiparallel").

The mixed product of vectors (a, b, c) is the scalar product of the vector a and the vector product of the vectors b and c:

(a, b, c) = a·(b × c).

It is sometimes called the triple scalar product of vectors, apparently because the result is a scalar (more precisely, a pseudoscalar).

Geometric meaning: the modulus of the mixed product is numerically equal to the volume of the parallelepiped formed by the vectors a, b, c.

Properties

The mixed product is skew-symmetric with respect to all its arguments: rearranging any two factors changes the sign of the product. It follows that the mixed product in a right-handed Cartesian coordinate system (in an orthonormal basis) is equal to the determinant of the matrix composed of the coordinates of the vectors a, b, c:

(a, b, c) = | ax ay az |
            | bx by bz |
            | cx cy cz |

The mixed product in a left-handed Cartesian coordinate system (in an orthonormal basis) is equal to the same determinant taken with a minus sign.

In particular, (a, b, c) = (b, c, a) = (c, a, b).

If any two vectors are parallel, then with any third vector they form a mixed product equal to zero.

If three vectors are linearly dependent (that is, coplanar, lying in the same plane), then their mixed product is equal to zero.

Geometric meaning: the mixed product is equal in absolute value to the volume of the parallelepiped (see figure) formed by the vectors a, b, c; the sign depends on whether this triple of vectors is right-handed or left-handed.
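Both the determinant formula and the geometric meaning are easy to verify numerically. A sketch with illustrative vectors chosen so the parallelepiped is a 2 × 3 × 4 box:

```python
import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([0.0, 3.0, 0.0])
c = np.array([0.0, 0.0, 4.0])

mixed = np.dot(a, np.cross(b, c))          # (a, b, c) = a . (b x c)
det = np.linalg.det(np.array([a, b, c]))   # determinant of the coordinate matrix
print(mixed, det, abs(mixed))              # 24.0 24.0 24.0 = volume of the box

# Coplanar vectors give a zero mixed product:
print(np.dot(a, np.cross(b, a + b)))       # 0.0
```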

Coplanarity of vectors.

Three vectors (or a larger number) are called coplanar if, being reduced to a common origin, they lie in the same plane.

Properties of coplanarity

If at least one of the three vectors is zero, then the three vectors are also considered coplanar.

A triple of vectors containing a pair of collinear vectors is coplanar.

The mixed product of coplanar vectors is zero: (a, b, c) = 0. This is a criterion for the coplanarity of three vectors.

Coplanar vectors are linearly dependent. This is also a criterion for coplanarity.

In 3-dimensional space, any 3 non-coplanar vectors form a basis.

Linearly dependent and linearly independent vectors.

Linearly dependent and independent vector systems. Definition. A system of vectors is called linearly dependent if there is at least one nontrivial linear combination of these vectors equal to the zero vector. Otherwise, i.e. if only the trivial linear combination of the given vectors equals the zero vector, the vectors are called linearly independent.

Theorem (linear dependence criterion). In order for a system of vectors in a linear space to be linearly dependent, it is necessary and sufficient that at least one of these vectors is a linear combination of the others.

1) If among the vectors there is at least one zero vector, then the entire system of vectors is linearly dependent.

In fact, if, for example, a1 = 0, then, taking c1 = 1 and the remaining coefficients equal to zero, we obtain a nontrivial linear combination 1·a1 + 0·a2 + … + 0·ak = 0.

2) If among the vectors some form a linearly dependent system, then the entire system is linearly dependent.

Indeed, let the vectors a1, …, ar (r < k) be linearly dependent. This means that there is a nontrivial linear combination of them equal to the zero vector. But then, taking the coefficients of the remaining vectors equal to zero, we also obtain a nontrivial linear combination of the whole system equal to the zero vector.

2. Basis and dimension. Definition. A system of linearly independent vectors of a vector space is called a basis of this space if any vector of the space can be represented as a linear combination of the vectors of this system, i.e. for each vector x there are real numbers c1, c2, …, cn such that the equality x = c1e1 + c2e2 + … + cnen holds. This equality is called the decomposition of the vector with respect to the basis, and the numbers c1, …, cn are called the coordinates of the vector relative to the basis (or in the basis).

Theorem (on the uniqueness of the expansion with respect to the basis). Every vector of the space can be expanded in a basis in a unique way, i.e. the coordinates of each vector in the basis are determined unambiguously.

Example 1. Find out whether the given system of vectors is linearly dependent or linearly independent:

a1 = {3, 5, 1, 4}, a2 = {–2, 1, –5, –7}, a3 = {–1, –2, 0, –1}.

Solution. We look for the general solution of the system of equations

a1·x1 + a2·x2 + a3·x3 = Θ

by the Gauss method. To do this, we write this homogeneous system in coordinates:

3x1 – 2x2 – x3 = 0,
5x1 + x2 – 2x3 = 0,
x1 – 5x2 = 0,
4x1 – 7x2 – x3 = 0.

After elimination the reduced system has rank rA = 2 with n = 3 unknowns, so the system is consistent and indeterminate. Its general solution (x2 is a free variable): x3 = 13x2; 3x1 – 2x2 – 13x2 = 0 => x1 = 5x2, so Xo = (5x2, x2, 13x2). The presence of a non-zero particular solution, for example (5, 1, 13), indicates that the vectors a1, a2, a3 are linearly dependent.

Example 2.

Find out whether a given system of vectors is linearly dependent or linearly independent:

1. a1 = {–20, –15, –4}, a2 = {–7, –2, –4}, a3 = {3, –1, –2}.

Solution. Consider the homogeneous system of equations a1·x1 + a2·x2 + a3·x3 = Θ,

or in expanded form (by coordinates):

–20x1 – 7x2 + 3x3 = 0,
–15x1 – 2x2 – x3 = 0,
–4x1 – 4x2 – 2x3 = 0.

The system is homogeneous. If it is non-degenerate, it has a unique solution; for a homogeneous system that is the zero (trivial) solution, and in this case the system of vectors is independent. If the system is degenerate, then it has non-zero solutions and, therefore, the system of vectors is dependent.

We check the system for degeneracy by computing the determinant of its matrix:

| –20  –7   3 |
| –15  –2  –1 |  = –80 – 28 + 180 – 24 + 80 + 210 = 338 ≠ 0.
|  –4  –4  –2 |

The system is non-degenerate and, thus, the vectors a1, a2, a3 are linearly independent.
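Both worked examples can be double-checked in a few lines of NumPy: the rank test reproduces the dependence found in Example 1, and the determinant test the independence in Example 2:

```python
import numpy as np

# Example 1: rank 2 < 3 vectors => linearly dependent
A1 = np.array([[3, 5, 1, 4], [-2, 1, -5, -7], [-1, -2, 0, -1]], dtype=float)
print(np.linalg.matrix_rank(A1))   # 2

# Example 2: nonzero determinant => linearly independent
# (rows are the vectors; the determinant equals that of the transposed matrix)
A2 = np.array([[-20, -15, -4], [-7, -2, -4], [3, -1, -2]], dtype=float)
print(np.linalg.det(A2))           # 338.0 (nonzero)
```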

Tasks. Find out whether a given system of vectors is linearly dependent or linearly independent:

1. a1 = {–4, 2, 8}, a2 = {14, –7, –28}.

2. a1 = {2, –1, 3, 5}, a2 = {6, –3, 3, 15}.

3. a1 = {–7, 5, 19}, a2 = {–5, 7, –7}, a3 = {–8, 7, 14}.

4. a1 = {1, 2, –2}, a2 = {0, –1, 4}, a3 = {2, –3, 3}.

5. a1 = {1, 8, –1}, a2 = {–2, 3, 3}, a3 = {4, –11, 9}.

6. a1 = {1, 2, 3}, a2 = {2, –1, 1}, a3 = {1, 3, 4}.

7. a1 = {0, 1, 1, 0}, a2 = {1, 1, 3, 1}, a3 = {1, 3, 5, 1}, a4 = {0, 1, 1, –2}.

8. a1 = {–1, 7, 1, –2}, a2 = {2, 3, 2, 1}, a3 = {4, 4, 4, –3}, a4 = {1, 6, –11, 1}.

9. Prove that a system of vectors will be linearly dependent if it contains:

a) two equal vectors;

b) two proportional vectors.

Definition 1. A system of vectors is called linearly dependent if one of the vectors of the system can be represented as a linear combination of the remaining vectors of the system, and linearly independent - otherwise.

Definition 1´. A system of vectors is called linearly dependent if there are numbers c1, c2, …, ck, not all equal to zero, such that the linear combination of the vectors with these coefficients is equal to the zero vector: c1a1 + c2a2 + … + ckak = 0; otherwise the system is called linearly independent.

Let us show that these definitions are equivalent.

Let Definition 1 be satisfied, i.e. one of the system's vectors is equal to a linear combination of the others, say

a1 = c2a2 + c3a3 + … + ckak.

Then c2a2 + … + ckak + (–1)·a1 = 0 is a linear combination of the system of vectors equal to the zero vector in which not all coefficients are zero (the coefficient of a1 is –1), i.e. Definition 1´ is satisfied.

Now let Definition 1´ hold: a linear combination of the system of vectors is equal to the zero vector, and not all coefficients of the combination are zero; suppose, for example, that the coefficient of the vector a1 is c1 ≠ 0. Then

a1 = (–c2/c1)a2 + … + (–ck/c1)ak.

We have represented one of the vectors of the system as a linear combination of the others, i.e. Definition 1 is satisfied.

Definition 2. A unit (coordinate) vector is an n-dimensional vector whose i-th coordinate is equal to one and the rest are zero:

e1 = (1, 0, 0, …, 0),
e2 = (0, 1, 0, …, 0),
…
en = (0, 0, 0, …, 1).

Theorem 1. The distinct unit vectors of n-dimensional space are linearly independent.

Proof. Assume the contrary: that a linear combination of these vectors with coefficients not all zero equals the zero vector,

λ1e1 + λ2e2 + … + λnen = 0.

But λ1e1 + λ2e2 + … + λnen = (λ1, λ2, …, λn), so from this equality it follows that all the coefficients are equal to zero. We have obtained a contradiction.

Each vector ā = (a1, a2, …, an) of n-dimensional space can be represented as a linear combination of the unit vectors with coefficients equal to the coordinates of the vector:

ā = a1e1 + a2e2 + … + anen.
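This expansion is immediate to verify numerically: reassembling a vector from the unit vectors, with its own coordinates as coefficients, returns the same vector (sample coordinates are illustrative):

```python
import numpy as np

a = np.array([7.0, -2.0, 5.0])          # coordinates a1, a2, a3
E = np.eye(3)                           # rows are the unit vectors e1, e2, e3

reassembled = sum(a[i] * E[i] for i in range(3))  # a1*e1 + a2*e2 + a3*e3
print(np.allclose(reassembled, a))      # True
```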

Theorem 2. If a system of vectors contains a zero vector, then it is linearly dependent.

Proof. Let a system of vectors a1, a2, …, ak be given in which one of the vectors is zero, for example a1 = 0. Then from the vectors of this system one can compose a linear combination equal to the zero vector in which not all coefficients are zero:

1·a1 + 0·a2 + … + 0·ak = 0.

Therefore, the system is linearly dependent.

Theorem 3. If some subsystem of a system of vectors is linearly dependent, then the entire system is linearly dependent.

Proof. Let a system of vectors a1, a2, …, ak be given, and let some subsystem, say a1, a2, …, ar (r < k), be linearly dependent, i.e. there are numbers c1, c2, …, cr, not all equal to zero, such that c1a1 + … + crar = 0. Then

c1a1 + … + crar + 0·ar+1 + … + 0·ak = 0.

The linear combination of the vectors of the entire system is equal to the zero vector, and not all of its coefficients are zero. Consequently, the system of vectors is linearly dependent.

Corollary. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

Proof.

Let's assume the opposite, i.e. some subsystem is linearly dependent. It follows from the theorem that the entire system is linearly dependent. We have arrived at a contradiction.

Theorem 4 (Steinitz's theorem). If each of the vectors b1, b2, …, bm is a linear combination of the vectors a1, a2, …, an and m > n, then the system of vectors b1, b2, …, bm is linearly dependent.

Corollary. In any system of n-dimensional vectors there cannot be more than n linearly independent ones.

Proof. Every n-dimensional vector is expressed as a linear combination of n unit vectors. Therefore, if the system contains m vectors and m>n, then, according to the theorem, this system is linearly dependent.

Let L be an arbitrary linear space and ai ∈ L its elements (vectors).

Definition 3.3.1. The expression λ1a1 + λ2a2 + … + λnan, where λ1, λ2, …, λn are arbitrary real numbers, is called a linear combination of the vectors a1, a2, …, an.

If the vector p = λ1a1 + λ2a2 + … + λnan, then we say that p is decomposed into the vectors a1, a2, …, an.

Definition 3.3.2. A linear combination of vectors is called non-trivial if among the numbers λ1, λ2, …, λn there is at least one non-zero. Otherwise, the linear combination is called trivial.

Definition 3.3.3. Vectors a1, a2, …, an are called linearly dependent if there exists a nontrivial linear combination of them such that

λ1a1 + λ2a2 + … + λnan = 0.

Definition 3.3.4. Vectors a1, a2, …, an are called linearly independent if the equality λ1a1 + λ2a2 + … + λnan = 0 is possible only in the case when all the numbers λ1, λ2, …, λn are simultaneously equal to zero.

Note that any non-zero element a1 can be considered a linearly independent system, since the equality λa1 = 0 is possible only if λ = 0.

Theorem 3.3.1. A necessary and sufficient condition for the linear dependence a 1 , a 2 ,…, a n is the possibility of decomposing at least one of these elements into the rest.

Proof. Necessity. Let the elements a1, a2, …, an be linearly dependent. This means that λ1a1 + λ2a2 + … + λnan = 0, where at least one of the numbers λ1, λ2, …, λn is different from zero. Let, for definiteness, λ1 ≠ 0. Then

a1 = (–λ2/λ1)a2 + (–λ3/λ1)a3 + … + (–λn/λ1)an,

i.e. the element a1 is decomposed into the elements a2, a3, …, an.

Sufficiency. Let the element a1 be decomposed into the elements a2, a3, …, an, i.e. a1 = λ2a2 + λ3a3 + … + λnan. Then (–1)a1 + λ2a2 + … + λnan = 0, so there is a nontrivial linear combination of the vectors a1, a2, …, an equal to the zero vector (the coefficient of a1 is –1), and they are linearly dependent.

Theorem 3.3.2. If at least one of the elements a1, a2, …, an is zero, then these vectors are linearly dependent.

Proof. Let an = 0. Then 0·a1 + 0·a2 + … + 0·an-1 + 1·an = 0 is a nontrivial linear combination equal to the zero vector, which means the linear dependence of these elements.

Theorem 3.3.3. If among n vectors some p (p < n) vectors are linearly dependent, then all n elements are linearly dependent.

Proof. Let, for definiteness, the elements a1, a2, …, ap be linearly dependent. This means that there is a nontrivial linear combination such that λ1a1 + λ2a2 + … + λpap = 0. The equality will be preserved if we add the remaining elements with zero coefficients to its left-hand side. Then

λ1a1 + … + λpap + 0·ap+1 + … + 0·an = 0,

and at least one of the numbers λ1, λ2, …, λp is different from zero. Therefore, the vectors a1, a2, …, an are linearly dependent.

Corollary 3.3.1. If n elements are linearly independent, then any k of them are linearly independent (k < n).

Theorem 3.3.4. If the vectors a1, a2, …, an-1 are linearly independent, and the elements a1, a2, …, an-1, an are linearly dependent, then the vector an can be decomposed into the vectors a1, a2, …, an-1.

Proof. Since by hypothesis a1, a2, …, an-1, an are linearly dependent, there is a nontrivial linear combination of them λ1a1 + … + λn-1an-1 + λnan = 0, and λn ≠ 0 (otherwise the vectors a1, a2, …, an-1 would turn out to be linearly dependent). But then the vector

an = (–λ1/λn)a1 + … + (–λn-1/λn)an-1,

Q.E.D.