First, here is a definition of polynomials in many variables with coefficients in a commutative ring. A commutative ring is just like a field except that you do not know that every nonzero element has a multiplicative inverse. If you like, let these coefficients be in a field; it is still interesting. A good example of a commutative ring is the integers. In particular, every field is a commutative ring. Thus a commutative ring satisfies the following axioms. They are just the field axioms with the one omission mentioned above: you don't have x^{−1} for x ≠ 0. We will assume that the ring has 1, the multiplicative identity.
Axiom 10.1.1 Here are the axioms for a commutative ring.
x + y = y + x, (commutative law for addition).
There exists 0 such that x + 0 = x for all x, (additive identity).
For each x, there exists −x such that x + (−x) = 0, (existence of additive inverse).
(x + y) + z = x + (y + z), (associative law for addition).
xy = yx, (commutative law for multiplication). You could write this as x × y = y × x.
(xy)z = x(yz), (associative law for multiplication).
There exists 1 such that 1x = x for all x, (multiplicative identity).
x(y + z) = xy + xz, (distributive law).
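If you like, these axioms can be checked by brute force on a small finite example. Here is a quick sketch in Python (not part of the text) verifying them for the integers modulo 6, a commutative ring which is not a field since 2 has no multiplicative inverse:

```python
from itertools import product

# Elements of Z/6, a commutative ring that is not a field.
R = range(6)
add = lambda x, y: (x + y) % 6
mul = lambda x, y: (x * y) % 6

# Check the ring axioms on all triples of elements.
for x, y, z in product(R, R, R):
    assert add(x, y) == add(y, x)                          # commutativity of +
    assert mul(x, y) == mul(y, x)                          # commutativity of *
    assert add(add(x, y), z) == add(x, add(y, z))          # associativity of +
    assert mul(mul(x, y), z) == mul(x, mul(y, z))          # associativity of *
    assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))  # distributivity

assert all(add(x, 0) == x and mul(x, 1) == x for x in R)   # identities 0 and 1
assert all(any(add(x, y) == 0 for y in R) for x in R)      # additive inverses
# But 2 has no multiplicative inverse, so Z/6 is not a field:
assert not any(mul(2, y) == 1 for y in R)
```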
Next is a definition of what is meant by a polynomial.
Definition 10.1.2 Let k ≡ (k_{1},k_{2},⋅⋅⋅,k_{n}) where each k_{i} is a nonnegative integer. Let

|k| ≡ ∑_{i}k_{i}

Polynomials of degree p in the variables x_{1},x_{2},⋅⋅⋅,x_{n} are expressions of the form

g(x_{1},x_{2},⋅⋅⋅,x_{n}) = ∑_{|k|≤p}a_{k}x_{1}^{k_{1}}⋅⋅⋅x_{n}^{k_{n}}

where each a_{k} is in a commutative ring. If all a_{k} = 0, the polynomial has no degree. Such a polynomial is said to be symmetric if whenever σ is a permutation of {1,2,⋅⋅⋅,n},

g(x_{σ(1)},x_{σ(2)},⋅⋅⋅,x_{σ(n)}) = g(x_{1},x_{2},⋅⋅⋅,x_{n})
Recall that the elementary symmetric polynomials s_{1},⋅⋅⋅,s_{n} in x_{1},⋅⋅⋅,x_{n} are, up to sign, the coefficients of ∏_{j=1}^{n}(x − x_{j}); thus s_{1} = x_{1} + ⋅⋅⋅ + x_{n}, s_{2} = ∑_{i<j}x_{i}x_{j}, and so on up to s_{n} = x_{1}x_{2}⋅⋅⋅x_{n}. Then the following result is the fundamental theorem in the subject, the symmetric polynomial theorem. It says that products of powers of the elementary symmetric polynomials act a lot like a basis for the symmetric polynomials. This is a really remarkable result.
Theorem 10.1.4 Let g(x_{1},x_{2},⋅⋅⋅,x_{n}) be a symmetric polynomial. Then g(x_{1},x_{2},⋅⋅⋅,x_{n}) equals a polynomial in the elementary symmetric polynomials,

g(x_{1},x_{2},⋅⋅⋅,x_{n}) = ∑_{k}a_{k}s_{1}^{k_{1}}⋅⋅⋅s_{n}^{k_{n}}

and the a_{k} in the commutative ring are unique.
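As a concrete instance of the theorem (a numerical sketch, not from the text): for n = 3, the symmetric polynomial x_{1}^{2} + x_{2}^{2} + x_{3}^{2} equals s_{1}^{2} − 2s_{2}, which is easy to spot-check at random integer points:

```python
import random

# Spot-check x^2 + y^2 + z^2 = s1^2 - 2*s2, where s1 and s2 are the
# first two elementary symmetric polynomials in three variables.
for _ in range(1000):
    x, y, z = (random.randint(-50, 50) for _ in range(3))
    s1 = x + y + z
    s2 = x*y + x*z + y*z
    assert x**2 + y**2 + z**2 == s1**2 - 2*s2
```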
Proof: The proof is by induction on the number of variables. If n = 1, it is obviously true because s_{1} = x_{1}, so g(x_{1}) is already a polynomial in s_{1}. Suppose the theorem is true for n − 1 variables and g(x_{1},x_{2},⋅⋅⋅,x_{n}) has degree d. Let

g′(x_{1},x_{2},⋅⋅⋅,x_{n−1}) ≡ g(x_{1},x_{2},⋅⋅⋅,x_{n−1},0)

By induction, there are unique a_{k} such that

g′(x_{1},x_{2},⋅⋅⋅,x_{n−1}) = ∑_{k}a_{k}s_{1}^{′k_{1}}⋅⋅⋅s_{n−1}^{′k_{n−1}}

where s_{i}^{′} is the corresponding elementary symmetric polynomial which pertains to x_{1},x_{2},⋅⋅⋅,x_{n−1}.
Note that

s_{k}(x_{1},x_{2},⋅⋅⋅,x_{n−1},0) = s_{k}^{′}(x_{1},x_{2},⋅⋅⋅,x_{n−1})

This follows from the definition of these symmetric polynomials. Indeed, the coefficient of x^{n−k} in

(x − x_{1})(x − x_{2})⋅⋅⋅(x − x_{n−1})(x − 0)

is the same as the coefficient of x^{(n−1)−k} in

(x − x_{1})(x − x_{2})⋅⋅⋅(x − x_{n−1})
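This restriction identity is also easy to confirm numerically; here is a small Python sketch (the helper name is my own) for n = 4:

```python
from itertools import combinations
from math import prod
import random

def elem_sym(vals, k):
    """k-th elementary symmetric polynomial evaluated at vals:
    the sum of products of k distinct entries."""
    return sum(prod(c) for c in combinations(vals, k))

# Check s_k(x1, x2, x3, 0) = s'_k(x1, x2, x3) for k = 1, 2, 3.
for _ in range(100):
    xs = [random.randint(-10, 10) for _ in range(3)]
    for k in range(1, 4):
        assert elem_sym(xs + [0], k) == elem_sym(xs, k)
```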
Now consider

q(x_{1},x_{2},⋅⋅⋅,x_{n}) ≡ g(x_{1},x_{2},⋅⋅⋅,x_{n}) − ∑_{k}a_{k}s_{1}^{k_{1}}⋅⋅⋅s_{n−1}^{k_{n−1}}

Then, by the above,

q(x_{1},x_{2},⋅⋅⋅,x_{n−1},0) = g(x_{1},x_{2},⋅⋅⋅,x_{n−1},0) − g′(x_{1},x_{2},⋅⋅⋅,x_{n−1}) = 0

Thus the only terms which survive in q are those in which k_{n} > 0. That is, every term contains x_{n}^{k_{n}} with k_{n} > 0. Therefore,

q(x_{1},x_{2},⋅⋅⋅,x_{n}) = ∑_{|k|≤d,k_{n}>0}a_{k}x_{1}^{k_{1}}⋅⋅⋅x_{n}^{k_{n}},  a_{k} ≠ 0

If a term in the sum had some k_{j} = 0, you could switch x_{n} and x_{j} and, since q is symmetric, get a contradiction. Thus all k_{i} > 0, so every term is divisible by x_{1}x_{2}⋅⋅⋅x_{n} = s_{n}. It follows that

q(x_{1},x_{2},⋅⋅⋅,x_{n}) = s_{n}h(x_{1},x_{2},⋅⋅⋅,x_{n})
and it follows that h(x_{1},x_{2},⋅⋅⋅,x_{n}) is symmetric of degree no more than d − n, where d is the degree of q(x_{1},x_{2},⋅⋅⋅,x_{n}), and is uniquely determined. Thus, if g(x_{1},x_{2},⋅⋅⋅,x_{n}) is symmetric of degree d,

g(x_{1},x_{2},⋅⋅⋅,x_{n}) = ∑_{k}a_{k}s_{1}^{k_{1}}⋅⋅⋅s_{n−1}^{k_{n−1}} + s_{n}h(x_{1},x_{2},⋅⋅⋅,x_{n})
where h has degree no more than d − n. Now apply the same argument to h(x_{1},x_{2},⋅⋅⋅,x_{n}) and continue, repeatedly obtaining a sequence of symmetric polynomials h_{i} of strictly decreasing degree and expressions of the form

g(x_{1},x_{2},⋅⋅⋅,x_{n}) = ∑_{k}b_{k}s_{1}^{k_{1}}⋅⋅⋅s_{n−1}^{k_{n−1}}s_{n}^{k_{n}} + s_{n}^{m}h_{m}(x_{1},x_{2},⋅⋅⋅,x_{n})

Eventually h_{m} must be a constant or zero. By induction, each step in the argument yields uniqueness, and so the final sum of combinations of elementary symmetric polynomials is uniquely determined.
■
Note that if you have

∏_{i=1}^{m}(x − x_{i})

then by definition it is the sum of terms like g(x_{1},⋅⋅⋅,x_{m})x^{m−k}. If you replace x with x_{i} and sum over all i, you would get ∑_{i=1}^{m}g(x_{1},⋅⋅⋅,x_{m})x_{i}^{m−k}, which would also be a symmetric polynomial.
Here is a very interesting result, which I saw claimed in a paper by Steinberg and Redheffer on Lindemann's theorem, and which follows from the above theorem.
Theorem 10.1.5 Let α_{1},⋅⋅⋅,α_{n} be roots of the polynomial equation

p(x) ≡ a_{n}x^{n} + a_{n−1}x^{n−1} + ⋅⋅⋅ + a_{1}x + a_{0} = 0

where each a_{i} is an integer. Then any symmetric polynomial in the quantities a_{n}α_{1},⋅⋅⋅,a_{n}α_{n} having integer coefficients is also an integer. Also, any symmetric polynomial in the quantities α_{1},⋅⋅⋅,α_{n} having rational coefficients is a rational number.
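Before the proof, the first claim can be sanity-checked numerically. Here is a Python sketch (the particular polynomial is my own choice): take p(x) = 2x^{2} + 3x + 4, so a_{n} = 2, and the symmetric polynomial f(x_{1},x_{2}) = x_{1}^{2} + x_{2}^{2}. Evaluated at (2α_{1},2α_{2}), f should be an integer; indeed f = 4((α_{1}+α_{2})^{2} − 2α_{1}α_{2}) = 9 − 16 = −7.

```python
# Sanity-check Theorem 10.1.5 on p(x) = 2x^2 + 3x + 4 (so a_n = 2).
import cmath

a2, a1, a0 = 2, 3, 4
disc = cmath.sqrt(a1**2 - 4*a2*a0)   # roots are complex here
alpha1 = (-a1 + disc) / (2*a2)
alpha2 = (-a1 - disc) / (2*a2)

# The symmetric polynomial f(x1, x2) = x1^2 + x2^2 at (2*alpha1, 2*alpha2):
f = (a2*alpha1)**2 + (a2*alpha2)**2
# By the theorem this is an integer; here it equals -7.
assert abs(f - (-7)) < 1e-9
```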
Proof: Let f(x_{1},⋅⋅⋅,x_{n}) be the symmetric polynomial. Thus

f(x_{1},⋅⋅⋅,x_{n}) ∈ ℤ[x_{1},⋅⋅⋅,x_{n}], the polynomials having integer coefficients

From Theorem 10.1.4 it follows there are integers a_{k_{1}⋅⋅⋅k_{n}} such that

f(x_{1},⋅⋅⋅,x_{n}) = ∑_{k_{1}+⋅⋅⋅+k_{n}≤m}a_{k_{1}⋅⋅⋅k_{n}}p_{1}^{k_{1}}⋅⋅⋅p_{n}^{k_{n}}    (10.1)

where the p_{i} are the elementary symmetric polynomials defined as the coefficients of

p(x) = ∏_{j=1}^{n}(x − x_{j})

Earlier we had them equal to ± these coefficients.
Thus

f(a_{n}α_{1},⋅⋅⋅,a_{n}α_{n}) = ∑_{k_{1}+⋅⋅⋅+k_{n}≤m}a_{k_{1}⋅⋅⋅k_{n}}p_{1}^{k_{1}}(a_{n}α_{1},⋅⋅⋅,a_{n}α_{n})⋅⋅⋅p_{n}^{k_{n}}(a_{n}α_{1},⋅⋅⋅,a_{n}α_{n})
Now the given polynomial p(x) is of the form

a_{n}∏_{j=1}^{n}(x − α_{j}) ≡ a_{n}∑_{k=0}^{n}p_{k}(α_{1},⋅⋅⋅,α_{n})x^{n−k} = a_{n}x^{n} + a_{n−1}x^{n−1} + ⋅⋅⋅ + a_{1}x + a_{0}
Thus, equating coefficients, a_{n}p_{k}(α_{1},⋅⋅⋅,α_{n}) = a_{n−k}. Multiplying both sides by a_{n}^{k−1} and using the fact that p_{k} is homogeneous of degree k,

p_{k}(a_{n}α_{1},⋅⋅⋅,a_{n}α_{n}) = a_{n}^{k}p_{k}(α_{1},⋅⋅⋅,α_{n}) = a_{n}^{k−1}a_{n−k}

an integer. Therefore,
f(a_{n}α_{1},⋅⋅⋅,a_{n}α_{n}) = ∑_{k_{1}+⋅⋅⋅+k_{n}≤m}a_{k_{1}⋅⋅⋅k_{n}}p_{1}^{k_{1}}(a_{n}α_{1},⋅⋅⋅,a_{n}α_{n})⋅⋅⋅p_{n}^{k_{n}}(a_{n}α_{1},⋅⋅⋅,a_{n}α_{n})

and each p_{k}(a_{n}α_{1},⋅⋅⋅,a_{n}α_{n}) is an integer. Thus f(a_{n}α_{1},⋅⋅⋅,a_{n}α_{n}) is an integer as claimed. From this, it
is obvious that f