Just as for integers, there is a Euclidean algorithm with which you can find the gcd of two polynomials. Rules 1 and 2 for the gcd of polynomials are sufficient to calculate the gcd, as the following algorithm shows.
An algorithm is a series of steps that are executed one after another to obtain a particular prescribed result from a certain type of data. When presenting an algorithm, we first specify the data with which the algorithm starts (the input) and what the algorithm delivers (the output). Next we indicate the steps of which the algorithm consists.
The following algorithm, called the Euclidean algorithm for polynomials, calculates the gcd of two polynomials if at least one of them is not equal to #0#.
input: a pair #\rv{f,g}# of polynomials with #f\ne0# or #g\ne0#
output: #\gcd\left(f,g\right)#
- choose #\rv{r,s}= \rv{f,g}#
- keep replacing #\rv{r,s}# by #\rv{s,t}#, where #t# is the remainder of dividing #r# by #s#, until #s=0#
- return #r#
Let #f=x^6-1# and #g=x^4-1#. In step 1 we get #r=x^6-1# and #s=x^4-1#. In step 2, division of #r# by #s# with remainder gives #t = x^2-1# (and quotient #x^2#), so the new values of #r# and #s# are #\rv{r,s} = \rv{x^4-1,x^2-1}#. Dividing #r# by #s# with remainder a second time gives #t = 0# (and quotient # x^2+1#), so the new values become #\rv{r,s} = \rv{x^2-1,0}#. Now the second component equals #0#, so in step 3 the first component is delivered: #\gcd(f,g) = x^2-1#.
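The steps above can be sketched in code. The following minimal Python sketch (our own, not part of the text; the names `poly_rem` and `poly_gcd` are assumptions) represents a polynomial by its list of coefficients, highest degree first, and uses exact rational arithmetic so that no rounding occurs.

```python
from fractions import Fraction

def trim(p):
    """Drop leading zero coefficients (highest degree first)."""
    while len(p) > 1 and p[0] == 0:
        p = p[1:]
    return p

def poly_rem(r, s):
    """Remainder t of dividing r by s (s nonzero), as used in step 2."""
    r = [Fraction(c) for c in trim(r)]
    s = [Fraction(c) for c in trim(s)]
    while len(r) >= len(s) and r[0] != 0:
        q = r[0] / s[0]          # leading coefficient of the next quotient term
        for i in range(len(s)):  # subtract q * x^(deg r - deg s) * s from r
            r[i] -= q * s[i]
        r = trim(r)
    return r

def poly_gcd(f, g):
    """Euclidean algorithm: keep replacing (r, s) by (s, t) until s = 0."""
    r, s = [Fraction(c) for c in trim(f)], [Fraction(c) for c in trim(g)]
    while s != [0]:
        r, s = s, poly_rem(r, s)
    return [c / r[0] for c in r]  # normalize the answer to a monic polynomial

# x^6 - 1 and x^4 - 1, as in the worked example
print(poly_gcd([1, 0, 0, 0, 0, 0, -1], [1, 0, 0, 0, -1]))  # coefficients of x^2 - 1
```

The result is normalized to a monic polynomial, which matches the convention that the gcd is only determined up to a constant factor.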
Each pair #\rv{r,s}# occurring in step 2 satisfies #\gcd(r,s)=\gcd(f,g)#:
1. at the start, because of the choice made in step 1;
2. at the end of each iteration of step 2, because of gcd rule 3.
When we are finished with step 2, we have #s=0# and #\gcd(f,g)=\gcd(r,0)=r#. Hence, the output is indeed #\gcd(f,g)#.
Since the degree of #s# becomes strictly smaller with each iteration of step 2, the algorithm goes through no more iterations than the maximum of the degrees of #f# and #g#.
If we write the vectors as columns, then step 2 in terms of matrices says
\[\matrix{r\\ s} \phantom{xx}\text{is replaced by }\phantom{xx} \matrix{0&1\\ 1&-q}\matrix{r\\ s}\]in which #q# is the quotient when dividing #r# by #s#.
Hence, if step 2 is executed #m# times before #s=0#, this results in
\[\matrix{\gcd(f,g)\\ 0} =\matrix{0&1\\ 1&-q_m}\cdots \matrix{0&1\\ 1&-q_1}\matrix{f\\ g}\]
for polynomials #q_1,\ldots,q_m#.
The first entry of the column vector on the right-hand side is a combination #a\cdot f + b\cdot g#, in which #a# and #b# are polynomials, that equals the calculated gcd. The second entry is a combination #u\cdot f + v\cdot g#, in which #u# and #v# are polynomials, that equals #0#. We will come back to this later.
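For the running example #f=x^6-1# and #g=x^4-1#, the two quotients are #q_1=x^2# and #q_2=x^2+1#, so that
\[\matrix{x^2-1\\ 0} =\matrix{0&1\\ 1&-(x^2+1)}\matrix{0&1\\ 1&-x^2}\matrix{x^6-1\\ x^4-1}=\matrix{1&-x^2\\ -x^2-1&x^4+x^2+1}\matrix{x^6-1\\ x^4-1}\]
The first row indeed gives #1\cdot f - x^2\cdot g = x^6-1-x^6+x^2 = x^2-1#, the calculated gcd.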
A special application of the Euclidean algorithm is the following simple test for the existence of multiple roots of a polynomial:
Let #f# be a polynomial. Then #\gcd(f,f') = 1# if and only if #f# has no multiple complex roots.
Assume that #a# is a complex multiple root of #f#. Then there is a polynomial #g# with #f = g\cdot (x-a)^2#. Because of the product rule for differentiation we then have \[f' = g'\cdot (x-a)^2+2 g\cdot(x-a) = (x-a)\left(g'\cdot (x-a)+2 g\right)\]In particular, #x-a# is a divisor of #f'#, so #x-a# is a divisor of #\gcd(f,f')# and hence #\gcd(f,f')\ne1#.
Now assume that #a# is a simple complex root of #f#. Then there is a polynomial #g# with #f = g\cdot (x-a)# and #g(a)\ne0#. The product rule for differentiation gives #f' = g'\cdot (x-a) + g#, so #f'(a) = g(a)\ne0#. Hence, if #a# is a simple root of #f#, then #f'(a)\ne0#. In particular, #x-a# is not a divisor of #f'#, and therefore also not a divisor of #\gcd(f,f')#. We conclude that if #f# does not have any multiple complex roots, then no linear factor of #f# is a divisor of #f'#, so #\gcd(f,f')=1#.
Deciding whether a polynomial has multiple roots has thus become simple. That does not mean that actually finding such a root is simple. It does become easier, since such a root is also a root of the polynomial #d = \gcd(f,f')#, which usually has lower degree than #f#. But even for #d# it can still be difficult to determine a root.
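The criterion is easy to run mechanically. Below is a self-contained Python sketch (our own naming and representation, repeating the division helpers so it runs on its own): it differentiates a coefficient list and tests whether #\gcd(f,f')=1#.

```python
from fractions import Fraction

def trim(p):
    """Drop leading zero coefficients (highest degree first)."""
    while len(p) > 1 and p[0] == 0:
        p = p[1:]
    return p

def poly_rem(r, s):
    """Remainder of dividing r by s (s nonzero)."""
    r = [Fraction(c) for c in trim(r)]
    s = [Fraction(c) for c in trim(s)]
    while len(r) >= len(s) and r[0] != 0:
        q = r[0] / s[0]
        for i in range(len(s)):
            r[i] -= q * s[i]
        r = trim(r)
    return r

def poly_gcd(f, g):
    """Euclidean algorithm, result normalized to a monic polynomial."""
    r, s = trim(f), trim(g)
    while s != [0]:
        r, s = s, poly_rem(r, s)
    return [Fraction(c) / r[0] for c in r]

def derivative(p):
    """Derivative of a polynomial given highest degree first."""
    n = len(p) - 1
    return trim([(n - i) * c for i, c in enumerate(p[:-1])]) or [0]

def has_multiple_root(p):
    """p has a multiple complex root iff gcd(p, p') != 1."""
    return poly_gcd(p, derivative(p)) != [1]

# (x - 1)^2 has the double root 1; x^2 + 1 has only the simple roots ±i
print(has_multiple_root([1, -2, 1]), has_multiple_root([1, 0, 1]))  # → True False
```

Note that the test only reports *whether* a multiple root exists; as remarked above, finding the root itself means finding a root of #\gcd(f,f')#.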
The last criterion provides us with a quick method to check whether a square matrix is diagonalizable:
A square matrix #A# is diagonalizable if and only if the minimal polynomial #m_A# satisfies #\gcd(m_A,m_A') = 1#, where #m_A'# is the derivative of #m_A#.
According to this result, a #(2\times2)#-matrix #A# is non-diagonalizable precisely when its minimal polynomial is the square of a linear polynomial #x-\lambda#. In that case the characteristic polynomial is #(x-\lambda)^2#, so that
\[x^2-2\lambda\cdot x +\lambda^2 = p_A(x) = x^2-\text{trace}(A)\cdot x+\det(A)\]
Comparing the coefficients of #x# and the constants on both sides gives
\[\eqs{2\lambda &=& \text{trace}(A)\\ \lambda^2 &=& \det(A)}\]
Eliminating #\lambda# shows that the characteristic polynomial of #A# is a square precisely when \[\left(\text{trace}(A)\right)^2 - 4 \det(A)=0\]In that case the unique eigenvalue is #\lambda=\frac{\text{trace}(A)}{2}#. We see that #A# is non-diagonalizable if and only if the minimal polynomial and the characteristic polynomial of #A# are both equal to #\left(x -\frac{\text{trace}(A)}{2}\right)^2#.
A concrete example is the matrix \( A = \matrix{1&a\\ 0&1}\). Here #\lambda = 1#, the characteristic polynomial is #(x-1)^2#, and the minimal polynomial equals #(x-1)^2# if and only if #a\ne0#. Hence the matrix #A# is diagonalizable if and only if #a=0#.
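For #(2\times2)#-matrices the criterion reduces to the discriminant test above, which is easy to code. The following Python sketch (the function name is our own) assumes rational entries and decides diagonalizability over the complex numbers.

```python
from fractions import Fraction

def diagonalizable_2x2(a, b, c, d):
    """Is the matrix [[a, b], [c, d]] diagonalizable over the complex numbers?"""
    tr = a + d
    det = a * d - b * c
    if tr * tr - 4 * det != 0:
        return True  # two distinct eigenvalues, hence diagonalizable
    lam = Fraction(tr, 2)  # the unique eigenvalue trace(A)/2
    # With a repeated eigenvalue, A is diagonalizable iff A = lam * I,
    # i.e. iff the minimal polynomial is x - lam rather than (x - lam)^2.
    return a == lam and d == lam and b == 0 and c == 0

# The example A = [[1, a], [0, 1]]: diagonalizable only for a = 0
print(diagonalizable_2x2(1, 0, 0, 1), diagonalizable_2x2(1, 5, 0, 1))  # → True False
```

The final check distinguishes the two cases with repeated eigenvalue: the scalar matrix #\lambda I# (minimal polynomial #x-\lambda#) is diagonal already, while every other such matrix has minimal polynomial #(x-\lambda)^2#.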
Calculate the greatest common divisor of \[\begin{array}{rcl}f(x)&=&x^3+9 x^2+27 x+28\\
&\text{and}&\\
g(x)&=&x^4+11 x^3+46 x^2+89 x+68\end{array}\] by using the
Euclidean algorithm.
#x+4#
Using the Euclidean algorithm we find
\[\begin{array}{rcl}\gcd(f(x),g(x))&=&{\small \gcd(x^3+9 x^2+27 x+28,x^4+11 x^3+46 x^2+89 x+68)}\\
&=&{\small \gcd(x^4+11 x^3+46 x^2+89 x+68,x^3+9 x^2+27 x+28)}\\
&&\phantom{xx}\color{blue}{\text{swap so that the polynomial of highest degree is the first argument}}\\
&=& \gcd(x^3+9 x^2+27 x+28,x^2+7 x+12)\\
&&\phantom{xx}\color{blue}{\text{iteration 1: }x^2+7 x+12\text{ is the remainder}}\\
&&\phantom{xx}\color{blue}{\text{of division of }x^4+11 x^3+46 x^2+89 x+68}\\
&&\phantom{xx}\color{blue}{\text{by }x^3+9 x^2+27 x+28}\\
&=& \gcd(x^2+7 x+12,x+4)\\
&&\phantom{xx}\color{blue}{\text{iteration 2: } x+4\text{ is the remainder}}\\
&&\phantom{xx}\color{blue}{\text{of division of }x^3+9 x^2+27 x+28}\\
&&\phantom{xx}\color{blue}{\text{by }x^2+7 x+12}\\
&=&\gcd(x+4,0)\\
&&\phantom{xx}\color{blue}{\text{iteration 3: } 0 \text{ is the remainder}}\\
&&\phantom{xx}\color{blue}{\text{of division of }x^2+7 x+12}\\
&&\phantom{xx}\color{blue}{\text{by }x+4}\\
&=&x+4\\&&\phantom{xx}\color{blue}{\gcd(p(x),0)=p(x)}\end{array} \]