Thursday, November 17, 2011

Diophantine equations

There are two main topics that everyone with any knowledge of number theory is aware of: prime numbers and Diophantine equations. Prime numbers come up early in the study of elementary arithmetic, but there are deep and fascinating questions connected with their distribution, and an elegant body of theory dealing with such questions, so let's defer that. Diophantine equations, on the other hand, provoke a multitude of special questions, but relatively little organized theory has developed to deal with them.
Diophantine equations get their name from Diophantos of Alexandria, who flourished in the 3rd century CE and wrote a highly regarded treatise he called the Arithmetica. Much of it dealt with solving algebraic equations in several unknowns, with the added restriction that solutions had to be rational or integral. This was especially important to the Greeks because of the discovery by the Pythagoreans, centuries earlier, that many solutions of algebraic equations, even something as simple as the square root of 2, are not rational numbers. Then, as now, the term "irrational" had very negative connotations (though in modern times no longer as regards numbers).
In any case, early mathematicians were very interested in finding solutions of simple algebraic equations that are rational or integral numbers. This is often a very natural condition to impose on a particular problem. The equation, for example, may apply to things which ordinarily occur in whole number amounts, such as living animals.
A Diophantine equation, then, is an algebraic equation for which rational or integral solutions are sought. (Systems of such equations are also considered.) An algebraic equation is one that involves only polynomial expressions in one or more variables. What makes the equation "Diophantine" is that the coefficients of the polynomials should be rational numbers (or often, integers), and further that only rational (or integer) solutions are sought. Since these are the only conditions, not much in the way of a general theory has developed, though a lot is known about many more specialized cases.
The closest thing to a general theory is to be found in algebraic geometry, which deals with the geometric properties of solution sets of algebraic equations, usually over an "algebraically closed" field of numbers, such as the complex numbers. Algebraic geometry, indeed, turns out to be very helpful in solving problems with Diophantine equations, and a great deal of very deep and beautiful theory has been developed. However, a general theory that can deal with finding integral or rational solutions of arbitrary algebraic equations, or even with determining whether such solutions exist, is still a long way off.
In 1970 Yuri Matiyasevich gave a negative solution of the so-called 10th Hilbert problem by showing that there is no general decision procedure to determine whether an arbitrary Diophantine equation has solutions. Subsequently Matiyasevich showed that equations with as few as 9 variables lack a decision procedure, and there may be equations with even fewer variables that lack one. However, the methods used in Wiles' proof of Fermat's Last Theorem indicate that all equations in 3 variables should be decidable. It is at present an open question what the least number of variables of an undecidable equation might be.
Nevertheless, we do have some very illuminating theory. For instance, there is a celebrated conjecture stated by Louis Mordell in 1922 which says, roughly, that if P(x,y) is a polynomial with rational coefficients (and of "genus" greater than 1), then the equation P(x,y) = 0 has only finitely many rational solutions. This was an open problem until a proof was finally given by Gerd Faltings in 1983. The result can be interpreted geometrically: the curve consisting of all complex number solutions of the equation (a Riemann surface) contains only finitely many points with rational coordinates.
The most famous Diophantine equation of all, of course, is Fermat's equation: x^n + y^n = z^n. Andrew Wiles finally proved in 1995 that, if n is 3 or more, this equation has no nonzero integral solutions -- even though Fermat had conjectured as much more than 350 years earlier. The proof turned out to require extremely sophisticated tools from algebraic number theory, algebraic geometry (such as the theory of elliptic curves), and a great deal else from modern mathematics besides.
Note that Fermat's problem is really about solutions of an infinite set of equations (one for each value of n). Technically speaking, if n were regarded as one of the variables in the equation, it would no longer be algebraic. Many other famous Diophantine problems have this feature, where one of the "variables" occurs as an exponent.
Elementary examples
Diophantine equations need not be limited to equations in only one variable (such as x^2 - 2 = 0). It's frequently more interesting to consider equations with several variables, such as the Fermat equation. Just about the simplest equation with two variables and degree higher than 1 is x^2 - y^2 = 0. (The degree of a single term is the sum of the exponents of all variables occurring in the term; the degree of the equation is the largest of the degrees of its terms.) But this equation isn't very interesting, because it can be written as a product of polynomials of first degree: (x+y)(x-y) = 0. When the polynomial in such an equation is a product of factors of degree one having suitable coefficients, the solutions consist of the solutions of the individual factors, and those are trivial.
The next simplest second degree equation in two variables is x^2 + y^2 = 1. (The equation x^2 + y^2 = 0 has only the trivial solution x = y = 0 if complex numbers are excluded.) You probably recognize that as the equation of a circle of radius 1. Any point on the circle has coordinates satisfying the equation, and vice versa. The polynomial x^2 + y^2 - 1 does not factor into first degree polynomials with real (i.e., not imaginary) coefficients. Nevertheless, it has solutions for x and y that are rational numbers, and even integers, such as (x,y) = (±1,0) or (0,±1). Clearly these are the only integer solutions, because we must have -1 ≤ x ≤ 1, so x has to be -1, 0, or 1, and so does y.
It was Diophantos himself who discovered all the rational solutions of this equation. He found what is called a parametric solution, obtained via elementary algebra. Namely, if t is any rational number (or even complex number, for that matter), then
x = (1-t^2)/(1+t^2), y = 2t/(1+t^2)
gives a point on the circle, and the coordinates are obviously rational if t is. Conversely, suppose (X,Y) is any point on the circle. It also lies on a straight line passing through the point (-1,0), having some slope t. The equation of this line is Y = t(X+1). The line intersects the circle at a unique point besides (-1,0), and if you do the algebra, you find that point must be given by X = (1-t^2)/(1+t^2) and Y = 2t/(1+t^2). Since t = Y/(X+1), t must be rational if X and Y are. This shows that every rational point on the circle (except (-1,0)) has the parametric form given, for some rational t. I.e., there's a very tidy 1:1 correspondence between rational numbers and points on the circle with rational coordinates.
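Here's a minimal sketch in Python (the function names circle_point and parameter are my own, not anything standard) that uses exact rational arithmetic to check the correspondence in both directions:

```python
from fractions import Fraction

def circle_point(t):
    """Map a rational parameter t to a rational point on x^2 + y^2 = 1."""
    x = (1 - t**2) / (1 + t**2)
    y = (2 * t) / (1 + t**2)
    return x, y

def parameter(x, y):
    """Recover the slope t = y/(x + 1) of the line through (-1, 0) and (x, y)."""
    return y / (x + 1)

t = Fraction(1, 2)
x, y = circle_point(t)         # (3/5, 4/5)
assert x**2 + y**2 == 1        # exact check, no floating-point error
assert parameter(x, y) == t    # the correspondence is invertible
print(x, y)
```

For example, t = 1/2 gives the rational point (3/5, 4/5), and the slope of the line through (-1, 0) and (3/5, 4/5) is 1/2 again.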
This is actually just a special case of a second degree equation in three variables, which was extremely important to Greek mathematicians, namely x^2 + y^2 = z^2. Geometrically, this is the equation of a circle of radius z, centered on the origin. But more importantly to the Greeks, if x and y are the lengths of the sides of a right angled triangle, the length of the hypotenuse is z. This relation was discovered by Pythagoras (or someone in his school) some time in the 6th century BCE, and (of course) it is known as the Pythagorean theorem. If the lengths of the sides of the triangle are whole numbers, it is not necessarily true that the length of the hypotenuse is as well, since all we know is z = √(x^2 + y^2). Indeed, the length isn't necessarily even rational – a fact which was considered quite scandalous at the time.
But if the length of the hypotenuse is a whole number, then (x,y,z) is called a Pythagorean triple. Such a triple was held in mystical regard by the Pythagoreans, so the search for such triples assumed a high importance. This search amounted to finding integer solutions of what we now call an instance of a Diophantine equation in 3 variables. Note that any solution in rational numbers also yields an integral solution. This is because if (x,y,z) is a rational triple, then multiplying through by the least common multiple of the denominators gives a triple of integers. (This works because the equation is homogeneous, with all terms having the same degree.)
No lesser a man than Euclid discovered how to describe all solutions in (positive) integers of this equation. Again, this is given in parametric form:
x = (u^2 - v^2)w, y = 2uvw, z = (u^2 + v^2)w
It's trivial to check that you get a solution of the equation for any integers u, v, w (including 0 and negative values). The proof that this gives all solutions is harder, and we leave that for you to think about.
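As a quick illustration (a sketch; the helper name euclid_triple and the sample parameter values are my own choices), here is the parametrization in Python with a check that each result really is a Pythagorean triple:

```python
from math import gcd

def euclid_triple(u, v, w):
    """Pythagorean triple (x, y, z) from the parametrization above."""
    return (u*u - v*v) * w, 2 * u * v * w, (u*u + v*v) * w

# u > v > 0 and w > 0 give triples of positive integers.
for (u, v, w) in [(2, 1, 1), (3, 2, 1), (2, 1, 3)]:
    x, y, z = euclid_triple(u, v, w)
    assert x*x + y*y == z*z
    print((x, y, z), "primitive" if gcd(gcd(x, y), z) == 1 else "scaled")
```

The first two parameter choices give the familiar primitive triples (3, 4, 5) and (5, 12, 13); the third gives (9, 12, 15), a scaled copy of (3, 4, 5).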
Pell's equation
With the next step up in complexity, we reach some interesting territory that's directly connected with algebraic number theory. This involves an equation of the form x^2 - ny^2 = 1 for some positive integer n; here n is fixed, not an additional variable. Solutions of this equation, however, depend very much on the arithmetic properties of n, and in some sense help us understand these properties.
The name "Pell's equation" was conferred by Leonhard Euler, who mistakenly gave credit for it to the otherwise obscure mathematician John Pell. However, the history of the equation goes all the way back to the Greeks, who were especially concerned with the case n=2, because it shines some light on the number √2. As you recall, √2 was of great interest (or concern) to them, ever since they realized √2 is not a rational number.
If y = 0, then x = ±1, so let's exclude that trivial solution. Then we can rewrite the equation as (x/y)^2 = n + 1/y^2. Suppose there exist infinitely many solutions of the equation (as is in fact the case). By taking one with y large enough, 1/y^2 can be made arbitrarily small, and therefore a solution (x,y) gives us a rational number x/y as close as we like to √n. This fact was already known implicitly to the Greeks, who also understood limit arguments. So they were pleased to know that even though √2 isn't rational, it can be approximated arbitrarily well by rational numbers. And the same is true of √n whenever n isn't a perfect square, so that √n is not rational.
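To see this numerically, here is a naive brute-force search (illustrative only; serious computation uses continued fractions) for small solutions of x^2 - 2y^2 = 1, printing the rational approximations x/y to √2 that they provide:

```python
from math import isqrt

def pell_solutions(n, limit):
    """Brute-force solutions of x^2 - n*y^2 = 1 with 0 < y <= limit."""
    solutions = []
    for y in range(1, limit + 1):
        x_squared = n * y * y + 1
        x = isqrt(x_squared)
        if x * x == x_squared:
            solutions.append((x, y))
    return solutions

for x, y in pell_solutions(2, 10000):
    print(x, y, x / y)   # 3/2 = 1.5, 17/12 ≈ 1.4167, 99/70 ≈ 1.41429, ...
```

Each successive solution gives a markedly better approximation to √2 ≈ 1.41421.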
Rather than pursue the details of Pell's equation, we need to look at how it's related to algebraic number theory. Recall that the set of all rational numbers is usually denoted by Q. Mathematically, this set has the structure known as a ring, in that it has the arithmetic operations of addition, subtraction, and multiplication, which observe certain rules. In fact, Q also has the structure of a field, which also allows for division by any of its elements other than 0. We will say much more about rings and fields in the next installment, so let's dispense with the formalities for now. It's safe, for present purposes, to think of the arithmetic operations of rings and fields as the ones you are already quite familiar with.
The integers, Z, also form a ring. Let Z[√n] stand for the set of all polynomials in powers of √n with coefficients in Z, and likewise for Q[√n] (with coefficients in Q); since (√n)^2 = n, every such element can be written in the form a + b√n. If n is a perfect square, these sets are just Z and Q, so let's assume n isn't a perfect square. It's obvious that these are rings also. But Z[√n] is not a field, since, for example, neither 1/2 nor 1/√n belongs to it.
What may be surprising, however, is that some elements of the form a + b√n have inverses in Z[√n] even if b ≠ 0 and a ≠ ±1. For instance, let n = 2, and consider the number α = 3 + 2√2 in Z[√2]. Observe that (3 + 2√2)(3 - 2√2) = 1. Hence 1/α = 1/(3 + 2√2) = 3 - 2√2, which is also an element of Z[√2]. Thus α has an inverse in Z[√2]. Such an element is called a unit. The only units of Z itself are ±1. But it turns out that for any positive n that's not a perfect square, the ring Z[√n] has infinitely many units.
What do these units look like? Well, first we need to define one more thing. If a + b√n is in Z[√n], define the function N(a + b√n) to be the number (a + b√n)(a - b√n) = a^2 - nb^2. (This is the same definition as the square of the norm of a complex number, where n = -1.) Clearly, N(α) is also an integer for any α ∈ Z[√n]. It's simple algebra to show that if α, β ∈ Z[√n], then N(αβ) = N(α)N(β).
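A small numerical sanity check of the multiplicative property (a sketch; representing a + b√n by the tuple (a, b) is my own convention, not standard notation):

```python
def mult(p, q, n):
    """Product of p = (a, b) and q = (c, d), where (a, b) stands for a + b*sqrt(n)."""
    a, b = p
    c, d = q
    return (a*c + n*b*d, a*d + b*c)

def norm(p, n):
    """N(a + b*sqrt(n)) = a^2 - n*b^2."""
    a, b = p
    return a*a - n*b*b

n = 2
alpha, beta = (3, 2), (7, 5)
assert norm(mult(alpha, beta, n), n) == norm(alpha, n) * norm(beta, n)
```

Of course a single check proves nothing; the identity itself follows from a line or two of algebra.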
Given all this, if α ∈ Z[√n] is a unit with inverse 1/α, then α(1/α) = 1, so N(α)N(1/α) = N(1) = 1. Since both factors are integers, N(α) must be a unit of Z, which means it must be ±1. Conversely, if N(α) = N(a + b√n) = (a + b√n)(a - b√n) = ±1, then α is a unit because it has an inverse, namely ±(a - b√n). Hence the units α of Z[√n] are precisely the α such that N(α) = ±1.
This is what allows us to show that Pell's equation has an infinite number of solutions, if it has any at all besides x = ±1, y = 0 (which we haven't actually shown here). Every solution of Pell's equation a^2 - nb^2 = 1 gives us a unit α = a + b√n with N(α) = 1. For any such α, N(±α^k) = 1 too, for any k ∈ Z, by the multiplicative property above; hence all ±α^k are units and give solutions of Pell's equation. Moreover, these are all distinct: the only rational units are ±1 (the case b = 0), so for any irrational unit α the absolute value |α| ≠ 1. We may assume |α| > 1 (otherwise use 1/α). Then the α^k are distinct for all positive k ∈ Z, because the absolute values |α^k| = |α|^k are all distinct.
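Concretely, here is a short self-contained sketch (the variable names are my own) that repeatedly multiplies by the unit α = 3 + 2√2 and churns out solution after solution of x^2 - 2y^2 = 1:

```python
n = 2
a, b = 3, 2           # alpha = 3 + 2*sqrt(2), a unit with N(alpha) = 1
x, y = a, b           # alpha^1
for k in range(1, 6):
    print(k, (x, y))  # (3, 2), (17, 12), (99, 70), (577, 408), (3363, 2378)
    assert x*x - n*y*y == 1
    # multiply x + y*sqrt(n) by a + b*sqrt(n) to get alpha^(k+1)
    x, y = x*a + n*y*b, x*b + y*a
```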
There may be other units as well, satisfying a^2 - nb^2 = -1. For example, if n = 5, we have N(2 + √5) = -1, so 2 + √5 is a unit of Z[√5]. If there exists a unit β = a + b√n with N(β) = -1, then there are also an infinite number of solutions of a^2 - nb^2 = -1, corresponding to the units β^k for odd integers k. (If k is even, we get solutions of Pell's equation.)
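The same kind of computation with β = 2 + √5 (again just a sketch with my own variable names) shows the alternation: odd powers have norm -1 and even powers have norm +1:

```python
n = 5
a, b = 2, 1           # beta = 2 + sqrt(5), with N(beta) = -1
x, y = a, b           # beta^1
for k in range(1, 5):
    print(k, (x, y), x*x - n*y*y)   # norms alternate: -1, 1, -1, 1
    x, y = x*a + n*y*b, x*b + y*a   # multiply by beta to get beta^(k+1)
```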
Catalan's conjecture
Catalan's conjecture can be stated as saying: the only pair of consecutive whole numbers that are both perfect powers is (8, 9), since 3^2 - 2^3 = 1. Stated in the form of a Diophantine equation, the conjecture says there is only one solution of the equation x^p - y^q = 1 in which x, y, p, and q are each more than 1. Note that, like the Fermat problem, this is an assertion about an infinite set of possible equations (one for each pair of exponents p and q).
To be sure, this problem sounds rather specialized, but it's still rather surprising that only one set of numbers meeting the stated conditions can exist. There are, after all, infinitely many perfect powers, yet the claim is that exactly one pair of them differs by just one.
It's still an open question, but there has been recent progress. In 1976 it was shown that there can be at most finitely many exponents p, q for which any solution can be found. In other words, there is some (probably very large) number M for which all solutions have p and q less than M. Theoretical studies have discovered suitable values of M. (10^17 will do.)
Then in 2000 Preda Mihailescu found that, except for 2 and 3, p and q must be what are called "double Wieferich primes", i.e. p^(q-1) must be congruent to 1 modulo q^2 and q^(p-1) must be congruent to 1 modulo p^2. Wieferich primes turn up in other number theoretical problems, but this is obviously a rather restrictive condition. (2 and 3 do not fit this criterion, but as often happens, very small primes are a special case.)
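The condition is easy to test with modular exponentiation. Here is a minimal sketch (the function name is my own; actually hunting for qualifying pairs requires searching far beyond small primes):

```python
def double_wieferich(p, q):
    """Check p^(q-1) ≡ 1 (mod q^2) and q^(p-1) ≡ 1 (mod p^2)."""
    return pow(p, q - 1, q * q) == 1 and pow(q, p - 1, p * p) == 1

print(double_wieferich(2, 3))   # False -- as noted above, 2 and 3 do not satisfy it
```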
Given this result, Catalan's conjecture could be decided by computation alone, if all pairs of double Wieferich primes below the known limits can be identified. But a purely theoretical solution also seems tantalizingly close.
Beal's conjecture
Beal's conjecture is:
There are no positive integers x, m, y, n, z, r satisfying the equation
x^m + y^n = z^r
where m, n, r > 2 and x, y, z are co-prime (have no prime factors in common)
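A naive search over small values (purely illustrative, and far too small a range to count as evidence) turns up solutions with exponents above 2, but in every case x, y, z share a prime factor, just as the conjecture predicts. The bounds and helper name below are arbitrary choices of mine:

```python
from math import gcd

def nth_root(s, r):
    """Return z if s = z^r for some integer z >= 1, else None."""
    z = round(s ** (1.0 / r))
    for candidate in (z - 1, z, z + 1):   # guard against floating-point rounding
        if candidate >= 1 and candidate ** r == s:
            return candidate
    return None

for x in range(1, 20):
    for y in range(1, 20):
        for m in range(3, 6):
            for n in range(3, 6):
                s = x**m + y**n
                for r in range(3, 6):
                    z = nth_root(s, r)
                    if z is not None:
                        # e.g. 2^3 + 2^3 = 2^4 and 3^3 + 6^3 = 3^5: always a common factor
                        assert gcd(gcd(x, y), z) > 1
```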
