The Rational Roots Theorem

One of the skills students learn in secondary school is to factor quadratic expressions. In particular, they learn how to solve equations like $x^2 + 2x + 1 = 0$. There is a slew of techniques one can use to deal with quadratics, and they mostly rely on the fact that questions asked in assignments and tests have “nice” factorizations. Most of these equations have integer solutions, or at worst rational ones, which makes them straightforward to factor. Of course, this might take a while to get used to, but it’s a skill that many students pick up.

If they really have no idea how to factor an expression, they learn about the tool that deals with basically every case: the quadratic formula. This is something that most students will remember from their days in mathematics class (even if they don’t remember the formula explicitly). The nice thing about this is that, as long as you can crank the gears of arithmetic, the quadratic formula won’t fail you.
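For reference, the roots of $ax^2 + bx + c = 0$ (with $a \neq 0$) are:

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$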

This is great, but the reality is that the world has much more than just quadratics and linear equations. In fact, what you learn soon after leaving secondary school is that the whole class of polynomials tends to be considered “nice”. Therefore, a question you might ask is, “How can I factor these expressions?” After all, there’s nothing different in principle about factoring a polynomial of degree greater than two. You still have to find the roots of the equation, though this looks more difficult when the polynomial has a higher degree.

Being able to factor larger polynomials comes up from time to time. I had to do this in my differential equations class, which doesn’t immediately suggest factoring. While one of us was bemoaning the difficulty of factoring these large polynomials, my professor stated an interesting little result that I didn’t know about. It’s called the rational roots theorem, and it made enough of an impression on me that I still remember it.

Here it is. The rational roots theorem tells us that if a polynomial with integer coefficients and a nonzero constant term has a rational root of the form $a/b$, where $a$ and $b$ are relatively prime, then $a$ will divide the constant term and $b$ will divide the coefficient of the leading term.

I think it’s quite instructive to see this in an example, so let’s look at one before the proof of why this works. Suppose we have the following polynomial:

$$4x^2 + 9x + 2.$$

The question is, can we factor this nicely? Since I took a simple quadratic, you can probably figure out its factors without this method. However, if we refer to the rational roots theorem, we need to look at the constant term, 2, and the coefficient of the leading term, 4. The positive factors of 2 are 1 and 2, while the positive factors of 4 are 1, 2, and 4. Furthermore, since our theorem simply says that the numbers $a$ and $b$ will divide the constant term and the leading coefficient, our values can be negative too. As such, the possible values of $a$ are $1$, $2$, $-1$, and $-2$, while the possible values of $b$ are $1$, $2$, $4$, $-1$, $-2$, and $-4$. The possible rational roots are then given by the fractions $a/b$:

$$\pm 1, \ \pm \frac{1}{2}, \ \pm \frac{1}{4}, \ \text{and} \ \pm 2.$$

This gives us eight possible solutions to the equation. If there’s a rational solution, it will be in the above list. We can then test each one and see if the output is zero (meaning it’s a root). In our case, since the coefficients of each term are positive, the only way to get an output of zero is if the input is negative. That eliminates half the values. Testing the rest, we find that the solutions are $x = -2$ and $x = -1/4$, both of which are on the list.
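In fact, this candidate-testing procedure is easy to automate. Here’s a minimal Python sketch (the function names are my own, and it assumes integer coefficients with a nonzero constant term): it enumerates the candidates from the theorem and keeps the ones that are actually roots.

```python
from fractions import Fraction

def divisors(n):
    """Positive divisors of a nonzero integer."""
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_root_candidates(coeffs):
    """All candidates a/b allowed by the rational roots theorem.

    coeffs lists integer coefficients from the leading term down to
    the (nonzero) constant term, e.g. [4, 9, 2] for 4x^2 + 9x + 2.
    """
    leading, constant = coeffs[0], coeffs[-1]
    candidates = set()
    for a in divisors(constant):
        for b in divisors(leading):
            candidates.add(Fraction(a, b))
            candidates.add(Fraction(-a, b))
    return sorted(candidates)

def rational_roots(coeffs):
    """The candidates that actually evaluate to zero."""
    def p(x):
        result = Fraction(0)
        for c in coeffs:  # Horner's method
            result = result * x + c
        return result
    return [r for r in rational_root_candidates(coeffs) if p(r) == 0]

print(rational_root_candidates([4, 9, 2]))  # the eight candidates above
print(rational_roots([4, 9, 2]))            # [Fraction(-2, 1), Fraction(-1, 4)]
```

Working with exact `Fraction` arithmetic means there’s no floating-point fuzziness when checking whether a candidate evaluates to zero.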

Now, you might not be too impressed with this. After all, this doesn’t guarantee we will find solutions. It just gives us possible candidates. What happens if we have a few solutions in our list, but some that aren’t there (since they aren’t rational)?

This is a valid concern, but the reason I’m discussing this theorem is because it has practical value in class. When factoring polynomials, chances are the solutions will be rational. For quadratics, I wouldn’t employ this method since I’m used to eyeballing the solution (through lots of practice). However, when the degree of the polynomial is larger, this theorem comes in handy.

The idea isn’t necessarily to find all the factors in one go. In fact, the context in which my professor mentioned this was when he explained that factoring a polynomial becomes easier as soon as you find one factor. That’s because you can then use polynomial long division to make the polynomial smaller, which is easier to work with.
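To make that concrete, here’s a small sketch of synthetic division in the same spirit as the code above (again, the names are my own): once the rational roots theorem hands you a root, you can divide out the corresponding factor and continue with a smaller polynomial.

```python
from fractions import Fraction

def deflate(coeffs, root):
    """Divide a polynomial by (x - root) using synthetic division.

    coeffs lists coefficients from the leading term down to the
    constant. Returns the quotient's coefficients; the remainder is
    zero exactly when `root` really is a root.
    """
    quotient = [Fraction(coeffs[0])]
    for c in coeffs[1:]:
        quotient.append(quotient[-1] * root + c)
    remainder = quotient.pop()
    assert remainder == 0, "not a root"
    return quotient

# x^3 - 6x^2 + 11x - 6 has the rational root x = 1. Dividing it out
# leaves x^2 - 5x + 6, a quadratic we can factor by inspection.
print(deflate([1, -6, 11, -6], Fraction(1)))  # coefficients of x^2 - 5x + 6
```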

So what is the proof of this result? Thankfully, there isn’t anything too difficult with it. The crux of the proof lies in rearranging the equation for the roots.

To begin with, let’s consider a polynomial $p(x)$ with integer coefficients, of the form:

$$p(x) = a_n x^n + \cdots + a_1 x + a_0.$$

For convenience, we’ll say that the leading coefficient $a_n$ isn’t zero (or else we’d just consider the next leading coefficient). One other requirement is that the constant term $a_0$ isn’t zero either. This is crucial for the proof, as we will see below.

Now, consider a rational root of $p(x)$. Since we can always simplify a rational number until the numerator and denominator are relatively prime, let’s call the rational root $u/v$, with $u$ and $v$ relatively prime. Substituting this into the polynomial gives:

$$p(u/v) = a_n \left( \frac{u}{v} \right)^n + \cdots + a_1 \left( \frac{u}{v} \right) + a_0 = 0.$$

We then want to see if this imposes any conditions on $u$ or $v$. To start with, we can move the constant term $a_0$ to the other side of the equation, and then multiply both sides by $v^n$, since this will clear all the fractions. Doing so gives:

$$a_n u^n + \cdots + a_1 u v^{n-1} = -a_0 v^n.$$

At this point, look at the left side of the equation. Each term contains at least one factor of $u$, which means the whole left side is divisible by $u$. However, since the two sides are equal, this means $u$ also divides the right side. In terms of the above equation, this looks like:

$$u \left( a_n u^{n-1} + \cdots + a_1 v^{n-1} \right) = -a_0 v^n.$$

We know a bit more than this, though. Since $u$ and $v$ are relatively prime, they share no common prime factors, and the same is true of $u$ and $v^n$. So $u$ divides the product $-a_0 v^n$ while sharing no factors with $v^n$. This leaves only one possibility: $u$ must divide $a_0$!

We can play a similar game with our equation $p(u/v) = 0$ by rearranging the terms to isolate a factor of $v$. Doing so gives us:

$$a_n u^n = -a_0 v^n - a_1 u v^{n-1} - \cdots - a_{n-1} u^{n-1} v = v \left( -a_0 v^{n-1} - a_1 u v^{n-2} - \cdots - a_{n-1} u^{n-1} \right).$$

Again, we see that there is an overall factor left in the polynomial terms, so we can pull out a $v$ from each term. This means the whole right side is divisible by $v$, and since the two sides are equal, the left side is divisible by $v$ as well. The same coprimality argument as before applies: $v$ shares no factors with $u^n$, so $v$ must divide $a_n$. This gives us the desired result.

Note here that it was very important to have a nonzero constant term in our polynomial. Without it, we couldn’t bring a term onto the other side of the equation and then multiply through by $v^n$.

There’s one more thing we can note here. If the leading coefficient of the polynomial is $a_n = 1$, this implies $v = 1$, since it’s the only positive integer that divides 1. (Strictly speaking, we could also consider $v = -1$, but we can just absorb the sign into the numerator.) What this means is that, if the leading coefficient of a polynomial is one, the roots will either be integers or irrational numbers.


What I find so interesting about this theorem is how the leading coefficient and the constant term impose “constraints” on the roots of a polynomial. It’s not really surprising when you understand that when you expand the factored form of a polynomial, the “information” from the roots is encoded in the constant term. I’m not sure how rigorous this interpretation is, but I like the heuristic.

Like many interesting results, it doesn’t have some amazing practical use. Yes, it can make your life easier if you find yourself needing to factor large polynomials, but other than that, I just find it makes for an interesting theorem.

As a final note, there is a clever application of this theorem that I want to mention. In the essay, I didn’t mention the straightforward implication of the result: if a root of the equation isn’t among the finitely many rational candidates the theorem produces, then that root is irrational. This might sound sort of obvious, but it means we can “test” whether certain numbers are irrational by constructing polynomials with them as solutions. I learned about this from a Mathologer video, so I highly recommend checking it out if you want more details.
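For instance, consider $p(x) = x^2 - 2$, which has $\sqrt{2}$ as a root. The theorem says any rational root must be one of $\pm 1$ or $\pm 2$, and substituting these gives $-1$, $-1$, $2$, and $2$, never zero. Since $\sqrt{2}$ is a root but isn’t on the list, $\sqrt{2}$ must be irrational.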

Mathematics Isn't Just Numbers

We often equate mathematics with numbers, as if mathematics doesn’t extend further than doing arithmetic. Each time this happens, I have to restrain myself from going on a rant. I want to grab the person by the collar and exclaim, “There’s so much more to mathematics than just numbers! It’s like saying that running is just a bunch of one-legged hops. While that might be technically true, it’s not the way most people would describe their experience. In the same way, mathematics is way larger than just numbers.”

Even within my own family, some of them still see mathematics as essentially just a bunch of numbers with the associated arithmetic. I’m in a mathematics program, and this still isn’t clear to them. I think that’s clearly a failure on my part to share the diverse aspects of mathematics.

I encounter (and recognize) mathematics everywhere in my life. I know that it’s responsible for a lot of what I see on the web in terms of illustrations and graphics, algorithms rule our lives both online and offline, and mathematics is present in all the engineering I see. Most of these examples aren’t purely about numbers. Sure, it’s difficult to get away from numbers in mathematics, but they aren’t always the primary players. And yes, you can find plenty of examples of mathematics in the “real world” which aren’t cringe-inducing (like you might see in textbooks).

As a mathematics student, I’ve learned so much that doesn’t directly involve numbers. I’ve learned a bunch of geometry, analysis, probability and statistics, abstract algebra, discrete mathematics, and graph theory. While numbers are present in each of these topics, they only serve to make the concepts easier to handle.

If I wanted to draw “mathematics” and “numbers” as two sets, the former would encompass the latter. In other words, numbers are a part of mathematics, but they aren’t everything. This is something that I want to make more clear to a general audience. Mathematics isn’t all about numbers. In particular, if you just look at the area of geometry, there is so much you can learn without even worrying about numbers. This is an especially fertile ground for those with a passing interest in mathematics.


The lesson is simple: there’s a lot more to mathematics than numbers. Of course, numbers are present almost everywhere in mathematics, but they aren’t the point in and of themselves (unless you’re studying number theory, perhaps). Mathematics is a lot richer than that, so there’s no reason to put it off if you’ve “never been good with numbers”.

Degeneracy of the Quantum Harmonic Oscillator

Note: This post uses MathJax for rendering, so I would recommend going to the site for the best experience.

I just love being able to find neat ways to solve problems. In particular, there’s something about a combinatorial problem that is so satisfying when it gets solved. The problem may initially look difficult, but a slight shift in perspective can bring the solution right into focus. That’s the case with this problem, which is why I’m sharing it with you today. Don’t worry if you don’t know any of the quantum mechanics that goes on here. The ingredients themselves aren’t important to the solution of the problem.

Energy of the quantum harmonic oscillator

If you have taken a quantum mechanics class, there’s a good chance you studied this system. The quantum harmonic oscillator is one that can be solved exactly, and it allows one to learn some interesting properties of quantum mechanical systems. Briefly, the idea is that the system has a potential proportional to the position squared (like a regular oscillator). In the quantum mechanical case, what we often seek are the energy levels of the system, and these are what interest us in the problem here.

In one dimension, the energy is given by the relation $E_n = \left(n+1/2 \right) \hbar \omega$, where $n$ is an integer greater than or equal to zero, and the terms outside of the parentheses are constants (the reduced Planck constant and the angular frequency, respectively). However, what’s nice is that this extends into any number of dimensions in a straightforward way. If we want to look at the harmonic oscillator in three dimensions, the energy is then given by:

$$E = \left( n_x + n_y + n_z + \frac{3}{2} \right) \hbar \omega.$$

In other words, there’s an $n$ value for each dimension. We can even consider the harmonic oscillator in $N$ dimensions, and the energy would change in the same way: we would just add a new $n$ index and throw in an extra factor of $1/2$. Furthermore, it’s important to know that each combination of $n$’s labels a distinct physical state.
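Written out explicitly (with one quantum number $n_i$ per dimension), the $N$-dimensional energy reads:

$$E = \left( n_1 + n_2 + \cdots + n_N + \frac{N}{2} \right) \hbar \omega.$$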

What you might notice from the three-dimensional case is that there are different combinations of $n_x$, $n_y$, and $n_z$ that give rise to the same total energy. For example, the combinations $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$ all give the same total energy. This is called degeneracy, and it means that a system can be in multiple, distinct states (denoted by those integers) that all yield the same energy. In this essay, we are interested in finding the number of degenerate states of the system.

The counting problem

Here’s the question. Given a certain value of $n$ (which in the three-dimensional case is $n = n_x + n_y + n_z$), how many different combinations of those three numbers can you make to get the same energy? If we want to be more general: for a given $n$ and $N$ (the number of dimensions), what is the degeneracy?

If we do a few examples, we will see that the degeneracy in three dimensions is one (no degeneracy) for $n = 0$, three for $n = 1$, six for $n = 2$, and ten for $n = 3$.

I don’t know if you’re seeing a pattern here, but it’s not super clear to me. I definitely don’t see how to generalize this to any n, let alone for more dimensions. As such, we’re going to look at this in a whole different way.

The method I’m going to discuss is one I found here, and is called the “stars and bars” method. It’s a beautiful technique that captures exactly what this problem is asking.

We start by thinking about how we can represent this problem. For a given n, we want to find a way to split this number into separate parts. Say we want to split the number into four parts. Then, we would need to introduce three splits in the number n so that there are four “pieces”.

How many parts do we want for our particular problem? Well, it depends on the dimension we’re working in! For example, if the dimension is three, we want to split n into three parts. This means we need to “cut” the number twice.

Words don’t describe this as well as an explicit visual example. Let’s pretend we have $n = 5$, and we are working in three dimensions. We will represent the number five by circles, and the splitting will be done using vertical bars. Then, here’s a way we can “cut” $n$:

○○ | ○○ | ○ (a way to break the number 5 into three pieces, here as $2 + 2 + 1$).

As you can see, this is just another way to word our original question. What’s neat though is that the construction doesn’t actually “know” about the way the number n is being split. In other words, all we’ve done is introduce a second kind of object into the mix (the vertical bars). We recognize those vertical bars as dividing the number into three pieces, but the mathematics doesn’t care.

If you’ve taken some discrete mathematics, you may know where we’re going with this. We’ve reduced the question to finding the number of ways we can arrange objects. This is a common combinatorial problem, and one that is well-studied. The answer is ripe for the taking.

For our scenario, how many objects do we have in all? Well, there are $n$ circles, and we also have to include the bars. The number of bars is just $N - 1$, so the total number of objects is $n + N - 1$. Then, from those objects, we want to fix the positions of the bars. Therefore, we get the usual combinations formula. If we want to label the degeneracy as $g_n$, we get:

$$g_n = \binom{n + N - 1}{N - 1}.$$

In particular, we can now solve this question for three dimensions. Substituting $N = 3$, we get:

$$g_n = \binom{n + 2}{2} = \frac{(n + 1)(n + 2)}{2}.$$
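If you want to double-check the formula, a quick brute-force count agrees with it. Here’s a minimal Python sketch (the function names are my own):

```python
from itertools import product
from math import comb

def degeneracy_brute_force(n, N):
    """Count tuples (n_1, ..., n_N) of non-negative integers summing to n."""
    return sum(1 for ns in product(range(n + 1), repeat=N) if sum(ns) == n)

def degeneracy_formula(n, N):
    """Stars and bars: choose the N - 1 bar positions among n + N - 1 slots."""
    return comb(n + N - 1, N - 1)

# Reproduce the three-dimensional values quoted above: 1, 3, 6, 10, ...
for n in range(6):
    assert degeneracy_brute_force(n, 3) == degeneracy_formula(n, 3)
    print(n, degeneracy_formula(n, 3))
```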

All I can say is that this is slick. I remember first trying to solve this on my own, and getting stuck. Even when I was given the answer, it didn’t feel satisfying to me. I knew that there had to be some better explanation for the degeneracy. I felt like there should be some combinatorial argument for the degeneracy, and it turned out that I was right! I hope that this argument helps clear things up for other students who were wondering about the formula and how to get it. In my mind, this is one of the clearest ways to get it.

Being Happy With Being Repetitive

One of the most difficult things to do in life is to focus. Maybe you’re different than me, but I have a lot of trouble sitting down and focusing on one task or idea. Instead, my mind buzzes with one activity while my hands do another. I’m always switching between ideas, and it takes a lot of energy to focus on just one.

On a more macro level, my trouble with focus manifests in the types of activities I want to do. There are so many wonderful things I could spend my time on. I could become a better runner, I could type words all day, I could learn to program, or I could work on teaching others what I know in mathematics or physics. This is just a small sampling of the activities which interest me, and the difficulty is being able to focus on just one (or at most, a few). After all, I don’t want to just do these activities. I want to be great at them!

This is where reality comes knocking at the door. We simply don’t have enough time to do all the activities we wish we could do. Furthermore, it’s likely that you don’t even like a lot of those activities. Instead, you think you would enjoy them, but you haven’t actually tested that theory out. The result is that you have a bunch of ideas flying around in your head that are either not grounded in reality or aren’t practical to do all at once.

With only a finite amount of time in a day, we can only make meaningful progress in a few of those areas.

This sounds limiting. When I encountered this obstacle, my instinct was to rebel and try to find a loophole. Surely I was different? Maybe most people can’t focus on a bunch of hobbies and projects, but I’m sure that I can.

Perhaps you are different than me, but I found that I can’t do it. If I try to take on multiple projects or hobbies, I can’t sustain it. I might be able to do them, but I won’t improve and get to a level of excellence that I want. It just doesn’t happen.

Instead, I’ve learned to embrace repetition.

Instead of looking to do a bunch of activities, choose a few of your favourites. Perhaps one or two to start. Then, focus all of your attention on these pursuits. If you need to, block out the sources which broadcast what you’re “missing”. They aren’t helping you if you feel the need to go do those activities every time you see them.

This isn’t easy. It’s very difficult, but it is also freeing. When you remove the other sources of activities you could be doing, you don’t have to fight a mental battle each day to choose what to do. Instead, you can focus on the few pursuits which you want to do for this season of your life.

If you want to really get better though, you need to embrace the repetition. Not only will you take small steps every day to improve in your pursuit, you will look forward to the very act of doing the thing you want. This might cause some confusion. Aren’t I doing this thing because I love it already? That may be true on a mental level, but you have to feel it.

For myself, this is the difference between feeling like I “have” to go on a run every day because that’s how I’ll get faster and remain a lifelong athlete versus feeling lucky to go run. When I go to bed at night, I look forward to the run the next morning, even if the weather is supposed to be terrible. I’m looking forward to the run of today. Not the results in three months, but the activity in the moment. This is what I mean by embracing repetition. It’s more than just doing the same thing over and over. It’s finding joy in the daily act, not in the long term results.

I must confess: this didn’t come automatically to me. For a long time, I was pursuing activities as a means to an end. I had one eye on the moment and one towards the result that I was seeking. However, I now think this is the wrong way to go about it. If you want to improve and get really good at your thing, start by loving the act of doing it every day. If you enjoy writing, learn to love the experience of facing your keyboard and putting down words on the page.

Why? Because that’s all you will ever get.

Yes, you might someday be published in your dream publication. Yes, you might bring a book into the world. However, that won’t change the fact that the act of writing isn’t seeing the published work, but the actual work of crafting words. There are a lot of details and specifics depending on exactly what you do, but the essence of writing is putting words on the page. That’s it. If you can really enjoy doing that each day, you will improve at your craft.

Furthermore, when you start enjoying the daily act of your work, it’s easier to ignore those other calls for attention. Sure, it might be fun to try out this new pursuit, but if I already love what I’m doing right here, why bother changing it up?

I want to be clear. There’s no problem in changing things up with your pursuits. In fact, that’s a great thing, because it lets you expand your skills and grow as a person. However, this can be taken too far, and that results in switching from pursuit to pursuit, never quite satisfied.

The problem is that a surface-level familiarity with a pursuit isn’t enough. To really enjoy it, you have to dig deeper. This takes time and focus. Only then will you start to see that this pursuit is meaningful, perhaps just as much as that “other” pursuit which society loves.


I’ve experienced what happens when you switch from pursuit to pursuit. You become restless, thinking that maybe the next one will be right for you. This makes you want to keep on searching, never being quite satisfied because there is a possibility that something else is better. That might be true or false. I can’t answer that. What I can say is that you can waste a lot of time searching for the “right” activity without ever doing anything. Therefore, my advice would be to find something you seem to like, and then dig deeper. Give yourself a few months in which you will focus exclusively on this pursuit. It doesn’t have to be forever, but you need to give yourself enough time to find joy in the repetitiveness. At that point, you can decide if you want to keep your search going.

Finding the work we love to do seems like the difficult part, but the real challenge is being brave enough to stop searching and say, “I’m going to try this and see what happens if I give it all of my focus.”