Why Can't We Give An Answer to 0/0?
Student: What’s the answer to 0/0?
Teacher: It’s undefined.
Student: Then why can’t we define it?
Teacher: Because that’s just how division works.
The exchange above happens often when students first learn division. It’s a natural question: we can divide by any other number, so why not this one? Unfortunately, the answers given to this question rarely address the consequences of defining 0/0; instead, the restriction is presented as something handed down by authority.
I often like to remind students of a very important lesson when they start wondering about the definitions and restrictions that come up. We get to make up the definitions used in mathematics. We aren’t forced to use a certain definition for a concept if we don’t want to! Mathematics is about defining concepts and building from them, but we get to choose those starting points.
One specific example where things can get controversial is when we start considering an expression like 0/0. In this case, what is the answer?
As the teacher said above, we leave this expression as undefined. In other words, we say that the statement just doesn’t make sense, and we move on. However, if the definitions are up to us, why can’t we define these to take on certain values? What’s the harm in that?
This is a good question, and answering it will give us insight into the notion of divisibility. What does it mean for a number to “divide” another? If you ponder this question, you will find that we can divide b by a if we can write b=ac, where c is another element. Note that I’m using the word “element” here because we don’t necessarily have to be working with integers, though that is the setting which is most familiar.
There’s another property that we would like, even though you may have implicitly assumed it. It’s that if we can indeed write b=ac, the element c is unique. If we take the example of 20/5, we know that the only way to write this is 20=(5)(4). Four is the unique number that we get when performing the division.
So what about zero? If we want to find an answer to 0/0, we need to find a number such that 0=0a. What kind of numbers fit the bill? One works, since (0)(1)=0, but two works as well. Actually, you will quite quickly realize that all numbers work. That’s because you’re multiplying by zero, which “collapses” every single number to zero when you multiply them together.
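To see this concretely, here’s a small Python sketch (my own illustration, not anything standard) that treats division as the search for an element c with b = ac, scanning a small range of candidates:

```python
def quotients(b, a, candidates=range(-10, 11)):
    """Return every candidate c satisfying b == a * c."""
    return [c for c in candidates if b == a * c]

# 20/5 has exactly one answer among the candidates: 4.
print(quotients(20, 5))  # [4]

# 0/0 is satisfied by *every* candidate, so there is no unique answer.
print(quotients(0, 0))   # every value from -10 to 10
```

The candidate range is arbitrary; widen it and the second list only grows, which is exactly the problem.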
Now we’re faced with a bit of a dilemma. Which number should we choose as the answer? Remember, we have all the freedom in the world to create our own definitions! Let’s say we choose 0/0=5. Then what happens if we consider (0/0)(0/0)? On the one hand, each factor in the parentheses is 5, so we should get a result of 25. However, if we multiply fractions in the usual way, we get (0/0)(0/0)=(0⋅0)/(0⋅0)=0/0=5. As such, we would conclude that 5=25. This is clearly not a very good number system, since any time I owe you twenty-five dollars, I’ll only give you back five. We know that those two numbers should be different, so it’s a bit of a concern when we manage to say that they are equal to each other.
The lesson we learn from this is that, while we do have the freedom to define concepts however we wish, we also want to be consistent. If we can transform five into twenty-five, it turns out that we can make basically any number we want equal to five. This is not a system in which the usual arithmetic rules apply. That’s not inherently bad, but we need to ask ourselves if the trade-offs are worth it. Is it a good idea to have 0/0 equal to any number we want as soon as we declare it equal to a specific one, or is it better to leave it undefined? As a community, mathematicians have obviously chosen the path of not defining expressions like 0/0, and it’s precisely for this reason. Our common sense goes out the window once we start allowing these kinds of expressions.
In fact, you may have seen “pseudo-proofs” that 1=2, and these rely on sneaking in a “divide by zero” step at some point. Of course, these “proofs” won’t mention that, but that is the trick being played.
While it is easy to define these expressions to take on a certain value, there’s a reason why this isn’t done. We aren’t just lazy and don’t want to divide by zero. It’s that zero is a fundamentally different sort of number, and dividing by it doesn’t give us a unique answer.
However, it is much more fascinating to delve into why we don’t divide by zero, rather than simply forcing you to memorize this in class, don’t you think?
The Limits of Life
Note: I received a copy of this book as an ARC from NetGalley. It comes out next week. I tried to stick to the concepts he describes, but I’m sorry if you see any bias.
If you ever read a piece describing evolution or biology, chances are you will read something about how life admits infinite variety. Words like boundless and limitless get thrown around quite a bit. These make for nice narratives, but the truth is that life isn’t quite so free to do what it wants.
To be clear, I’m not saying that life isn’t diverse or that evolution is wrong. What I’m trying to get across is that one needs to be careful about what “infinite variety” means.
If you’re talking about the fact that the exact colour of one’s eyes can take on a gradient of values, then sure, there’s a functionally infinite amount of choice. (Eye colour isn’t anything special, it’s just an example.) In that sense, there’s an infinite amount of variety to be found in nature.
However, this doesn’t imply that life can take any form. Biology is beholden to something, and that something is the collection of equations of physics.
Evolution cannot just roam free and explore any form of life it wants. Evolution must comply with the equations and principles from physics. This immediately implies that life has limits, which means life cannot have infinite variety.
In his new book, The Equations of Life, Charles Cockell explores how physics can inform biological questions. It’s a great book that takes simple physical principles and shows how they have consequences for life. He links equations to the qualitative aspects of different life forms in a way that is easy to understand and shows how physics and biology do work together.
An analogy that I found useful was how he explains that the variety of life is more like a zoo than an infinite expanse. While there is room for a lot of variety, life is contained within a sharp perimeter, given by physical principles.
He writes:
No evolutionary roll of the dice can overcome a lack of a solvent within which to do biochemistry or the energetic extremes of high temperatures. The details, the temperature sensitivity of this and that protein, may well modify the exact transition between the living and the dead, particularly for individual life forms, maybe for life as a whole. But in broad scope, life’s boundaries, the insuperable laws of physics, establish a solid wall that bounds us all together.
One particular characteristic he explores is the notion of temperature. What kind of temperature limits are imposed on life? Is there an environment that is just too cold or too hot? This turns out to be an interesting question where the nature of atomic bonds in molecules plays a large part.
For high temperatures, there is a point at which the heat of the environment will break the bonds between the atoms that make up the living organism. For carbon–carbon bonds, this limit is about 450 degrees Celsius. After that, the bond breaks. In other words, no more living organism. The important part isn’t the particular value. Rather, it’s the fact that we know we can’t have carbon-based life at a temperature of 1000 degrees Celsius. It just won’t work. There are similar constraints at the high end of the temperature range for other atoms as well. There’s no getting around them, because they are physical principles that apply to everything.
There’s a similar story for the lower end of the range. As Cockell writes, the issue here is that molecules don’t move fast when it’s cold. As a result, any sort of radiation can kill an organism because it will not be able to repair itself fast enough. Imagine you’re building a house and someone is removing bricks faster than you’re putting them on. Even though you keep on working, eventually there is no house left.
Cockell tackles similar ranges in the book, such as pH level. During these excursions, his point isn’t to derive a value and say, “There we go! Life can’t pass this barrier.” Rather, it’s to acknowledge that there does exist a barrier that life cannot cross. These barriers can’t be jumped over by clever tricks from evolution. They are a fact of our universe.
This was one of the key lessons I got from The Equations of Life. As a physics student, I found myself interested in the relationship between physics and other sciences. Cockell does a great job of illuminating that relationship. Even if you’re not brushed up on your classical physics, this is a book that will reel you in. Plus, he has a wealth of citations at the end, if you ever want to explore the topics in more detail!
Hidden Assumptions
I remember when I first took a class in linear algebra, and we were talking about vector spaces. In addition to the definition of a vector space, we were also given multiple axioms that describe what the structure of a vector space looks like. This included a bunch of boring things, like the fact that if you have a vector v, you should have a corresponding vector −v such that v+(−v)=0, where 0 is the zero vector. There are eight of these axioms, and together they describe exactly what can be called a vector space.
If you look at the axiom above, you might think it’s obvious. Of course I can go backwards from my vector v to get to zero. Similarly, you would probably balk at the idea that most of the other axioms even need stating, such as commutativity (a+b=b+a) or the existence of a zero vector such that v+0=v. These properties seem more than obvious to you. They are ingrained. Why do these define a vector space specifically? Don’t they always hold, for any kind of space?
Well, not quite. If we just focus on commutativity, this is something that isn’t always true. To take an example that isn’t part of mathematics, consider something we do in our everyday lives. Imagine you were eating a bowl of cereal in the morning. After taking out your bowl and putting cereal in it, you then pour your milk, followed by eating the cereal. But what if you did those last two steps in the opposite order? Now you eat your cereal, and then you pour your milk in the bowl. Evidently, your cereal-eating experience won’t be the same in both cases. This illustrates something we don’t often appreciate in mathematics: order is important. It’s just that we tend to work in spaces that have this nice and special property, so we start to think that all spaces do.
If you want a more mathematical example of commutativity not holding, the classic one is matrices. Matrices are another part of linear algebra, and while taking the course you quite quickly figure out that order does matter when multiplying them. Suffice it to say, bad things happen when you assume things will commute without knowing whether your assumption is correct.
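As a quick sketch (plain Python, no libraries, with two matrices picked arbitrarily for illustration), multiplying a pair of 2×2 matrices in both orders gives different results:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # swaps columns when multiplied on the right

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]
```

One order swaps the columns of A, the other swaps its rows, so AB ≠ BA here.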
Why is this important? We often go through a lot of our mathematical lives without considering our assumptions. Instead, we focus on solving particular problems and comparing with a known answer. Furthermore, we tend to get used to the particular spaces we work in, and we assume that their nice properties will hold in all other spaces. This lack of diversity creates bad habits in our mental models, biasing us toward one specific way of things working.
Remember, axioms are the foundation for basically everything you do in mathematics. If you keep on digging, you should always hit a floor of axioms. This is simply because the goal of mathematics is to discover truth from those axioms. These axioms can’t be proven. They are taken for granted. But the surprising thing is that many of the rules and elements we take for granted on a day-to-day basis within mathematics aren’t axioms. They are actually statements that can be proved using the axioms.
For example, take an innocent-looking statement that almost no one will blink an eye at while reading:
If a ≠ 0 and ab = ac, then b = c.
This is known as the law of cancellation. We get quite good at using it in secondary school, where we learn how to simplify fractions and other expressions. This naturally comes in handy, because it lets us avoid working with larger numbers than we have to. If 2x=8, we can “cancel” a factor of two from both sides of the equation and obtain x=4.
So far, so good. This is harmless, right? How could a simple rule like this possibly go wrong?
The problem is that you’re not thinking about this in general. The subtlety here is that it matters where the elements a, b, and c come from. If they are real numbers, then fine, that’s not a problem. But it takes a particular type of “setting” to have this cancellation rule. In particular, you need to be in an integral domain, which is a structure that is basically defined by this property.
And there are very real examples that aren’t integral domains. For example, let’s do some arithmetic modulo eight. This just means that we reduce any number to its remainder upon division by eight. In this setting, we have that 6 ≡ 14. But look what happens here. We know that 2⋅3 = 6 and 2⋅7 = 14, so 2⋅3 ≡ 2⋅7, yet we certainly can’t conclude that 3 ≡ 7. When you divide these two numbers by eight, the remainders are three and seven respectively, so they are not the same. As such, you can immediately see that cancellation doesn’t work all the time.
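The failure is easy to verify in a couple of lines of Python, using the % remainder operator:

```python
# 2*3 and 2*7 leave the same remainder modulo 8...
print((2 * 3) % 8)  # 6
print((2 * 7) % 8)  # 6

# ...but 3 and 7 do not, so "cancelling" the 2 would be wrong.
print(3 % 8)  # 3
print(7 % 8)  # 7
```

The culprit is that 2 is a zero divisor modulo 8 (2⋅4 ≡ 0 even though neither factor is zero), which is exactly what an integral domain forbids.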
What this hints at is that we can often be surprised by our prejudices. The cancellation rule is so common and so thoroughly used throughout secondary school that you might almost feel cheated when it suddenly can’t be taken for granted. This is a normal reaction, and it’s one you will slowly learn to set aside.
This is why learning and remembering those “boring” axioms is so important. They are the building blocks on which we base our structures, so it’s vital that you remember them. Additionally, you should frequently pose yourself questions such as, “Why do we require this axiom? Can I get all of the results I want without it?” Remember that we are frequently interested in generalizing notions in mathematics, so we don’t want to carry along “extra baggage”, if you will. We want enough axioms to give us some constraint and structure (or else we won’t be able to generate results), but we don’t want to assume anything that we aren’t absolutely forced to. It’s about minimality, and only adding structure when you need it. So all of those axioms you learned in various classes were important, because there are other settings where they can’t be taken for granted.
It’s a mathematical jungle out there, so best to keep in mind your foundational axioms while learning a new topic.
Misaligned Incentives
There’s a saying among students regarding preparing for an exam. In short, it goes like this: Study a lot before the test, and then you can forget most of what you know.
I think of tests as barriers that I have to get past. For example, I took seven classes this past semester, and so I had many tests and final exams. The back half of my semester involved a lot of studying in order to get over these barriers of tests. This often involved studying a lot before the test, and much less after.
For a while now, I’ve been quite disillusioned about the nature of tests and grades. I know I come from a place of privilege, since I’ve benefitted quite a bit from having good grades (through scholarships), but I can’t help noticing the dismal state of affairs with this system. I think of myself as someone who loves to learn, and I use a good chunk of my day to study and do homework. Yet I can’t say that I don’t share the mindset I described above concerning tests. I will study a lot before a test, and then let a lot of that specific knowledge drift away afterward. Of course, I know that I retain some of the skills and knowledge that I’ve learned, but I’m not going to delude myself into thinking that I don’t see tests as annoying barriers to get past. Tests (and the proxy of getting good grades) are the current “carrot” in the system.
However, I’ve found that this is completely misaligned with other, more important incentives. For one, I would ideally want to be capable of learning a subject and exploring any particular rabbit hole that catches my attention. Instead, we have to go through a certain course outline and check off a bunch of items before the end of the semester. There’s also the fact that tests often aren’t a good reflection of learning the content. Instead, I would argue that they show that one can write solutions to problems in a limited amount of time in a high-stress environment. This is the easy choice if we want a system capable of comparing students and giving them “rankings”, but I would argue that it isn’t useful for learning the subject.
The other thing that bugs me more and more is how we often say that science should be a collaborative process that is better off when people work together, yet we give examinations with very tight restrictions on what is allowed. For example, it’s almost a no-brainer that students can’t talk amongst themselves during an exam to discuss the problems. Furthermore, any suggestion of being able to use electronic devices would be met with strong opposition. But why is that?
Of course, some will argue that this would lead to easy cheating. Students would be able to “get through” the filter of a test without actually knowing the subject. This is true, but would that really be a problem? If a student is motivated enough to cheat, then they will pass the course. But down the road, this lack of knowledge will catch up with them. Once we get to post-secondary education, I’d say that those who are there should be motivated enough to learn that they wouldn’t cheat^{1}.
Additionally, I don’t think this would magically raise everyone’s grades to perfect marks. My argument here is based on the way assignments are done in classes. What I’ve observed in my own classes is that there is a spectrum of students. Some don’t really care about how much effort they put into assignments, while others will spend hours on them. (I recently spent nearly forty minutes going over one part of a problem in which I couldn’t find my error. I’m certain that not everyone is as patient as I am.) This is reflected in the marks students receive on these assignments.
But here’s the funny thing: it’s not like students are banned from discussing the problems with others, using online resources, or even asking the professor for help. If a student really wanted to, they could easily go online and find the answer to just about any textbook question, with a detailed solution. Suffice it to say, students aren’t exactly in a test situation while doing assignments. Yet I always see a distribution in terms of grades when we receive our assignments back. So what gives?
One aspect, admittedly, is that assignments aren’t weighted as much as the final exam (and other tests). This means that some students have a lower incentive to actually complete them. However, I want to submit that this isn’t entirely what is happening. In addition to students caring less about assignments because of the weight, students don’t all write solutions to assignments in the same manner. This might seem trivial at first glance, but it’s really a huge aspect. If you want to convince someone you are correct, the best way to do that is through a thought-out argument. Sure, you can probably get the same answer while doing some sloppy work, but it’s not going to be as polished and strong as it could be. This is the important bit in education. It’s not about whether you can get the right answer. Education is about being able to think critically and present your arguments in a compelling way.
I’ve talked to several of my fellow students who do marking as a side job for professors, and they will often say that the quality of a student’s work is highly variable. Some have obviously copied their answers from other places, while some only scribble the minimal amount of work without any indication as to what means what. On the other hand, you will find students who frequently write out detailed and clear solutions. Mind you, this doesn’t necessarily imply that longer is better. Being succinct can be just as great, as long as one can follow the argument. The point is that we could start grading with respect to this variable, which is something that is only partially done, and with a lot of deference to the correct answer.
When I write my solutions to problems, I like to think that they are detailed and clear. However, this almost entirely goes out the window when a test arrives. Time constraints are an enormous stress on a student, and the work gets worse accordingly. This is tragic, because the most heavily weighted problems in a class are precisely the ones students don’t get time to mull over when presenting their work. Think about it. The weighting of a course grade is almost always heavily skewed towards tests and exams, where time constraints are often strict and cause students to rush. I write these words hours after I’ve finished my last test before my final exams (long into the past by the time you are reading this), and the time limits definitely affected my clarity. When you are under time constraints, it’s easy to rush into solving a problem without taking a moment to fully pause and ponder what you are doing. Every moment spent idling is one moment less to write the rest of the test. This is not a good recipe for getting students to learn and understand in the long term. It might be a good method for getting used to being under duress, but I would argue that this isn’t what we should be after.
Historically (and at our present moment), this is something we aren’t doing well. We have misaligned our incentives such that they are easy to capture and process (grades on timeconstrained tests), yet don’t give students enough time to fully think about what they are doing. One of my professors stated this misalignment in a way that stuck with me. He said, “You usually do more difficult work on the assignments because you have time and can work through the problems, but the tests are simpler.” I don’t know if he realized it at the time, but this is a crazy thing to say if our goal is to get students to understand subjects more deeply. It all comes down to honesty. Do we want students to have the required material internalized for a test, or do we want them to really think about what they are learning, past the final exam and into the future?
We are missing a huge opportunity here. I envision an alternative world where the objectives of a course aren’t measured through highly stressful tests given at the end, but instead by evaluating students throughout the entire semester. It’s not that suddenly everything needs to be graded and you can’t get anything wrong. It’s about replicating the kinds of environments that reflect learning in the world, which aren’t closed-book exams under a time limit. Those are artificial constraints that serve to make students stressed, and they create an incentive to study for that one test, versus thinking more deeply about the subject matter.
The issue is alignment. Do we want to emphasize thoughtfulness and hard work during a course, or do we want students to prioritize one large “study period” at the end of the semester and then forget a good chunk of the material afterward? If we ponder this question for a while, the answer becomes pretty clear. The current manner of doing things is great for efficiency, but it isn’t conducive to what we really purport to want from education. So let’s start trying to shift this trajectory and get to where we want to be. It is possible, but we need to get past the inertia of the regular expectations.

1. By which I mean simply copying without actually understanding what is happening. ↩