### Jumps in Abstraction

If I ask an adult to tell me what *3-5* is, there’s a good chance that they would tell me the answer is -2 without much thought. This kind of arithmetic is simple to us, since we’ve done it over and over again through elementary and secondary education. Even if we haven’t used mathematics in a long time, these questions are straightforward.

But it wasn’t always this easy. Remember, we aren’t born with an innate sense of negative numbers. In fact, we wait until the end of elementary school before seeing negative numbers. Before this, if you ask a young student what *3-5* is, chances are that they will answer “2” (because they figure you meant *5-3*), or, if you insist that you really do mean *3-5*, they will tell you it isn’t possible.

You and I can both imagine what comes before the number zero. But to the young student, it’s not that they don’t have the imagination. **They don’t even know that this extra richness is there.** This is an important insight, because it signals to us that we get comfortable within our usual mathematical spaces. Consequently, we can become blind to the generalizations that are possible, just like the young student who can’t imagine that there is even such a *thing* as a negative number.

Going up a few levels in education, a lot of secondary school concerns itself with geometry. Students learn about perimeter, then area, and then volume. But the kinds of topics that are covered during these explorations are limited in scope. Students learn about regular solids like cubes, prisms, pyramids, cylinders, cones, and spheres. These are nice, because they allow teachers to combine them to form more complex solids. The task for the student then becomes figuring out how to separate a large, composite object into a bunch of smaller objects and add their corresponding volumes.

This is a great exercise for a student. However, by focusing on these core solids and only dealing with different combinations of them, the students don’t get to see the richness *beyond* those solids. The world isn’t only made up of those solids! We have plenty of other interesting forms that we can find in nature, from a sprawling tree to a curvy egg. These aren’t the simple solids that students are used to. Furthermore, what about the amazing objects that mathematicians have come up with, such as Torricelli’s Trumpet? I can just imagine the interest that would be generated when showing students how this particular object has an infinite surface area, yet somehow has a finite volume! Of course, one would have to work within the constraints of limited calculus knowledge, but I’m certain that this could work.
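For readers with a little integral calculus, the standard computation behind Torricelli’s Trumpet rotates the curve *y = 1/x* (for *x ≥ 1*) about the x-axis:

```latex
% Torricelli's Trumpet: rotate y = 1/x (x >= 1) about the x-axis.
% The volume converges:
V = \pi \int_{1}^{\infty} \left(\frac{1}{x}\right)^{2} dx
  = \pi \left[ -\frac{1}{x} \right]_{1}^{\infty} = \pi.
% But the surface area diverges:
S = 2\pi \int_{1}^{\infty} \frac{1}{x}\sqrt{1 + \frac{1}{x^{4}}}\, dx
  \geq 2\pi \int_{1}^{\infty} \frac{dx}{x} = \infty.
```

The volume integral converges to a finite value, while the surface-area integral is bounded below by a divergent one. That mismatch is the whole surprise.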

Sometimes, we have to get out of the thicket of working through particular problems and figure out where we are on our mathematical journey. By doing this, students get better at understanding the context that surrounds what they are learning, rather than simply keeping their heads down and working on problem after problem. That strategy may be “productive”, but it will ultimately hamper students’ awareness of the wider mathematical world.

I’m not advocating here for a radical change in how one teaches (that’s a different story). What I’m arguing for here is to give students a broader idea of what the results they see *mean*. It can be as simple as sowing the seeds for deeper connections for students to think about. The goal should be to make sure the students know that there is always more to uncover if they so choose. I absolutely *don’t* want students thinking that they have learned all that exists in mathematics by the time they finish secondary school.

It’s also good to note that this isn’t only important within secondary and elementary schools. This is something that should be done at *all* levels. Throughout this past semester, I’ve been pushed to consider many more mathematical spaces in my abstract algebra class. While many of them take cues from the integers, the rational numbers, or the real numbers, there are many other spaces with similar attributes that can be studied alongside our usual settings. The essence of algebra is preserved, even though we don’t know which explicit space we are talking about. This is both neat and difficult to wrap one’s mind around. It means that you have to step away from the comfort of the familiar spaces and explore new ones. It means opening up your perspective to a vast new world of possibilities.

I want students to get a sense of *that* while studying, to show them that there is always more to learn, if they are so inclined. Let’s do our best to get students out of their comfort zones, and be surprised and delighted every once in a while.

### Why Can't We Give An Answer to 0/0?

*Student*: What’s the answer to *0/0*?

*Teacher*: It’s undefined.

*Student*: Then why can’t *we* define it?

*Teacher*: Because that’s just how division works.

The exchange above is one that happens often when students start learning division. It’s a simple question, arising since we can divide by any *other* number, so why not this one? Unfortunately, the answers given to this question don’t address the consequences of defining *0/0*, but instead present the restriction as something given by authority.

I often like to remind students of a very important lesson when they start wondering about the definitions and restrictions that come up. **We get to make up the definitions used in mathematics.** We aren’t *forced* to use a certain definition for a concept if we don’t want to! Mathematics is about defining concepts and building from them, but we get to choose those starting points.

One specific example where things can get controversial is when we start considering an expression like 0/0. In this case, what is the answer?

As the teacher said above, we leave this expression as undefined. In other words, we say that the statement just doesn’t make sense, and we move on. However, if the definitions are up to us, why can’t we define these to take on certain values? What’s the harm in that?

This is a good question, and answering it will give us insight into the notion of divisibility. What does it mean for a number to “divide” another? If you ponder this question, you will find that we can divide *b* by *a* if we can write *b=ac*, where *c* is another element. Note that I’m using the word “element” here because we don’t necessarily have to be working with integers, though that is the setting which is most familiar.

There’s another property that we would like, even though you may have implicitly assumed it. It’s that *if* we can indeed write *b=ac*, the element *c* is unique. If we take the example of 20/5, we know that the only way to write this is *20=(5)(4)*. Four is the unique number that we get when performing the division.

So what about zero? If we want to find an answer to *0/0*, we need to find a number such that *0=0a*. What kind of numbers fit the bill? One works, since *(0)(1)=0*, but two works as well. Actually, you will quite quickly realize that *all* numbers work. That’s because you’re multiplying by zero, which “collapses” every single number to zero when you multiply them together.
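You can check this non-uniqueness directly. Here’s a quick illustrative sketch in plain Python, scanning a handful of candidate answers:

```python
# Division b / a asks for a *unique* c with b == a * c.
# For 0 / 0, every candidate satisfies the equation.
candidates = range(-10, 11)
solutions = [c for c in candidates if 0 == 0 * c]
print(solutions)  # every candidate in the range works

# Contrast with 20 / 5, where exactly one candidate works.
print([c for c in candidates if 20 == 5 * c])  # [4]
```

The search for 20/5 pins down a single answer, while the search for 0/0 rules out nothing at all.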

Now we’re faced with a bit of a dilemma. Which number should we choose to be the answer? Remember, we have all the freedom in the world to create our own definition of things! Let’s say we choose *0/0=5*. Then, what happens if we consider *(0/0)(0/0)*? On the one hand, we know that each term in the parentheses is 5, so we should get a result of 25. However, if we do the multiplication in our usual way, we also get that *(0/0)(0/0)=(0/0)=5*. As such, we would conclude that *5=25*. This is clearly not a very good number system, since any time I owe you twenty-five dollars, I’ll only give you back five. We *know* that those two numbers should be different, so it’s a bit of a concern when we manage to say that they are equal to each other.
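Here’s a minimal sketch of that collision, representing fractions as (numerator, denominator) pairs under the usual multiplication rule *(a/b)(c/d)=(ac)/(bd)*, together with the hypothetical assignment 0/0 = 5:

```python
def mul(frac1, frac2):
    """Usual fraction multiplication: (a/b)(c/d) = (ac)/(bd)."""
    (a, b), (c, d) = frac1, frac2
    return (a * c, b * d)

VALUE_OF_ZERO_OVER_ZERO = 5  # the hypothetical definition 0/0 = 5

zz = (0, 0)
product = mul(zz, zz)  # (0*0, 0*0) == (0, 0), so the product is 0/0 again

# One route: multiply the values, getting 5 * 5 = 25.
print(VALUE_OF_ZERO_OVER_ZERO * VALUE_OF_ZERO_OVER_ZERO)  # 25
# Other route: the product collapses back to 0/0, which we declared to be 5.
print(product == zz)  # True
```

Both routes are legitimate under our chosen rules, yet they yield 25 and 5 for the same expression, which is exactly the contradiction in the text.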

The lesson we learn from this is that, while we *do* have the freedom to define concepts however we wish, we also want to be consistent. If we can transform five into twenty-five, it turns out that we can make basically any number we want equal to five. This is not a system in which the usual arithmetic rules apply. That’s not inherently bad, but we need to ask ourselves if the tradeoffs are worth it. Is it a good idea to have 0/0 being equal to *any* number we want as soon as we say it’s equal to a specific number, or is it better to leave it as undefined? As a community, mathematicians have obviously chosen the path of not defining terms like 0/0, and it’s precisely for this reason. Our common sense goes out the window once we start allowing these kinds of expressions.

In fact, you may have seen “pseudo-proofs” that *1=2*, and these proofs rely on the fact that they are sneaking in a “divide by zero” operation at some point. Of course, these “proofs” won’t mention that, but that is the trick that is being played.
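The classic version runs something like this, with the division by zero hiding in the step that cancels *(a-b)*:

```latex
% Start by assuming a = b. Then:
a^{2} = ab
a^{2} - b^{2} = ab - b^{2}
(a + b)(a - b) = b(a - b)
a + b = b        % invalid: this divides both sides by a - b = 0
2b = b           % since a = b
2 = 1
```

Every step except the cancellation is perfectly legal, which is exactly why these “proofs” fool people.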

While it is easy to define these expressions to take on a certain value, there’s a *reason* why this isn’t done. We aren’t just lazy and don’t want to divide by zero. It’s that zero is a fundamentally different sort of number, and dividing by it doesn’t give us a unique answer.

However, it is *much* more fascinating to delve into why we don’t divide by zero, rather than simply forcing you to memorize this in class, don’t you think?

### The Limits of Life

*Note: I received a copy of this book as an ARC from NetGalley. It comes out next week. I tried to stick to the concepts he describes, but I’m sorry if you see any bias.*

If you ever read a piece describing evolution or biology, chances are you will read something about how life admits infinite variety. Words like *boundless* and *limitless* get thrown around quite a bit. These make for nice narratives, but the truth is that life isn’t *quite* so free to do what it wants.

To be clear, I’m not saying that life isn’t diverse or that evolution is wrong. What I’m trying to get across is that one needs to be careful about what “infinite variety” means.

If you’re talking about the fact that the exact colour of one’s eyes can take on a gradient of values, then sure, there’s a functionally infinite amount of choice. (Eye colour isn’t anything special, it’s just an example.) In that sense, there’s an infinite amount of variety to be found in nature.

However, this doesn’t imply that life can take *any* form. Biology is beholden to something, and that something is the collection of equations of physics.

Evolution cannot just roam free and explore any form of life it wants. Evolution must comply with the equations and principles from physics. This immediately implies that life has limits, which means life cannot have infinite variety.

In his new book, *The Equations of Life*, Charles Cockell explores how physics can inform biological questions. It’s a great book that takes simple physical principles and shows how they have consequences for life. He links equations to the qualitative aspects of different life forms in a way that is easy to understand and shows how physics and biology do work together.

An analogy that I found useful was how he explains that the variety of life is more like a zoo than an infinite expanse. While there is room for a lot of variety, life is contained within a sharp perimeter, given by physical principles.

He writes:

> No evolutionary roll of the dice can overcome a lack of a solvent within which to do biochemistry or the energetic extremes of high temperatures. The details, the temperature sensitivity of this and that protein, may well modify the exact transition between the living and the dead, particularly for individual life forms, maybe for life as a whole. But in broad scope, life’s boundaries, the insuperable laws of physics, establish a solid wall that bounds us all together.

One particular characteristic he explores is the notion of temperature. What kind of temperature limits are imposed on life? Is there an environment that is just too cold or too hot? This turns out to be an interesting question where the nature of atomic bonds in molecules plays a large part.

For high temperatures, there is a point at which the heat of the environment will break the bonds between the atoms that make up a living organism. For carbon-carbon bonds, this limit is about 450 degrees Celsius. After that, the bond breaks. In other words, no more living organism. The important part isn’t the particular value. Rather, it’s the fact that we *know* we can’t have carbon-based life at a temperature of 1000 degrees Celsius. It just won’t work. There are similar constraints at the high end of the temperature range for other atoms as well. There’s no getting around them, because they are physical principles that apply to *everything*.

There’s a similar story for the lower end of the range. As Cockell writes, the issue here is that molecules don’t move fast when it’s cold. As a result, any sort of radiation can kill an organism because it will not be able to repair itself fast enough. Imagine you’re building a house and someone is removing bricks faster than you’re putting them on. Even though you keep on working, eventually there is no house left.

Cockell tackles similar ranges in the book, such as pH level. During these excursions, his point isn’t to derive a value and say, “There we go! Life can’t pass this barrier.” Rather, it’s to acknowledge that there *does* exist a barrier that life cannot cross. These barriers can’t be jumped over by clever tricks from evolution. They are a fact of our universe.

This was one of the key lessons I got from *The Equations of Life*. As a physics student, I found myself interested in the relationship between physics and other sciences. Cockell does a great job of illuminating that relationship. Even if you’re not brushed up on your classical physics, this is a book that will reel you in. Plus, he has a wealth of citations at the end, if you ever want to explore the topics in more detail!

### Hidden Assumptions

I remember when I first took a class in linear algebra, and we were talking about vector spaces. In addition to the definition of a vector space, we were also given multiple axioms that define what the structure of a vector space looks like. This included a bunch of boring things, like the fact that if you have a vector *v*, you should have a corresponding vector *-v* such that *v+(-v)=0*, where 0 is the zero vector. There are eight of these axioms, and together they describe exactly what can be called a vector space.

If you look at the axiom above, you might think that’s obvious. Of *course* I can go backwards from my vector *v* to get to zero. Similarly, you would probably balk at the notion of most of the other axioms, such as commutativity (*a+b=b+a*) or that there is a zero vector such that *v+0=v*. These properties seem more than obvious to you. They are *ingrained*. Why do these define a vector space specifically? Doesn’t this always hold for any kind of space?

Well, not quite. If we just focus on commutativity, this is something that isn’t always true. To take an example from outside mathematics, imagine you were eating a bowl of cereal in the morning. After taking out your bowl and putting cereal in it, you pour your milk, followed by eating the cereal. But what if you did those last two steps in the opposite order? Now, you eat your cereal, and *then* you pour your milk into the bowl. Evidently, your cereal-eating experience won’t be the same in both cases. This illustrates something we don’t often appreciate in mathematics: order *is* important. It’s just that we tend to work in spaces that have this nice and special property, so we start to think that *all* spaces do.

If you want a more mathematical example of commutativity not holding, the classic example is that of matrices. This is another part of linear algebra, and while taking the course you quite quickly figure out that the order *does* matter when multiplying matrices. Suffice to say, bad things happen when you start assuming things will work out without *knowing* if your assumption is correct.
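To see this concretely, here’s a small sketch in plain Python (2×2 matrices as nested lists) showing that swapping the order of a matrix product changes the result:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],   # B swaps the columns of whatever it multiplies on the left
     [1, 0]]

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- a different matrix entirely
```

Here *AB* swaps the columns of *A* while *BA* swaps its rows, so the two products cannot agree.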

Why is this important? We often go through a lot of our mathematical lives without considering our assumptions. Instead, we focus on solving particular problems, and then comparing with a known answer. Furthermore, we tend to get used to thinking about the particular spaces we work in, and assuming that these nice properties will hold in all other spaces. This lack of diversity creates bad habits in our mental models, breeding a prejudice for one specific way things work.

Remember, axioms are the foundation for basically everything you do in mathematics. If you keep on digging, you should always hit a floor of axioms. This is simply because the goal of mathematics is to discover truth *from* those axioms. These axioms can’t be proven. They are taken for granted. But the surprising thing is that the rules and elements *we* take for granted on a day-to-day basis within mathematics *aren’t* axioms. They are actually statements that can be proved using other axioms.

For example, take an innocent-looking statement that almost no one will blink an eye at while reading:

If *a ≠ 0* and *ab = ac*, then *b = c*.

This is known as the law of cancellation. We get quite good at doing this in secondary school, where we learn how to simplify fractions and other expressions. This naturally comes in handy, because it lets us avoid working with larger numbers than we have to. If *2x=8*, we can “cancel” a factor of two from both sides of the equation and obtain *x=4*.

So far, so good. This is harmless, right? How could a simple rule like this possibly go wrong?

The problem is that you’re not thinking about this in general. The subtlety here is that **it matters where the elements a and b come from.** If they are part of the real numbers, then fine, that’s not a problem. But it takes a particular type of “setting” to have this cancellation rule. In particular, you need to be in an *integral domain*, which is a space that is basically defined by this property.

And yes, there are very real examples that *aren’t* integral domains. For example, let’s do some arithmetic modulo eight. This just means that we reduce any number to its remainder upon division by eight. In this setting, we then have that *6 ≡ 14*. But look what happens here. We know that *2 ⋅ 3 = 6* and *2 ⋅ 7 = 14*, so *2 ⋅ 3 ≡ 2 ⋅ 7*, yet we *certainly* can’t conclude that *3 ≡ 7*. When you divide these two numbers by eight, the remainders are three and seven respectively, so they are not the same. As such, you can immediately see that cancellation doesn’t work all the time.
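A few lines of arithmetic make the failure concrete, using exactly the numbers from the modulo-eight example:

```python
MOD = 8

# 2*3 and 2*7 are congruent mod 8...
print((2 * 3) % MOD, (2 * 7) % MOD)  # 6 6

# ...but 3 and 7 are not, so cancelling the 2 is invalid here.
print(3 % MOD, 7 % MOD)  # 3 7

# The culprit: 2 is a "zero divisor" mod 8, since 2 * 4 = 8 ≡ 0.
print((2 * 4) % MOD)  # 0
```

The last line points at the deeper reason: cancellation fails precisely in spaces where two nonzero elements can multiply to zero, which is what the integral domain axioms rule out.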

What this hints at is that we can often be surprised by our prejudices. The cancellation rule is very common and is acknowledged throughout secondary school, so you might almost feel cheated that this is suddenly not taken for granted. This is a normal reaction, and it’s one you will slowly learn to ignore.

*This* is why learning and remembering those “boring” axioms is so important. They are the building blocks on which we base our structures, so it’s vital that you remember them. Additionally, you should frequently pose yourself questions such as, “Why do we *require* this axiom? Can I get all of the results I want without it?” Remember that we are frequently interested in generalizing notions in mathematics, so we don’t want to carry along “extra baggage”, if you will. We want enough axioms to give us some constraint and structure (or else we won’t be able to generate results), but we don’t want to assume anything that we aren’t absolutely forced to. It’s about minimality, and only adding structure when you need it. So all of those axioms you learned in various classes *were* important, because there are other settings where they aren’t taken for granted.

It’s a mathematical jungle out there, so best to keep in mind your foundational axioms while learning a new topic.