Contradiction as Zeros
This is going to be a very quick post, but it’s something I wanted to share since I think it gives some insight into a technique used in mathematical proofs. When writing proofs, it is often difficult to show that a statement holds in every case (say, if you’re trying to show that the square root of two cannot be represented as a ratio of integers). Checking every single combination of integer ratios would take an infinite amount of time, so we need a better strategy. One such strategy is proof by contradiction. The idea is that we negate our conclusion and show that this negation leads to a logical contradiction. When we reach a contradiction, we know that our assumption of negating the conclusion was false, which means our conclusion is true.
But why does a contradiction imply only that our conclusion is actually true? Why couldn’t it mean that something else is wrong with our proof? To see this, I like to think of it as finding the zeros of a function.
Remember that, if we have a product of terms that equals zero, then at least one of those terms must be zero. This comes from the fact that a product can only be zero if one of its factors is zero. Now suppose you know that some of these terms are never zero. Then you can shorten your list of candidates for the term that is zero, letting you solve the equation.
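As a concrete instance of this zero-product reasoning, consider:

```latex
(x - 3)(x^2 + 1) = 0.
```

The factor $x^2 + 1$ is never zero for any real $x$, so the only remaining candidate is $x - 3 = 0$, which forces $x = 3$. Ruling out the factors that can’t be zero is exactly what narrows down the answer.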
This closely resembles what we do in a proof by contradiction. In this case, the “terms” are our hypotheses, the things we assume are true. When we construct our proof, we always choose hypotheses that we assume to be true, so they won’t ever be false. We then add one more statement, which is the negation of our conclusion. The idea here is that we want this to be false (because we want the conclusion, and not its negation, to be true). From here, we simply use these hypotheses to generate new statements (the equivalent of multiplying in our analogy), and we finally come to a result. If we then see that our result is a contradiction, we know that at least one of the statements we used is false. However, since the only statement we aren’t assuming to be true is the negation of our conclusion, we can say that this statement is false, which proves our conclusion.
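To see this structure in the opening example, here is a sketch of the classic argument that $\sqrt{2}$ is irrational, with the negated conclusion playing the role of the suspect factor:

```latex
\sqrt{2} = \frac{p}{q}, \quad \gcd(p, q) = 1
\;\Rightarrow\; p^2 = 2q^2
\;\Rightarrow\; p = 2m
\;\Rightarrow\; q^2 = 2m^2
\;\Rightarrow\; q \text{ is even.}
```

Both $p$ and $q$ being even contradicts $\gcd(p, q) = 1$. Every other statement used along the way is basic arithmetic we take to be true, so the only “factor” that can be false is the assumption $\sqrt{2} = p/q$.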
This connection between finding zeros and doing a proof by contradiction is helpful to me since it shows me just how we know that a particular statement has to be false. It all comes down to the fact that we deliberately construct the group of statements such that only one will be false.
Conceptual Understanding
As a student, I know what it takes to get good grades. Essentially, you want to be able to reproduce the work that is taught in class during a test. You don’t need to be creative or original in your work. Rather, you simply need to understand the procedures and apply them (for the most part).
This is rather straightforward. After all, if you’ve worked through the homework in your class and have studied the material, it’s not too difficult to do fairly well in a given class. Questions become variations on a theme, so getting good grades is almost algorithmic.
However, one type of question in my classes that is trickier to answer is the conceptual question. This is the kind of question that requires some sort of explanation and reasoning, rather than a calculation. It might not seem like it, but this is by far the more difficult type of question, since it is so ambiguous. There are the usual issues of not knowing whether you’ve explained enough, but the real difficulty is that you can’t simply work through an equation to arrive at an answer. That’s why I (and, I assume, many others) dread conceptual questions.
Additionally, it’s simply not easy to conceptually understand a topic in mathematics or physics, instead of being able to reproduce it. Just because I can calculate the change in entropy for a certain physical situation does not mean I can explain why the entropy increases or decreases in that situation. In other words, I can reproduce the calculation, but I might not be able to really explain it.
You might guess that this makes me nervous. Indeed, when my goal is to get good grades in a class, conceptual questions are not what I usually want to see in a test. They are tricky and less straightforward than calculations, which means I will tend to make mistakes more frequently. As such, I try to avoid this type of question.
I think that you can also guess that this isn’t a good thing. If you’ve read anything from me, I’m sure you’ve gotten the impression that the one thing I want people to have is the conceptual understanding instead of only the computational ability. Of course, you’d be right, and that’s what I want to talk about today.
In my mind, conceptual understanding is critical^{1}, but the problem in school is one of alignment. The reward systems in school don’t favour trying to ask conceptual questions, because they punish creative thinking in favour of being “right”. However, if the students never get to test their common sense and intuition about various subjects, why should we expect them to do well on a test with these kinds of questions?
One thing that I think everyone can agree on is that having a misconception about a subject is something we want to avoid. Put another way, we don’t want students to go through a subject with an incorrect view of a phenomenon, and then proceed to carry this incorrect mental picture with them for years later. Any teacher will tell you that they don’t want this to happen to their students. But the irony is that we do allow this by simply not asking enough conceptual questions to students!
The educational model in physics in particular isn’t set up for this kind of question. As such, we seem to ask fewer conceptual questions because they aren’t easily graded and take up time. The cost is that we let students make their own conclusions regarding the phenomena they learn about, and I am certain that students don’t get it right 100% of the time. Judging from my experience, it’s not even close. Consequently, we end up going through a course thinking that we know enough about the subject, only to be stumped by a conceptual question that either has us scratching our heads or confidently saying something that is incorrect.
The solution is obviously to tackle more conceptual questions when you are learning, but this isn’t as easy as it seems. While I think this is the answer, it’s not a practical suggestion at present, since it punishes students unfairly in terms of grades for attempting a question and being incorrect in their formulation. In my mind, this isn’t something that one needs to be tested on. Instead, conceptual understanding comes from years of engaging with the topic, but this lesson isn’t being taught when students have the mentality of “remember for the test, and then forget”. Many of my friends go through school with this mentality, and it’s something we need to work to discourage. Instead, I think conceptual questions need to be asked more, but not necessarily graded. I’m thinking of a weekly question that gets students thinking about a topic more deeply. Personally, I will be trying to do this with my own learning. If I come across a conceptual question I cannot answer, I will make sure I find the answer so that I can explain it easily.
I think physics is one of the few subjects where you can really dance between the rigour of mathematics and the simple explanations of intuition. As such, I think it’s useful to not be married to the former approach only, and to be able to explain topics without simply resorting to the mathematics. I think you know that I’m by no means against using mathematics, but doing the computation can sometimes evade the more difficult part of explaining^{2}.

Indeed, if you only want computational ability, then I would suggest we teach everyone how to program calculations into a computer so that we don’t have to keep doing them by hand. ↩

Of course, this same idea can be applied to mathematics, though everything gets a bit more abstract. In mathematics, it’s the difference between saying you know something and can prove it, versus merely being able to compute it. ↩
Precision in Language
I imagine we’ve all done this: you’re talking to someone about solving a certain equation, and you tell them something to the effect of, “I had to bring sigma to the other side of the equation.”
My question is: what mathematical operation did you just do?
On the one hand, we could be talking about bringing the sigma over to the other side of the equation by adding/subtracting a term on both sides of the equation. Alternatively, we could also be multiplying both sides of the equation in order to bring a sigma that was in the denominator to the numerator of the other side.
Both of these correspond to “bringing the sigma over to the other side.” However, we both know that these aren’t the same thing at all. In fact, you can make huge mistakes in a calculation if you mix up these two methods of bringing a quantity over to the other side of an equation. This happens because we have two notions of an inverse when doing arithmetic. We have an additive inverse, which simply means that when you add a quantity and its inverse, the result is zero. We then have a multiplicative inverse, which means when you multiply a quantity with its inverse, the result is one.
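To see the difference explicitly, here are the two operations that both get described as “bringing $\sigma$ over”:

```latex
x + \sigma = y \;\Rightarrow\; x = y - \sigma
\quad \text{(add the additive inverse } -\sigma \text{ to both sides)}

\frac{x}{\sigma} = y \;\Rightarrow\; x = \sigma y
\quad \text{(multiply both sides by } \sigma \text{, valid only when } \sigma \neq 0\text{)}
```

Same phrase, two entirely different operations, and only one of them even comes with a side condition.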
This is great, but the language of “bringing” something over to the other side of an equation buries this notion and fosters what I like to call “equation gymnastics”. This is what happens when students don’t know how to manipulate equations and simply try to remember rules. You then see some people master the ability to solve equations by following the rules, rather than actually understanding them.
Related to this is the notion of “cross-multiplying”.
Now, I don’t want to give the impression that there is no use in being able to blindly apply the rules of algebra. That’s not a problem, and it can produce students who are extremely capable of solving equations. However, from my experience, it is so much easier to tackle questions (particularly when they vary from the basic ones) when one understands why the rules are what they are. That’s the great thing about learning mathematics. The rules aren’t arbitrarily there to make one’s life difficult while solving equations. They are there because they are required if you want to balance an equation. This is the crucial point that I find is lost on students. All of these so-called “rules” of algebra follow one main principle: an equation is a balancing act, and requires the same “things” on both sides in order to maintain equality. That’s it. All of the algebra one learns in secondary school can essentially be summed up in that one sentence.
However, I don’t think enough students are taught this. Instead, they stress out about remembering different methods of solving equations, or about remembering that you need to “flip the sign” when you bring a term to the other side of an equation (but only if it’s addition or subtraction!), and that you don’t do this for a term that is part of a multiplication or division. Phrased this way, even I start to wonder about these rules. When you think about equations in this light, everything seems arbitrary. But that’s not because the rules are arbitrary. It’s simply that you’re looking at the concept in a way that isn’t as useful.
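As an illustration of the balancing principle, every “rule” involved in solving $3x + 5 = 11$ is just the same operation applied to both sides:

```latex
3x + 5 = 11
\;\Rightarrow\; 3x + 5 - 5 = 11 - 5 \quad \text{(subtract 5 from both sides)}
\;\Rightarrow\; 3x = 6
\;\Rightarrow\; \frac{3x}{3} = \frac{6}{3} \quad \text{(divide both sides by 3)}
\;\Rightarrow\; x = 2.
```

The “flip the sign” rule is just the first step with the intermediate line omitted.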
This isn’t limited to secondary students learning algebra. In fact, I face this problem all the time in my own learning. It’s not always a simple matter to find the “right” perspective on a concept that clicks for you, but I can guarantee that imprecise language does not help. When we use words like “bringing over” to talk about terms in an equation, we need to be sure that the people we are talking to know what we mean by that. If not, we should use more precise language to talk about what we are really doing when we say that we are bringing a term to the other side of an equation.
Personally, I try to avoid using the expression “bringing over” when I talk to the students I work with who are learning about algebra and solving equations. I’ve found that it’s simply not a good way to talk about equations, and so I’ve done my best to eliminate it from my vocabulary. My students might wonder why I use such a long-winded way to solve equations, but it’s because I want them to understand what they’re doing before they start developing their own shortcuts.
Proof that there exists a 3-regular graph on $k$ vertices for any even $k \geq 4$
Last time, we looked at some concepts in graph theory. In particular, we looked at the ideas of a simple connected graph, the degree of a particular vertex, what edges and vertices are, and some other related concepts. Here, I want to tackle a proof that has a nice way of visualizing the result.
Theorem: For any even $k \geq 4$, there exists a 3-regular simple connected graph on $k$ vertices.
To show this, remember that we have to show the statement holds for every even number of vertices greater than or equal to four. We could try to verify the statement on a case-by-case basis, but that would take a lot of time. In fact, it would take infinitely long, since we would need to check every even number. We don’t want to do that, so we will have to come up with a better method.
This method is called induction.
If you want a quick primer on induction, you can read my post here. Briefly though, the idea of induction is to show that the base case holds, then to assume that the statement holds for $k$, and finally to prove that it holds for $k+1$. If you fulfil these requirements, then you’ve shown that the statement is true for every case from your base case onward.
So let’s begin with our base case, which is when our graph has four vertices. Our goal is to construct a graph on four vertices that is 3-regular. In other words, we want each of the four vertices to have three edges incident with it. Furthermore, the graph must be simple, so we don’t have any loops or parallel edges. After trying a few examples, you’ll quickly find that the only possibility is what we call the complete graph on four vertices, denoted $K_4$. This simply means that all of the vertices are connected to each other.
We would now assume the statement holds for $k$ vertices, and then prove it for $k+2$ (since we only care about even numbers). However, I want to do something a little different that will hopefully convince you this theorem is true.
First, draw two concentric circles. Then, we want to connect the two circles by $\frac{k}{2}$ rays that go radially outward from the centre of the two circles. However, we only care about the segment of these rays that are between the two circles, so the sketch would look something like this:
Next, since we have $\frac{k}{2}$ line segments in our sketch, we add a vertex at every point where a line segment intersects a circle. Since each line segment intersects the circles at two places, we will have a total of $k$ vertices. The arcs of the circles between neighbouring vertices serve as the remaining edges, so each vertex meets one line segment and two arcs, for exactly three incident edges. You can check any vertex to confirm this, and the graph is also simple and connected.
And voilà! That’s our proof. If you look at the sketch, the reason it’s important that $k$ be even is that we always add vertices two at a time: each extra line segment connecting the two circles contributes exactly two vertices. Then, as long as you keep adding vertices in pairs like this, you can create a 3-regular graph for any even number of vertices.
Unfortunately, you can only use this nice sketch for $k \geq 6$. For $k=4$, the construction produces a graph with parallel edges, which is not allowed under our theorem. Therefore, we need to draw the graph $K_4$ separately for the base case. After that, we can go back to the circles to get the other required graphs.
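If you’d like to check the construction without drawing, here is a small sketch in Python (the function names are my own, not standard terminology) that builds $K_4$ for $k=4$ and the two-circles graph for even $k \geq 6$, then verifies that the result is simple and 3-regular:

```python
from itertools import combinations

def cubic_graph(k):
    """Build a 3-regular simple graph on k vertices (k even, k >= 4),
    following the two-concentric-circles construction from the post."""
    assert k >= 4 and k % 2 == 0
    if k == 4:
        # Base case: the complete graph K4 (every pair of vertices joined).
        return {frozenset(pair) for pair in combinations(range(4), 2)}
    n = k // 2  # number of vertices on each circle
    edges = set()
    for i in range(n):
        j = (i + 1) % n
        edges.add(frozenset({i, j}))          # arc of the outer circle
        edges.add(frozenset({n + i, n + j}))  # arc of the inner circle
        edges.add(frozenset({i, n + i}))      # radial line segment
    return edges

def is_cubic_simple(k, edges):
    """Check that every vertex has degree exactly 3, with no loops
    (each edge has two distinct endpoints) or parallel edges (edges
    live in a set, so duplicates collapse)."""
    degree = {v: 0 for v in range(k)}
    for e in edges:
        for v in e:
            degree[v] += 1
    return all(len(e) == 2 for e in edges) and all(d == 3 for d in degree.values())

for k in (4, 6, 8, 10, 20):
    assert is_cubic_simple(k, cubic_graph(k))
```

A 3-regular graph on $k$ vertices always has $\frac{3k}{2}$ edges (each vertex contributes 3 edge-endpoints, and each edge has 2), which is another quick sanity check you can run on the output.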
Hopefully, this proof is convincing. I love visual proofs, and I think this one is particularly simple. It’s an easy inductive example, and it constructs the required 3regular graphs with a simple algorithm.