### Precision in Language

I imagine that we do this all the time: you’re talking to someone else about solving a certain equation, and then you tell them something to the effect of, “I had to bring sigma to the other side of the equation.”

My question is: what mathematical operation did you just do?

On the one hand, we could be talking about bringing the sigma over to the other side of the equation by adding/subtracting a term on both sides of the equation. Alternatively, we could also be multiplying both sides of the equation in order to bring a sigma that was in the denominator to the numerator of the other side.

Both of these correspond to “bringing the sigma over to the other side.” However, we both know that these aren’t the same thing at all. In fact, you can make *huge* mistakes in a calculation if you mix up these two methods of bringing a quantity over to the other side of an equation. This happens because we have two notions of an inverse when doing arithmetic. We have an additive inverse, which simply means that when you add a quantity and its inverse, the result is zero. We then have a multiplicative inverse, which means when you multiply a quantity with its inverse, the result is one.
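To make the two operations concrete, here is a small example of my own. Starting from two different equations, "bringing $\sigma$ over" means two very different things:

$$x + \sigma = 5 \quad\Rightarrow\quad x + \sigma - \sigma = 5 - \sigma \quad\Rightarrow\quad x = 5 - \sigma$$

$$\frac{x}{\sigma} = 5 \quad\Rightarrow\quad \frac{x}{\sigma} \cdot \sigma = 5 \cdot \sigma \quad\Rightarrow\quad x = 5\sigma$$

In the first case, we added the additive inverse $-\sigma$ to both sides; in the second, we multiplied both sides by $\sigma$ (the multiplicative inverse of $\frac{1}{\sigma}$). Same phrase, two different operations, two different answers.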

This is great, but the language of “bringing” something over to the other side of an equation buries this distinction and encourages what I like to call “equation gymnastics”. This is what happens when students don’t understand how to manipulate equations and instead try to simply remember rules. You then see some people master the ability to solve equations purely by following the rules, rather than by actually understanding them.

Related to this is the notion of “cross-multiplying”.

Now, I don’t want to give the impression that there is *no* use to being able to blindly apply the rules of algebra. That’s not a problem, and it can make for students who are extremely capable of solving equations. However, from my experience, it is so much easier to tackle questions (particularly when they vary from the basic ones) when one *understands* why the rules are what they are. That’s the great thing about learning mathematics. The rules aren’t arbitrarily there to make one’s life difficult while solving equations. They are there because these rules are required if you want to balance an equation. This is the crucial point that I find is lost on students. All of these so-called “rules” of algebra follow one main principle: an equation is a balancing act, and requires the same “things” on both sides in order to maintain equality. That’s it. All of the algebra that one learns in secondary school can essentially be summed up in that one sentence.

However, I don’t think enough students are taught this. Instead, they stress out about remembering different methods of solving equations, or remembering that you need to “flip the sign” when you bring a term to the other side of an equation (but only if it’s addition or subtraction!), and that you *don’t* do this if the term is part of a multiplication or division. Phrased this way, even I start to wonder about these rules. When you think about equations in this light, everything seems arbitrary. But that’s not because the rules *are* arbitrary. It’s simply that you’re looking at the concept in a way that’s not as useful.

This isn’t limited to secondary students learning algebra. In fact, I face this problem all the time in my own learning. It’s not always a simple matter to find the “right” perspective on a concept that clicks for you, but I can guarantee that imprecise language does not help. When we use words like “bringing over” to talk about terms in an equation, we need to be sure that the people we are talking to *know* what we mean by that. If not, we should use more precise language to talk about what we are *really* doing when we say that we are bringing a term to the other side of an equation.

Personally, I try to avoid using the expression “bringing over” when I talk to the students I work with who are learning about algebra and solving equations. I’ve found that it’s simply not a good way to talk about equations, and so I’ve done my best to eliminate it from my vocabulary. My students might wonder why I use such a long-winded way to solve equations, but it’s because I want them to understand what they’re doing before they start developing their own shortcuts.

### Proof that there exists a 3-regular graph on $k$ vertices for any even $k \geq 4$

Last time, we looked at some concepts in graph theory. In particular, we looked at the ideas of a simple graph, the degree of a particular vertex, what edges and vertices are, and some other related concepts. Here, I want to tackle a proof that has a nice way of visualizing the result.

**Theorem**: *For any even $k$ with $k \geq 4$, there exists a simple 3-regular graph on $k$ vertices.*

To show this, remember that we have to show this statement holds for *any* even number of vertices that is greater than or equal to four. As such, we could try to prove this statement on a case-by-case basis, but it’s going to take a lot of time. In fact, it would take infinitely long, since we would need to do it for every even number. We don’t want to do that, so we will have to come up with a better method.

This method is called induction.

If you want to get a quick primer on induction, you can read my post here. Briefly though, the idea of induction is to show that the base case holds, then to assume that the statement holds for $k$, and finally to prove that it holds for $k+1$. If you fulfil these requirements, then you’ve shown that the statement is true for all $n$ (in relation to your base case, of course).
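For our theorem, one small wrinkle: since we only care about *even* numbers of vertices, the inductive step moves in jumps of two rather than one. Schematically, writing $P(k)$ for the statement “a simple 3-regular graph on $k$ vertices exists”, the plan is:

$$P(4) \text{ holds, and } P(k) \implies P(k+2) \text{ for every even } k \geq 4,$$

which together give us $P(k)$ for every even $k \geq 4$.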

So let’s begin with our base case, which is when our graph has four vertices. Our goal is to construct a graph on four vertices that is 3-regular. In other words, we want each of the four vertices to have three edges that are incident with it. Furthermore, the graph is simple, so we don’t have any loops or parallel edges. After trying a few examples, you’ll quickly find that the only possibility is what we call the *complete* graph on four vertices, denoted $K_4$. This simply means that all of the vertices are connected to each other.

We now have to assume that the case with $k$ vertices holds, and then check the case with $k+2$ vertices (we step by two because we only care about even numbers). However, I want to do something a little different, that will hopefully convince you that this theorem is true.

First, draw two concentric circles. Then, we want to connect the two circles by $\frac{k}{2}$ rays that go radially outward from the centre of the two circles. However, we only care about the segment of these rays that are *between* the two circles, so the sketch would look something like this:

Next, since we have $\frac{k}{2}$ line segments in our sketch, we will add a vertex at every point where a line segment intersects a circle. Since each line segment intersects the circles at two places, we will have a total of $k$ vertices. Furthermore, you can look at any of the vertices to confirm that each one does indeed have three edges that are incident with it, and the graph is also simple.

And voilà! That’s our proof. If you look at the sketch, the reason it’s important that $k$ be even is that the construction grows by adding one more line segment connecting the two circles, which adds *two* vertices at a time. Then, as long as you keep on adding pairs of vertices like I’ve described, you can create a 3-regular graph for any even number of vertices.

Unfortunately, you can only use this nice sketch for $k \geq 6$. For $k=4$, the construction produces parallel edges, which is not allowed under our theorem. Therefore, we need to draw the graph $K_4$ for the base case. After that, we can go back to these circles to get the rest of the graphs.
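If you’d like to check the construction mechanically, here’s a short Python sketch (the function names are my own, not standard). It builds the two-circle graph on $k$ vertices as two cycles of length $\frac{k}{2}$ joined by “spokes”, and then verifies that every vertex has degree three:

```python
def circular_ladder(k):
    """Two concentric cycles of length k/2 joined by radial 'spokes',
    giving a simple 3-regular graph on k vertices (k even, k >= 6)."""
    assert k % 2 == 0 and k >= 6  # k = 4 would create parallel edges
    n = k // 2  # number of vertices on each circle
    edges = set()
    for i in range(n):
        j = (i + 1) % n
        edges.add(frozenset((i, j)))          # arc on the inner circle
        edges.add(frozenset((n + i, n + j)))  # arc on the outer circle
        edges.add(frozenset((i, n + i)))      # spoke between the circles
    return edges

def degree(v, edges):
    # The degree of v is the number of edges incident with it.
    return sum(v in e for e in edges)

edges = circular_ladder(10)
print(all(degree(v, edges) == 3 for v in range(10)))  # True
```

Storing edges as `frozenset`s means a parallel edge would silently collapse into one, which is exactly why the `assert` rules out $k = 4$: with only two spokes, the two arcs between the same pair of vertices coincide.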

Hopefully, this proof is convincing. I love visual proofs, and I think this one is particularly simple. It’s an easy inductive example, and it constructs the required 3-regular graphs with a simple algorithm.

### Vertices and Edges (An introduction to graph theory)

An interesting area of mathematics is graph theory, and it deals with a simple question: how do things connect together? In graph theory, we’re interested in vertices (also called nodes) that have relationships with other nodes. These relationships are called edges (or branches), and with these two ideas, we can explore many different and interesting problems. However, the reason I’m bringing this up now is that I’ve been watching a superb series of videos that explain the workings of graph theory, and I wanted to both share the series and comment on some of the problems. To do this, we’ll have to go through a bit of the theory to ground ourselves comfortably, though we won’t go into enormous depth.

But first, the series is called Bits of Graph Theory, and it’s created by Dr. Sarada Herke. I’m barely past the beginning, and I’ve already found the videos to be immensely useful. So if you wanted to go into more depth, I really recommend that you check out her videos. You won’t regret it.

With that out of the way, let’s begin.

So what’s a graph? In its barest form, a graph is a collection of vertices and edges. If you want to go *really* minimalist, a graph could be simply a collection of vertices. From there, we can connect vertices in any way that is pertinent to the situation at hand.

What I love about graph theory over a lot of other mathematics is that it’s *so* visual. This is true for other subjects within mathematics, but you usually have to build up a lot of theory and background before you start seeing those visual connections. For graph theory, on the other hand, things are immediately visual (and can even be helpful in solving the problem), as you can see from a few examples.

These are all examples of perfectly valid graphs. Some look strange, but they all represent relationships between the vertices. One thing to note is that we don’t care *how* a graph looks, as long as the structure is the same. As such, these two graphs are equivalent, if perhaps a little ugly.

So if you want to make a graph, you just need to draw some vertices and connect them with various lines. One thing you *cannot* do is have a “hanging” edge. Put differently, every edge has to be capped by a vertex on each side.

Next, a useful property of vertices is what is called the *degree* of a vertex. It’s the idea that each vertex has a certain number of connections to it, marked by the edges that go to and from the vertex. Here’s a graph with the degree of each vertex.
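As a quick illustration (a toy example of my own, not one from the video series), the degree of each vertex can be read off by counting how many edges mention it:

```python
from collections import Counter

# A small example graph given as a list of edges (pairs of vertices).
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]

# Each endpoint of an edge contributes 1 to that vertex's degree.
degree = Counter(v for e in edges for v in e)
print(dict(degree))  # {'a': 2, 'b': 2, 'c': 3, 'd': 1}
```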

From this, we have a special case. What if *every* vertex has the same degree? More specifically, if every vertex in a graph $G$ has degree $k$, then we call the graph *k*-regular. I’m introducing this because I want us to tackle a nice problem involving regular graphs.

Lastly, I want to introduce the notion of a *simple* graph. This just means we don’t have any loops or “multiple connections” in the graph. I could go on and on about them, but I think a sketch will be more useful.

So that’s the basics of graph theory, in a *very* brief nutshell. Obviously, there’s a lot more to cover, but I wanted to bring these ideas up because we will use them next time to tackle a problem that Dr. Herke poses in her video series, and there’s a nice way to visualize the problem that I have found. Stay tuned for that!

### First Principles

When I was younger and first going through the “jump” between secondary mathematics and physics to that of CÉGEP and university, I always got frustrated when teachers would just shrug their shoulders when we grumbled about having too many things to remember for the test. Their advice was to simply remember the fundamentals, and rederive any result that was needed afterward.

I still think that this isn’t the most useful advice for exams that only last fifty minutes. Tests are usually a mad scramble to make sure one can answer all the questions in the allotted time. If you have to pause and spend five to ten minutes rederiving a result for one question, it can be difficult to finish the rest of the test.

However, I’ve come to really understand my teachers’ advice for learning in general. As my mathematics professors keep telling me, “Mathematicians are lazy. We try to remember as little as possible, while secure in the knowledge that we can recover the result if we want to.”

Does that mean I forget what the real number line is, or work through a delta-epsilon limit proof every time I run into a new problem? Of course not. For most applications and situations, this would be overkill. But what it means is that I try to remember the fundamentals of concepts, and I avoid trying to remember special scenarios. This is effective because, if you deeply understand how a concept works, it’s not too difficult to extend it to special cases. The converse is not true: being able to remember the special cases doesn’t mean that you understand what’s going on; it only means you know the formula for each situation.

In my experience with tutoring students in secondary school, this is most manifest in the manipulation of algebraic equations. This is arguably the basis of most of the mathematics that the average student will encounter for most of their lives. Being able to solve and manipulate equations is important for statistics, probability, calculus, and linear algebra (not to mention any physics or most other science courses). Not being able to manipulate equations introduces a *huge* handicap into one’s mathematical ability (at least, at the level of secondary school and introductory classes as I mentioned above). It’s therefore extremely important that students find themselves comfortable with the manipulation of equations.

Unfortunately, this doesn’t seem to be a skill that is easily acquired, and it seems to stem from a core issue: students don’t seem to be taught that manipulating equations *requires* that you do the same action on *both* sides of an equation.

It’s a simple enough concept, but a lot of a student’s mathematical education rests on the shoulders of understanding this concept, so it *needs* to be understood. Therefore, if there is one fundamental idea about solving equations that needs to be remembered, it’s this.

In a similar vein, when students solve equations, they learn of multiple methods: comparison, substitution, and elimination. These are presented as “different” ways to solve equations, but what students often miss is that they are *all* essentially the same idea, applied in special scenarios. Substitution and comparison are basically identical, and elimination is simply adding terms on both sides of an equation. As such, instead of remembering all of the methods, you simply need to remember two ideas:

- You have to apply an operation to *both* sides of an equation (as I said above).
- When you have an equality between two expressions, you can swap one for the other.
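As a small worked example of my own, take the system $y = 2x + 1$ and $y = x + 4$. Since both right-hand sides equal $y$, the swap rule lets us set them equal to each other (this is “comparison”):

$$2x + 1 = x + 4 \quad\Rightarrow\quad x = 3, \quad y = 7.$$

Elimination reaches the same place by the first rule: subtracting the second equation from the first (subtracting equal quantities from both sides) gives $0 = x - 3$, so $x = 3$ again. Two “methods”, one principle.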

Of course, these two ideas aren’t necessarily obvious to the student who is first learning algebra. However, with a good number of examples and practice, I’m confident that a student armed with these two principles can begin to understand equations on a deeper level, without thinking about “bringing X over to the other side of an equation”.

This is the kind of thinking I try to apply when I learn new concepts. As I go further down the mathematical and physical roads, I’ve learned so many things that it’s difficult to remember *everything*. Therefore, I keep note of important concepts. This lets me retain the crux of many concepts, without necessarily needing the details. Then, if I find myself in a situation where the details become more important, I can always refresh my knowledge by looking at my notes or in textbooks. I think this method is easier on the mind and allows one to search for the deeper connections that various subjects have with each other, because you’re not focusing as much on the details.