### Machines and Processes

When you first learned about algebra, chances are you learned about something called a function, typically one that looks like this: $f(x) = mx + b$. This is nothing more than the equation of a straight line. You probably also learned how this could be represented as a graph (which is why you know it’s a straight line). This was simple enough, and you soon learned how to deal with different kinds of functions: quadratics (parabolas), exponentials, rationals, and a host of others. You learned what these looked like when graphed, and how to find various properties of these functions, such as the roots of the equation (where $f(x)=0$), the domain and range, and where the graph is increasing or decreasing.

To find some of these properties, you actually had to interact with the function. That meant working through the algebra and manipulating some equations. Depending on how comfortable you were in mathematics, this could be easy or difficult. However, assuming you got past this stage and still understood what was happening, you then got to more exotic functions, such as logarithmic or trigonometric functions. These have the forms $f(x) = \log(x)$ and $f(x) = \sin(x)$, and they aren’t your usual functions. This is where you start seeing students make the mistake of dividing by $\sin$ or $\log$.

Why does this happen? From my experience, it’s due to the sense that everything in mathematics is *linear*. To illustrate this, let’s look at an easy example. Suppose we have the equation $\log(x+1) = 2$. You might be tempted to say that this is the same as $\log(x) + \log(1) = 2$, but this would be incorrect! What we actually have to do is convert the equation into exponential form, giving us $10^2 = x+1$. The answer itself isn’t really important. What’s important is that the logarithm *isn’t* linear, and so you can’t simply distribute it over the term $(x+1)$.
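A quick numerical check makes this concrete. Here is a small Python sketch (my own illustration, using the base-10 logarithm as in the example above) comparing the correct solution with the incorrectly "distributed" version:

```python
import math

# Correct approach: rewrite log10(x + 1) = 2 in exponential form,
# so x + 1 = 10**2, which gives x = 99.
x = 10**2 - 1
print(math.log10(x + 1))              # 2.0, exactly as required

# The "distributed" version log10(x) + log10(1) is NOT 2:
print(math.log10(x) + math.log10(1))  # about 1.9956, not 2
```

The distributed version misses the mark precisely because $\log$ is not a number being multiplied through the parentheses; it is an operation applied to whatever sits inside them.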

From what I can gather, the reason this happens is that $\log(x+1)$ is very similar in notation to $4(x+1)$. We know that the latter is equivalent to $4x+4$, which we get by multiplying each term inside the parentheses by $4$. As such, it isn’t so surprising when students think that the same should apply to these new things called logarithms. The same is true for the expressions $\sin(x+1)$ and $(x+1)^2$. It feels completely natural to write $\sin(x)+\sin(1)$ and $x^2+1^2$, but both are incorrect.

Instead, we should think of expressions like $\sin(x)$ and $\log(x)$ as *functions* or *machines*. When you insert a number $x$ into them, the machine runs and spits a number back out. The crucial part is that the machine is *highly* dependent on the initial number you put in. Said differently: if I put $(2+5)$ into $\sin(x)$, or if I compute $\sin(2)+\sin(5)$, I get two very different answers. For comparison, the former gives approximately $0.656987$, and the latter gives $0.909297-0.958924=-0.049627$. Obviously, these two numbers aren’t the same. Also, for those who have studied trigonometric functions, you know that $\sin(x)$ varies from $-1$ to $1$, which means that $\sin(x+y)$ must also vary from $-1$ to $1$, while $\sin(x)+\sin(y)$ can vary from $-2$ to $2$; these two expressions can’t be the same.
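The two computations above take one line each to verify in Python:

```python
import math

# Putting (2 + 5) into the sine machine:
print(math.sin(2 + 5))            # sin(7) ≈ 0.656987

# Running the machine on 2 and 5 separately, then adding:
print(math.sin(2) + math.sin(5))  # ≈ 0.909297 - 0.958924 = -0.049627
```

The machine produces wildly different outputs depending on whether you add the inputs first or the outputs afterward.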

This is also true for logarithms and many other mathematical functions you may encounter. The difficulty at this point is dealing with them while working through algebraic equations. You can’t simply add, subtract, multiply, and divide your way to a solution. You have to know what functions you’re dealing with, and how to work with them.

Last example: consider the function $f(x) = 4\sin(x^2-2) +1$. This function has a lot going on, but the way I think about it is that you have a function within a function within a function. To make this explicit, label $g(x) = \sin(x)$ and $h(x) = x^2-2$. Remember that the variable $x$ in both of these equations is just a placeholder. You can stick *anything* there. If we think back to our analogy of a machine, think of $x$ as where you input your value into the machine. Now, if we write $f(x)$ in terms of $g(x)$ and $h(x)$, we get the following:

$$f(x) = 4g(h(x)) + 1.$$
This equation may look a bit busy, but that’s the point! I want you to really see where $x$ is located. It’s nestled into the “deepest” function, $h(x)$. What I also want you to see is that instead of writing $g(x)$, I wrote $g(h(x))$. This means that instead of sticking $x$ into the input of $g(x)$, I stuck $h(x)$ in instead! There is nothing wrong with this, and it actually has a fancy name: a *composition* of functions. Think of it like a machine within a machine. The output of the first machine is connected to the input of the second machine, so that when you give the first machine an initial value, it passes through *both* machines. As such, you can’t exactly “split up” the machine into two different parts and expect to get the same answer as running both together. It depends on the nature of the machine.
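The machine-within-a-machine picture translates directly into code. Here is a short Python sketch (my own, using the names $g$ and $h$ from the text) that wires the machines together:

```python
import math

def h(x):
    # the "deepest" machine: x**2 - 2
    return x**2 - 2

def g(x):
    # the middle machine: the sine function
    return math.sin(x)

def f(x):
    # the full machine: the output of h feeds the input of g
    return 4 * g(h(x)) + 1

# The composed version agrees with writing the formula out directly:
assert f(3) == 4 * math.sin(3**2 - 2) + 1
```

Notice that `f` never sees `x` directly inside `g`; the value must pass through `h` first, exactly as the notation $g(h(x))$ says.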

Hopefully, this gives a bit of intuition into the idea of more complex functions such as the logarithmic or trigonometric functions. They *don’t* distribute linearly, and so you can’t apply your usual rules of algebra to them. Additionally, it’s important to remember that expressions like $\log$ and $\sin$ by themselves have *no* meaning. They aren’t numbers. Typing these into your calculator and pressing the “=” button gives an error, and rightly so! Remember, they are like machines or processes, so asking what the output of a machine is without any input doesn’t make sense.

By keeping in mind the analogy of functions as machines, you should have a better conceptual understanding of *why* logarithms and trigonometric functions don’t simply distribute, and this will translate to understanding how to manipulate them without mistakes.

### Notation

In mathematics, notation is simultaneously everything and nothing. It isn’t difficult to imagine another alien species having the same notions of calculus as we do, but without the symbols of integration or differentiation. It might *seem* so natural now to see the expression $\partial x$, but that’s only because we’ve spent years working with these symbols, forging a connection between concepts and notation. Due to this, it can seem entirely natural to look at notation and instantly understand what it’s about as a *concept*, rather than just symbols. This is quite similar to our experience with foreign languages, where the words and characters look alien to us, yet our own languages seem so obvious.

I’ve been thinking about notation after working with some students who seemed to be struggling with certain concepts. I was wondering why they couldn’t just *see* the same things as I could on the paper. I know that some struggle with seeing the connection between $f(x)$ and $y$ in an equation, even though of course they mean the same thing in this setting. Another variation of this issue is when a text or a problem refers to multiple functions and names them $f(x)$, $g(x)$, and $h(x)$. It might seem natural to *us* for these to be the names of arbitrary functions, but springing this notation on students can be deeply confusing when they aren’t used to it. The consequence is that it seems random and unexplained, so students start believing that part of mathematics is just like that: innately confusing.

In my mind, unless we’re talking about probability, mathematics should *not* seem random.

In fact, mathematics is all about investigating structure. To do this, however, we absolutely require definitions and notation. *But*, the thing that is often lost on students is that *we* create these definitions and notation! They’re there because they’ve (mostly) stood the test of time as good tools for solving problems. As such, I think it’s critical that we get students on board with the notation, and capable of moving fluidly between notations and concepts.

For students in their secondary education, this means being familiar with the idea of an equation, and *not* being tied to the notation itself. As an explicit example, this means having students recognize parabolic functions beyond “It’s the one with $x^2$ in it!” I want students to be comfortable saying that $d(t) = t^2$ is just as much a parabola as $y = x^2$ is. The notation is important, but the *specific* symbols aren’t.

Here’s another example that I find really tests whether a student understands the concept of what a function is. If they’re given the function $f(x) = x + 2$ and are asked for the value of the function when $x=2$, many will write $f(x) = 2 + 2=4$. Of course, the function value is correct, but the problem is that this isn’t $f(x)$, it’s $f(2)$. This isn’t a huge deal, but it teases out a weakness with the notation $f(x)$. From what I’ve seen with many secondary students, they don’t translate finding the value of a function into putting that value *into* the notation of $f(x)$. I’d argue that this implies they aren’t fully grasping what $f(x)$ means, but I also think it could simply be a lack of explanation. To remedy this, we need to put more emphasis on explaining the concepts *behind* the notation, so that students will be on board with using it. If we don’t do this, we create more problems for ourselves down the road when more complex notation comes along and students aren’t ready to fluidly jump from one set of notation to the next.
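Programming makes this distinction hard to blur. In the Python sketch below (my own illustration), `f(2)` is a number, while `f` on its own is the machine itself:

```python
def f(x):
    return x + 2

# f(2) is a *number*: the machine's output when 2 goes in.
print(f(2))   # 4

# f by itself is the machine, not a number; Python reports it
# as a function object rather than a value.
print(f)
```

Writing $f(x) = 2 + 2$ is the same confusion as believing `f` and `f(2)` are interchangeable: one names the process, the other names a specific output.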

Overall, notation isn’t important in and of itself. But to do mathematics and learn new topics, it’s crucial to be able to understand what a certain notation means and how to use it.

### The Importance of Factoring

When you’re trying to solve a simple algebraic expression like $ab = 5b$ for the variable $a$, it quickly becomes second-nature to divide both sides of the equation by $b$, yielding $a = 5$. This makes complete sense, and it’s what most people would do right off, without even thinking. I mean, look at both sides of that equation! If there’s a $b$ on both sides, then the other value on each side of the equation should be equal to each other, giving us $a = 5$.

But not so fast.

What if I were to tell you that this wasn’t the only solution? What if there was another solution to your equation that you missed?

To prove this, let’s look at the original equation again. We have $ab = 5b$, so let’s subtract $5b$ from both sides of the equation. Doing so gives us:

$$0 = ab - 5b = b(a - 5).$$
As you can see, what we did in the second equality was factor the $b$ out of both terms, giving us a product of two terms that equals zero. Once we have that, we know how to solve it: at least one of the factors in the product must be equal to zero. Looking at this, we see that $a = 5$ is a solution, like we said before. However, there’s a *second* solution to this equation: $b=0$.
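Both families of solutions are easy to verify with a short Python check (my own sketch, not part of the original derivation):

```python
def satisfies(a, b):
    # does the pair (a, b) solve a*b == 5*b ?
    return a * b == 5 * b

# The a = 5 family works for *any* value of b:
assert all(satisfies(5, b) for b in range(-3, 4))

# The b = 0 family works for *any* value of a -- this is the
# entire family of solutions that dividing by b throws away:
assert all(satisfies(a, 0) for a in range(-3, 4))
```

Dividing both sides by $b$ silently discards every pair on the second line.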

So what happened? How did we miss a solution when we first solved the problem?

The issue was that we *divided* the equation by $b$, but as we just saw, a solution to the equation is $b = 0$. This means that we were potentially dividing by zero! As most readers know, this is a big problem. We can’t divide an equation by zero, and so by doing this, we were in effect asserting that $b \neq 0$. This meant that the solution we found was only valid when $b$ was not zero. As a result, we neglected to think about what happened to the equation when $b$ *was* zero, and so we lost a solution. By factoring the equation instead of dividing by $b$, we avoid losing the $b=0$ solution and get both in one go.

When we are working through a problem that involves algebra, we tend to push forward without necessarily thinking about the technicalities of what we are doing. Is there the same variable on both sides of the equation? Great, I can cancel them! It’s almost a reflexive habit, but it’s one we need to try and actively resist. By factoring instead of dividing, you create a product that equals zero, allowing you to be sure that you capture *all* of the solutions to a problem.

Of course, *sometimes* it’s fine to divide terms out of an equation. However, you need to make sure that you aren’t potentially dividing by zero.

### Serving the Results

As a student in both mathematics and physics, I often see the differences in mindset between the two fields, and how these mindsets change the way classes are taught. The former is usually about structure and patterns, while the latter is about modeling the world using mathematics. The problem is that belonging only to the camp of physics seems to be a dangerous thing, in terms of building one’s foundational understanding.

For a small example, take quantum mechanics. There’s no way to get around it: quantum mechanics requires an understanding of mathematical probability. From finding expectation values to normalizing the wavefunction $\Psi$, it’s important to understand how probability relates to the ideas of quantum mechanics. Unfortunately, due to the progression of courses that students take, some students are seeing probability density functions and other aspects of probability *for the first time* within the quantum mechanics course. And, since the course is primarily physics-related and not mathematical in nature, the ideas behind probability are skipped in favour of the results.

This isn’t always a bad thing, but what it does sacrifice is the knowledge of the structure behind the physics. *Why* do these ideas of probability work? Why does the wavefunction have to asymptotically approach zero at infinity? Why is the probability of finding the particle at a specific point zero? These kinds of questions make a lot more sense when thinking of them through the lens of probability. Therefore, those without that kind of background are at a disadvantage.

### Structure and Purpose

I sometimes wonder about what my classmates (who are only in physics) think about mathematics. Is mathematics simply a tool to get to the physics behind systems? Is mathematics more of a nuisance than anything else?

I don’t simply make these claims out of the blue. I’ve heard my fair share of people within physics talk about mathematics in a way that suggests that they only use it because they have to. It’s this mindset that partially worries me, because it completely misses the point of what mathematics is about, and only serves to further the dogma that mathematics is about calculating quantities.

Mathematics has been an important part of my education precisely because it is the *opposite* of doing calculations. If one only takes science or other quantitative courses, one misses the side of mathematics which is why mathematicians continue to work in the first place. To them, it’s not about calculating an inner product or evaluating an integral. It’s about noticing structure, finding patterns, and forming convincing arguments as to why things logically *have* to be this way. This is the hidden veil behind mathematics that students don’t see until much too late, and by then only a handful have stuck around.

The bottom line is that we have mathematical *tools* and we have a mathematical *mindset*. The former is what we use to serve all of science and other quantitative fields. It is computational in nature, and comes from the results of mathematics. But the latter is a whole different paradigm. It’s about *producing* results (though not necessarily ones deemed immediately useful), about studying structure and generalizing, which requires a playful mind that is open to new possibilities.

The problem is that this other part is *difficult* to train, and it’s not immediately productive. It’s easy to know if you’re making progress while evaluating an integral. At one point, you have nothing, and then after some work, you get an answer. But cultivating a mathematical mindset is more about struggling for hours on end on a problem, only to realize that you were in a dead end. It’s about trying things that you aren’t sure will work, and having that rush of adrenaline when things *do* work.

Of course, I’m well aware that this happens on the front lines of all of science, but this aspect isn’t apparent to students while they are studying, either. As a student, I don’t see much work on what we would call the front lines. Instead, I’m still behind, learning about concepts that were developed many years ago. This is the natural progression of science students. It also means that most problems I complete are computational in nature. However, in my mathematics courses, I’ve moved away from computation and more towards abstraction. As I mentioned above, it’s a different mindset, but one that is very helpful to have as I work through problems in both physics and mathematics. It’s something that I wish more science students would engage with, because it’s a fantastic skill to have.