# Jay's Blog


Twitter: @ProfJayDaigle

## The difference between science and engineering

I wrote this essay a few years back elsewhere on the internet. It still seems relevant, so I’m posting this updated and lightly edited version.

I’ve noticed that people regularly get confused, on a number of subjects, by the difference between science and engineering.
In summary: science is sensitive and finds facts; engineering is robust and gives praxis. Many problems happen when we confuse science for engineering and completely modify our praxis based on the results of a couple of studies in an unsettled area.

(Thanks to Cowbirds in Love for the perfect comic strip)

### The difference between science and engineering

As a rough definition, science is a system of techniques for finding out facts about the world. Engineering, in contrast, is the technique of using science to produce tools we can consistently use in the world. Engineering produces things that have useful effects. (And I’ll also point to a third category, of “folk traditions,” which are tools we use in the world that are not particularly founded in science.)

These things are importantly different. Science depends on a large number of people putting together a lot of little pieces, and building up an edifice of facts that together give us a good picture of how things work. It’s fine if any one experiment or study is flawed, because in the limit of infinite experiments we figure out what’s going on. (See for example Scott Alexander’s essay Beware the Man of One Study for excellent commentary on this problem).

Similarly, it’s fine if any one experiment holds in only very restricted cases, or detects a subtle effect that can only be seen with delicate machinery. The point is to build up a large number of data points and use them to generate a model of the world.

Engineering, in contrast, has to be robust. If I want to detect the Higgs Boson once, to find out if it exists, I can do that in a giant machine that costs billions of dollars and requires hundreds of hours of analysis. If I want to build a Higgs Boson detector into a cell phone, that doesn’t work.

This means two things. First is that we need to understand things much better for engineering than for science. In science it’s fine to say “The true effect is between +3 and -7 with 95% probability”. If that’s what we know, then that’s what we know. And an experiment that shrinks the bell curve by half a unit is useful. For engineering, we generally need to have a much better idea of what the true effect is. (Imagine trying to build an airplane based on the knowledge that acceleration due to gravity is probably between 9 and 13 m/s^2).

Second is that science in general cares about much smaller effects than engineering does. It was a very long time before engineering needed relativistic corrections due to gravity, say. A fact can be true but not (yet) useful or relevant, and then it’s in the domain of science but not engineering.

### Why does this matter?

The distinction is, I think, fairly clear when we talk about physics. In particular, we understand the science of physics quite well, at least on everyday scales. And our practice of the engineering of physics is also quite well-developed, enough so that people rarely use folk traditions in place of engineering any more. (“I don’t know why this bridge stays up, but this is how daddy built them.”)

But people get much more confused when we move over to, say, psychology, or sociology, or nutrition. Researchers are doing a lot of science on these subjects, and doing good work. So there’s a ton of papers out there saying that eggs are good, or eggs are bad, or eggs are good for you but only until next Monday, or whatever.

And people often have one of two reactions to this situation. The first is to read one study and say “See, here’s the scientific study. It says eggs are bad for you. Why are you still eating eggs? Are you denying the science?” And the second reaction is to say that obviously the scientists can’t agree, and so we don’t know anything and maybe the whole scientific approach is flawed.

But the real situation is that we’re struggling to develop a science of nutrition. And that’s hard. We’ve put in a lot of work, and we know some things. But we don’t really have enough information to do engineering—to say “Okay, to optimize cardiovascular health you need to cut your simple carbs by 7%, eat an extra 10g of monounsaturated fats every day, and eat 200g of protein every Wednesday”, or whatever. We just don’t know enough.

And this is where folk traditions come in. Folk traditions are attempts to answer questions that we need decent answers to, that have been developed over time, and that are presumably non-horrible because they haven’t failed obviously and spectacularly yet. A person who eats “like grandma did” is probably on average at least as healthy as a person who tried to follow every trendy bit of scientistic nutrition advice from the past thirty years.

### Trendy teaching as confusing science for engineering

So where do I see this coming up other than nutrition? Well, the subject that really got me thinking about it was “scientific” teaching practices. I’ve attended a few workshops on “modern” teaching techniques like the use of clickers, and when I tell people about them I often get comments disparaging cargo cult teaching methods.

In general there’s a big split among university professors between people who want to teach in a more “traditional” way and people who want to teach in a more “scientific” way. With bad blood on both sides.

And my biggest problem with the “scientific” side is that some of their studies are so bad. I’d like good studies on teaching methods. I’d like a good engineering of teaching. But we don’t have one yet, and acting like “we have three studies, now we know the best thing to do” is just silly.

(Which shouldn’t be read as full-throated support for the “traditionalists”! The science is good enough to tell us some things about some things, and I do try to engage in judicious supplementation of folk teaching traditions with information from recent research. But the research is not in a good enough state to be dispositive, or produce an engineering discipline, or completely displace the folk tradition).

### Other examples

A few of my friends have complained about the sad state of exercise science; but I think they’re really complaining about the lack of exercise engineering. We are doing basic research that tells us about how the body responds to exercise. We don’t know enough to give advice that improves much on “do the things people have been doing for a while that seem to work”.

A lot of “lifehacks” boil down to “We read a study, and based on this study, here are three simple things you can do to accomplish X.” But a study is science, not engineering. Sometimes helpful, but easy to overinterpret. Don’t take any one study too seriously, and if what you’re doing works, don’t totally overhaul it because you read a study.

Similarly, any comment about how you can be more effective socially by doing this one trick is usually science, not engineering.

Lots of economics and public policy debates sound like this. “This study shows that raising the minimum wage (increases/decreases/has no effect on) unemployment.” All three of those statements can be true! There are a lot of studies with a lot of different results. We’re starting to develop an engineering practice of economics policy, but it’s in its infancy at best.

Or see this essay’s account of scientifically studying the most effective way for police to respond to domestic violence charges, for a good example of confusing science and engineering. Bonus points for the following quote:

> Reflection upon these results led Sherman to formulate his “defiance” theory of the criminal sanction, beginning with the inauspicious generalization that, according to the evidence, “legal punishment either reduces, increases, or has no effect on future crimes, depending on the type of offenders, offenses, social settings and levels of analysis.” This is a fancy way of saying “we don’t know what works.”

### Marketing: engineering versus folk traditions

The field of marketing presents a good contrast between engineering and folk traditions. We have a mental image of a sleazy salesman, who has a whole host of interpersonal tactics that have been honed through centuries and millennia of sleazy sales practice. And this works.

And there’s an entirely different field of marketing research and focus groups. And this shows what’s necessary to turn science into engineering. There’s a whole bunch of basic research about psychology that goes into designing marketing campaigns. But people also do focus groups, to gather a ton of data on how people respond to minute differences.

And, more importantly, they do A/B testing, which gives pretty good data on how actual people respond to actual differences. And by iterating a ton of A/B testing, you have a pretty good idea that people will buy 5% more if you use the green packaging, or whatever.

## An easier approach to partial fractions decomposition

I always found partial fraction decomposition incredibly annoying and tedious. But it turns out there’s a much easier way to compute it. (I learned this a couple years ago from Chris Towse).

Suppose we want to find a partial fraction decomposition for $\frac{7x+2}{(x+2)^2 (x-1)}$. The normal method is to take your fraction and write it as a sum of real numbers over your polynomial denominators:

$$\frac{7x+2}{(x+2)^2 (x-1)} = \frac{A}{x+2} + \frac{B}{(x+2)^2} + \frac{C}{x-1}.$$

(For this reason, my high school calculus teacher called this the “ABC method”). Then we clear denominators:

$$7x+2 = A(x+2)(x-1) + B(x-1) + C(x+2)^2,$$

and we get a system of linear equations:

$$\begin{aligned} A + C &= 0 \\ A + B + 4C &= 7 \\ -2A - B + 4C &= 2. \end{aligned}$$

We can solve this by any of the usual methods, and we get $A = -1$, $B = 4$, $C = 1$, so

$$\frac{7x+2}{(x+2)^2 (x-1)} = \frac{-1}{x+2} + \frac{4}{(x+2)^2} + \frac{1}{x-1}.$$

And now we can integrate or do whatever else we needed to do with our fraction.

This process can get super tedious. In particular, solving the linear system at the end isn’t difficult but it is really annoying and easy to screw up if you do it by hand. (I used NumPy instead. Computer algebra systems are your friend).
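As an illustration, here’s a sketch of that last step in NumPy. The rows of the matrix come from matching the coefficients of $x^2$, $x$, and the constant term after clearing denominators in the example above:

```python
import numpy as np

# Matching coefficients of x^2, x, and 1 in
#   7x + 2 = A(x+2)(x-1) + B(x-1) + C(x+2)^2
# gives three linear equations in A, B, C:
M = np.array([[ 1.0,  0.0, 1.0],   # x^2:  A + C = 0
              [ 1.0,  1.0, 4.0],   # x^1:  A + B + 4C = 7
              [-2.0, -1.0, 4.0]])  # x^0: -2A - B + 4C = 2
rhs = np.array([0.0, 7.0, 2.0])

A, B, C = np.linalg.solve(M, rhs)  # A = -1, B = 4, C = 1
```

The matrix setup is the tedious, error-prone part; the solver itself is instantaneous.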

It turns out there’s a much easier way to do this. It’s motivated by complex analysis residue integrals, but you can do it without actually knowing any complex analysis.

Let’s go back to our equation from earlier:

$$\frac{7x+2}{(x+2)^2 (x-1)} = \frac{A}{x+2} + \frac{B}{(x+2)^2} + \frac{C}{x-1}.$$

Instead of clearing all the denominators, let’s just clear one. If we multiply by $(x-1)$ on both sides, we get

$$\frac{7x+2}{(x+2)^2} = \frac{A(x-1)}{x+2} + \frac{B(x-1)}{(x+2)^2} + C.$$

This doesn’t look much nicer at first, but look at what happens if we evaluate at $x=1$. The left-hand side becomes $\frac{9}{9} = 1$. On the right-hand side, the $A$ and $B$ terms go away completely, and we’re just left with $C$. So we immediately see that $C = 1$.

We can find $A$ and $B$ the same way, with a bit more care. Multiplying our equation by $x+2$ doesn’t help, because we’ll still have a factor of $x+2$ in the denominator. But if we multiply by $(x+2)^2$ we get

$$\frac{7x+2}{x-1} = A(x+2) + B + \frac{C(x+2)^2}{x-1},$$

and evaluating at $x = -2$ gives $4 = B$. To get $A$, we need to do a little bit of work and subtract off the $B$ term:

$$\frac{7x+2}{x-1} - 4 = \frac{7x+2 - 4(x-1)}{x-1} = \frac{3(x+2)}{x-1} = A(x+2) + \frac{C(x+2)^2}{x-1},$$

so dividing through by $x+2$ gives

$$\frac{3}{x-1} = A + \frac{C(x+2)}{x-1},$$

and evaluating at $x = -2$ gives $A = \frac{3}{-3} = -1$.

That might feel like it took longer, but that’s mostly because I actually worked through all the algebra with the new version. No NumPy here! I actually suspect the first way is more efficient if you’re doing a really big decomposition, because it parallelizes a bunch of stuff, and linear equation solvers are pretty efficient.
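And if you just want the answer checked by a computer algebra system, SymPy’s `apart` function computes partial fraction decompositions directly; a quick sketch on the example above:

```python
import sympy as sp

x = sp.symbols('x')
expr = (7*x + 2) / ((x + 2)**2 * (x - 1))

# apart() returns the partial fraction decomposition of a rational function
decomp = sp.apart(expr, x)
# decomp is equivalent to -1/(x+2) + 4/(x+2)**2 + 1/(x-1)
```

Handy for verifying hand computations, though it won’t teach anyone the technique.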

But for reasonable-sized problems I’d much rather do the second method, no question. And this makes me almost want to actually teach partial fraction decomposition next time I teach calc 2.

## A Neat Argument For the Uniqueness of $e^x$

In my advanced Calculus 1 class I teach a quick unit on differential equations. We don’t have the tools to solve them since we haven’t done integrals, but I talk about what differential equations are and how you can check whether you have a solution.

And then I spend a day in lab discussing exponential growth, and how the differential equation $y' = ry$ implies that $y = Ce^{rt}$ for some constant $C$. I’ve been telling my students that while it’s easy to check that this is a solution, we don’t have the tools to prove it’s the only family of solutions.

But today thanks to reddit, I discovered that that isn’t quite true. You can prove that $Ce^x$ is the only solution to this differential equation with a simple argument.

Suppose $f(x)$ is a function that satisfies $y' = r y$, that is, suppose $f'(x) = r f(x)$. Then consider the derivative of $f(x) e^{-rx}$. By the product rule, we have

$$\frac{d}{dx}\left( f(x) e^{-rx} \right) = f'(x) e^{-rx} - r f(x) e^{-rx} = r f(x) e^{-rx} - r f(x) e^{-rx} = 0.$$

Thus we see that $f(x) e^{-rx}$ must be a constant $C$; and thus $f(x) = C e^{rx}$. So this family of solutions is unique.
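The product rule computation fits in a few lines of SymPy, if you want to see it done symbolically. This is just a sketch: `f` is an arbitrary function, and we impose the differential equation by substituting $r f(x)$ for $f'(x)$:

```python
import sympy as sp

x, r = sp.symbols('x r')
f = sp.Function('f')

# differentiate f(x) * e^{-rx} by the product rule
deriv = sp.diff(f(x) * sp.exp(-r * x), x)

# impose the differential equation f'(x) = r f(x)
deriv = deriv.subs(sp.Derivative(f(x), x), r * f(x))

sp.simplify(deriv)  # the derivative vanishes identically
```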

## Working Backwards

I teach a lot of students who are still learning the basics of proof-writing. My calculus students are seeing their first college math, and often my number theory class is the first really proof-heavy class that a lot of the students take. So I spend a lot of time helping students figure out how to write good proofs. The single best piece of advice I’ve come up with is to get comfortable working backwards.

Math students are generally comfortable working forwards. At the beginning of our education this is the only way of working we have: we’re given a problem, like “add these two numbers together”, and we use some algorithm to work out the answer. I like the way Jordan Ellenberg describes this:

> [U]ntil algebra shows up, you’re doing numerical computations in a straightforwardly algorithmic way. You dump some numbers into the addition box, or the multiplication box, or even, in traditionally minded schools, the long-division box, you turn the crank, and you report what comes out the other side. Algebra is different. It’s computation backward.

But even once we get to high-school algebra, we don’t really tend to feel like we’re working backwards. In practice we develop an algorithm for solving equations and turn the crank; instead of the addition box or the long-division box we now have the quadratic-equation box. This effect is strong enough that when I give my calculus students problems where I explicitly tell them to guess and check¹, they feel very confused and want me to tell them what steps to follow to finish the problem.

But this wanting to know “the steps” is a trap—it leads to treating problems like a black-box request for an algorithm, rather than thinking about what’s actually going on. And as soon as problems require any sort of creative engagement, the search for steps fails.

This first hits students hard when they learn to do integrals. When I taught Calculus 2, I spent a couple lab periods having my students do integral worksheets while I answered questions and gave advice. Fairly frequently, they would spend five or six minutes staring at an integral, before giving up and asking me how to start; I sometimes pointed out that we only had four or five things to try, and if they’d just tried all of them they’d have found one that worked already. But they were uncomfortable with the idea that there wasn’t one correct thing to try.

But all of this becomes especially important when you start doing proofs, because in proofs there is no straightforward algorithm—and often you can’t even really work forward.

Working forward in proofs can be quite useful, to be fair. Given a set of hypotheses, you can start listing off things that the hypotheses obviously imply. Especially in the context of a class, you can often see “okay, if I know these three things, it looks like I should try applying that theorem and see what I get.” But this has really serious limits. You can’t plausibly write down all the implications of your hypotheses.

Instead, you need some idea of where to go. So you should start by looking at the goal, the thing you want to prove. Figure out what you could know that would be enough to finish the proof.

If you can see how to do that, then great, you win. If not, now go look at the hypotheses, and figure out what you can reasonably and easily conclude from them. Can you see how to get from there to the things you wanted?

You develop a sort of push-pull dynamic: work backwards from the goal, then forwards from the premises, then backwards from the goal again. And hopefully, eventually things will meet in the middle.

This seems pretty mundane. But a lot of students are comfortable working forward, and deeply uncomfortable working backwards. They can draw conclusions, but are bad at staying focused on the ultimate purpose of whatever they’re doing. So just prompting them to think about their goals in the middle of proofs can be really helpful.

I do this a lot while lecturing. In the middle of a proof, I’ll stop and ask the class to remind me what I’m actually trying to do. They find this surprisingly hard. Sometimes they even struggle to remember what the actual theorem we’re trying to prove is, despite the fact that it’s still written on the board; it’s easy to get lost in the weeds of whatever you’re doing right now and forget the broader context. But without that context, everything you’re doing is kind of pointless, and it’s really difficult to decide what to do next.

And that idea of the importance of context, of focusing on your actual goals, is just as true outside of math class as it is inside. I see this a lot when I give feedback on people’s writing and speaking: they will keep saying things, and the things will be correct, but they won’t do a good job of saying things that are relevant to their goals and message, and of telling us why the things are relevant.

But you can also see this in a lot of bad planning and management. If you lose track of why you’re doing what you’re doing, you’re much less likely to actually achieve your goals. You’ve forgotten what they are, so if you meet them it’s essentially by luck!

This is one of the dangers, for instance, of getting too reliant on management-by-metric. Originally you create a metric to measure how well you’re achieving some goal. But over time, people forget about the goal and remember the metric—and do things that improve the metric, but in ways that don’t advance the original goal.

Staying focused on your actual goal, and working backwards, is really helpful for learning to write proofs. So if you’re teaching people how to write proofs, this is worth explaining explicitly, and then actively training. Keep asking people why they’re doing what they’re doing, and how it gets them closer to the conclusion they want to prove!

But it’s also a really transferable skill, one that can help you in almost all aspects of life. Another example of general thinking skills that studying math helps develop, but are helpful everywhere.

Do you have any suggestions for how to help students get comfortable working backwards? Any other tips for teaching students to write proofs more fluently? Please share them in the comments!

1. This mostly comes up in the context of inverse functions. One of my favorite questions in Stewart is: Let $f(x) = x^5+x^3+1$. What is $f^{-1}(3)$? You can’t find an explicit formula for $f^{-1}$, but you don’t need to: just plugging in numbers makes it clear the answer is 1.
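In code, that guess-and-check amounts to a handful of evaluations; a minimal sketch:

```python
def f(x):
    return x**5 + x**3 + 1

# f is strictly increasing (f'(x) = 5x^4 + 3x^2 >= 0), so f(x) = 3
# has at most one solution; trying small integers finds it
candidates = [x for x in range(-3, 4) if f(x) == 3]
# candidates == [1], so f^{-1}(3) = 1
```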

## Asking the Right Question

I’ve been poking around on math reddit lately, and in particular /r/askmath and /r/learnmath. And one thing I’ve really noticed is that many of the posters are really bad at asking questions.

Learning to ask good questions is really important, for a couple reasons. First and most obviously, it’s easier to get help with things and learn things if you can ask better questions. Second, and maybe more importantly, framing questions well is a lot of what makes you a good mathematician.

The most boring way to ask a bad question is just to not include enough information. Sometimes this is just laziness (“I don’t understand how to do this problem, please help”). And I’ve definitely seen questions asked that are thin disguises over “I don’t want to do my homework; can someone do it for me?”

But more often, badly-phrased questions result from deep confusion on the part of the asker. If they understood the material well enough to ask their question clearly and correctly, they wouldn’t need to ask it in the first place.

A lot of math is less about answering questions than about figuring out exactly what question you should be asking, and how to make it precise. We tend to sweep this under the rug a bit when teaching, in a way that I suspect probably leads to a certain amount of confusion.

When we teach, we often ask a question, and then demonstrate a tool to answer it, without necessarily stopping to explain why that question is a good one, or how people settled on asking exactly that question. And this often leaves our students with the sense that what they’re doing doesn’t really mean anything.

This is a major reason students fall back on figuring out what “type of problem” they’re working on, and then following “the steps” to get the answer. They see math as a sort of opaque box, and a question asks them to perform the correct magical ritual to get the answer. Because if the words you’re using—and your questions—don’t have a meaning, that’s all you can really do.

And that’s how you get questions like “how do I find solutions to $f(x) = \sin(x)+ \tfrac{1}{2}$.” I can tell what the original question probably was. But because the student doesn’t really understand what a “function” is, they ask a question that is, read literally, completely nonsensical.

There’s a different type of bad question that comes up: questions of the form “prove to me that this definition is correct”. Or, relatedly: “what is the true meaning of this ambiguous statement”.

I saw one question asking whether $-3$ was a prime number. And that’s actually a tricky question, but for a basically dumb reason. In elementary number theory, we usually define prime numbers to be positive integers. That’s part of the definition, so $3$ is prime and $-3$ is not. Why? Because we said so.

And then when you start doing algebraic number theory, you define primes as a factorization property in a ring, and you see that $-3$ is in fact a prime in the ring of integers. So is $-3$ a prime? Yes, by the definition of prime elements in a ring. Apparently the same question, two different answers—determined by the definition we happened to pick at the time.

Another great example of this problem is questions about order of operations. What’s the value of $6/2 \cdot 3$? It could be 1; it could be 9. I can’t tell you what the original author meant. I can only tell you that they’re a sloppy writer.

This happens in a lot of cases where we’re just discussing a notational convention. There are no underlying facts that make notation right or wrong; screwing up order of operations isn’t really the same type of error as adding numbers incorrectly. It’s just an aid to communication¹.

My favorite type of badly written question does this exactly right. Sometimes, you see a question that’s basically “this one thing feels kind of like this other thing, but I can’t tell you how. Can you tell me?”

These people are doing good math. They’re noticing a pattern, and trying to put it into words. They’re maybe not quite there, and sometimes it’s hard to answer the question clearly. But it shows great instincts.

And this is how math tends to actually get done! When we teach, we tend to define terms, then state theorems about them, and then prove the theorems. But this is exactly backwards from how math is often actually done. First we understand what’s going on; then we figure out what the rule is and write it down; and finally understand what conditions are important and give those conditions names. That is, we formulate a proof, then state the theorem, and then define the terms.

These questions are working on step 1. I want to encourage them.

Have any thoughts about questions students ask? Any good techniques for helping your students write better questions? I’d love to hear about them in the comments below!

1. Aids to communication are useful and important. Math can be hard enough when it’s written well and clearly! But as a reader, if you can tell the writer isn’t following your notational conventions, you shouldn’t hold him to them. The goal of a reader is to figure out what the writer actually meant. And the goal of a writer is to make this as easy as possible.