Jay's Blog

Math, Teaching, Literature, and Life



Paradigms and Priors

January 15, 2019

Scott Alexander at Slate Star Codex has been blogging lately about Thomas Kuhn and the idea of paradigm shifts in science. This is a topic near and dear to my heart, so I wanted to take the opportunity to share some of my thoughts and answer some questions that Scott asked in his posts.

The Big Idea

I’m going to start with my own rough summary of what I take from Kuhn’s work. But since this is all in response to Scott’s book review of The Structure of Scientific Revolutions, you may want to read his post first.

The main idea I draw from Kuhn’s work is that science and knowledge aren’t only, or even primarily, a collection of facts. Observing the world and incorporating evidence is important to learning about the world, but evidence can’t really be interpreted or used without a prior framework or model through which to interpret it. For example, check out this Twitter thread: researchers were able to draw thousands of different and often mutually contradictory conclusions from a single data set by varying the theoretical assumptions they used to analyze it.

Kuhn also provided a response to Popperian falsificationism. No theory can ever truly be falsified by observation, because you can force almost any observation to match most theories with enough special cases and extra rules added in. And it’s often quite difficult to tell whether a given extra rule is an important development in scientific knowledge, or merely motivated reasoning to protect a familiar theory. After all, if you claim that objects with different weights fall at the same speed, you then have to explain why that doesn’t apply to bowling balls and feathers.

This is often described as the theory-ladenness of observation. Even when we think we’re directly perceiving things, those perceptions are always mediated by our theories of how the world works and can’t be fully separated from them. This is most obvious when engaging in a complicated indirect experiment: there’s a lot of work going on between “I’m hearing a clicking sound from this thing I’m holding in my hand” and “a bunch of atoms just ejected alpha particles from their nuclei”.

But even in more straightforward scenarios, any inference comes with a lot of theory behind it. I drop two things that weigh different amounts, and see that the heavier one falls faster—proof that Galileo was wrong!

Or even more mundanely: I look through my window when I wake up, see a puddle, and conclude that it rained overnight. Of course I’m relying on the assumption that when I look through my window I actually see what’s on the other side of it, and not, say, a clever science-fiction style holoscreen. But more importantly, my conclusion that it rained depends on a lot of assumptions I normally wouldn’t explicitly mention—that rain would leave a puddle, and that my patio would be dry if it hadn’t rained.

(In fact, I discovered several months after moving in that my air conditioner condensation tray overflows on hot days. So the presence of puddles doesn’t actually tell me that it rained overnight).

Even direct perception, what we can see right in front of us, is mediated by internal modeling our brains do to put our observations into some comprehensible context. This is why optical illusions work so well; they hijack the modeling assumptions of your perceptual system to make you “see” things that aren’t there.

An example of the Scintillating Grid illusion.

There are no black dots in this picture.
Who are you going to believe: me, or your own eyes?


What does this tell us about science?

Kuhn divides scientific practice into three categories. The first he calls pre-science, where there is no generally accepted model to interpret observations. Most of life falls into this category—which makes sense, because most of life isn’t “science”. Subjects like history and psychology with multiple competing “schools” of thought are pre-scientific, because while there are a number of useful and informative models that we can use to understand parts of the subject, no single model provides a coherent shared context for all of our evidence. There is no unifying consensus perspective that basically explains everything we know.

A model that does achieve such a coherent consensus is called a paradigm. A paradigm is a theory that explains all the known evidence in a reasonable and satisfactory way. When there is a consensus paradigm, Kuhn says that we have “normal science”. And in normal science, the idea that scientists are just collecting more facts actually makes sense. Everyone is using the same underlying theory, so no one needs to spend time arguing about it; the work of science is just to collect more data to interpret within that theory.

But sometimes during the course of normal science you find anomalies, evidence that your paradigm can’t readily explain. If you have one or two anomalies, the best response is to assume that they really are anomalies—there’s something weird going on there, but it isn’t a problem for the paradigm.

A great example of an unimportant anomaly is the OPERA experiment from a few years ago that measured neutrinos traveling faster than the speed of light. This meant one of two things: either special relativity, a key component of the modern physics paradigm, was wrong; or there was an error somewhere in a delicate measurement process. Pretty much everyone assumed that the measurement was flawed, and pretty much everyone was right.

In contrast, sometimes the anomalies aren’t so easy to resolve. Scientists find more and more anomalies, more results that the dominant paradigm can’t explain. It becomes clear the paradigm is flawed, and can’t provide a satisfying explanation for the evidence. At this point people start experimenting with other models, and with luck, eventually find something new and different that explains all the evidence, old and new, normal and anomalous. A new paradigm takes over, and normal science returns.

(Notice that the old paradigm was never falsified, since you can always add epicycles to make the new data fit. In fact, the proverbial “epicycles” were added to the Ptolemaic model of the solar system to make it fit astronomical observations. In the early days of the Copernican model, it actually fit the evidence worse than the Ptolemaic model did—but it didn’t require the convoluted epicycles that made the Ptolemaic model work. Sabine Hossenfelder describes this process as, not falsification, but “implausification”: “a continuously adapted theory becomes increasingly difficult and arcane—not to say ugly—and eventually practitioners lose interest.”)

Importantly, Kuhn argued that two different paradigms would be incommensurable, so different from each other that communication between them is effectively impossible. I think this is sometimes overblown, but also often underestimated. Imagine trying to explain a modern medical diagnosis to someone who believes in four humors theory. Or remember how difficult it is to have conversations with someone whose politics are very different from your own; the background assumptions about how the world works are sometimes so different that it’s hard to agree even on basic facts.1

Scott’s example questions

Now I can turn to the very good questions Scott asks in section II of his book review.

For example, consider three scientific papers I’ve looked at on this blog recently….What paradigm is each of these working from?

As a preliminary note, if we’re maintaining the Kuhnian distinction between a paradigm on the one hand and a model or a school of thought on the other, it is plausible that none of these are working in true paradigms. One major difficulty in many fields, especially the social sciences, is that there isn’t a paradigm that unifies all our disparate strands of knowledge. But asking what possibly-incommensurable model or theory these papers are working from is still a useful and informative exercise.

I’m going to discuss the first study Scott mentions in a fair amount of depth, because it turned out I had a lot to say about it. I’ll follow that up by making briefer comments on his other two examples.

Cipriani, Ioannidis, et al.

– Cipriani, Ioannidis, et al perform a meta-analysis of antidepressant effect sizes and find that although almost all of them seem to work, amitriptyline works best.

This is actually a great example of some of the ways paradigms and models shape science. The study is a meta-analysis of various antidepressants to assess their effectiveness. So what’s the underlying model here?

Probably the best answer is: “depression is a real thing that can be caused or alleviated by chemicals”. Think about how completely incoherent this entire study would seem to a Szasian who thinks that mental illnesses are just choices made by people with weird preferences, to a medieval farmer who thinks mental illnesses are caused by demonic possession, or to a natural-health advocate who thinks that “chemicals” are bad for you. The medical model of mental illness is powerful and influential enough that we often don’t even notice we’re relying on it, or that there are alternatives. But it’s not the only model that we could use.2


While this is the best answer to Scott’s question, it’s not the only one. When Scott originally wrote about this study he compared it to one he had done himself, which got very different results. Since they’re (mostly) studying the same drugs, in the same world, they “should” get similar results. But they don’t. Why not?

I’m not in any position to actually answer that question, since I don’t know much about psychiatric medications. But I can point out one very plausible reason: the studies made different modeling assumptions. And Scott highlights some of these assumptions himself in his analysis. For instance, he looks at the way Cipriani et al. control for possible bias in studies:

I’m actually a little concerned about the exact way he did this. If a pharma company sponsored a trial, he called the pharma company’s drug’s results biased, and the comparison drugs unbiased….

But surely if Lundbeck wants to make Celexa look good [relative to clomipramine], they can either finagle the Celexa numbers upward, finagle the clomipramine numbers downward, or both. If you flag Celexa as high risk of being finagled upwards, but don’t flag clomipramine as at risk of being finagled downwards, I worry you’re likely to understate clomipramine’s case.

I make a big deal of this because about a dozen of the twenty clomipramine studies included in the analysis were very obviously pharma companies using clomipramine as the comparison for their own drug that they wanted to make look good; I suspect some of the non-obvious ones were too. If all of these are marked as “no risk of bias against clomipramine”, we’re going to have clomipramine come out looking pretty bad.

Cipriani et al. had a model for which studies were producing reliable data, and fed it into their meta-analysis. Notice they aren’t denying or ignoring the numbers that were reported, but they are interpreting them differently based on background assumptions they have about the way studies work. And Scott is disagreeing with those assumptions and suggesting a different set of assumptions instead.

(For bonus points, look at why Scott flags this specific case. Cipriani et al. rated clomipramine badly, but Scott’s experience is that clomipramine is quite good. This is one of Kuhn’s paradigm-violating anomalies: the model says you should expect one result, but you observe another. Sometimes this causes you to question the observation; sometimes a drug that “everyone knows” is great actually doesn’t do very much. But sometimes it causes you to question the model instead.)

Scott’s model here isn’t really incommensurable with Cipriani et al.’s in a deep sense. But the difference in models does make numbers incommensurable. An odds ratio of 1.5 means something very different if your model expects it to be biased downwards than it does if you expect it to be neutral—or biased upwards. You can’t escape this sort of assumption just by “looking at the numbers”.

And this is true even though Scott and Cipriani et al. are largely working with the same sorts of models. They both believe in the medical model of mental illness. Their paradigm does include the idea that randomized controlled trials work, as Scott suggests in his piece. A bit more subtly, their shared paradigm also includes whatever instruments they use to measure antidepressant effectiveness. Since Cipriani et al. is actually a meta-analysis, they don’t address this directly. But each study they include is probably using some sort of questionnaire to assess how depressed people are. The numbers they get are only coherent or meaningful at all if you think that questionnaire is measuring something you care about.


There’s one more paradigm choice here that I want to draw attention to, because it’s important, and because I know Scott is interested in it, and because we may be in the middle of a moderate paradigm shift right now.

Studies like this one tend to assume that a given drug will work about the same for everyone. And then people find that no antidepressant works consistently for everyone, and they all have small effect sizes, and conclude that maybe antidepressants aren’t very useful. But that’s hard to square with the fact that people regularly report massive benefits from going on antidepressants. We found an anomaly!

A number of researchers, including Scott himself, have suggested that any given person will respond well to some antidepressants and poorly to others. So when a study says that bupropion (or whatever) has a small effect on average, maybe that doesn’t mean bupropion isn’t helping anyone. Maybe instead it’s helping some people quite a lot, and it’s completely useless for other people, and so on average its effect is small but positive.

But this is a completely different way of thinking clinically and scientifically about these drugs. And it potentially undermines the entire idea behind meta-analyses like Cipriani et al. If our data is useless because we’re doing too much averaging, then averaging all our averages together isn’t really going to help. Maybe we should be doing something entirely different. We just need to figure out what.

Ceballos, Ehrlich et al.

– Ceballos, Ehrlich, et al calculate whether more species have become extinct recently than would be expected based on historical background rates; after finding almost 500 extinctions since 1900, they conclude they definitely have.

I actually think Scott mostly answers his own questions here.

As for the extinction paper, surely it can be attributed to some chain of thought starting with Cuvier’s catastrophism, passing through Lyell, and continuing on to the current day, based on the idea that the world has changed dramatically over its history and new species can arise and old ones disappear. But is that “the” paradigm of biology, or ecology, or whatever field Ceballos and Lyell are working in? Doesn’t it also depend on the idea of species, a different paradigm starting with Linnaeus and developed by zoologists over the ensuing centuries? It looks like it dips into a bunch of different paradigms, but is not wholly within any.

The paper is using a model where

(You can in fact see a lot of their model/paradigm come through pretty clearly in the “Discussion” section of the paper— which is good writing practice.)

Scott seems concerned that it might dip into a whole bunch of paradigms, but I don’t think that’s really a problem. Any true unifying paradigm will include more than one big idea; on the other hand, if there isn’t a true paradigm, you’d expect research to sometimes dip into multiple models or schools of thought. My impression is that biology is closer to having a real paradigm than not, but I can’t say for sure.

Terrell et al.

– Terrell et al examine contributions to open source projects and find that men are more likely to be accepted than women when adjusted for some measure of competence they believe is appropriate, suggesting a gender bias.

Social science tends to be less paradigm-y than the physical sciences, and this sort of politically-charged sociological question is probably the least paradigm-y of all, in that there’s no well-developed overarching framework that can be used to explain and understand data. If you can look at a study and know that people will immediately start arguing about what it “really means”, there’s probably no paradigm.

There is, however, a model underlying any study like this, as there is for any sort of research. Here I’d summarize it something like:

Basically, any time you get to do some comparisons and not others, or report some numbers and not others, you have to fall back on a model or paradigm to tell you which comparisons are actually important. Without some guiding model, you’d just have to report every number you measured in a giant table.

Now, sometimes people actually do this. They measure a whole bunch of data, and then they try to correlate everything with everything else, and see what pops up. This is not usually good research practice.

If you had exactly this same paper except, instead of “men and women” it covered “blondes and brunettes”, you’d probably be able to communicate the content of the paper to other people; but they’d probably look at you kind of funny, because why would that possibly matter?

Anomalies and Bayes

Possibly the most interesting thing Scott has posted is his Grand Unified Chart relating Kuhnian theories to related ideas in other disciplines. The chart takes the Kuhnian ideas of “paradigm”, “data”, and “anomaly” and identifies equivalents from other fields. (I’ve flipped the order of the second and third columns here). In political discourse Scott relates them to “ideology”, “facts”, and “cognitive dissonance”; in psychology he relates them to “prediction”, “sense data”, and “surprisal”.

In the original version of the chart, several entries in the “anomalies” column were left blank. He has since filled some of them in, and removed a couple of other rows. I think his answer for the “Bayesian probability” row is wrong; but I think it’s interestingly wrong, in a way that effectively illuminates some of the philosophical and practical issues with Bayesian reasoning.

A quick informal refresher: in Bayesian inference, we start with some prior probability that describes what we originally believe the world is like, by specifying the probabilities of various things happening. Then we make observations of the world, and update our beliefs, giving our conclusion as a posterior probability.

The rule we use to update our beliefs is called Bayes’s Theorem (hence the name “Bayesian inference”). Specifically, we use the mathematical formula \[ P(H |E) = \frac{ P(E|H) P(H)}{P(E)}, \] where $P$ is the probability function, $H$ is some hypothesis, and $E$ is our new evidence.
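To make the machinery concrete, here is a minimal sketch of a Bayesian update over a tiny discrete hypothesis space; the hypotheses and all the numbers are made up purely for illustration.

```python
# A minimal discrete Bayesian update. Three hypotheses about a coin,
# updated on the evidence "the coin came up heads".
priors = {"fair": 0.50, "biased heads": 0.25, "biased tails": 0.25}

# P(E|H): the probability each hypothesis assigns to seeing heads.
likelihoods = {"fair": 0.5, "biased heads": 0.9, "biased tails": 0.1}

# P(E) = sum over hypotheses of P(E|H) * P(H).
p_evidence = sum(likelihoods[h] * priors[h] for h in priors)

# Bayes's Theorem: P(H|E) = P(E|H) * P(H) / P(E).
posteriors = {h: likelihoods[h] * priors[h] / p_evidence for h in priors}

print(posteriors)
# {'fair': 0.5, 'biased heads': 0.45, 'biased tails': 0.05}
```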

I have often drawn the same comparison Scott draws between a Kuhnian paradigm and a Bayesian prior. (They’re not exactly the same, and I’ll come back to this in a bit). And certainly Kuhnian “data” and Bayesian “evidence” correspond pretty well. But the Bayesian equivalent of the Kuhnian anomaly isn’t really the KL-divergence that Scott suggests.

KL-divergence is a mathematical way to measure how far apart two probability distributions are. So it’s an appropriate way to look at two priors and tell how different they are. But you never directly observe a probability distribution—just a collection of data points—so KL-divergence doesn’t tell you how surprising your data is. (Your prior does that on its own).

But “surprising evidence” isn’t the same thing as an anomaly. If you make a new observation that was likely under your prior, you get an updated posterior probability and everything is fine. And if you make a new observation that was unlikely under your prior, you get an updated posterior probability and everything is fine. As long as the true3 hypothesis is in your prior at all, you’ll converge to it with enough evidence; that’s one of the great strengths of Bayesian inference. So even a very surprising observation doesn’t force you to rethink your model.

In contrast, if you make a new observation that was impossible under your prior, you hit a literal divide-by-zero error. If your prior says that $E$ can’t happen, then you can’t actually carry out the Bayesian update calculation, because Bayes’s rule tells you to divide by $P(E)$—which is zero. And this is the Bayesian equivalent of a Kuhnian anomaly.
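In code, the same toy setup breaks in exactly this way: if every hypothesis in the prior assigns zero probability to the evidence you actually observed, the normalizing constant $P(E)$ is zero and the update is undefined. (Again, the numbers are purely illustrative.)

```python
# Same kind of update as before, but now the prior only contains hypotheses
# under which the observed evidence is impossible, so P(E) = 0.
priors = {"H1": 0.7, "H2": 0.3}
likelihoods = {"H1": 0.0, "H2": 0.0}   # P(E|H) = 0 for every hypothesis

p_evidence = sum(likelihoods[h] * priors[h] for h in priors)   # 0.0

# Raises ZeroDivisionError: the Bayesian version of a Kuhnian anomaly.
posteriors = {h: likelihoods[h] * priors[h] / p_evidence for h in priors}
```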

We can imagine a robot in an Asimov short story encountering this situation, trying to divide by zero, and crashing fatally. But people aren’t quite so easy to crash, and an intelligently designed AI wouldn’t be either. We can do something that a simple Bayesian inference algorithm doesn’t allow: we can invent a new prior and start over from the beginning. We can shift paradigms.


A theoretically perfect Bayesian inference algorithm would start with a universal prior—a prior that gives positive probability to every conceivable hypothesis and every describable piece of evidence. No observation would ever be impossible under the universal prior, so no update would require division by zero.

But it’s easier to talk about such a prior than it is to actually come up with one. The usual example I hear is the Solomonoff prior, but it is known to be uncomputable. I would guess that any useful universal prior would be similarly uncomputable. But even if I’m wrong and a theoretically computable universal prior exists, there’s definitely no way we could actually carry out the infinitely many computations it would require.

Any practical use of Bayesian inference, or really any sort of analysis, has to restrict itself to considering only a few classes of hypotheses. And that means that sometimes, the “true” hypothesis won’t be in your prior. Your prior gives it a zero probability. And that means that as you run more experiments and collect more evidence, your results will look weirder and weirder. Eventually you might get one of those zero-probability results, those anomalies. And then you have to start over.

A lot of the work of science—the “normal” work—is accumulating more evidence and feeding it to the (metaphorical) Bayesian machine. But the most difficult and creative part is coming up with better hypotheses to include in the prior. Once the “true” hypothesis is in your prior, collecting more evidence will drive its probability up. But you need to add the hypothesis to your prior first. And that’s what a paradigm shift looks like.


It’s important to remember that this is an analogy; a paradigm isn’t exactly the same thing as a prior. Just as “surprising evidence” isn’t an anomaly, two priors with slightly different probabilities put on some hypotheses aren’t operating in different paradigms.

Instead, a paradigm comes before your prior. Your paradigm tells you what counts as a hypothesis, what you should include in your prior and what you should leave out. You can have two different priors in the same paradigm; you can’t have the same prior in two different paradigms. Which is kind of what it means to say that different paradigms are incommensurable.

This is probably the biggest weakness of Bayesian inference, in practice. Bayes gives you a systematic way of evaluating the hypotheses you have based on the evidence you see. But it doesn’t help you figure out what sort of hypotheses you should be considering in the first place; you need some theoretical foundation to do that.

You need a paradigm.


Have questions about philosophy of science? Questions about Bayesian inference? Want to tell me I got Kuhn completely wrong? Tweet me @ProfJayDaigle or leave a comment below, and let me know!

  1. If you’re interested in the political angle on this more than the scientific, check out the talk I gave at TedxOccidentalCollege last year

  2. In fact, this was my third or fourth answer in the first draft of this section. Then I looked at it again and realized it was by far the best answer. That’s how paradigms work: as long as everything is working normally, you don’t even have to think about the fact that they’re there. 

  3. "True" isn’t really the most accurate word to use here, but it works well enough and I want to avoid another thousand-word digression on the subject of metaphysics. 



Numerical Semigroups and Delta Sets

January 05, 2019

In this post I want to outline my main research project, which involves non-unique factorization in numerical semigroups. I’m going to define semigroups and numerical semigroups; explain what non-unique factorization means; define the invariant I study, called the delta set; and talk about some of the specific questions I’m interested in.

Semigroups

A semigroup is a set $S$ with one associative operation. This really just means we have a set of things, and some way of combining any two of them to get another. Semigroups generalize the more common idea of a group, which has an identity and inverses in addition to the associative operation. Every group is also a semigroup, but not every semigroup is a group.1

The simplest example of a semigroup is the natural numbers $\mathbb{N}$, with the operation of addition: we can add any two natural numbers together, but without negative numbers we don’t have any way to subtract, which would be an inverse. This is the free semigroup on one generator, which means we can get every element by starting with $1$ and adding it to itself some number of times.

Other examples of semigroups are:

Numerical semigroups, which are the main object I study, are formally defined as sub-semigroups of the natural numbers, but that phrase doesn’t actually explain a lot if you’re not already familiar with the field. However, I can explain what they actually are much less technically and more simply.

Numerical Semigroups

We can define the numerical semigroup generated by $a_1, \dots, a_k$ to be the set of integers \[ \langle a_1, \dots, a_k \rangle = \{ n_1 a_1 + \dots + n_k a_k : n_i \in \mathbb{Z}_{\geq 0} \}. \] In other words, our semigroup is the set of all the numbers you can get by adding up the generators some number of times, but without allowing subtraction.

I like to think about the Chicken McNugget semigroup to explain this. When I was a kid, at McDonald’s you could get a 4-piece, 6-piece, or 9-piece order of Chicken McNuggets.2 And then we can ask: which numbers of nuggets is it possible to order?

You certainly can’t order one, two, or three nuggets. You can order four, but not five. You can order six, but not seven. You can get eight by ordering two 4-pieces, nine by ordering one 9-piece, and ten by ordering a 4-piece and a 6-piece. There’s no way to order exactly eleven nuggets, but it turns out you can order exactly any number of nuggets larger than that. (This makes eleven the Frobenius number for this semigroup). We can summarize all this in the table below:

\[ \begin{array}{cc} 1 & \text{not possible} \\\
2 & \text{not possible} \\\
3 & \text{not possible} \\\
4 & = 1 \cdot 4 + 0 \cdot 6 + 0 \cdot 9 \\\
5 & \text{not possible} \\\
6 & = 0 \cdot 4 + 1 \cdot 6 + 0 \cdot 9 \\\
7 & \text{not possible} \\\
8 & = 2 \cdot 4 + 0 \cdot 6 + 0 \cdot 9 \\\
9 & = 0 \cdot 4 + 0 \cdot 6 + 1 \cdot 9 \\\
10 & = 1 \cdot 4 + 1 \cdot 6 + 0 \cdot 9 \\\
11 & \text{not possible} \\\
12 & = 3 \cdot 4 + 0 \cdot 6 + 0 \cdot 9 \\\
& = 0 \cdot 4 + 2 \cdot 6 + 0 \cdot 9 \\\
13 & = 1 \cdot 4 + 0 \cdot 6 + 1 \cdot 9 \end{array} \]
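If you want to double-check this table, or the claim that every number past eleven is orderable, here is a minimal brute-force sketch; the function name `representable` is just my own label, not standard terminology.

```python
def representable(n, gens=(4, 6, 9)):
    """Is n in the numerical semigroup generated by gens?"""
    if n == 0:
        return True
    return any(n >= g and representable(n - g, gens) for g in gens)

# The numbers you can't order exactly; the largest one is the Frobenius number.
print([n for n in range(1, 30) if not representable(n)])
# [1, 2, 3, 5, 7, 11]
```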

Looking at this table you might notice something else: there are two rows for the number 12, because we can order 12 nuggets in two different ways: we can order three 4-piece orders, or two 6-piece orders. We call each of these ways of ordering twelve nuggets a factorization of 12 with respect to the generators $4,6,9$. And not only do we have two different factorizations of 12; they actually have different numbers of factors!

If we look at larger numbers, the variety in factorizations becomes far greater. Consider this table of ways to factor 36: \[ \begin{array}{cc} \text{factorization} & \text{length} \\\
9 \cdot 4 + 0 \cdot 6 + 0 \cdot 9 & 9 \\\
6 \cdot 4 + 2 \cdot 6 + 0 \cdot 9 & 8 \\\
3 \cdot 4 + 4 \cdot 6 + 0 \cdot 9 & 7 \\\
3 \cdot 4 + 1 \cdot 6 + 2 \cdot 9 & 6 \\\
0 \cdot 4 + 6 \cdot 6 + 0 \cdot 9 & 6 \\\
0 \cdot 4 + 3 \cdot 6 + 2 \cdot 9 & 5 \\\
0 \cdot 4 + 0 \cdot 6 + 4 \cdot 9 & 4 \end{array} \] We have seven distinct ways we can factor 36. The shortest has four factors and the longest has nine; every length in between is represented.

From here we can ask a number of questions. How many ways can we order a given number of chicken nuggets? How many different lengths can these factorizations have? What patterns can we find?

All this is very different from what we’re used to. When we factor integers into prime numbers, the Fundamental Theorem of Arithmetic tells us that there is a unique way to do this. We generally learn this in grade school, and so from a very young age we’re used to having only one way to factor things. But this unique factorization property isn’t universal, and it doesn’t apply here.

Numerical semigroups essentially never have unique factorization. But we want to find ways to measure how not-unique their factorization is.

The Delta Set

In my research I study something called the delta set of a semigroup. The delta set is a way of measuring how complicated the relationships among different factorizations can get.

For an element $x$ in a semigroup, we can look at all the factorizations of $x$, and then we can look at all the possible lengths of these factorizations. (In our example above, we had $\mathbf{L}(36) = \{4,5,6,7,8,9\}$; we don’t repeat the $6$ because we only care about which lengths are possible, and not how many times they occur). Then we can ask a bunch of questions about these sets of lengths.

A simple thing to compute is the elasticity of an element, which is just the ratio of the longest factorization to the shortest, and tells you how much the lengths can vary. (The elasticity of $36$ is $9/4$). A good exercise is to convince yourself that the largest elasticity of any element in a semigroup is the ratio of the largest generator to the smallest generator. (And thus that $36$ has the maximum possible elasticity for $\langle 4, 6, 9 \rangle$).

The delta set is a bit more complicated. The delta set of $x$ is the set of successive differences in lengths. So instead of looking at the shortest and longest factorizations, we look at all of them, and see what sort of gaps show up. (For our example, the delta set is just $\Delta(36) = \{1\}$, since there’s a factorization of each length between $4$ and $9$. If the set of lengths were $\{3,5,8,15\}$ then the delta set would be $\{2,3,7\}$).

We want to understand the whole semigroup, not just individual elements. So we often want to talk about the delta set of an entire semigroup, which is just the union of the delta sets of all the elements. So $\Delta(S)$ tells us what kind of gaps can appear in any set of lengths for any element of the semigroup. It turns out that for the Chicken McNugget semigroup $S = \langle 4,6,9 \rangle$, the delta set is just $\Delta(S) = \{1\}$. This means that the delta set of any element is just $\{1\}$, and thus that every set of lengths is a set of consecutive integers $\{n,n+1, \dots, n+k \}$.
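Here is a rough sketch of these definitions in code, computing factorizations by brute force; it is fine for small examples like ours, and the helper names are my own.

```python
from itertools import product

def factorizations(n, gens):
    """All tuples of non-negative coefficients writing n in terms of gens."""
    ranges = [range(n // g + 1) for g in gens]
    return [c for c in product(*ranges)
            if sum(ci * g for ci, g in zip(c, gens)) == n]

def length_set(n, gens):
    """The set of factorization lengths of n."""
    return sorted({sum(c) for c in factorizations(n, gens)})

def delta_set(n, gens):
    """The set of successive gaps between factorization lengths of n."""
    lengths = length_set(n, gens)
    return sorted({b - a for a, b in zip(lengths, lengths[1:])})

gens = [4, 6, 9]
lengths = length_set(36, gens)
print(lengths)                      # [4, 5, 6, 7, 8, 9]
print(max(lengths) / min(lengths))  # 2.25 -- the elasticity 9/4
print(delta_set(36, gens))          # [1]
```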

What Do We Know?

Delta sets can be a little tricky to compute. It’s fairly easy to show a number is in the delta set of a semigroup: find an element, calculate all the factorization lengths, and see that you have a gap of the desired size. But to show that a number is not in the delta set of the semigroup, you have to show that it isn’t in the delta set of any element, which is much trickier.

However, there are a few things we do know.

But this is nearly everything that we really know about delta sets. There are a lot of open questions left, which primarily fall into two categories:

  1. For some nice category of semigroup, compute the delta set. We’ve already seen this question answered for semigroups generated by arithmetic sequences; we also have complete or partial answers for semigroups generated by generalized arithmetic sequences, geometric sequences, and compound sequences.

  2. The realization problem: given a set of natural numbers, is it the delta set of some numerical semigroup? We don’t actually know a lot about this. About the only thing that we know can’t happen is a minimum element that isn’t the GCD of the set. But to show that something can happen, about all we can do is find a specific semigroup that has that delta set. There’s a lot of room to explore here.

Non-Minimal Generating Sets

In my research I introduce one more complication. Earlier we talked about the Chicken McNugget semigroup, of all the ways we can build orders out of 4, 6, or 9 chicken nuggets. But McDonald’s also offers a 20-piece order of chicken nuggets.4

From a purely algebraic perspective, this doesn’t change anything. Anything we can get with 20 piece orders, we can get with a combination of 4 and 6 pieces, so we have the same set and the same operation, and thus the same semigroup. (We say that 20 isn’t “irreducible” because we can factor it into other simpler elements). So in this sense, nothing should change.

But the set of factorizations does change. If we replicate our earlier table of factorizations of 36 but now allow $20$ as a factor, we get \[ \begin{array}{cc} \text{factorization} & \text{length} \\\
9 \cdot 4 + 0 \cdot 6 + 0 \cdot 9 + 0 \cdot 20 & 9 \\\
6 \cdot 4 + 2 \cdot 6 + 0 \cdot 9 + 0 \cdot 20 & 8 \\\
3 \cdot 4 + 4 \cdot 6 + 0 \cdot 9 + 0 \cdot 20 & 7 \\\
3 \cdot 4 + 1 \cdot 6 + 2 \cdot 9 + 0 \cdot 20 & 6 \\\
0 \cdot 4 + 6 \cdot 6 + 0 \cdot 9 + 0 \cdot 20 & 6 \\\
0 \cdot 4 + 3 \cdot 6 + 2 \cdot 9 + 0 \cdot 20 & 5 \\\
\color{blue}{4 \cdot 4 + 0 \cdot 6 + 0 \cdot 9 + 1 \cdot 20} & \color{blue}{5} \\\
0 \cdot 4 + 0 \cdot 6 + 4 \cdot 9 + 0 \cdot 20 & 4 \\\
\color{blue}{1 \cdot 4 + 2 \cdot 6 + 0 \cdot 9 + 1 \cdot 20 } & \color{blue}{4} \end{array} \] The extra generator gives us the two additional factorizations in blue.

Now every question we asked about factorizations in numerical semigroups, we can ask again for factorizations with respect to our non-minimal generating set. For instance, we can ask for the delta set with respect to our generating set. For 36 above, we see that the delta set is still $\{1\}$, just as it was before; nothing has changed.

But let’s look instead at the element 20. With our old generating set of $4,6,9$, we can only get 20 nuggets in two ways. But with our non-minimal generating set, we have three different ways to order 20 nuggets: $20 = 5 \cdot 4 = 2 \cdot 4 + 2 \cdot 6 = 1 \cdot 20$. These three “factorizations” have lengths 5, 4, and 1, and a little experimentation will convince you that they’re the only possible factorizations. Therefore our set of lengths is $\mathbf{L}(20) = \{1,4,5\}$ and the delta set is $\Delta(20) = \{1,3\}$.

This is a big change! With the original, minimal generating set, the delta set of the entire semigroup was $\{1\}$. There was no element with a length gap larger than 1. But by adding a new generator in, we can get an element whose delta set is $\{1,3\}$. And a little experimentation shows us that \[ 26 = 5 \cdot 4 + 1 \cdot 6 = 2 \cdot 4 + 3 \cdot 6 = 2 \cdot 4 + 2 \cdot 9 = 1 \cdot 6 + 1 \cdot 20 \] and thus $\mathbf{L}(26) = \{2,4,5,6\}$ and $\Delta(26) = \{1,2\}$. So the delta set for the entire semigroup is $\{1,2,3\}$.5 We’ve gotten a different delta set for the exact same semigroup, but using a different set of generators.
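Reusing the `length_set` and `delta_set` helpers from the sketch above, but with the non-minimal generating set, you can confirm these length sets and delta sets directly:

```python
# Reusing length_set and delta_set from the earlier sketch,
# now with the non-minimal generating set {4, 6, 9, 20}.
gens = [4, 6, 9, 20]
print(length_set(20, gens))   # [1, 4, 5]
print(delta_set(20, gens))    # [1, 3]
print(length_set(26, gens))   # [2, 4, 5, 6]
print(delta_set(26, gens))    # [1, 2]
```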

This raises a number of questions for us to study. We can start with our previous two questions: given a semigroup (and a non-minimal set of generators), what is the delta set? And given a set, is it the delta set of some semigroup and non-minimal generating set? But we also have a new question: what happens to the delta set of a semigroup as we continually add things to the generating set? Can we make the delta set bigger? Can we make it smaller? What ways of adding generators produce interesting patterns?

There’s a lot of fertile ground here. A few questions have been answered already, in a paper I cowrote with Scott Chapman, Rolf Hoyer, and Nathan Kaplan in 2010. For instance, it is always possible to force the delta set to be $\{1\}$ by adding more elements to the generating set. A couple other groups have done some work since then, but as far as I know, nothing else has been published.

But hopefully I’ve convinced you that there are quite a few interesting and unanswered questions in this field. Many of the answers should be accessible with a bit of work, and I hope to be able to provide some of them soon.


Have a question about numerical semigroups? Factorization theory? My research? Tweet me @ProfJayDaigle or leave a comment below.


  1. There is also something called a “monoid”, which has an identity element but no inverses; thus every group is a monoid and every monoid is a semigroup. The presence of an identity element doesn’t actually matter for any of the questions we’re asking, so researchers use the terms “semigroup” and “monoid” more or less interchangeably. 

  2. For some reason, they switched over to 4-, 6-, and 10-piece orders when I was a teenager. That semigroup is much less interesting, so I’m going to pretend that never happened. 

  3. This result was originally proven by Scott Chapman, Rolf Hoyer, and Nathan Kaplan in 2008, during an undergraduate REU research program I was also participating in. But the original result had an unfortunately large bound, so using this to compute delta sets wasn’t really practically feasible. In 2014, a paper by J. I. García-García, M. A. Moreno-Frías, and A. Vigneron-Tenorio improved the bound dramatically and made computation of delta sets feasible on personal computers. 

  4. My parents would never let me order this when I was a child, and I’m still bitter. 

  5. I haven’t actually shown that you can’t get a gap bigger than $3$. But it’s true. 



The difference between science and engineering

August 29, 2018

I wrote this essay a few years back elsewhere on the internet. It still seems relevant, so I’m posting this updated and lightly edited version.

I’ve noticed that people regularly get confused, on a number of subjects, by the difference between science and engineering.
In summary: science is sensitive and finds facts; engineering is robust and gives praxis. Many problems happen when we confuse science for engineering and completely modify our praxis based on the results of a couple of studies in an unsettled area.

Sad truth: Most "mad scientists" are actually just mad engineers

(Thanks to Cowbirds in Love for the perfect comic strip)

The difference between science and engineering

As a rough definition, science is a system of techniques for finding out facts about the world. Engineering, in contrast, is the technique of using science to produce tools we can consistently use in the world. Engineering produces things that have useful effects. (And I’ll also point to a third category, of “folk traditions,” which are tools we use in the world that are not particularly founded in science.)

These things are importantly different. Science depends on a large number of people putting together a lot of little pieces, and building up an edifice of facts that together give us a good picture of how things work. It’s fine if any one experiment or study is flawed, because in the limit of infinite experiments we figure out what’s going on. (See for example Scott Alexander’s essay Beware the Man of One Study for excellent commentary on this problem).

Similarly, it’s fine if any one experiment holds in only very restricted cases, or detects a subtle effect that can only be seen with delicate machinery. The point is to build up a large number of data points and use them to generate a model of the world.

Engineering, in contrast, has to be robust. If I want to detect the Higgs Boson once, to find out if it exists, I can do that in a giant machine that costs billions of dollars and requires hundreds of hours of analysis. If I want to build a Higgs Boson detector into a cell phone, that doesn’t work.

This means two things. First is that we need to understand things much better for engineering than for science. In science it’s fine to say “The true effect is between +3 and -7 with 95% probability”. If that’s what we know, then that’s what we know. And an experiment that shrinks the bell curve by half a unit is useful. For engineering, we generally need to have a much better idea of what the true effect is. (Imagine trying to build an airplane based on the knowledge that acceleration due to gravity is probably between 9 and 13 m/s^2).

Second is that science in general cares about much smaller effects than engineering does. It was a very long time before engineering needed relativistic corrections due to gravity, say. A fact can be true but not (yet) useful or relevant, and then it’s in the domain of science but not engineering.

Why does this matter?

The distinction is, I think, fairly clear when we talk about physics. In particular, we understand the science of physics quite well, at least on every-day scales. And our practice of the engineering of physics is also quite well-developed, enough so that people rarely use folk traditions in place of engineering any more. (“I don’t know why this bridge stays up, but this is how daddy built them.”)

But people get much more confused when we move over to, say, psychology, or sociology, or nutrition. Researchers are doing a lot of science on these subjects, and doing good work. So there’s a ton of papers out there saying that eggs are good, or eggs are bad, or eggs are good for you but only until next Monday, or whatever.

And people often have one of two reactions to this situation. The first is to read one study and say “See, here’s the scientific study. It says eggs are bad for you. Why are you still eating eggs? Are you denying the science?” And the second reaction is to say that obviously the scientists can’t agree, and so we don’t know anything and maybe the whole scientific approach is flawed.

But the real situation is that we’re struggling to develop a science of nutrition. And that’s hard. We’ve put in a lot of work, and we know some things. But we don’t really have enough information to do engineering—to say “Okay, to optimize cardiovascular health you need to cut your simple carbs by 7%, eat an extra 10g of monounsaturated fats every day, and eat 200g of protein every Wednesday”, or whatever. We just don’t know enough.

And this is where folk traditions come in. Folk traditions are attempts to answer questions that we need decent answers to, that have been developed over time, and that are presumably non-horrible because they haven’t failed obviously and spectacularly yet. A person who eats “like grandma did” is probably on average at least as healthy as a person who tried to follow every trendy bit of scientistic nutrition advice from the past thirty years.

Trendy teaching as confusing science for engineering

So where do I see this coming up other than nutrition? Well, the subject that really got me thinking about it was “scientific” teaching practices. I’ve attended a few workshops on “modern” teaching techniques like the use of clickers, and when I tell people about them I often get comments disparaging cargo cult teaching methods.

In general there’s a big split among university professors between people who want to teach in a more “traditional” way and people who want to teach in a more “scientific” way. With bad blood on both sides.

And my biggest problem with the “scientific” side is that some of their studies are so bad. I’d like good studies on teaching methods. I’d like a good engineering of teaching. But we don’t have one yet, and acting like “we have three studies, now we know the best thing to do” is just silly.

(Which shouldn’t be read as full-throated support for the “traditionalists”! The science is good enough to tell us some things about some things, and I do try to engage in judicious supplementation of folk teaching traditions with information from recent research. But the research is not in a good enough state to be dispositive, or produce an engineering discipline, or completely displace the folk tradition).

Other examples

A few of my friends have complained about the sad state of exercise science; but I think they’re really complaining about the lack of exercise engineering. We are doing basic research that tells us about how the body responds to exercise. We don’t know enough to give advice that improves much on “do the things people have been doing for a while that seem to work”.

A lot of “lifehacks” boil down to “We read a study, and based on this study, here are three simple things you can do to accomplish X.” But a study is science, not engineering. Sometimes helpful, but easy to overinterpret. Don’t take any one study too seriously, and if what you’re doing works, don’t totally overhaul it because you read a study.

Similarly, any comment about how you can be more effective socially by doing this one trick is usually science, not engineering.

Lots of economics and public policy debates sound like this. “This study shows that raising the minimum wage (increases/decreases/has no effect on) unemployment.” All three of those statements can be true! There are a lot of studies with a lot of different results. We’re starting to develop an engineering practice of economics policy, but it’s in its infancy at best.

Or see this essay’s account of scientifically studying the most effective way for police to respond to domestic violence charges, for a good example of confusing science and engineering. Bonus points for the following quote:

Reflection upon these results led Sherman to formulate his “defiance” theory of the criminal sanction, beginning with the inauspicious generalization that, according to the evidence, “legal punishment either reduces, increases, or has no effect on future crimes, depending on the type of offenders, offenses, social settings and levels of analysis.” This is a fancy way of saying “we don’t know what works.”

Marketing: engineering versus folk traditions

The field of marketing presents a good contrast between engineering and folk traditions. We have a mental image of a sleazy salesman, who has a whole host of interpersonal tactics that have been honed through centuries and millennia of sleazy salesmanship. And this works.

And there’s an entirely different field of marketing research and focus groups. And this shows what’s necessary to turn science into engineering. There’s a whole bunch of basic research about psychology that goes into designing marketing campaigns. But people also do focus groups, to gather a ton of data on how people respond to minute differences.

And, more importantly, they do A/B testing, which gives pretty good data on how actual people respond to actual differences. And by iterating a ton of A/B testing, you have a pretty good idea that people will buy 5% more if you use the green packaging, or whatever.



An easier approach to partial fractions decomposition

August 16, 2018

I always found partial fraction decomposition incredibly annoying and tedious. But it turns out there’s a much easier way to compute it. (I learned this a couple years ago from Chris Towse).

Suppose we want to find a partial fraction decomposition for $\frac{7x+2}{(x+2)^2 (x-1)}$. The normal method is to take your fraction and write it as a sum of real numbers over your polynomial denominators: \[ \frac{7x+2}{(x+2)^2 (x-1)} = \frac{A}{x+2} + \frac{B}{(x+2)^2} + \frac{C}{x-1}. \]

(For this reason, my high school calculus teacher called this the “ABC method”). Then we clear denominators: \[ 7x+2 = A(x+2)(x-1) + B(x-1) + C(x+2)^2, \]

and, comparing coefficients of each power of $x$, we get a system of linear equations: \[ \begin{aligned} A + C &= 0 \\ A + B + 4C &= 7 \\ -2A - B + 4C &= 2. \end{aligned} \]

We can solve this system by any of the usual methods, and we get $A = -1, B = 4, C = 1$, so \[ \frac{7x+2}{(x+2)^2 (x-1)} = \frac{-1}{x+2} + \frac{4}{(x+2)^2} + \frac{1}{x-1}. \]

And now we can integrate or do whatever else we needed to do with our fraction.


This process can get super tedious. In particular, solving the linear system at the end isn’t difficult but it is really annoying and easy to screw up if you do it by hand. (I used NumPy instead. Computer algebra systems are your friend).
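For what it’s worth, here is a minimal sketch of that step with NumPy, using the system of equations written out above:

```python
import numpy as np

# Coefficients of A, B, C from matching powers of x in
#   7x + 2 = A(x+2)(x-1) + B(x-1) + C(x+2)^2
M = np.array([[ 1,  0, 1],    # x^2:  A      +  C = 0
              [ 1,  1, 4],    # x^1:  A + B  + 4C = 7
              [-2, -1, 4]])   # x^0: -2A - B + 4C = 2
b = np.array([0, 7, 2])

A, B, C = np.linalg.solve(M, b)
print(A, B, C)   # -1.0 4.0 1.0
```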

It turns out there’s a much easier way to do this. It’s motivated by complex analysis residue integrals, but you can do it without actually knowing any complex analysis.

Let’s go back to our equation from earlier: \[ \frac{7x+2}{(x+2)^2 (x-1)} = \frac{A}{x+2} + \frac{B}{(x+2)^2} + \frac{C}{x-1}. \]

Instead of clearing all the denominators, let’s just clear one. If we multiply by $(x-1)$ on both sides, we get \[ \frac{7x+2}{(x+2)^2} = \frac{A(x-1)}{x+2} + \frac{B(x-1)}{(x+2)^2} + C. \]

This doesn’t look much nicer at first, but look at what happens if we evaluate at $x=1$. The left hand side becomes $1$. On the right-hand side, the $A$ and $B$ terms go away completely, and we’re just left with $C$. So we immediately see that $C = 1$.


We can find $A$ and $B$ the same way, with a bit more care. Multiplying our equation by $x+2$ doesn’t help, because we’ll still have a factor of $x+2$ in the denominator. But if we multiply by $(x+2)^2$ we get \[ \frac{7x+2}{x-1} = A(x+2) + B + \frac{C(x+2)^2}{x-1}, \]

and evaluating at $x = -2$ gives $4 = B$. To get $A$, we need to do a little bit of work and subtract off the $B$ term: \[ \frac{7x+2}{(x+2)^2 (x-1)} - \frac{4}{(x+2)^2} = \frac{7x+2-4(x-1)}{(x+2)^2 (x-1)} = \frac{3(x+2)}{(x+2)^2 (x-1)} = \frac{3}{(x+2)(x-1)} = \frac{A}{x+2} + \frac{C}{x-1}, \]

so multiplying by $(x+2)$ and evaluating at $x = -2$ gives $A = \frac{3}{-3} = -1$.


That might feel like it took longer, but that’s mostly because I actually worked through all the algebra with the new version. No NumPy here! I suspect the first way is more efficient if you’re doing a really big decomposition, because it parallelizes a bunch of stuff, and linear equation solvers are pretty efficient.
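And if you’d rather hand the whole thing to a computer algebra system, SymPy’s `apart` does the decomposition directly; a quick sketch, assuming SymPy is installed:

```python
from sympy import symbols, apart

x = symbols('x')
print(apart((7*x + 2) / ((x + 2)**2 * (x - 1))))
# Expect something like: 1/(x - 1) - 1/(x + 2) + 4/(x + 2)**2
```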

But for reasonable-sized problems I’d much rather do the second method, no question. And this makes me almost want to actually teach partial fraction decomposition next time I teach calc 2.



A Neat Argument For the Uniqueness of $e^x$

August 09, 2018

In my advanced Calculus 1 class I teach a quick unit on differential equations. We don’t have the tools to solve them since we haven’t done integrals, but I talk about what differential equations are and how you can check whether you have a solution.

And then I spend a day in lab discussing exponential growth, and how the differential equation $y’ = ry$ implies that $y = Ce^{rt}$ for some constants $C$ and $r$. I’ve been telling my students that while it’s easy to check that this is a solution, we don’t have the tools to prove it’s the only family of solutions.

But today, thanks to reddit, I discovered that that isn’t quite true. You can prove that $Ce^{rx}$ is the only solution to this differential equation with a simple argument.

Suppose $f(x)$ is a function that satisfies $y’ = r y$, that is, suppose $f’(x) = r f(x)$. Then consider the derivative of $f(x) e^{-rx}$. By the product rule, we have \[ \frac{d}{dx}\left( f(x) e^{-rx} \right) = f'(x) e^{-rx} - r f(x) e^{-rx} = r f(x) e^{-rx} - r f(x) e^{-rx} = 0. \]

Thus we see that $f(x)/e^{rx}$ must be a constant; and thus $f(x) = C e^{rx}$. So this family of solutions is unique.
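As a sanity check (not part of the argument), SymPy’s ODE solver returns the same one-parameter family; a quick sketch:

```python
from sympy import Eq, Function, dsolve, symbols

x, r = symbols('x r')
f = Function('f')
print(dsolve(Eq(f(x).diff(x), r * f(x)), f(x)))
# Expect something like: Eq(f(x), C1*exp(r*x))
```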
