Scientific progress is self-similar (that is, fractal): each level of abstraction, from local problem solving to big-picture science, features progress of the “normal science” type, punctuated by occasional revolutions. The revolutions themselves have a fractal time scale, with small revolutions occurring fairly frequently (every few minutes for an exam-type problem, up to every few years or decades for a major scientific consensus).
At the largest level, human inquiry has perhaps moved from a magical to a scientific paradigm. Within science, the dominant paradigm has moved from Newtonian billiard balls, to quantum mechanics, to evolution and population genetics, to neural computation. Within, say, psychology, the paradigm has moved from behaviorism to cognitive psychology.
On smaller scales, too, we see paradigm shifts. For example, in working on a problem in applied statistics or social science, we typically start in a certain direction, then suddenly realize we were thinking about it wrong, then move forward, and so forth. In a consulting setting, this reevaluation can happen several times in a couple of hours. At a slightly longer time scale, we might reassess our approach to an applied problem after a few months, realizing there was some key feature we had been misunderstanding. This pattern of normal science punctuated by revolutions ties into a statistical workflow that cycles between model building, inference, and model checking.
Understanding the fractal nature of scientific revolutions (a framework that combines ideas from the philosophers of science Popper, Kuhn, and Lakatos) has been helpful, not just in the comforting reassurance that setbacks at all time scales are to be expected, but also in giving me a model for research progress that oscillates between modeling, deductive inference, and evaluation against data. Before gathering any data I always find it useful to clearly state my expectations, and any theories I have are always tentative and subject to change, which makes sense given that much of my work is on political opinions and voting, where patterns are always in flux.
Professor of statistics and political science at Columbia University.
He has received the Outstanding Statistical Application Award three times from the American Statistical Association, the award for best article published in the American Political Science Review, the Mitchell and DeGroot Prizes from the International Society for Bayesian Analysis, and the Council of Presidents of Statistical Societies Award.