Soft methodology of science

The course of science

(Image: the course of science; source unknown.)

See also: knowledge topology

In which I collect tips from esteemed and eminent minds about how to go about proactively discovering stuff. More meta-tips than detailed agendas of discovery.

Sidling up to the truth.

As a researcher more-than-usually motivated by big-picture ideas (How can we survive on planet earth without being consumed in battles over dwindling resources and environmental crises?) as much as by aesthetic ones, I am considered damaged goods. A fine scientist, consensus claims, is safely myopic, the job of discovery being detailed piecework.

On one hand, the various research initiatives that pay my way are tied to various real-world goals (“Predict financial crises!”, “Tell us the future of the climate!”). On the other, the researchers involved tell me that it is useless to try and solve these large issues wholesale, and that one must instead identify small retail questions on which one can hope to make progress. And yet they have just agreed to take a lot of money to solve big problems. In this school, then, the presumed logic is that one takes a large research grant to strike a light on lots of small problems that lie in the penumbra of the large issue, in the hope that one flares up to illuminate the shade. Or burns the lot to the ground. The example given by the Oxonian scholar who most recently expounded this to me was Paul David and the path dependence of the QWERTY keyboard: the deep issue of the contingency of the world, seen through the tiny window opened by substandard keyboard design.

Truth, in these formulations, is a cat: don’t look at it directly or it will perversely slope off to rub against someone else’s leg. The art is all in the sidling up, the feigned disinterest, the waiting for truth to come up and show you its belly.

I’m not sure I am persuaded by this. It’s the kind of science that would be expounded in an educational film directed by Alejandro Jodorowsky.

On the other hand, I’m not sure that I buy the grant-makers’ side of this story either, at least the story that grant-makers seem to expound in Australia, which is that they give out money to go out and find something out. There are productivity outcomes on the application form where you fill out the goals that your research will fulfill; this rules out much of the research done, by restricting you largely to marginally refining a known-good idea rather than trying something new. I romantically imagine that in much research, you would not know what you were discovering in advance.

The compromise is that we meet in the middle and swap platitudes. We will “improve our understanding of X”, we will “find strategies to better manage Y”. We certainly don’t mention that we might spend a while pondering keyboard layouts when the folks ask us to work out how to manage a complex non-linear economy.

Outsiders and revolutionary ideas

Is it just stirring the pot? How many physicists, to choose an example, can get published while ignoring everyone else’s advances?

How do you know that your left-field idea is a radically simple one that causes the entire field to advance? And how do you know that it is not the crazed rambling of someone who has missed the advances of the last several decades, an inmate wandering out of the walled disciplinary asylum in a dressing gown, railing against the Vietnam War?

OK, let’s just simulate earth then

Global climate simulations, EpiSIM and so on seem to embody ambitions to build massive, micro-founded models of the earth, or at least large subsets of it. Are these projects worthwhile?

Have we reached the limits of science?

I don’t mean Have we surpassed the limits of the scientific method and hereafter it’s all Jonathan Livingston Seagull and astral planes?

I mean, Have we reached the ability of the scientific community to work together on a consensus front of human knowledge?

Obviously science has always been riven by controversy. That, some might argue, is precisely the virtue of science. But, though I was not there at the 1927 Solvay conference, I feel that historical disputes in science about, say, the nature of the atom shared more agreement on methodology than the disputes I encounter in my work, riven as that conference was by methodological disputes of its own. I have no idea whether this assertion is correct, or even quantifiable.

My colleagues’ disputes often seem to be about the validity of whole fields of knowledge. Is any economics worth reading at all, or should I make up my own models? Should I bother knowing statistics before I attempt to fit new models to data? Is there any point trying to handle problems involving people with science at all, or should we cede them to Gerald Midgley and the Systemic Intervention methodology posse?

My feeling is that science is worth doing, and that it’s worth doing properly, i.e. doing systematic, statistically valid experiments with knowledge of and reference to a thoroughly researched opinion of what the state-of-the-art in the field is. However, there are three important arguments against that stance.

  1. The depths of the specialities these days are so vast that it is impossible to know which ones are relevant to your topic. How do you even find out which new statistical technique is relevant to your data set? If you wait until you are well grounded in other fields, do you risk never doing anything at all?
  2. Outsiders are apparently more likely to come up with radical insights.
  3. In the Realpolitik of academia, your success is a matter of your publication record, and one known-good strategy is to push out a paper which defiantly ignores the status quo and posits a simple, provocative model which can’t be unified with the rest of the field. This is closer to hustling than brilliance in my book, but in a world full of paper reviewers who are too busy hustling themselves to be across the literature, it will get into a journal with high probability, and with much less effort than an innovation more tightly engaged with a difficult literature.

That last point worries me: if the consensus in the scientific community is to abandon consensus, might the literature fission as much as the US media have? Might half of scientists spend their time worrying about the birth certificate of the presidential truth?

Given that the famed Solvay conference was riven by methodological disputes of its own, should I accept that this kind of shambolic mutual ignorance is simply how it all works, and stop panicking about the decay of human knowledge? Or is the problem of a larger degree than before? Is there now so much more of it, that knowledge stuff, that we can’t keep tabs on it, and consensus has been lost?