on publication practice

Update / June 2014: this wonderful blog tells you far, far more about the issue than I do here.  Read it.

Update / Dec. 2013: Elsevier seems to be stepping up its efforts to discourage open-access publishing.  A paper-sharing platform has been forced by Elsevier to take down certain publications.  Read the full story at

Elsevier then contacted universities—read the story on U Calgary in Canada—demanding that papers be taken down from their websites.  This display of mental poverty can only be the beginning of the end.

“In 2012 I decided to step back as editor of a machine learning journal published by a large Dutch publishing house, because of their publication practice. I had been with the journal for over 15 years, serving various board roles. I had and have nothing against that particular journal itself, nor against its board; indeed, its content is generally good and it is one of the respected (indeed the oldest) journals on machine learning. But the behaviour of the publishing house, which I consider to be very counterproductive for research, made me take this step.”

Let me explain my reasons for not liking science publication practice (but joining anyway). Scientific results are typically published like this:

  1. published research is, in most cases, funded by public money. That means, in general, the people working on research (in universities or research centres) are usually funded through government institutions;
  2. in order to inform colleagues and—where possible—the general public about the work, and of course to have one’s name associated with exciting results, a researcher (or team) writes an article that is as complete as possible, and attempts to publish it in the most suitable and most visible forum.  In short, the researchers publish a paper;
  3. other researchers hopefully pick up on the (great!) work and use that in their own research, and refer to it in their papers.

So far, so good. But there is a catch.  Whenever my co-workers want to publish a paper with me, one of the first questions is: “Where shall we publish it?”  Or,

Which is the most suitable forum? Choosing a forum which counts has a few strings attached. As a researcher, you have to optimise two parameters: (1) the visibility of the paper (and therefore, of you); and (2) the credit that you receive.

It would be simple to publish on web-based forums: free to put it there, free for all to read. But this kind of publication includes no quality checks: anyone can publish anything, so finding good quality is difficult, and authors would not reach their readers.

Instead, journals have installed a peer-review system. Each submitted paper is sent out to two or three reviewers who are asked to read the paper, take it apart, comment on it, and send back their comments. The goal of the journal is to get good papers, which are often read (and therefore, often cited in other papers). The better the paper, the higher the number of cites, the better the journal.

The Institute for Scientific Information (ISI), now owned by Thomson Reuters, is a highly acclaimed organisation which indexes scientific publications. It tracks the most-cited papers, the most-cited researchers, and what not. What comes out of this is, for instance, the ISI IF, which we all call the “Impact Factor”: how much impact does a journal have?  Some ballpark figures: 5 cites to a paper is not really much; a hundred or more is quite good. There are “excellent” journals with impact factors of several tens; there are “bad” journals with impact factors below 0.1. Citation counts are also aggregated into researcher-level metrics such as the i10-index or h-index.
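The researcher-level metrics mentioned above have simple definitions; here is a minimal sketch of how they are computed from per-paper citation counts (the counts below are made up for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # this paper still supports a larger h
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

citations = [120, 45, 33, 12, 9, 5, 5, 2, 0]   # hypothetical researcher
print(h_index(citations))    # 5: five papers have >= 5 citations each
print(i10_index(citations))  # 4: four papers have >= 10 citations
```

Note that both metrics ignore where the papers appeared; they aggregate raw citation counts, which is exactly why they are compared across researchers so readily.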

Why do I need a high impact factor? Well, it is one of the few numbers by which researchers compare themselves, and it is important for finding a better (research) job, getting grants, etc. Look up a researcher, look at the impact factor, and you have something to go on.

Now, if you are a researcher and you want to publish, where would you do it? Right: in the journal with the highest impact factor that accepts your paper.

Again, so far, so good.

Now there is one slight snag. Journals—with a few exceptions—belong to commercial publishing houses, which are there to make money. This financial obligation usually results in putting a price on reading a paper, much like selling books. Go to any journal, e.g., Nature. You will find many articles, each with an example picture and an abstract—and a “purchase PDF” link for getting the paper itself, at a price of around €30.

Research institutions normally have subscriptions to a large number of journals. A typical price for a journal is several thousand (up to tens of thousands of) € per year; so, a library can spend millions per year on journal subscriptions (one accurate figure: 19 UK universities spent £14.4 million, excluding VAT, on subscriptions to journals published by Elsevier alone).

Unlike for books, journal authors receive no money for publishing a paper. Indeed, in some cases both the authors and the readers have to pay.

Now, it is understandable that the costs related to publishing must be paid. Even though there is very little printing to be done, there are still advertising, manuscript handling, computing and communication costs, etc. I cannot begin to put a price tag on this, and how high it is is irrelevant here.

But is it right that the results of publicly funded research are only available at a price? Maybe the general public is not directly interested in most research papers, but what about less wealthy universities? What about researchers in developing countries? They have no access to such paywall-protected publications.

What are the alternatives? There is a recent surge of open-access journals. These journals reverse the publication charges: the publishing author (i.e., their institution) pays a fee, typically between €1500 and €4000, to publish a paper; the paper can then be read free of charge. Well-known examples are PLOS and Frontiers. The catch: independent researchers have no chance to publish there, nor do many researchers from developing countries.

Can’t we get rid of paid journals?

how the review process plays a role

Most reviews that I am involved in are single-blind: I know who the authors of the paper are, but they do not know who the reviewers are. That allows me to speak freely and bluntly to the authors; but on the downside, I may be biased by knowing the authors, since very often I know them personally, or have a preconceived opinion about their work.

An alternative approach, which is sometimes used, is double-blind review: neither the reviewer nor the author sees the name of the other. That may seem to solve the problem—but only seemingly, since even then one can often guess, based on previous publications, who the authors are.

A way out would be to use a double-open approach, in which reviewers are acknowledged and, better still, reviews or commentaries are published. The workload for the reviewer is higher, of course—but would that not lead to better reviews?

An improvement on the double-open approach would be a grading system, in which everyone—with names attached, of course—can comment on or grade papers online. This would lead not only to majority voting, but also to accreditation of the accreditors. Of course, anonymous votes must be prevented.

Wikipedia lists such ventures, of which Behavioral and Brain Sciences, a journal initiated by (the legendary open-access promoter) Stevan Harnad, is a brilliant example. The journal “Papers in Physics” uses a comparable approach, I think, but I don’t know that journal (nor the field), so I cannot comment on it.  And arXiv shows that a free publication platform can be universally accepted in computer science, maths, and physics.

My suggestion would therefore be that a “journal” consist of an open-access web platform where logins are given to accredited users (that step requires some thinking). Each user can post an article, or a commentary on an article, and a rating system leads to something like an impact factor for each paper. Can a person build a reputation on reviewing only? I think not; the effort of writing a scientific article is much higher, and its impact (= influence on others) much more profound, and should be honoured accordingly. Yet only publishing, without engaging in discussions, is not right either.

Advantages: fair reviews, but also a better understanding of the papers, since commentaries by peers are included. And much more impact for papers, since papers which are controversial or important will receive high grades. High grades are better earned this way than by being referred to from papers at obscure conferences!

Can we get rid of publication paywalls?

Caveat lector: impact factors and researcher indices vary tremendously per research area. For instance, in medical fields, paper turnarounds are very short, and an impact factor of 3 is very low. In engineering it is the other way around: 3 is very high, and people often publish at conferences which do not count in the ISI system—a “lost” publication w.r.t. impact.

Nature is one of the journals with the highest impact factor, around 40. Everyone wants to publish there!

This is, for instance, true for engineering conferences. If you publish a paper, you must pay the conference fee, typically around €600, which usually covers conference attendance, catering, and paper publishing. If your reader then wants to read the paper, again the standard €30 fee must be paid. So… the publisher earns from both sides. IEEE is a good example.

Indeed, publishing houses often have paywall exceptions for developing countries, based on bilateral agreements etc. That may include some, but leaves out others, as well as the general public!

Again waivers exist for certain developing countries, etc.

From its website, Cambridge University Press seems to have decided to close its traditional open access; papers now cost a daring $45.


nonlinearity killed the cat

or: how our brain makes movement

When you move your arm, how does your brain make it end up where it should?  How can one play tennis so accurately, or a musical instrument so quickly? The principle of movement generation in the human body looks very complex but is in fact surprisingly simple. Indeed, it can be deduced quite straightforwardly by collecting and analysing a bunch of facts. Here are some key observations.

the premises

0. the brain is there to generate movement.  This may not be immediately apparent if you think of your own brain.  But look at it this way: every thought is aimed at optimising, in some way, a current or future movement, or at learning from previous movements. Non-moving species don’t have brains (or neurons). And, as an exquisite example, the sea squirt shows: after it stops swimming around and settles on a rock, it digests its own brain (its cerebral ganglion, to be precise).

1. neural signals are slow. They travel through your nerves at speeds of up to 100 m/s.  This means that a fast signal from your thumb muscles to your spine takes about 10 ms. Sounds fast, doesn’t it? But it isn’t: in that time, your arm may have moved quite a bit. Look at how fast a professional pitcher moves his arm: the ball reaches speeds of up to 170 km/h, or about 45 m/s, meaning 45 cm in 10 ms. A move at only 20% of that speed would still mean 9 cm of movement before the information even arrives at the spinal cord.

Add to that the signal going back from your spine to your muscles, plus the time to process the signal in the spinal cord, and the delay at least doubles. So we are talking more than 20 ms. Then you still have to add the time for the muscles to react—ballpark figure: 30 ms. To cut a long story short: if you move fast, then by the time your neurons are able to react, your movement will already be quite a bit down the road. There is no way for your neurons to quickly correct movement errors—the movement must be right from the start.
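The arithmetic above fits in a few lines. A back-of-the-envelope sketch, using the rough figures from the text (100 m/s conduction, a ~1 m thumb-to-spine path, 30 ms muscle reaction):

```python
# Back-of-the-envelope neural delay arithmetic; all numbers are the
# ballpark figures from the text, not precise physiology.

nerve_speed = 100.0      # m/s, fastest nerve conduction speed
arm_to_spine = 1.0       # m, rough thumb-to-spinal-cord distance

one_way = arm_to_spine / nerve_speed       # afferent delay, in seconds
round_trip = 2 * one_way                   # afferent + efferent signal
muscle_reaction = 0.030                    # s, time for muscles to react
total = round_trip + muscle_reaction       # (spinal processing ignored)

hand_speed = 9.0         # m/s, "20% of a pitcher's 45 m/s"

print(f"one-way delay: {one_way * 1000:.0f} ms")    # 10 ms
print(f"total latency: {total * 1000:.0f} ms")      # 50 ms
print(f"hand travel before signal arrives: {hand_speed * one_way * 100:.0f} cm")  # 9 cm
```

Even this optimistic estimate, which ignores spinal processing time entirely, leaves the hand 9 cm further along before the spinal cord has heard anything at all.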

2. muscle signals are noisy. Muscle movement is sensed through two types of sensors: Golgi tendon organs (“GTO”) and muscle spindles. Muscle spindles give out signals related to the length of the muscle; GTOs signal the force in the tendon. Both of these signals are rather noisy. There is compelling evidence that, instead of these proprioceptive sensors, our body uses sensors in the skin to code the position of our joints. And you will have observed it yourself: if your arm feels numb after waking up from an uncomfortable position, you can’t move it without looking at it. Not because the muscles don’t work, but because your body is unable to tell where your arm is! Medically speaking, deafferentation is a very rare condition, but a few cases of deafferented patients have been reported. In all cases, it leads to severe movement disabilities.

3. our movement is never unstable.  Chances are that you have never played with low-level control of a robot. I did, and I did not enjoy it. It takes a lot of fiddling to get the parameters right; and if you don’t, the robot may well blow up in your face. Literally!  If your feedback controller, which amplifies an error signal by some factor, is tuned wrongly, that amplification may go the wrong way and magnify rather than dampen a vibration. The same can happen if you try to move a robot where it cannot go: if it tries to reach a position behind a wall, its error will grow over a (short!) time and it will oscillate into disaster. This is one of the reasons why engineers never operate robots without an emergency button—they know what can happen.
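This runaway behaviour is easy to reproduce in a toy simulation (my own illustration, not a real robot controller): a proportional controller that acts on a one-step-delayed error signal. With a modest gain the system settles on the target; with too high a gain the delayed feedback magnifies every overshoot and the oscillation diverges:

```python
# Toy proportional controller with a one-step sensing delay.
# track() returns the position history while chasing a fixed target.

def track(gain, steps=60, target=1.0):
    x = [0.0, 0.0]                          # position history, at rest
    for t in range(1, steps):
        delayed_error = target - x[t - 1]   # controller sees stale data
        x.append(x[t] + gain * delayed_error)
    return x

calm = track(gain=0.3)   # stable: converges to the target
wild = track(gain=1.8)   # unstable: oscillation grows without bound

print(round(calm[-1], 3))                 # settles very close to 1.0
print(max(abs(v) for v in wild))          # astronomically large
```

The physical robot cannot follow the exploding trajectory, of course; it shakes itself (or its surroundings) apart first, which is exactly what the emergency button is for.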

Animals don’t have emergency buttons. Sure, it would be silly to decide who gets to press whose; but really, in an intact specimen, movement will never become unstable. Why not? The answer is damping. Somehow, our musculoskeletal system is set up in such a way that everything is “soft” and yielding enough that we can easily absorb movement energy where necessary. How, and how much precisely, is a topic of research in biomechanics, and of a later blog. But this also means that the system is very forgiving of control errors. If our neurons give an imprecise command, our muscles and tendons will make sure it comes out right enough.

4. our movements are precise enough. I mean fast movements: movements which you do not constantly adapt and correct by looking, feeling, etc. For instance, returning a ball with a tennis racket. Or moving your legs for the next step. Moving your fingers to the next key. Hitting a golf ball. Catching, throwing, … All of them are movements where a certain amount of precision is required, and with a bit of training we can do them.

You can turn this argument around and say that these tasks are all set up in such a way that our precision suffices. That is just as true. We could not type on a keyboard whose keys are 0.1 mm apart. But the key observation is: we need not learn each and every possible movement that we make. Rather, we can interpolate very well, and thus learn many a task very rapidly. If you know how to do something at a point a, and you can do it at b, you are pretty well off doing it in the middle between a and b.

Of course, practice (= learning) improves accuracy. But still, after only a few tries we can deal with many, apparently challenging, tasks. Challenging for robots, for sure.

a bit on brain structure

the human brain

Brain structures vary considerably between species. You will find many internet sources describing brain structure, but let me summarise as follows. Mammals have a cerebral cortex, the “grey matter”, a few mm thick. It is folded in the brain, so there’s actually quite a lot of it. The best-known part of it is the neocortex, the part of the brain we relate to our thinking, behaviour, etc., which has large parts dedicated to motor (movement) and sensory (feeling, seeing, …) processing. Underneath the cortex is the white matter, which mostly acts as a communication centre, transmitting signals from one part of the cortex to another.

The neocortex is only found in mammals. It was quite a surprise to me to learn that experiments with decorticated animals show that the cortex is not necessary to generate movement: if you disable the cortex, a cat or dog can still move almost as well.  It is therefore rather likely that the motor cortex models and weighs movements, to subsequently make decisions based thereon. This does not hold for humans, by the way: with the cortex gone, there is little movement left—but not none.

In some experiments we did and reported in Nature, we were able to listen to neurons in a human motor cortex firing away as our participant, Cathy, observed a robot moving left and right, or when she tried to move it herself by thinking about doing the movement.

Then there are the basal ganglia and the thalamus, located between the brain stem and the cortex.  The basal ganglia seem to be responsible for initiating movements and for filtering out unwanted movements.  The effects of Parkinson’s disease (the inability to initiate movement) and Huntington’s disease (the inability to suppress unwanted movement) on the basal ganglia are well known and clearly indicate their function.  The thalamus gates signals to and from the brain stem, which relays movement commands to the spinal cord; through the spinal cord the muscle fibres are controlled.

The major role of the cerebellum seems to be supervised learning of motor patterns. But decerebellation, too, does not lead to complete loss of movement. An individual with cerebellar lesions may be able to move the arm to successfully reach a target, and to successfully adjust the hand to the size of an object. However, the action cannot be made swiftly and accurately, and the ability to coordinate the timing of the two subactions is lacking. The behaviour thus exhibits decomposition of movement—first the hand is moved until the thumb touches the object, and only then is the hand shaped appropriately to grasp the object.

putting them together

To best predict a movement which one has never done before—remember, all your sensory states play a role: what you see, what you feel, the weight of your clothes, the temperature, …  it’s never precisely the same—one needs a model.  This model tells you: if my sensory state is a(t), and I want to go to sensory state a(t+1), I must use muscle activation f(a(t)).

Sounds simple.  But do not forget that, at the root, a(t) contains thousands or millions of signals, and f(a(t)) must activate thousands of motor units. On top of that, muscles are very nonlinear things: if you put in twice the activation, you do not get twice the force—what you get depends on where the muscle was, how stiff it is, what the other muscles do, etc. All in all, a very large collection of nonlinear things.  A model describing this must be very nonlinear, right?

Wrong.  The problem is inversion. The data you obtain describe how the sensory state is changed by a movement; what you need is to find which movement to make to obtain a certain change in sensory state. Meaning, you need to invert your model.  Inverting nonlinear models is difficult and, in many cases, impossible or unstable: small errors in the model or the data can lead to large errors in the behaviour—as in the robotics example above.
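A toy example of this fragility (my own illustration, not taken from any physiology): take a forward model where activation u produces movement y = u³. Near u = 0 the model is nearly flat, so its inverse u = y^(1/3) has an enormous slope there, and a tiny error in the desired sensory change blows up into a much larger error in the command:

```python
# Inverting a nonlinear forward model amplifies small errors.

def forward(u):
    """Toy forward model: activation -> resulting movement."""
    return u ** 3

def inverse(y):
    """Exact inverse: which activation produces movement y?"""
    return abs(y) ** (1 / 3) * (1 if y >= 0 else -1)

y_target = 0.001
y_noisy = y_target + 0.001         # a tiny error in the desired change

u_exact = inverse(y_target)        # 0.1
u_noisy = inverse(y_noisy)         # ~0.126

amplification = (u_noisy - u_exact) / 0.001
print(amplification)   # the 0.001 error in y becomes ~26x larger in u
```

For a linear model the amplification factor is the same constant everywhere; for a nonlinear one it can be arbitrarily large exactly where the model is flat, which is why inversion goes wrong.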

The only way out is to have a linear model, because it allows for the following behaviour: if I have a model f1(a) for doing something from my sensor state a, and a model f2(b) for doing something from my sensor state b, then I can just use [f1(a)+f2(b)]/2 to get good behaviour in the middle. In fact, it’s the only kind of model which generalises in this way over all sensor states.

So this would mean that, if the result of a neuromuscular activation c is a movement x, then doubling that activation gives me 2x. Well, approximately at least. How can that be? After all, we are talking about several hundreds of control variables to activate a muscle, and many thousands to move an arm.

The answer lies in the structure of the muscles, in combination with how the spinal cord controls them. How, precisely? I don’t know. But I do know that movements are, in principle, linear in the neuromuscular domain.

And now one can also hypothesise the role of the cerebellum. These movement models, between which we interpolate our movements, are stored in the cerebellum. Learning a movement more accurately means that a larger number of cerebellar microzones are allocated to that particular movement, and that the interpolation between them becomes finer.  After all, we are not completely linear.

I am talking about afferent, i.e., feedback, nerves. The fastest nerves, the Ia and Ib afferents, feed signals from the muscle spindles and the Golgi tendon organs (GTO), respectively, back to the spinal cord at 70–100 m/s. Type II feedback signals from the muscle spindles to the spinal cord, which respond to length change, are a bit slower: about 35–70 m/s. Finally, skin signals are transmitted through type III afferents at 10–30 m/s. See E. Kandel, J. Schwartz, T. Jessell, Principles of Neural Science, McGraw-Hill, 2000.

In fact, we measured delays of approximately 25ms between the activation of a wrist muscle spindle and the activation of the corresponding muscle.

Please quantify! I hear you say. That is not easy. This paper says something about it, but it does not attempt to find a signal-to-noise ratio. Just believe me: these sensors are too noisy. And on top of that, remember that there is a lot of flexible tissue between the sensors and the joint itself; they do not measure the joint position directly. That is another source of error, and even a highly accurate technical sensor could not solve it.

Published in the following papers:

  1. Edin BB, Quantitative analyses of dynamic strain sensitivity in human skin mechanoreceptors. J Neurophysiol 92(6):3233–3243, 2004
  2. Johnson KO, Closing in on the neural mechanisms of finger joint angle sense. Focus on “Quantitative analysis of dynamic strain sensitivity in human skin mechanoreceptors”. J Neurophysiol 92(6):3167–3168, 2004
  3. Edin BB, Cutaneous afferents provide information about knee joint movements in humans, J Physiology 531(1): 289–297, 2001

There will probably be more papers, but I happen to know these:

  1. Ghez C, Gordon J, Ghilardi MF (1995) Impairments of reaching movements in patients without proprioception. II. Effects of visual information on accuracy. J Neurophysiol 73:361–372
  2. Gordon J, Ghilardi MF, Ghez C (1995) Impairments of reaching movements in patients without proprioception. I. Spatial errors. J Neurophysiol 73:347–360
  3. Rothwell JC, Traub MM, Day BL, Obeso JA, Thomas PK, Marsden CD (1982) Manual motor performance in a deafferented man. Brain 105(3):515–542

Yes, that includes humans, of course. We’re in the Animalia kingdom, our phylum is Chordata, class Mammalia, order: Primates.

Nasty business, really, to deactivate the cortex of such an animal, but there it is. These two papers, and some more recent ones, describe that the animals exhibit little change in their movement—except that the males cannot mate anymore. Think of that.

  1. Culler E, Mettler FA (1934) Conditioned behavior in a decorticate dog. Journal of Comparative Psychology 18:291-303
  2. Bjursten LM, Norrsell K, Norrsell U (1976) Behavioural repertory of cats without cerebral cortex from infancy. Experimental Brain Research 25:115-130

Doya K (2000). “Complementary roles of basal ganglia and cerebellum in learning and motor control”. Curr. Op. Neurobiology 10 (6): 732–739. doi:10.1016/S0959-4388(00)00153-7. PMID 11240282.

The scientifically interesting paper Holmes, G (1939). The cerebellum of man. Brain, 62, 1–30 describes a few clinical cases related to shot wounds.

One counts on the order of 250,000 muscle fibres in the biceps muscle, for instance. See this publication, which you can read online, or others which unfortunately are behind a paywall.  But then, one motor unit—i.e., one alpha motor neuron plus the muscle fibres it controls—has between 10 (eye) and 1000 (thigh) muscle fibres.  So you can count a few hundred motor units, and therefore a few hundred motor neurons, per large muscle.
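The counting argument, as arithmetic (the figures are the rough ones from the text, not precise anatomy):

```python
# Rough motor-unit count for a large muscle, using the text's figures.

fibres_in_biceps = 250_000
fibres_per_motor_unit = 1000   # large-muscle end of the 10-1000 range

motor_units = fibres_in_biceps // fibres_per_motor_unit
print(motor_units)   # 250 -> "a few hundred" motor neurons per large muscle
```

This is the number that makes the linearity claim plausible at all: a few hundred control channels per muscle, rather than a quarter of a million.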

This is, in fact, very much but not precisely in line with the old models described in Marr DA (1969), A theory of cerebellar cortex, J. Physiol. 202, 437–470 and Albus JS (1971), A theory of cerebellar function, Math. Biosci. 10, 25–61.