Sunday 14 April 2013

Don't confuse Science and Technology.

Having read Robin Ince's post "The Fascism of Knowing Stuff", I felt he was confusing Science and Technology, so I added the following comment to his post.
I agree with your definition of science, but at the end you are talking about technology as if science and technology were one and the same thing. Of course the two are closely linked, but what the average person sees is not “pure” science but rather technology, and they only see that technology because someone is making money out of it!

There are many problems. If an early version of a technology is commercially acceptable, better versions can be blocked because people have adjusted to the original technology (which may have become an international standard), and there are more people wanting the old technology (even if science has shown it to be inferior) than would benefit in the short term if the improved technology were introduced.

A good example is the QWERTY keyboard, which was used on early typewriters, then on teleprinters, which in turn served as early input devices for computers … Much excellent research has been done on better keyboards, using the latest scientific advances, but QWERTY is still with us, although it is being replaced in some areas by completely different forms of information input.

The problem of competing technologies is illustrated by the triumph of VHS over Betamax (which was said to be technically better): the real battle was over who would get the biggest market share, as people would buy the system with the biggest collection of recordings.

This raises a potential trap. If a new technology comes along and is extremely successful because there was no competition, its total domination of the market can make it almost impossible to develop and market improved versions, and as a result it can be difficult to fund blue sky scientific research which questions the foundations of the technology.

Let me suggest where this may already have happened. The stored program computer emerged in the 1940s and was soon seen as a money spinner, with many companies rushing to get a foothold in the market. The rat race to capitalise on the invention has resulted in systems which dominate everyday life in much of the world, where the technology is taught in schools and everyone knows something about how computers work, if only in the form of an inferiority complex because “they are too difficult for me”.

In fact it is considered an unavoidable truth that computers are black boxes whose internal workings are incomprehensible to the computer user. But the stored program computer is incomprehensible because computers were originally designed to process mathematical algorithms, carrying out tasks which the average person would also find incomprehensible. The problems computers were designed to solve are about as far from the problems faced by early hunter-gatherers as it is possible to imagine.
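
To make the black box a little less mysterious, here is a minimal sketch of the stored program idea (my own illustration in Python, not something from the original discussion): the program and the data it works on sit in the same memory, and a fetch-execute loop steps through the instructions one at a time. The tiny instruction set is invented purely for this example.

    # Illustrative sketch of a stored program machine: instructions and data
    # share one memory, and a fetch-execute loop works through them in turn.
    def run(memory):
        acc = 0     # a single accumulator register
        pc = 0      # program counter: index of the next instruction
        while True:
            op, arg = memory[pc]        # fetch the next instruction
            pc += 1
            if op == "LOAD":            # acc := memory[arg]
                acc = memory[arg]
            elif op == "ADD":           # acc := acc + memory[arg]
                acc += memory[arg]
            elif op == "STORE":         # memory[arg] := acc
                memory[arg] = acc
            elif op == "HALT":
                return memory

    # Cells 0-3 hold the program; cells 4-6 hold the data it works on.
    memory = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
    print(run(memory)[6])               # prints 5 (i.e. 2 + 3)

Everything of interest happens in terms of memory addresses and arithmetic on their contents: exactly the kind of task the machine was designed for, and exactly the kind of description most people would never use about their own thinking.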

There must be an alternative. It is well known that nature has produced information processing systems (called brains) which start by knowing nothing (at birth) and can bootstrap themselves up to tackle a wide range of messy real world tasks. In the case of humans, their brains can exchange information and people can work together symbiotically.

So which scientists in the 1940s were calling for blue sky research into whether a “human friendly computer” that worked like a brain would be possible? … or in the 1950s? … or in the 1960s? …

If you look through the literature, virtually everyone who ever thought about the problem was taking the stored program computer for granted. You will search the old literature in vain, and when people started to worry about the human user interface it was about writing programs to hide the inner black box from the human user. No one was going right back to first principles to see if there was an avoidable weakness in the use of the stored program computer. And, because they were thinking of analogies with the stored program computer, it was taken for granted that the brain's “computer” must be so clever that it was very difficult to understand, because it was “obviously” difficult to program. In effect the very successful technology was beginning to influence the way that scientists were thinking about research into how the brain works.

In fact in 1968, backed by the team which built the Leo computer (the world's first business computer), work started on early studies with the purpose of designing a fundamentally human friendly “white box” information processor. I was the project leader and the project ended up under the name CODIL. The problem we faced (which has got worse over the years) was that even if the approach proved successful (and results with software prototypes were very promising) it would have had to battle with the established stored program computer market. Look at the investment in hardware, applications, databases, trained staff, public understanding, etc. of conventional systems, and the inertia against possible change is probably valued in trillions of dollars.

To conclude, I suggest that, because the computer revolution was technology led, key blue sky research was never done, and anyone proposing such blue sky research now is more likely to be greeted with hostility than with adequate research funding.
~~~~~

Nullifidian replied, and the relevant part of his reply was:
Finally, I didn’t use the phrase “anonymous scientists” to invite people who thought that peer review had done them wrong to submit their tales of woe. Frankly, I don’t care. The point I was making there was to say that there are plenty of ways to get information out to the scientific world, and publication is actually the least efficient of these and arguably mostly irrelevant. Conferences, preprints, presentations before other university departments, etc. are where the scientific action is. However, all these means of getting around the peer review process require that your work actually be as interesting to your colleagues as you think it is.

In your own case, you haven’t demonstrated that the peer review system has suppressed a scientifically worthy idea. You cite the absence of people “go[ing] in [your] direction” as evidence that these views have been “crushed by the establishment at an early stage”, but an equally potent hypothesis is that your ideas are unworkable and nobody wants to spend their time trying to make the unworkable work. While I can’t say without seeing your ideas in full, the notion that you can just switch from computation to talking about the brain without any apparent background in neuroscience is another indication that you’re a crank. So is the use of coined terms and irrelevant jargon. In what way is a brain similar to an “ideal gas”? An ideal gas is a hypothetical state in which the molecules are all randomly moving small, hard spheres that have perfectly elastic and frictionless collisions with no attractive or repulsive forces between them and where the intermolecular spaces are much larger than the molecules themselves. None of these things are true in practice, of course, but they’re close enough to the model in most cases that it makes no difference. Now, neurons are not small hard balls, they don’t move in random directions and collide elastically, the synapses are not vastly larger than the neurons, and there’s no way the concept of an ideal gas appears to work even as a metaphor. So I’m not convinced that the rejection of your ideas by an unfriendly peer review system is evidence that the “establishment” is wrong.

I have now replied:
First let me thank you for your critical comments; the enemy of good science is confirmation bias, and what is needed to explore controversial ideas is open, no-holds-barred debate on the issues. I have now posted a discussion draft “From the Neuron to Human Intelligence: Part 1: The ‘Ideal Brain’ Model” (http://trapped-by-the-box.blogspot.co.uk/p/blog-page.html) and have added a section on nomenclature specifically because you raised the subject.

Now responding to your specific comments, let me start by reminding you that I said “despite enormous efforts in many different specialist fields, there is no theory which provides a viable evolutionary pathway between the activity of individual neurons and human intelligence.”

If you think this statement is wrong I would be very grateful for a reference to a paper which describes such a model. If you can’t provide evidence of such research why are you so hostile to the suggestion that someone thinks that they might have a possible answer?

For instance you introduce a straw man argument relating to the analogy between my “ideal brain” model and an “ideal gas.” Of course I would be a crank if I thought neurons were little balls bouncing around in the brain, as you are suggesting. The whole point of the “ideal gas” model is to strip everything down to the bare essentials. You start with an infinite brain filled with identical neurons (cf. an infinite container filled with identical molecules). Interactions between neurons are not by collisions but by electrical connections which carry signals of variable strength. (In theory every neuron is connected to every other one, but in the vast majority of cases the strength of the interaction is zero.) In an ideal gas the three properties of interest are pressure, volume and temperature, while in the ideal brain we are interested in the ability to store patterns, recognise them, and use them to make decisions. Another similarity is that both models work pretty well in some cases (for instance the ideal brain model suggests one reason why humans are prone to confirmation bias), and when the models start to fail they can be used to explain the differences.
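
To give a feel for what “stripping everything down to the bare essentials” can look like in practice, here is a deliberately crude toy. It is my own illustration, not the model in the discussion draft, and it borrows a standard Hopfield-style associative network: identical units joined by weighted links store a couple of patterns and then “recognise” a corrupted cue by settling back to the nearest stored pattern.

    # Toy sketch only: identical units, weighted links, patterns stored by
    # strengthening links between co-active units (a Hopfield-style network).
    import numpy as np

    def store(patterns):
        """Build a weight matrix from the patterns to be remembered."""
        n = patterns.shape[1]
        w = np.zeros((n, n))
        for p in patterns:              # each pattern is a vector of +1 / -1
            w += np.outer(p, p)         # strengthen links between co-active units
        np.fill_diagonal(w, 0)          # no unit is linked to itself
        return w / len(patterns)

    def recognise(w, cue, steps=10):
        """Let each unit repeatedly follow the weighted 'vote' of the others."""
        state = cue.copy()
        for _ in range(steps):
            state = np.sign(w @ state)
            state[state == 0] = 1
        return state

    patterns = np.array([[1, 1, 1, -1, -1, -1],
                         [1, -1, 1, -1, 1, -1]])
    w = store(patterns)
    noisy = np.array([1, 1, -1, -1, -1, -1])   # first pattern with one unit flipped
    print(recognise(w, noisy))                 # settles back to the first pattern

The point is not that this is how the brain works, only that very simple identical components with variable strength links can already store, recognise and act on patterns, which is the level of abstraction the “ideal brain” analogy is aiming at.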

Your comment about switching between computation and talking about the brain is interesting for two reasons.

Any research model which attempts to link the neurons to human intelligence will involve many different disciplines, in fields such as psychology, childhood learning, animal behaviour, linguistics, artificial intelligence, and neuroscience, and in addition will undoubtedly involve modelling on a computer. I would argue that what is needed is the ability to stand back and see the wood for the trees, and that having too much mental commitment to any one speciality could be a liability. You seem to be suggesting that neuroscientists are some kind of super-scientists who have a monopoly on holistic approaches to how the brain works.

However the comment is interesting because it pinpoints the problem I have had. My ideas became trapped between a rock and a hard place. I worked as an information scientist (in the librarian sense) before entering the computer field and was used to seeing how people handled complex information processing tasks. I then moved to computers and concluded that there were serious flaws in the design of stored program computers, suggesting a fundamentally different model that reflected how people handled information. I could not get adequate support from the computer establishment because computers were so successful that it was assumed there couldn't be any serious flaw in their design, and even if there were problems there was so much money to be made by ploughing on regardless that any time spent on blue-sky research which questioned the ideas of people like Turing was seen as a waste of time.

At the same time I was getting comments from other fields that I could not be modelling how people think, because the standard computer model was wrong and, as I was a computer scientist, I must also be wrong! I am sure your critical comment was based on a stereotyped view that tars all computer scientists with the same brush.
