Thursday 14 September 2017

My personal battle between complex and complicated systems

I recently decided to drop into a FutureLearn course, "Decision Making in a Complex and Uncertain World", run by the University of Groningen. The opening section really made me sit up, as I realized that I had never seriously thought about a formal definition that clearly distinguished between complex and uncertain systems and complicated but predictable ones. Of course I was well aware of the difference in practice, but having a definition clarified a number of issues relating to how my research into a human-friendly computer (CODIL) started, why the research came to be abandoned, and why there is now renewed interest in the subject.
Fossil Elephant Tooth



Temperamentally I am a scientist who is attracted to trying to understand complex and uncertain "real world" systems, while complicated but predictable technology-based systems bore me. After six years studying Chemistry, ending up with a Ph.D. in Theoretical Chemistry, the last thing I wanted to be was a narrow-minded specialist knowing an enormous amount about very little. In fact, while I was studying Chemistry my spare time was spent on a complex task - trying to understand the geomorphology of some nearby limestone caves. This involved the interaction of various agents which operated over timescales varying from a day (such as a major flash flood) to hundreds of thousands of years (such as the Ice Ages). As the landscape changed, the key areas were typically those that were eroded away, so the detailed evidence that would be needed for any precise reconstruction did not exist.

My first job was a complex task: monitoring (and indexing) the internal research and development correspondence between a UK research organisation and its overseas branches. The task involved exception reporting and ensuring that the right people in the organisation knew of significant developments. One day there might be a complaint from an Australian customer that our product didn't work - which was the first we knew of a new form of insecticide resistance. A problem with a cattle tick trial in South Africa might be followed by a rumour about a competitor's product picked up by a salesman in a bar in South America; then the United States would bring out new regulations for pesticide residues. There were also significant problems in filing and indexing the information, because we couldn't know in advance which projects would expand enormously to become new market leaders and which would be abandoned at an early stage of the research.

In 1965 I felt it could be useful to know more about computers and whether they could help in the work. While I had not yet read Vannevar Bush's 1945 essay "As We May Think" (which in effect predicted something like Wikipedia), I was definitely thinking about computers supporting a network that allowed information to be shared. My employer was not interested, so I decided to switch careers and applied to a local computer centre to become a systems analyst. In fact the system run by the Shell Mex & BP centre at Hemel Hempstead was probably one of the most advanced commercial batch processing systems in the UK at the time.

LEO III - an early commercial computer
The plan was that I should start by doing a short spell programming their LEO III computers - and I found I had moved from working in a complex system, where the unexpected was important, to a complicated system where everything was meant to be pre-defined. To me programming was a boring task, made worse by the time delays due to the (by modern standards) crude computer facilities. However a major incident on my first day demonstrated that complexity could be found when you considered the programs as part of the whole company trading in the real world. A major update of the customer master records had just taken place and about a million new record cards had been printed. As soon as details of new sales started to come in, it became clear that many thousands of active customer records had been incorrectly deleted. It turned out that sales staff had not realized how the change affected them and had failed to provide additional information on a particular class of customer. As a result of this incident I took a particular interest in why programs failed and how errors could be minimized. In nearly every case the problem could be linked to an error or misunderstanding in the communication chain between the sales department and the actual program code.
Contemporary BP Garage

Let me give one example. My duty was to write the program which created a sales database for the garages which sold or used our petrol and lubricating oils. Other people in the team wrote programs which used this data to produce sales reports for senior management and for each individual salesman. The first live run resulted in two errors being reported. One salesman reported that one of his customers would never have bought so much of a particular brand of lubricating oil, while Head Office reported that credit sales for one brand of diesel fuel were 17% too high. There was no problem identifying the program errors - but for me these two reports demonstrated a major weakness in the system. Both errors should have generated a large number of error reports, yet a large number of sales staff had either not spotted them or at least not reported them. It was also clear that the trial data supplied for testing purposes had failed to represent the true variation between customers that occurred in the real world.

After about a year on the programming floor I was moved to Systems, as the company was to acquire a "next generation" computer which would have direct-access files and at least some user "glass teletype" style terminals. I was asked to look at all aspects of a vital pricing program, which included many individual customer contracts, to see what needed to be done to convert it to the new system. As far as I know no-one had tried to move such a complicated application in this way, and I had no guidelines to work to. Examination of the program code (in assembly code for the existing computer), the systems documents (not up to date), the clerical user manual (hard to understand) and the actual computerized customer contracts showed there were many serious problems. A complete re-write would almost certainly be needed.

How things can go wrong


I took the view that the sales management staff (who were in touch with the complex real-world problems of the marketplace) should be in control, using terminals, cutting out most of the Chinese-whispers paper trail required to specify the existing batch system. To do this one needed a program which could establish a symbiotic relationship with the sales staff, and for such an approach to work they needed to trust the system to do what they wanted. Not only must it be easy for them to instruct, but it should also be able to tell them what it was doing in terms they could easily understand. It also needed to be robust, fail-safe, and efficient in terms of computer resources.


When I submitted a draft proposal I had no idea that I was suggesting anything unusual. It was vetoed on the grounds that "salesmen don't understand computers", and my argument that "I agree - so we should design computer programs which understand salesmen, allowing them to handle complex and unexpected real world situations" was dismissed. I was firmly told that the job of a systems analyst was to generate a complicated global model which included a precise predefinition of every type of sales transaction that might be required. Despite this objection, I understand that parts of my proposal were actually implemented after I had left the company.

As soon as English Electric discovered they had not won the Shell Mex & BP contract I was head-hunted to do market research on the requirements for top-of-the-range next generation commercial computers. This was the kind of complex task I enjoyed, and it involved talking about current problems and future expectations with people ranging from computer engineers responsible for processor design to senior management in large companies who had problems with complex human-computer interfaces. After a few months I realized that it might be possible to generalize my Shell Mex & BP idea and to design a computer (using available components) which was fundamentally human-friendly and could handle a wide category of hard-to-define information processing problems.
Designing a human-friendly computer

Such a system was considered to have very considerable commercial potential, and within weeks I was project leader of a small research team working on a simulation program to demonstrate that the idea worked. Two years later all the project goals had been met - but there were two serious problems. The first was that, at a time when the idea really needed a lot of creative discussion at the complex application, human psychology and computer theory levels, I was told I must talk with no-one until patents had been taken out. The other problem was that a major Government-inspired re-organisation of the UK computer industry was now underway, the result being the company merger that formed ICL. Virtually all research projects were closed down unless they were immediately relevant to the proposed new 2900 series computers.

Using computers to help track planes by radar
So I found myself being made redundant - and working on software to track enemy planes on a massive military computer system, ending up as a kind of "troubleshooter" for the software manager. However I had obtained permission to publish (references) and to continue the research if I could find a suitable university place. The problem was that I had no idea how controversial my idea would prove to be, or what support facilities I would need to get my ideas accepted.
The problems of unconventional research

Unconventional research ideas really need a supportive environment to get started, and in the circumstances Brunel University was the wrong place to go. It was a brand new university, upgraded from a technical college, with very little experience of research - and no experience of controversial research. As a technological university its main aim was to train students to use the existing technology. This was particularly true of the Computer Science Department, which was very poorly equipped at the time. To be fair to Brunel, the project made significant technical advances over the years, information was published on the application of the idea in a wide variety of fields (see publication list), and by 1986 a small but powerful demonstration package, MicroCODIL, was trial-marketed and received very favourable reviews. However, almost certainly due to my limitations as a project manager, little real work had been done on the underlying theory, and not enough thought had been devoted to the economics of the project or its commercial implications.

In a Tasmanian Forest


Two things happened which led to the project closing. Years of underfunded work on the project had left me exhausted, and I was also suffering from post-traumatic stress disorder as the result of a family death. At the same time a new professor considered that, because the university was basically there to train students to work in industry, the research was inappropriate, and made it very clear that I should leave or do something more productive. I decided to take a break from academic life and started with an informal sabbatical in Australia. Here I found myself working on complex environmental issues, including the possible effects of climate change. I had intended to resume the research on my return, and planned to develop MicroCODIL to run on PCs.

On returning from Australia I decided that I could be of more use to society if I worked at the local and national level on improving the provision of services for the mentally ill. In addition I could do local history research, because no-one complained when I said that historical research was often complex and could involve considerable uncertainty.

Some twenty years later I decided to retire from the mental health field and began to re-assess my earlier research in 21st-century terms, using the World Wide Web as an information source. This blog was set up as a result, and in addition to earlier postings two more postings should shortly appear here.

The first will look at the changes in relevant research in the 50 years since the work on CODIL started, and the second will look at CODIL as an "experimental symbolic assembly language" for describing the evolution of intelligence.
