I was particularly pleased to be asked to write this item for the Economics arena page on Routledge’s website, in connection with the new editions of both my Statistics Tables and Elementary Statistics Tables. For, although not an economist, I had some exceptionally strong links with the Economics Department at the University of Nottingham back in the 1970s which is when the original edition of my Statistics Tables was being developed. And those links influenced some of the content of the book, as I’ll describe later.
In fact, the earliest link with Economics goes right back to my first year as an undergraduate in the Mathematics Department at Nottingham. Nottingham’s students generally had two-thirds of their first year prescribed by their type of course, with the remaining third being a relatively free choice from a wide range of options. I chose Statistics which, so long ago, was still a relatively new subject on the curriculum. That substantial course (some 60–70 lectures) was taught by Dr (later Professor Sir) Clive W J Granger (1934–2009), a name which will be known to many economists. In particular, jointly with Robert F Engle III, Clive was awarded the Nobel Prize in Economics in 2003 for developing “methods of analyzing economic time series with time-varying volatility or common trends”. Clive also pioneered the concepts of cointegration and “Granger-causality”, which I believe are now considered fundamentals of econometrics. He was knighted in 2005.
It will thus be of no surprise that, three years later when I was encouraged to stay on at Nottingham to study for a PhD, I chose Statistics and asked Clive to be my supervisor. It will also be no surprise that my area of study turned out to be the spectral analysis of time series. And subsequently I could hardly avoid becoming familiar with the Box–Jenkins time-series models, seeing that (a) the external examiner for my PhD was Prof Gwilym M Jenkins and (b), immediately after obtaining the PhD, I spent a year as Visiting Assistant Professor in the Department of Statistics at the University of Wisconsin, whose Chairman was Prof George E P Box! On my return to Nottingham I became Lecturer in Statistics in the Mathematics Department—and Clive immediately handed over the whole 60–70 lectures of that first-year Statistics course to me, and I continued teaching it for many years.
And so my book of Statistics Tables was born …
The course was available to students from a wide variety of disciplines, the prerequisite being a pass in Mathematics in the national pre-university examinations. Most of the attendees were reading Mathematics, Economics or Psychology, but there would usually also be several from further afield, such as Biology, Geography, Philosophy, History, Music, French, etc. Naturally the course needed to contain the traditional material of a shorter introduction but, in view of the extra time available and the breadth of specialities in my audience, it was appropriate to include much more, particularly of a practical rather than merely mathematical nature. Amongst the additional topics I chose were nonparametric / distribution-free methods (for even then I was already uneasy about having to “assume normality” so much of the time), some quality control tools and techniques, and various areas of operational research such as queueing theory and simulation studies (so-called “Monte Carlo” methods).
Of course, almost all of both the standard and non-standard topics required the use of tables but, hardly surprisingly, I was unable to discover any published set of tables suitable for the course. So there really was no alternative but to develop such a set myself! That set of tables, along with a small number of additions and a ten-page introduction to their use, became the original edition of Statistics Tables, published by George Allen & Unwin in 1978 and later taken over by Routledge. (The complete title is Statistics Tables for Mathematicians, Engineers, Economists, and the Behavioural and Management Sciences.)
During part of the time that this book was being developed, the Mathematics Department became rather overcrowded and I happily accepted an invitation to move to a vacant office in the Economics Department. I confess that I found the company there to be rather more congenial than some of my colleagues in the Maths Department! While finalising the content of Statistics Tables, my Economics friends recommended I include a couple of tables which they would find extremely useful: tables of the von Neumann ratio and the Durbin–Watson statistic. They are still there in the new edition.
Following some extensive market research, Allen & Unwin subsequently also invited me to produce a second book of tables which would be more suitable for shorter courses and for users lacking the mathematical background of my own students. Elementary Statistics Tables was published in 1981. It covered fewer topics, but several of those that remained were tabulated more fully than in Statistics Tables, and there was considerably more user-friendly explanatory text and simple worked examples throughout. A particular improvement was that, wherever possible, such helpful material was placed on the same pages as the relevant table or, if not, very close by—compared with the single slab of explanation at the start of Statistics Tables.
Looking back, it is amusing that some of my colleagues in the Mathematics Department told me they thought I was wasting my time in publishing these books because before long all such information would be easily available on computers and calculators. Why amusing? Because, over a quarter of a century later, a letter arrived out of the blue from Rob Langham, the Senior Publisher for Economics and Finance at Routledge, saying that, adding up the sales over those 25+ years, I was now one of their “leading academic authors”, and would I consider producing new editions?
But what could I put into new editions (other than some general improvement and tidying up of what was already there)? Before long, I knew what I wanted to do, but imagined it would not be possible. To show you why, let me fill out something of the quarter-century gap between the first editions of the books and Rob’s invitation to produce new editions.
During the 1980s I was privileged to start working with Dr W Edwards Deming (1900–1993). Dr Deming (see left of photo above) is oftentimes described as the American statistician who, in the early 1950s, taught the Japanese about both quality and management. His work in Japan is still commemorated through awards of that country’s Deming Prize to both individuals and organisations worldwide for major contributions to, and success with, improved quality. Dr Deming remained substantially unrecognized in the West until the late 1970s. In 1980 I was fortunate enough to be appointed Statistical Quality Advisor to British subsidiaries of the first American company to become seriously interested in his work. Then, beginning in 1985, I served as Dr Deming’s primary assistant on all of his visits to this country to present his celebrated four-day management seminars and to lead study and research sessions, etc. My life was destined never to be the same again!
Earlier, you may have been a little surprised, in view of my background, to note that I didn’t say anything about including time-series analysis in that first-year course at Nottingham. The reason was this. Although obviously respectful of the work produced by Profs Granger, Jenkins and Box, I felt those approaches required a higher level of mathematical ability than most of my students possessed. There were some more elementary approaches to time-series analysis around (e.g. using simple additive and multiplicative models) but they seemed too crude to be of much practical use. So I didn’t know of anything suitable to include in the course beyond the standard work on regression techniques which was of course already there.
Dr Deming’s work is to do with management, with quality, with process improvement, etc, and so analysis of time series obviously had to be an important requirement. So what did he use? Answer: none of the above! The only tool he ever mentioned in his four-day seminars etc was something which I, and many others, had only come across previously (if at all) in connection specifically with manufacturing processes: and that was the control chart. But Deming’s work was, and is, primarily aimed at management—and it’s not just manufacturing companies that need good management! Initially there were two big mysteries here for me. All I understood about control charts (as evidenced by the single page on the topic in the original editions of both my books of tables) was that (a) you need to get a small sample of values at each time-point to be charted and (b) the underlying theory depended on the data being normally distributed. As already implied, these requirements seemed pretty reasonable with many manufacturing processes. But, to put it mildly, they appeared rather dubious in the much wider range of applications where managers and others need to analyze time series: most processes have only a single value available at any time-point and (though mathematical statisticians wish and oftentimes seem to assume it’s true) not everything in this world is normally distributed!
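For readers who have never met the manufacturing version, requirement (a) amounts to the following calculation. Here is a minimal sketch, in Python, of how the classical subgroup-based chart sets its limits: the grand mean of the subgroup means, plus or minus a tabulated constant A2 times the average subgroup range. The function name and the example data are my own illustration; the A2 values are the standard tabulated constants for subgroup sizes 2 to 5.

```python
# Classical X-bar chart: a small sample (subgroup) of values is taken at
# each time-point, and the control limits are set at the grand mean plus
# or minus A2 times the mean subgroup range. A2 is a standard tabulated
# constant depending on the subgroup size.

A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}

def xbar_limits(subgroups):
    """Return (lower limit, centre line, upper limit) for an X-bar chart."""
    n = len(subgroups[0])                          # subgroup size
    means = [sum(s) / n for s in subgroups]        # mean of each subgroup
    ranges = [max(s) - min(s) for s in subgroups]  # range of each subgroup
    grand_mean = sum(means) / len(means)
    mean_range = sum(ranges) / len(ranges)
    return (grand_mean - A2[n] * mean_range,
            grand_mean,
            grand_mean + A2[n] * mean_range)

# Example: five subgroups of size 4, one subgroup per time-point
data = [[10, 12, 11, 13], [9, 11, 10, 12], [11, 13, 12, 10],
        [10, 10, 12, 12], [12, 11, 9, 12]]
lcl, cl, ucl = xbar_limits(data)
```

Note what the sketch makes plain: the method presupposes that a whole subgroup of values is available at every time-point, which is exactly the requirement that looked so dubious outside manufacturing.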
My two-part dilemma fortunately began to be solved within weeks of my first four-day seminar with Dr Deming in 1985, when I met Dr Donald Wheeler at the first American event I attended with Dr Deming at the helm. First, Don made me aware that Dr Walter Shewhart, who invented the control chart back in the 1920s, did not require data to be normally distributed. In fact, in some delightful language from his famous book of 1931 (despite its title of Economic Control of Quality of Manufactured Product) he stated:
“the fact that the criterion which we happen to use has a fine ancestry of highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works … the proof of the pudding is in the eating.”
And the second part of the dilemma was solved by Don’s popularising a variant of the manufacturing version of the control chart which requires only a single value at any time-point rather than a sample. And it is a beautifully simple tool. At the time he was referring to it by the rather off-putting name of the “XmR chart”; however some time later he started calling it the “process behavior chart”—more syllables but also more descriptive and friendly!
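To make that beautiful simplicity concrete, here is a minimal sketch, in Python, of the arithmetic behind the XmR / process behavior chart; the function name and example data are my own illustration, while 2.66 and 3.268 are the standard XmR scaling constants. From a single value per time-point one forms the moving ranges (the absolute differences between successive values), and the natural process limits are the mean of the values plus or minus 2.66 times the average moving range.

```python
# XmR (process behavior) chart: needs only a single value per time-point.
# Moving ranges are the absolute differences between successive values;
# the natural process limits sit at the mean plus/minus 2.66 times the
# average moving range, and the mR chart's upper limit at 3.268 times it.

def xmr_limits(values):
    """Return (centre line, lower limit, upper limit, upper range limit)."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    centre = sum(values) / len(values)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    lower = centre - 2.66 * mr_bar   # lower natural process limit
    upper = centre + 2.66 * mr_bar   # upper natural process limit
    upper_range = 3.268 * mr_bar     # upper limit for the mR chart
    return centre, lower, upper, upper_range

# Example: one figure per time period, e.g. a monthly management report
centre, lower, upper, upper_range = xmr_limits([10, 12, 11, 14, 13])
```

Points falling outside the natural process limits signal that something beyond routine variation is at work; that is the whole of the arithmetic, which is precisely why delegates who “couldn’t do Statistics” could master it in a day.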
To put you in the picture about the process behavior chart, I can do no better than quote an introductory paragraph from the new edition of Elementary Statistics Tables:
“So what’s new? The majority of the added material focuses on a remarkable statistical technique which, to all intents and purposes, was unknown at the time of the original edition. At this time of writing it is still largely unknown, especially in academia—it hasn’t yet reached most of the introductory Statistics books and courses. But, during the second half of my career (mostly spent outside academia, unlike the first half), I found the process behavior chart unbelievably useful due to its (I believe) unique combination of simplicity and effectiveness. Of course, you won’t find it on examination papers. But if you want to analyze and understand data out there in the “real world”, I believe you’ll find it invaluable. Many delegates, sent by their boss, would arrive at my public and in-house seminars in fear and trepidation: they’d never been able to “do Statistics”—they hated the subject! By the end of the day they could understand the process behavior chart, and they could use it, and they could communicate with it.”
So why did I say earlier, concerning the new editions, “I knew what I wanted to do, but imagined it would not be possible”? Obviously I wanted to introduce the process behavior chart to the users of my books. Now these are books of Statistics Tables—but construction and interpretation of the process behavior chart doesn’t even require the user to refer to any tables!
To overcome the problem, I drafted some material and showed it to Rob Langham. After a little discussion he agreed the technique was so important that it should indeed be included. And subsequently, bless him, he successfully steered through Routledge’s Editorial Board my proposal to include a substantial “teach-yourself” section on process behavior charts in the new editions of both books.
In the final, say, 15 years before I retired in 2004, in my work with companies and in my public and in-house seminars, I used the process behavior chart far more than all other statistical techniques put together. Try it. When you retire you might find yourself saying much the same thing!