Design and the Digital Divide. Alan F. Newell

words of storage, a keyboard, a 10-character-per-second printer, and a paper tape reader and punch, and led to my obtaining on-the-job training in software engineering. As a laboratory tool, programming in assembly language was essential, and many hours were spent trying to produce very efficient programmes which fitted into the 4K of storage that was available. Thus, I was again fortunate in being an early user of laboratory computers—initially for developing laboratory tools and simulating electronic circuits, and latterly as prototype electronic systems in their own right.

      The other major advantage of my time in these laboratories was that I was able to read widely in the general area of speech and to be an early investigator of human-computer interfaces. Up until that time, the majority of work on human factors had been supported by the military, with a focus on knobs and dials. As will be seen, the background that this gave me proved to be essential to many of the research issues that I subsequently investigated.

       Opportunism—a valid research strategy.

      What, in retrospect, was the turning point in my career path occurred by chance. It was becoming increasingly clear that the automatic speech recognition project—like the vast majority of such projects at that time—was not going to achieve its aim. One day, my immediate boss, almost as an aside, suggested that our technology might be beneficial to people with disabilities. As it was not possible at that time to recognise speech, I developed “VOTEM”, a Voice Operated Typewriter employing Morse Code. The idea was that a disabled person could speak Morse Code (dots and dashes) to spell out what they wanted to type.
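      To make the idea concrete, the decoding step of such a device amounts to a lookup from dot-dash sequences to letters. The sketch below is purely illustrative (the token format, with “/” as a word separator, is my assumption, not a detail of VOTEM):

```python
# Minimal sketch of Morse-code decoding of the kind a VOTEM-like device
# would need once the dots and dashes have been recognised. The input format
# (space-separated letters, "/" between words) is assumed for illustration.

MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode_morse(message: str) -> str:
    """Decode space-separated Morse letters; '/' marks a word boundary."""
    words = []
    for word in message.split("/"):
        letters = [MORSE_TO_CHAR.get(token, "?") for token in word.split()]
        words.append("".join(letters))
    return " ".join(words)

if __name__ == "__main__":
    print(decode_morse(".... . .-.. .-.. --- / .-- --- .-. .-.. -.."))  # HELLO WORLD
```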

       Technology push is sometimes a useful driver.

      VOTEM was licensed to a manufacturer of systems for disabled people, but it was never made commercially available. This particular piece of research did not have a commercial outcome, but it did interest me in developing systems for disabled people. There are many and varied reasons why researchers move into this field—some because of a relationship with disabled people, either directly or via discussions with clinicians—but in my case it was a potential use for technology. Technologically led research, particularly in this field, can be problematic and, although disabled people were not involved in the development of VOTEM, they should have been. This experience gave me an interest in, and a particular perspective on, this area of research, and much of my subsequent work has promoted the idea of “user-centred design”.

      I was appointed to a lectureship in Electronics at Southampton University and decided that developing systems for disabled people would be one of my major research interests. In comparison with the organisation of many universities in the 21st Century, the choice of research area was left entirely up to me—there was no institutional pressure for or against such a choice. The research in this and subsequent sections of this chapter will be described very briefly, with the details and rationale being expanded upon in later chapters.

      Designing VOTEM had opened my eyes to the communication problems of people without speech, and my readings in psychology had made me aware of the many and varied characteristics of speech, including speech being more than just the words spoken, and the importance of body language. I was struck by the fact that all the systems that had been developed for non-speaking people required the non-speaking person and their communication partner to look at a single screen or printer—which meant that eye contact, which is very important in face-to-face communication between speaking people, was not possible. The “Talking Brooch” [Newell, A., 1974a] consisted of a small “rolling” alphanumeric display worn on the lapel and operated via a hand-held keyboard, and was designed to provide eye contact for non-speaking people. I wrote a simulation of this on a PDP-12 (the successor to the PDP-8 as a laboratory computer), and my team subsequently developed a dedicated electronic version. (In the early 1970s, even “small computers” were very large.)
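      The behaviour of such a “rolling” display is easy to picture in code: each newly typed character enters at the right of a fixed-width window and older characters fall off the left, so the communication partner reads the message as it scrolls past. The sketch below is my own illustration; the window width and behaviour are assumptions, not details of the original device.

```python
# Minimal sketch of a one-line "rolling" display in the spirit of the
# Talking Brooch. The ten-character width is an assumption for illustration.

from collections import deque

class RollingDisplay:
    def __init__(self, width: int = 10):
        # The display shows only the most recent `width` characters.
        self.cells = deque(" " * width, maxlen=width)

    def type_char(self, ch: str) -> str:
        """Append one typed character; older characters roll off the left."""
        self.cells.append(ch)
        return "".join(self.cells)

if __name__ == "__main__":
    display = RollingDisplay(width=10)
    for ch in "HELLO THERE":
        print(f"[{display.type_char(ch)}]")
```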

       Encourage serendipity.

      The Talking Brooch was a major factor in my being awarded a Winston Churchill Travel Fellowship in 1976 to visit researchers in the U.S. This was an immensely useful experience, enabling me to visit the major players in the field. Many of the people I met have remained friends and colleagues throughout my career. The Winston Churchill Travel Fellowship was excellent in that its modus operandi was to choose Fellows on the basis of their ideas and then give them the freedom to plan their fellowships without having to check back with the Trust. For example, I was advised not to have a full diary, so that I could follow up leads that arose during the tour. Had it not been for this flexibility, I would never have met the New York-based speech pathologist Arleen Kraat. She was not on my original itinerary, but she became a very important mentor and supporter throughout my research activities in this field.

      The Travel Fellowship confirmed my view that research into, and development of, systems to assist people with disabilities was an area which was satisfying and which I would enjoy. The area of assisting human communication was particularly interesting and held some exciting technological challenges, but serendipity led to the specific projects that my team and I pursued.

      Lewis Carter-Jones, a Member of Parliament, was visiting the Department at Southampton, and I demonstrated the Talking Brooch to him. He was a friend of Jack (now Lord) Ashley, a Labour MP who had suddenly become deaf. He suggested that the Brooch would be useful in assisting Jack in the House of Commons, where he had great difficulty in following debates, and he arranged for me to meet Jack in the House. This led to my team developing a transcription system for machine shorthand, in particular the Palantype Machine that had been developed in the UK. This system provided a verbatim transcript of speech on a display screen. It became the first computer system to be used in the Chamber of the House of Commons, and it led the field in commercially available real-time systems for stenograph transcription.
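      At its core, machine-shorthand transcription of this kind maps the chords produced on the shorthand keyboard to English words via a large dictionary. The sketch below is only a toy illustration of that lookup step: the chord spellings and dictionary entries are invented, not genuine Palantype chords, and a real system must also resolve multi-chord words and conflicting entries in real time.

```python
# Toy sketch of dictionary-based shorthand transcription. The chord strings
# below are invented for illustration; they are not Palantype chords.

CHORD_DICTIONARY = {
    "TH": "the",
    "HOWS": "house",
    "OF": "of",
    "KOMONS": "commons",
}

def transcribe(chords):
    """Map each chord to a word; unknown chords are echoed so nothing is lost."""
    return " ".join(CHORD_DICTIONARY.get(chord, chord.lower()) for chord in chords)

if __name__ == "__main__":
    print(transcribe(["TH", "HOWS", "OF", "KOMONS"]))  # -> "the house of commons"
```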

      My investigations into the needs of deaf people led me to consider television subtitling, and we investigated ways in which “closed captions” could be transmitted. This research was superseded when the UK teletext services, Oracle and Ceefax, were developed. We thus refocussed our research on the characteristics required for effective captioning and on developing equipment that would enable captioners to work efficiently. Although the Independent Television authorities supported the former research, they did not see any need for new equipment. So again there was no support from the potential users of such a system, but Andrew Lambourne, a research student at the time, continued this development both as a PhD topic and as a commercial venture [Lambourne et al., 1982a]. In 2011 he continues to run a successful company marketing this type of equipment.

       Research without stakeholder support can still be valuable.

      The Palantype and subtitling projects were brought together in our research into live subtitling. ITV supported our work, our system being used for the Charles and Diana royal wedding, whereas the BBC supported Ernest Edmonds at Leicester University. In those early days, although subtitling for deaf people added only one third of one percent to the cost of programmes, it was deemed to be too high a price to pay (to assist 10% of the audience!). After much lobbying, this view changed, and a large percentage of programmes in the UK are now subtitled, including 100% of the British Broadcasting Corporation’s output, with news being subtitled mainly by stenographers.

      In 1980, I moved to the NCR Chair of Electronics and Microcomputer Systems in Dundee University’s Electrical Engineering and Electronics Department. There I founded a group investigating the uses of microcomputers, with a special interest in disabled people. This again was not a “strategic” decision by the University—their aim was to expand their research and teaching in microcomputers. My research group was not a good fit with the Electrical Engineering and Electronics Department and, in 1986, the group joined Mathematics to produce a Department of Mathematics and Computer Science. Later, it became a stand-alone Department of Applied Computing and subsequently the School of Computing.

