The complete program is built up by adding similar implementations of each remaining step in the algorithm.
Removing the wheel from a car seems like such a simple task, and yet it takes 11 instructions in a language designed specifically for tire changing just to get the lug nuts off. Once completed, this program is likely to include some 60 or 70 steps with numerous loops, and even more steps are needed if you add logic to check for error conditions like stripped or missing lug nuts.
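Just to give you a taste of what such a loop looks like in a real programming language, here is a minimal C++ sketch of the lug-nut step. The loop count and the function name loosenLugNut() are pure inventions for illustration; they aren't part of any actual Tire-Changing Language:

#include <iostream>
using namespace std;

// Hypothetical helper that stands in for "apply wrench and turn counterclockwise"
void loosenLugNut(int whichNut)
{
    cout << "Loosening lug nut number " << whichNut << endl;
}

int main()
{
    const int numberOfLugNuts = 5;   // assumption: five nuts on this wheel

    // The inner loop: repeat the same instruction once for each lug nut
    for (int nut = 1; nut <= numberOfLugNuts; nut++)
    {
        loosenLugNut(nut);
    }
    return 0;
}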
Think of how many instructions have to be executed just to do something as mundane as moving a window about on the display screen (remember that a typical screen may display 1280 x 1024 pixels, a little over 1.3 million of them). Fortunately, though stupid, a computer processor is very fast. For example, the processor in your PC can likely execute several billion instructions per second. The instructions in your generic processor don't do very much (it takes several instructions just to move one pixel), but when you can rip through a few billion of them every second, scrolling a mere million pixels becomes child's play.
The computer will not do anything that it hasn't already been programmed to do. The creation of a Tire-Changing Language was not enough to replace my flat tire; someone had to write the program instructions that map out, step by step, exactly what the computer is to do. And writing a real-world program designed to handle all the special conditions that can arise is not an easy task. Writing an industrial-strength program is probably the most challenging enterprise you can undertake.
So the question becomes: “Why bother?” Because once the computer is properly programmed, it can perform the required function repeatedly, tirelessly, and usually at a greater speed than is possible under human control.
The Tire-Changing Language isn’t a real computer language, of course. Real computers don’t have machine instructions like grab or turn. Worse yet, computers “think” by using a series of ones and zeros. Each internal command is nothing more than a sequence of binary numbers. Real computers have instructions like 01011101, which might add 1 to a number contained in a special-purpose register. As difficult as programming in TCL might be, programming by writing long strings of numbers is even harder.
The native language of the computer is known as machine language and is usually represented as a sequence of numbers written either in binary (base 2) or hexadecimal (base 16). The following shows the opening bytes of the Conversion program from Chapter 3.
<main+0>:  01010101 10001001 11100101 10000011 11100100 11110000 10000011 11101100
<main+8>:  00100000 11101000 00011010 01000000 00000000 00000000 11000111 01000100
<main+16>: 00100100 00000100 00100100 01110000 01000111 00000000 11000111 00000100
<main+24>: 00100100 10000000 01011111 01000111 00000000 11101000 10100110 10001100
<main+32>: 00000110 00000000 10001101 01000100 00100100 00010100 10001001 01000100
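(That very first byte, 01010101, is 0x55 when written in hexadecimal. It happens to be the Intel instruction that pushes the EBP register onto the stack, and you'll see it again as the first line of the assembly-language listing a little later in this section.)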
Fortunately, no one writes programs in machine language anymore. Very early on, someone figured out that it's much easier for a human to understand ADD 1,REG1 as "add 1 to the value contained in register 1," rather than 01011101. In the "post-machine-language era," the programmer wrote her programs in this so-called assembly language and then submitted them to a program called an assembler, which converted each of these instructions into its machine-language equivalent.
The programs that people write are known as source code because they are the source of all evil – just kidding – actually it’s because they are the source of the program. The ones and zeros that the computer actually executes are called object code because they are the object of so much frustration.
The following represents the first few assembler instructions from the Conversion program when compiled to run on an Intel processor executing Windows. This is the same information previously shown in binary form.
<main>: push ebp
<main+1>: mov ebp, esp
<main+3>: and esp, 0xfffffff0
<main+6>: sub esp, 0x20
<main+9>: call 0x40530c <__main>
<main+14>: movl [esp+0x04],0x477024
<main+22>: movl [esp],0x475f80
<main+29>: call 0x469fac <operator<<>
<main+34>: lea eax,[esp+0x14]
<main+38>: mov [esp+0x04],eax
This is still not very intelligible, but it’s clearly a lot better than just a bunch of ones and zeros. Don’t worry – you won’t have to write any assembly-language code in this book either.
The computer doesn’t actually ever execute the assembly-language instructions. It executes the machine instructions that result from converting the assembly instructions.
Assembly language might be easier to remember than machine language, but there’s still a lot of distance between an algorithm like the tire-changing algorithm and a sequence of MOVE and ADD instructions. In the 1950s, people started devising progressively more expressive languages that could be automatically converted into machine language by a program called a compiler. These were called high-level languages because they were written at a higher level of abstraction than assembly language.
One of the first of these languages was COBOL (Common Business-Oriented Language). The idea behind COBOL was to allow the programmer to write commands that were as much like English sentences as possible. Suddenly programmers were writing sentences like the following to convert temperature from Celsius to Fahrenheit (believe it or not, this is exactly what the machine and assembly-language snippets shown earlier do):
INPUT CELSIUS_TEMP
SET FAHRENHEIT_TEMP TO CELSIUS_TEMP * 9/5 + 32
WRITE FAHRENHEIT_TEMP
The first line of this program reads a number from the keyboard or a file and stores it into the variable CELSIUS_TEMP. The next line multiplies this number by 9/5 and adds 32 to the result to calculate the equivalent temperature in degrees Fahrenheit. The program stores this result in a variable called FAHRENHEIT_TEMP. The last line of the program writes this converted value to the display.
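For example, if you enter a CELSIUS_TEMP of 100, the program calculates 100 * 9/5 + 32, which comes to 212, and writes 212 as the Fahrenheit result (the boiling point of water, just as you'd expect).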
People continued to create different programming languages, each with its own strengths and weaknesses. Some languages, like COBOL, were very wordy but easy to read. Other languages, such as database languages or the languages used to create interactive Web pages, were designed for very specific jobs. These languages include powerful constructs designed to handle their particular problem areas.
C++ (pronounced "C plus plus," by the way) is a symbolically oriented high-level language. C++ started out life as simply C in the 1970s at Bell Labs, where a couple of guys were working on a new idea for an operating system known as Unix (the inspiration for Linux, the foundation of macOS, and still in use across industry and academia today). The original C language was modified slightly and adopted as an ANSI standard in 1989 and as a worldwide ISO standard the following year. C++ was created as an extension to the basic C language, mostly by adding the features that I discuss in Parts V and VI of this book.
When I say that C++ is symbolic, I mean that it isn’t very wordy; it uses symbols instead of the long words in languages like COBOL. However, C++ is easy to read once you’re accustomed to what the symbols mean. The same Celsius-to-Fahrenheit conversion code shown in COBOL earlier appears as follows in C++:
cin >> celsiusTemp;
fahrenheitTemp = celsiusTemp * 9 / 5 + 32;
cout << fahrenheitTemp;
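If you'd like to see those statements in context, here is a minimal sketch of a complete program built around them. The #include line, the variable declarations, and the prompt text are my own additions for illustration and are not necessarily identical to the Conversion program in Chapter 3:

#include <iostream>
using namespace std;

int main()
{
    // Read a temperature in degrees Celsius from the keyboard
    int celsiusTemp;
    cout << "Enter the temperature in Celsius: ";
    cin >> celsiusTemp;

    // Convert to Fahrenheit and display the result
    int fahrenheitTemp;
    fahrenheitTemp = celsiusTemp * 9 / 5 + 32;
    cout << fahrenheitTemp << endl;

    return 0;
}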