Beginning Programming All-in-One For Dummies. Wallace Wang
An algorithm simply defines one of many possible ways to solve a problem.
There’s no single “best” algorithm for writing a program. The same program can be written in a million different ways, so the “best” way to write a program is any way that creates a useful, working, and reliable program as quickly as possible. Anything else is irrelevant.
Knowing what you want the computer to do is the first step. The second step is telling the computer how to do it, which is what makes programming so difficult. The more you want the computer to do, the more instructions you need to give the computer.
Think of a computer program as a recipe. It’s easy to write a recipe for making spaghetti. Just boil water, throw in the noodles until they’re soft, drain, and serve. Now consider a recipe for making butternut squash and potato pie with tomato, mint, and sheep’s milk cheese from Crete. Not as simple as boiling water to make spaghetti, is it?
The same principle holds true for computer programming. The simpler the task, the simpler the program. The harder the task, the bigger and more complicated the program. If you just want a program that displays today’s date on the screen, you won’t need to write many instructions. If you want to write a program that simulates flying a space shuttle in orbit around the Earth, you’ll need to write a lot more instructions.
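For example, a program that displays today's date really can be just a couple of instructions. Here's a minimal sketch in Python (just one of many languages you could use):

import datetime

# Ask the operating system for today's date and display it on the screen.
print(datetime.date.today())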
The more instructions you need to write, the longer it takes and the more likely you’ll make a mistake somewhere along the way.
Ultimately, programming boils down to two tasks:
Identifying exactly what you want the computer to do
Writing step-by-step instructions that tell the computer how to do what you want
The History of Computer Programming
Although computer programming may seem like a recent invention, the idea behind writing instructions for a machine to follow has been around for over a century. One of the earliest designs for a programmable machine (in other words, a computer) came from a man named Charles Babbage way back in 1834.
That was the year Charles Babbage proposed building a mechanical, steam-driven machine dubbed the Analytical Engine. Unlike the simple calculating machines of that time that could perform only a single function, Babbage’s Analytical Engine could perform a variety of tasks, depending on the instructions fed into the machine through a series of punched cards. By changing the number and type of instructions (punched cards) fed into the machine, anyone could reprogram the Analytical Engine to make it solve different problems.
The idea of a programmable machine caught the attention of Ada Lovelace, a mathematician and daughter of the poet Lord Byron. Sensing the potential of a programmable machine, Ada wrote a program to make the Analytical Engine calculate and print a sequence of numbers known as the Bernoulli numbers.
Because of her work with the Analytical Engine, Ada Lovelace is considered to be the world’s first computer programmer. In her honor, the Department of Defense named the Ada programming language after her, and Nvidia named a family of graphics cards after her as well.

Although Charles Babbage never finished building his Analytical Engine, his steam-driven mechanical machine bears a striking similarity to today’s computers. To make the Analytical Engine solve a different problem, you just had to feed it different instructions. To make a modern computer solve a different problem, you just have to run a different program.
Over a century later, the first general-purpose electronic computer appeared when the U.S. Army funded a machine to calculate artillery trajectories. Begun in 1943 and completed in 1945, this computer, dubbed ENIAC (short for Electronic Numerical Integrator and Computer), consisted of vacuum tubes, switches, and cables. To give ENIAC instructions, you had to physically flip its switches and rearrange its cables.
The first ENIAC programmers were all women.
Physically rearranging cables and switches to reprogram a computer worked, but it was tedious and clumsy. Instead of having to physically rearrange the computer’s wiring, computer scientists decided it would be easier if they could leave the computer physically the same but just rearrange the type of instructions given to it. By giving the computer different instructions, they could make the computer behave in different ways.
In the old days, computers filled entire rooms and cost millions of dollars. Today, computers have shrunk so far that their essential circuitry, the processor, fits on a little silicon wafer about the size of a coin.

A processor is essentially an entire computer. To tell the processor what to do, you have to give it instructions written in machine language (a language that the processor can understand).
To make faster computers, engineers combine multiple processors (called cores) and make them work as a team. So, instead of relying on a single processor, the latest computers have multiple cores working on different parts of a problem at the same time.
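Most programming languages let you take advantage of those cores. As a rough illustration, here's a minimal Python sketch that asks how many cores a computer has and spreads a few small tasks across them (the square function and the list of numbers are invented for this example):

import multiprocessing

def square(number):
    # A tiny unit of work for one core to perform.
    return number * number

if __name__ == "__main__":
    # Report how many cores this computer has.
    print("Cores available:", multiprocessing.cpu_count())

    # Hand a list of tasks to a pool of worker processes, which the
    # operating system spreads across the available cores.
    with multiprocessing.Pool() as pool:
        print(pool.map(square, [1, 2, 3, 4]))  # prints [1, 4, 9, 16]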
Talking to a processor in machine language
To understand how machine language works, you have to understand how processors work. Basically, a processor consists of nothing more than millions (nowadays, billions) of tiny switches that can turn on or off. By turning certain switches on or off, you can make the processor do something useful.
Instead of physically turning switches on or off, machine language lets you turn a processor’s switches on or off by using two numbers: 1 (one) and 0 (zero), where the number 1 means “turn a switch on” and the number 0 means “turn a switch off.” So a typical machine language instruction might look like this:
1011 0000 0110 0001
If the preceding instruction doesn’t make any sense, don’t worry. The point is that machine language is just a way to tell a processor what to do.
Writing numbers with only 1s and 0s is known as binary notation. Because long strings of binary digits can be so hard to read, programmers often represent binary numbers in hexadecimal instead. Where binary uses only two digits (0 and 1), hexadecimal uses sixteen (the digits 0–9 plus the letters A–F), so one hexadecimal digit stands in for four binary digits. That means the binary number 1011 0000 0110 0001 can be written as the much shorter hexadecimal number B061.
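You can check that conversion yourself in a language such as Python, whose built-in number formatting handles both bases (the variable name here is made up for this example):

# Read the binary string from this chapter as an ordinary integer (base 2).
instruction = int("1011000001100001", 2)

# Display the same value in hexadecimal (base 16) and back in binary (base 2).
print(format(instruction, "X"))     # prints B061
print(format(instruction, "016b"))  # prints 1011000001100001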
Machine language is considered the native language of CPUs, but almost no one writes a program in machine language because it’s so tedious and confusing. Mistype a single 1 or 0, and you can accidentally give the wrong instruction to the CPU. Because writing instructions in machine language can be so difficult and error-prone, computer scientists have created a somewhat simpler language: assembly language.
Using assembly language as a shortcut to machine language
The whole purpose of assembly language is to make programming easier than machine language. Instead of forcing you to type long strings of 1s and 0s (and risk mistyping a single digit), assembly language lets you write short, readable commands that a program called an assembler translates into the equivalent machine language for you.
Not only does this reduce the chance of mistakes, but it also makes writing a program in assembly language much faster and easier. Best of all, assembly language commands are simple mnemonics, such as MOV (move) or JMP (jump). These mnemonic commands make assembly language much easier to understand than a string of 1s and 0s.
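To see how a mnemonic lines up with its 1s and 0s, here's a minimal Python sketch of a toy assembler that knows exactly one instruction. It assumes the x86 instruction set, where the opcode byte 1011 0000 (B0 in hexadecimal) means "copy the byte that follows into the AL register," so the earlier example instruction 1011 0000 0110 0001 decodes as MOV AL, 61h; the table and function names are invented for this illustration:

# A one-entry lookup table mapping an assembly mnemonic to its opcode byte.
# (A real assembler handles hundreds of instructions and addressing modes.)
OPCODES = {
    "MOV AL": 0xB0,  # x86 opcode: copy the next byte into the AL register
}

def assemble(mnemonic, operand):
    # Translate one mnemonic plus its operand into machine language bits.
    opcode = OPCODES[mnemonic]
    return format(opcode, "08b") + " " + format(operand, "08b")

# MOV AL, 61h produces the same binary instruction shown earlier.
print(assemble("MOV AL", 0x61))  # prints 10110000 01100001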