…finding patterns that match the output you require.
Finding solutions and counterexamples
The previous section introduces you to the vagaries of discovering real-world solutions, ones that consider issues that solutions found in the lab can’t take into account. However, just finding a solution — even a good one — isn’t sufficient because even good solutions fail on occasion. Playing the devil’s advocate by locating counterexamples is an important part of starting to solve a problem. The purpose of counterexamples is to
Potentially disprove the solution
Provide boundaries that define the solution better
Consider situations in which the hypothesis used as a basis for the solution remains untested
Help you understand the limits of the solution
A common scenario that illustrates a solution and counterexample is the statement that all prime numbers are odd. (Prime numbers are positive integers that are evenly divisible only by themselves and 1.) Of course, the number 2 is prime, but it’s also even, which makes the original statement false. Someone making the statement could then qualify it by saying that all prime numbers are odd except 2. The partial solution to the problem of finding all the prime numbers is that you need to find odd numbers, except in the case of 2, which is even. In this second case, disproving the solution is no longer possible, but adding to the original statement provides a boundary.
By casting doubt on the original assertion, you can also consider situations in which the hypothesis, all prime numbers except 2 are odd, may not hold true. For example, 1 is an odd number but isn’t considered prime (see the discussion at https://primes.utm.edu/notes/faq/one.html for details). So now the original statement has two boundaries, and you must restate it as follows: Prime numbers are greater than 1 and usually odd, except for 2, which is even. The boundaries for prime numbers are better defined by locating and considering counterexamples.
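To make the counterexample hunt concrete, here’s a minimal Python sketch that tests the original claim by searching for primes that aren’t odd. (The is_prime helper and the search range of 1 through 99 are choices made for this illustration, not anything the problem prescribes.)

def is_prime(n):
    # A number is prime when it's greater than 1 and evenly
    # divisible only by itself and 1.
    if n < 2:
        return False
    for divisor in range(2, int(n ** 0.5) + 1):
        if n % divisor == 0:
            return False
    return True

# Hunt for counterexamples: primes that aren't odd.
counterexamples = [n for n in range(1, 100)
                   if is_prime(n) and n % 2 == 0]
print(counterexamples)   # Prints [2], disproving the original claim.

Widening the search range never adds to the list, but the single counterexample, 2, is enough to force the restated, bounded claim.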
As the complexity of a problem grows, the potential for finding counterexamples grows as well. An essential rule to consider is that, as with reliability, having more failure points means greater potential for a failure to occur. Thinking of algorithms in this way is important. Ensembles of simple algorithms can produce better results with fewer potential counterexamples than a single complex algorithm.
Standing on the shoulders of giants
A myth that defies explanation is that the techniques currently used to process huge quantities of data are somehow new. Yes, new algorithms do appear all the time, but the basis for these algorithms is all of the algorithms that have gone before. In fact, when you think about Sir Isaac Newton, you might think of someone who invented something new, yet even he stated (using correct spelling for his time), “If I have seen further it is by standing on the sholders of Giants” (see https://en.wikiquote.org/wiki/Isaac_Newton for additional quotes and insights).
The algorithms you use today weren’t even new in the days of Aristotle (see https://plato.stanford.edu/entries/aristotle-mathematics/) and Plato (see https://www.storyofmathematics.com/greek_plato.html). The origins of algorithms in use today are so hidden in history that the best anyone can say is that math relies on adaptations of knowledge from ancient times. The use of algorithms since antiquity should give you a certain feeling of comfort because the algorithms in use today are based on knowledge tested for thousands of years.
This isn’t to say that some mathematicians haven’t overturned the apple cart over the years. For example, John Nash’s theory, the Nash Equilibrium, significantly changed how economics is considered today (see https://www.masterclass.com/articles/nash-equilibrium-explained). Of course, recognition for such work comes slowly. Nash had to wait a long time before he received much in the way of professional recognition (see the story at https://www.princeton.edu/main/news/archive/S42/72/29C63/index.xml), despite having won a Nobel Prize in economics for his contributions. Just in case you’re interested, John Nash’s story is depicted in the movie A Beautiful Mind, which contains some much-debated scenes, including one in which a character claims that the Nash Equilibrium somehow overturns some of the work of Adam Smith, another contributor to economic theories. (See one such discussion at https://www.quora.com/Was-Adam-Smith-wrong-as-claimed-by-John-Nash-in-the-movie-A-Beautiful-Mind.)
Dividing and Conquering
If solving problems were easy, everyone would do it. However, the world is still filled with unsolved problems, and the condition isn’t likely to change anytime soon, for one simple reason: Problems often appear so large that no solution is imaginable. Ancient warriors faced a similar problem. An opposing army would seem so large, and their own forces so small, as to make the problem of winning a war unimaginably hard, perhaps impossible. Yet, by dividing the opposing army into small pieces and attacking it a little at a time, a small army could potentially defeat a much larger opponent. (The ancient Greeks, Romans, and Napoleon Bonaparte were all great users of the divide-and-conquer strategy; see Napoleon For Dummies, by J. David Markham [Wiley], for details.)
You face the same problem as those ancient warriors. Often, the resources at your disposal seem quite small and inadequate. However, by dividing a huge problem into small pieces so that you can understand each piece, you can eventually create a solution that works for the problem as a whole. Algorithms have this premise at their core: to use steps to solve problems one small piece at a time. The following sections help you understand the divide-and-conquer approach to problem solving in more detail.
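As a minimal sketch of the idea in Python (the find_max function is invented here for illustration; it isn’t one of the formal algorithms covered later), the following code finds the largest value in a list by dividing the list in half, conquering each half separately, and then combining the two partial answers:

def find_max(values):
    # Divide and conquer; assumes a nonempty list.
    # A one-item list is small enough to solve directly,
    # so it serves as the base case.
    if len(values) == 1:
        return values[0]
    middle = len(values) // 2
    left_max = find_max(values[:middle])    # Solve the left piece.
    right_max = find_max(values[middle:])   # Solve the right piece.
    return max(left_max, right_max)         # Combine the partial results.

print(find_max([3, 41, 12, 9, 74, 2]))   # Prints 74.

Each recursive call works on a problem half the size of the one before it, which is precisely the small-pieces strategy the ancient warriors relied on.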
Avoiding brute-force solutions
A brute-force solution is one in which you try each possible answer, one at a time, to locate the best possible answer. (This is also called an exhaustive approach, mostly because you’re so tired when you finish.) It’s thorough, this much is certain, but it also wastes time and resources in most cases. Testing every answer, even when it’s easy to prove that a particular answer has no chance of success, wastes time that an algorithm can use on answers that have a better chance of success. In addition, testing the various answers using this approach generally wastes resources, such as memory. Think of it this way: You want to break the combination for a lock, so you begin at 0, 0, 0, even though you know that this particular combination has no chance of success given the physical characteristics of combination locks. A brute-force solution would proceed with testing 0, 0, 0 anyway and then move on to the equally ridiculous 0, 0, 1.
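Here’s what the lock example looks like as a Python sketch. (The three-dial, ten-digit lock and the check_lock stand-in are hypothetical details added for illustration.)

import itertools

SECRET = (4, 2, 7)   # The combination the search must discover.

def check_lock(combination):
    # Stand-in for physically trying a combination on the lock.
    return combination == SECRET

# Brute force: try (0, 0, 0), then (0, 0, 1), and so on,
# even though early combinations have no realistic chance.
for attempt in itertools.product(range(10), repeat=3):
    if check_lock(attempt):
        print("Opened with", attempt)
        break

In the worst case, the loop performs all 1,000 tests, which is why the approach scales so poorly as the number of dials grows.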
Every solution type does come with advantages, although sometimes quite small. A brute-force solution has one such advantage. Because you test every answer, you don’t need to perform any sort of preprocessing. The time saved in skipping the preprocessing, though, is unlikely to ever pay back the time lost in trying every answer. However, you may find occasion to use a brute-force solution when
Finding a perfect solution, if one exists, is essential.
The problem size is limited.
You can use heuristics to reduce the size of the solution set (the sketch after this list shows one way).
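To illustrate that last point, a heuristic can prune hopeless answers before the exhaustive test begins. In this Python sketch, the rule that the lock maker never issues a combination repeating a single digit is a made-up heuristic, purely for illustration:

import itertools

def plausible(combination):
    # Made-up heuristic: assume the lock maker never issues a
    # combination that uses one digit three times, such as (0, 0, 0).
    return len(set(combination)) > 1

candidates = [c for c in itertools.product(range(10), repeat=3)
              if plausible(c)]
print(len(candidates))   # Prints 990: the heuristic pruned 10 candidates.

The pruning here is tiny, but on larger problems a good heuristic can shrink the solution set enough to make an otherwise impractical brute-force search feasible.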