Keeping it simple, silly (KISS)
The brute-force solution, described in the previous section, has a serious drawback. It looks at the entire problem at one time. It’s sort of like going into a library and hunting book by book through the shelves without ever considering any method of making your search simpler. The divide-and-conquer approach to book searches is different. In this case, you begin by dividing the library into children’s and adults’ sections. After that, you divide the adults’ section into categories. Finally, you search just the part of the category that contains the book of interest. This is the purpose of classification systems such as the Dewey Decimal System (see https://mcpl.info/childrens/how-use-dewey-decimal-system). The point is that divide and conquer simplifies the problem.
The divide part of divide and conquer is also an essential way to understand a problem better. Trying to understand the layout of an entire library could prove difficult. However, knowing that the book you want to find on comparative psychology appears as part of Class 100 in Division 150 of Section 156 makes your job easier. You can understand this smaller problem because you know that every Section 156 book will contain something about the topic you want to know about. Algorithms work the same way. By making the problem simpler, you can create a simpler set of steps for finding a solution, which reduces the time to find the solution, reduces the number of resources used, and improves your chances of finding precisely the solution you need.
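To see the difference in code, here is a rough Python sketch (the shelf data and function names are purely illustrative) that contrasts a brute-force scan with a divide-and-conquer binary search over a sorted shelf of Dewey section numbers:

def brute_force_find(shelf, target):
    # Look at every book in turn, just as you would walk the stacks.
    for position, call_number in enumerate(shelf):
        if call_number == target:
            return position
    return -1

def divide_and_conquer_find(shelf, target):
    # Repeatedly split the sorted shelf in half and keep only the half
    # that could contain the target, much like narrowing a library
    # search from class to division to section.
    low, high = 0, len(shelf) - 1
    while low <= high:
        middle = (low + high) // 2
        if shelf[middle] == target:
            return middle
        if shelf[middle] < target:
            low = middle + 1
        else:
            high = middle - 1
    return -1

shelf = [100, 110, 150, 152, 156, 158, 170]   # sorted Dewey sections
print(brute_force_find(shelf, 156))           # 4
print(divide_and_conquer_find(shelf, 156))    # 4

The brute-force version may touch every entry; the divide-and-conquer version discards half of the remaining entries at every step.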
Breaking down a problem is usually better
After you have divided a problem into manageable pieces, you need to conquer the piece in question. This means creating a precise problem definition. You don’t want just any book on comparative psychology; you want one written by George Romanes. Knowing that the book you want appears in Section 156 of the Dewey Decimal System is a good start, but it doesn’t solve the problem. Now you need a process for reviewing every book in Section 156 for the specific book you need. The process might go further still and look for books with specific content. To make this process viable, you must break the problem down completely, define precisely what you need, and then, after you understand the problem thoroughly, use the correct set of steps (algorithm) to find what you need.
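In code, the conquer step amounts to applying your precise definition to the smaller pile that dividing leaves behind. The catalog records in this Python sketch are made up for illustration:

# Hypothetical catalog records: (dewey_section, author, title)
catalog = [
    (152, "Weber", "Perception and Sensation"),
    (156, "Romanes", "Animal Intelligence"),
    (156, "Morgan", "An Introduction to Comparative Psychology"),
    (158, "James", "Talks to Teachers"),
]

# Divide: keep only the books shelved in Section 156.
section_156 = [book for book in catalog if book[0] == 156]

# Conquer: apply the precise definition to the smaller pile.
wanted = [title for section, author, title in section_156
          if author == "Romanes"]

print(wanted)   # ['Animal Intelligence']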
Learning that Greed Can Be Good
In some cases, you can’t see the end of a solution process or even know whether you’re winning the war. All you can really do is ensure that you win the individual battles in hopes of also winning the war. A greedy approach to problem solving works this way: it looks for an overall solution by choosing the best possible outcome at each stage of the solution.
It seems that winning each battle would necessarily mean winning the war as well, but sometimes the real world doesn’t work that way. A Pyrrhic victory is one in which someone wins every battle but ends up losing the war because the cost of the victories far exceeds the gains from winning them. You can read about five Pyrrhic victories at https://www.history.com/news/5-famous-pyrrhic-victories. These histories show that a greedy algorithm often does work, but not always, so you need to consider the best overall solution to a problem rather than become blinded by interim wins. The following sections describe how to avoid the Pyrrhic victory when working with algorithms.
Applying greedy reasoning
Greedy reasoning is often used as part of an optimization process. The algorithm views the problem one step at a time and focuses just on the step at hand. Every greedy algorithm makes two assumptions:
You can make a single optimal choice at a given step.
By choosing the optimal selection at each step, you can find an optimal solution for the overall problem.
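In code, every greedy algorithm follows roughly the same skeleton. The function and parameter names in this Python sketch are placeholders for whatever a specific problem supplies:

def greedy_solve(state, candidates, choose_best, apply_choice, is_done):
    # At every step, commit to the locally best candidate (assumption 1)
    # and never reconsider the decision, hoping the final result is
    # optimal overall (assumption 2).
    solution = []
    while candidates and not is_done(state):
        best = choose_best(state, candidates)
        state = apply_choice(state, best)
        solution.append(best)
        candidates.remove(best)
    return solution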
You can find many greedy algorithms, each optimized to perform particular tasks. Here are some common examples of greedy algorithms used for graph analysis (see Chapter 9 for more about graphs) and data compression (see Chapter 14 for more about data compression) and the reason you might want to use them:
Kruskal’s Minimum Spanning Tree (MST): At each step, the algorithm chooses the remaining edge with the smallest weight that doesn’t create a cycle, not the greatest value as the word greedy might initially convey. This sort of algorithm might help you connect a set of locations on a map using the least total cable or roadway, or perform other graph-related tasks.
Prim’s MST: This algorithm works on an undirected graph (one in which direction isn’t considered) by splitting the nodes into two groups: those already in the growing spanning tree and those not yet added. At each step, it selects the lowest-weight edge that connects the two groups, keeping the total weight of the tree as small as possible. You might find this algorithm used in a maze game to generate the maze itself, carving one lowest-cost passage at a time.
Huffman Encoding: The algorithm assigns a code to every unique data entry in a stream of entries, with the most commonly used data entry receiving the shortest code. For example, the letter E would normally receive the shortest code when compressing English text, because you use it more often than any other letter in the alphabet. By changing the encoding technique, you can compress the text and make it considerably smaller, reducing transmission time.
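To see the greedy choice at work in Huffman encoding, this simplified Python sketch (the letter frequencies are made up) repeatedly merges the two least frequent entries, so that frequent letters such as E end up with the shortest codes:

import heapq

def huffman_codes(frequencies):
    # Each heap entry is (frequency, tie_breaker, {symbol: code_so_far}).
    heap = [(freq, i, {symbol: ""})
            for i, (symbol, freq) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        # Greedy step: merge the two least frequent subtrees.
        freq_a, _, codes_a = heapq.heappop(heap)
        freq_b, _, codes_b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes_a.items()}
        merged.update({s: "1" + c for s, c in codes_b.items()})
        heapq.heappush(heap, (freq_a + freq_b, tie, merged))
        tie += 1
    return heap[0][2]

print(huffman_codes({"E": 12, "T": 9, "A": 8, "Q": 1}))
# E (the most frequent letter) gets a one-bit code; rarer letters get longer codes.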
Reaching a good solution
Scientists and mathematicians use greedy algorithms so often that Chapter 15 covers them in depth. However, what you really want is a good solution, not just a particular solution. In most cases, a good solution provides optimal results of the sort you can measure, but the word good can include many meanings, depending on the problem domain. You must ask what problem you want to solve and which solution solves the problem in a manner that best meets your needs. For example, when working in engineering, you might need to weigh solutions against weight, size, cost, or other factors, or perhaps some combination of them that meets a specific requirement.
To put this issue in context, say that you build a coin machine that creates change for particular monetary amounts using the fewest coins possible (maybe as part of an automatic checkout at a store). The reason to use the fewest coins possible is to reduce equipment wear, the weight of coins needed, and the time required to make change (your customers are always in a hurry, after all). A greedy solution solves the problem by using the largest coins possible. For example, to output $0.16 in change, you use a dime ($0.10), a nickel ($0.05), and a penny ($0.01).
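Here is the greedy rule in a short Python sketch (the coin values and function name are illustrative):

def greedy_change(amount_cents, coin_values):
    # Always take the largest coin that still fits into the remainder.
    change = []
    for coin in sorted(coin_values, reverse=True):
        while amount_cents >= coin:
            change.append(coin)
            amount_cents -= coin
    return change

print(greedy_change(16, [25, 10, 5, 1]))   # [10, 5, 1] -- three coins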
A problem occurs when you can’t use every coin type in creating a solution. The change machine might be out of nickels, for example. To provide $0.40 in change, a greedy solution would start with a quarter ($0.25) and a dime ($0.10). Unfortunately, there are no nickels, so the coin machine then outputs five pennies (5 × $0.01) for a total of seven coins. The optimal solution in this case is to use four dimes instead (4 × $0.10). As a result, the greedy algorithm provides a particular solution, but not an optimal solution. The change-making problem receives considerable attention because it’s so hard to solve in general. You can find additional discussions such as “Combinatorics of the Change-Making Problem,” by Anna Adamaszek and Michal Adamaszek (https://www.sciencedirect.com/science/article/pii/S0195669809001292) and “Coin Change” by Mayukh Sinha (https://www.geeksforgeeks.org/coin-change-dp-7/).
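To see the $0.40 failure concretely in code, this Python sketch repeats the greedy rule with the nickel removed and compares it against a small dynamic-programming search that always finds the fewest coins:

def greedy_change(amount_cents, coin_values):
    # Same greedy rule as before: largest coin first.
    change = []
    for coin in sorted(coin_values, reverse=True):
        while amount_cents >= coin:
            change.append(coin)
            amount_cents -= coin
    return change

def fewest_coins(amount_cents, coin_values):
    # Dynamic programming: best[k] holds the fewest coins that make k cents.
    best = [0] + [None] * amount_cents
    for k in range(1, amount_cents + 1):
        options = [best[k - coin] for coin in coin_values
                   if coin <= k and best[k - coin] is not None]
        best[k] = min(options) + 1 if options else None
    return best[amount_cents]

no_nickels = [25, 10, 1]                    # the nickel hopper is empty
print(len(greedy_change(40, no_nickels)))   # 7 coins: 25 + 10 + five pennies
print(fewest_coins(40, no_nickels))         # 4 coins: four dimes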