Machine Learning For Dummies - John Paul Mueller

target="_blank" rel="nofollow" href="https://www.csee.umbc.edu/courses/471/papers/turing.pdf">https://www.csee.umbc.edu/courses/471/papers/turing.pdf). In this paper, Turing explored the idea of how to determine whether machines can think. Of course, this paper led to the Imitation Game involving three players. Player A is a computer and Player B is a human. Each must convince Player C (a human who can’t see either Player A or Player B) that they are human. If Player C can’t determine who is human and who isn’t on a consistent basis, the computer wins.

      A continuing problem with AI is too much optimism. The problem that scientists are trying to solve with AI is incredibly complex. However, the early optimism of the 1950s and 1960s led scientists to believe that the world would produce intelligent machines in as little as 20 years. After all, machines were doing all sorts of amazing things, such as playing complex games. AI currently has its greatest success in areas such as logistics, data mining, and medical diagnosis.

      Exploring what machine learning can do for AI

      Machine learning relies on algorithms to analyze huge datasets. Currently, machine learning can’t provide the sort of AI that the movies present. Even the best algorithms can’t think, feel, present any form of self-awareness, or exercise free will. What machine learning can do is perform predictive analytics far faster than any human can. As a result, machine learning can help humans work more efficiently. The current state of AI, then, is one of performing analysis, but humans must still consider the implications of that analysis — making the required moral and ethical decisions. The “Considering the Relationship between AI and Machine Learning” section of this chapter delves more deeply into precisely how machine learning contributes to AI as a whole. The essence of the matter is that machine learning provides just the learning part of AI, and that part is nowhere near ready to create an AI of the sort you see in films.
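      To see what “predictive analytics” looks like in practice, consider the tiny sketch below. It assumes Python with the NumPy and scikit-learn packages installed, and the data and model are made-up examples rather than anything from this chapter. The point is simply that the algorithm extrapolates a pattern from examples; it doesn’t think about them.

# A tiny predictive-analytics sketch (assumes NumPy and scikit-learn are
# installed). The model doesn't understand anything; it only extends a
# numeric pattern learned from the examples it was given.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: hours of machine time versus units produced.
hours = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
units = np.array([11, 19, 31, 42, 49])

model = LinearRegression().fit(hours, units)      # "learning" the pattern
print(model.predict(np.array([[6.0], [7.0]])))    # predicting unseen cases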

      

The main point of confusion between learning and intelligence is that people assume that simply because a machine gets better at its job (learning), it’s also aware (intelligence). Nothing supports this view of machine learning. The same phenomenon occurs when people assume that a computer is purposely causing problems for them. The computer has no emotions and therefore acts only upon the input provided and the instructions contained within an application to process that input. A true AI will eventually occur when computers can finally emulate the clever combination used by nature:

       Genetics: Slow learning from one generation to the next

       Teaching: Fast learning from organized sources

       Exploration: Spontaneous learning through media and interactions with others

      Considering the goals of machine learning

Machine Learning versus Statistics

Data handling
  Machine learning: Works with big data in the form of networks and graphs; raw data from sensors or the web is split into training and test data.
  Statistics: Models are used to create predictive power on small samples.

Data input
  Machine learning: The data is sampled, randomized, and transformed to maximize accuracy scoring in the prediction of out-of-sample (or completely new) examples.
  Statistics: Parameters interpret real-world phenomena and place a stress on magnitude.

Result
  Machine learning: Probability is taken into account when comparing what could be the best guess or decision.
  Statistics: The output captures the variability and uncertainty of the parameters.

Assumptions
  Machine learning: The scientist learns from the data.
  Statistics: The scientist assumes a certain output and tries to prove it.

Distribution
  Machine learning: The distribution is unknown or ignored before learning from the data.
  Statistics: The scientist assumes a well-defined distribution.

Fitting
  Machine learning: The scientist creates a best-fit, but still generalizable, model.
  Statistics: The result is fit to the present data distribution.
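To make the machine learning column concrete, the following sketch samples and splits a dataset, fits a model, and then scores accuracy on out-of-sample examples, just as the Data input row describes. It assumes Python with scikit-learn installed; the dataset and classifier are illustrative choices, not recommendations from the table.

# Illustrates the machine learning workflow from the table: split the data,
# fit on the training portion, and judge the model by its accuracy on
# out-of-sample (test) examples rather than by interpreting parameters.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# Randomize and split: the test set stands in for "completely new" examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, shuffle=True, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("Out-of-sample accuracy:", accuracy_score(y_test, model.predict(X_test)))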

      Defining machine learning limits based on hardware

       Obtaining a useful result: As you work through the book, you discover that you need to obtain a useful result first, before you can refine it. In addition, sometimes tuning an algorithm goes too far and the result becomes quite fragile (and possibly useless outside a specific dataset), as the sketch after this list illustrates.

       Asking the right question: Many people get frustrated in trying to obtain an answer from machine learning because they keep tuning their algorithm without asking a different question. To use hardware efficiently, sometimes you must step back and review the question you’re asking. The question might be wrong, which means that even the best hardware will never find the answer.

       Relying on intuition too heavily: All machine learning questions begin as a hypothesis. A scientist uses intuition to create a starting point for discovering the answer to a question. Failure is more common than success when working through a machine learning experiment. Your intuition adds the art to the machine learning experience, but sometimes intuition is wrong and you have to revisit your assumptions.
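The fragility mentioned in the first bullet is easy to demonstrate. The following sketch, which assumes Python with scikit-learn installed, compares an over-tuned decision tree with a deliberately constrained one; the over-tuned tree typically scores nearly perfectly on its training data while its score on new data lags behind. The dataset and depth settings are illustrative choices only.

# Demonstrates an over-tuned ("fragile") model: a decision tree grown without
# limits tends to memorize its training data, so its near-perfect training
# score usually doesn't carry over to examples it hasn't seen.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

for depth in (None, 3):                # None = unlimited depth (over-tuned)
    tree = DecisionTreeClassifier(max_depth=depth, random_state=1)
    tree.fit(X_train, y_train)
    print("max_depth =", depth,
          "train:", round(tree.score(X_train, y_train), 3),
          "test:", round(tree.score(X_test, y_test), 3))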

      

When you begin to realize the importance of environment to machine learning, you can also begin to understand the need for the right hardware, in the right balance, to obtain the desired result. Current state-of-the-art systems rely on Graphics Processing Units (GPUs) to perform machine learning tasks, and relying on GPUs does speed the machine learning process considerably. A full discussion of using GPUs is outside the scope of this book, but you can read more about the topic at https://devblogs.nvidia.com/parallelforall/bidmach-machine-learning-limit-gpus/ and https://towardsdatascience.com/what-is-a-gpu-and-do-you-need-one-in-deep-learning-718b9597aa0d.
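If you want to know whether your own system has a GPU that a machine learning framework can use, a quick check such as the following works. It assumes the PyTorch package is installed, which is just one framework choice among several and not something this book requires.

# Quick check for GPU availability (assumes the PyTorch package is installed;
# TensorFlow and other frameworks offer similar queries).
import torch

if torch.cuda.is_available():
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No CUDA-capable GPU detected; computations will run on the CPU.")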

