
      Learning in the Age of Big Data

      IN THIS CHAPTER

      Understanding and locating big data

      Considering how statistics and big data work together in machine learning

      Defining the role of algorithms in machine learning

      Determining how training works with algorithms in machine learning

      This chapter provides the essentials you need to know to perform machine learning tasks. It doesn’t go into detail on the topics; rather, it offers an overview to help you make sense of the information in later chapters. Of course, learning begins with data, so the first part of this chapter tells you about data — lots of data, big data. Just as a human learns better with more input, so do machine learning applications.

      You have to have some way to organize and analyze all that data. Just as you organize pieces of information to make them easier to access and to spot patterns in them, so the computer needs to organize data and then analyze it using statistics — a method of interpreting and presenting data patterns mathematically. The second part of this chapter deals with statistics as they apply to machine learning.

      After you have your data in hand and in an order that is useful and understandable, you can begin feeding it to algorithms that manipulate it in a particular way to produce a result. The result tells you something you may or may not have surmised about the data on your own. The third part of this chapter looks at the relationship of algorithms to machine learning.

      Computers manage data through applications that perform tasks using algorithms of various sorts. A simple definition of an algorithm is a systematic set of operations to perform on a given dataset — essentially a procedure. The four basic data operations are Create, Read, Update, and Delete (CRUD). This set of operations may not seem complex, but performing these essential tasks is the basis of everything you do with a computer.
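
      To make CRUD concrete, here is a minimal Python sketch of the four operations applied to a simple in-memory dictionary. The record and its field names are invented purely for illustration.

# A stand-in "dataset": a dictionary keyed by record ID.
records = {}

# Create: add a new record under a unique key.
records[1] = {"name": "sensor_a", "reading": 42.0}

# Read: retrieve the record by its key.
print(records[1])

# Update: change a value in the existing record.
records[1]["reading"] = 43.5

# Delete: remove the record entirely.
del records[1]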

      As the dataset becomes larger, the computer can use the algorithms found in an application to perform more work. The use of immense datasets, known as big data, enables a computer to perform work based on pattern recognition in a nondeterministic manner. Algorithms determine how a machine interprets big data. The algorithm used to perform machine learning affects the outcome of the learning process and, therefore, the results you get. In short, to create a computer setup that can learn, you need a dataset large enough for the algorithms to manage in a manner that allows for pattern recognition, and that pattern recognition needs to rely on a small subset of the data to make predictions (through statistical analysis) about the dataset as a whole.
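
      The following minimal Python sketch hints at that idea: a small random sample of a large collection of numbers is used to estimate a property of the whole collection. The data, the sample size, and the use of NumPy are assumptions made only for this example.

import numpy as np

# Generate a stand-in "big" dataset of a million numeric values.
rng = np.random.default_rng(seed=0)
full_dataset = rng.normal(loc=50.0, scale=10.0, size=1_000_000)

# Draw a small subset and use it to estimate a property of the whole.
sample = rng.choice(full_dataset, size=1_000, replace=False)

print("Mean of the full dataset:   ", full_dataset.mean())
print("Mean estimated from sample: ", sample.mean())

      With a reasonably representative sample, the estimate lands close to the true value, which is the essence of predicting something about the whole dataset from a subset.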

      Big data exists in many places today. Obvious sources are online databases, such as those created by vendors to track consumer purchases. However, you can find many less obvious data sources, too, and those sources often provide the greatest resources for doing something interesting. Finding appropriate sources of big data lets you create machine learning scenarios in which a machine can learn in a specified manner and produce a desired result.

      Statistics, one of the machine learning methods that you consider in this book, is a way of describing problems using math. By combining big data with statistics, you can create a machine learning environment in which the machine considers the probability of any given event. However, saying that statistics is the only machine learning method is incorrect. This chapter also introduces you to the other forms of machine learning currently in use.
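
      As a tiny illustration of that idea, the Python sketch below estimates the probability of one event (a customer buying product B) given another (the customer bought product A) from a handful of made-up purchase records. Both the records and the field names are invented for the example.

# Made-up purchase records; each entry says what one customer bought.
purchases = [
    {"bought_a": True,  "bought_b": True},
    {"bought_a": True,  "bought_b": False},
    {"bought_a": False, "bought_b": False},
    {"bought_a": True,  "bought_b": True},
    {"bought_a": False, "bought_b": True},
]

# Estimate P(B given A) as a simple relative frequency.
bought_a = [p for p in purchases if p["bought_a"]]
both = [p for p in bought_a if p["bought_b"]]
print("P(B | A) is roughly", len(both) / len(bought_a))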

      Before an algorithm can do much in the way of machine learning, you must train it. The training process modifies how the algorithm views big data. It’s essential to understand that training actually means using a subset of the data to build the patterns that the algorithm needs in order to recognize specific cases from the more general cases that you provide as part of the training.
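
      Here’s a minimal sketch of that process in Python, assuming the scikit-learn library is installed. The iris dataset and the logistic regression classifier are stand-ins for whatever data and algorithm you end up using.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Training uses only a subset of the data to build the patterns.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The held-out subset checks whether those patterns apply to cases the
# algorithm has never seen before.
print("Accuracy on unseen cases:", model.score(X_test, y_test))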

      Big data is substantially more than just a large database. Yes, big data implies lots of data, but it also includes the idea of complexity and depth. A big data source describes something in enough detail that you can begin working with that data to solve problems for which general programming proves inadequate.

      As an example of big data complexity, consider Google’s self-driving cars (https://waymo.com/). The car must consider not only the mechanics of the car’s hardware and its position in space but also the effects of human decisions, road conditions, environmental conditions, and other vehicles on the road, which is why our roads aren’t crowded with them yet (see https://www.vox.com/future-perfect/2020/2/14/21063487/self-driving-cars-autonomous-vehicles-waymo-cruise-uber). It’s not hard to imagine some of the human-specific issues that self-driving cars will need to address, such as people taking a nap when they should be watching the road even with the self-driving car in control (https://robbreport.com/motors/cars/canadian-police-arrest-sleeping-driver-tesla-autopilot-1234570071/).

      The data source for a self-driving car (or any other complex endeavor for that matter) contains many variables — all of which affect the vehicle in some way. Traditional programming might be able to crunch all the numbers, but not in real time. You don’t want the car to crash into a wall and have the computer finally decide five minutes later that the car is going to crash into a wall. The processing must prove timely so that the car can avoid the wall.

      The acquisition of big data can also prove daunting. The sheer bulk of the dataset isn’t the only problem to consider; you must also consider how the dataset is stored and transferred so that the system can process it. In most cases, developers try to store the dataset in memory to allow fast processing. Using a hard drive to store the data would prove too costly, time-wise.

      JUST HOW BIG IS BIG?

      Big data can really become quite big. For example, suppose that your Google self-driving car has a few HD cameras and a couple hundred sensors that provide information at a rate of 100 times per second. What you might end up with is a raw data stream that exceeds 100 Mbps. Processing that much data is incredibly hard.
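
      A quick back-of-the-envelope calculation in Python shows how fast the numbers add up. The number of cameras, the per-camera bitrate, and the size of a single sensor reading are assumptions chosen only to make the arithmetic concrete.

cameras = 4
camera_mbps = 25               # assumed compressed HD video stream per camera

sensors = 200
readings_per_second = 100
bytes_per_reading = 8          # assumed size of one sensor reading

# Convert the sensor traffic from bytes per second to megabits per second.
sensor_mbps = sensors * readings_per_second * bytes_per_reading * 8 / 1_000_000
total_mbps = cameras * camera_mbps + sensor_mbps

print(f"Sensors alone:   {sensor_mbps:.2f} Mbps")
print(f"Total raw input: {total_mbps:.2f} Mbps")  # already past 100 Mbps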

      Part of the problem right now is determining how to control big data. Currently, the attempt is to log everything, which produces a massive, detailed dataset. However, this dataset isn’t well formatted, again making it quite hard to use. As this book progresses, you discover techniques that help control both the size and the organization of big data so that the data becomes useful in making predictions.

