      Elements applications of artificial intelligence in transport and logistics

      Dmitry Abramov

      Alexander Korpukov

      Vadim Shmal

      Pavel Minakov

      © Dmitry Abramov, 2021

      © Alexander Korpukov, 2021

      © Vadim Shmal, 2021

      © Pavel Minakov, 2021

      ISBN 978-5-0055-6674-4

      Created with Ridero smart publishing system

      The emergence of the science of artificial intelligence

      Artificial intelligence (AI) is intelligence displayed by machines, as opposed to the natural intelligence displayed by humans and animals. The study of artificial intelligence began in the 1950s, when machines could not yet perform such tasks as well as humans. The overall goal of artificial intelligence is to build a system that exhibits intelligence and consciousness and is capable of self-learning. The best-known branches of artificial intelligence are machine learning and deep learning.

      The development of artificial intelligence is a controversial area, as scientists and policymakers grapple with the ethical and legal implications of creating systems that exhibit human-level intelligence. Some argue that the best way to promote artificial intelligence is through education, so that systems remain free of bias against people and accessible to people from all socioeconomic backgrounds. Others fear that increased regulation and concerns over national security will hamper the development of artificial intelligence.

      Artificial intelligence (AI) originated in the 1950s, when scientists still doubted that machines could exhibit intelligent behavior of the kind produced by the human brain. In 1962, a team at Carnegie Mellon University led by Terry Winograd began work on universal computing intelligence. In 1963, as part of the MAC project, Carnegie Mellon created a program called Eliza, which became the first machine to demonstrate the ability to reason and make decisions like humans.

      In 1964, IBM researcher J. C. R. Licklider began research in computer science and the cognitive sciences with the goal of developing intelligent machines. In 1965, Licklider coined the term «artificial intelligence» to describe the entire spectrum of cognitive technologies that he studied.

      The scientist Marvin Minsky introduced the concept of artificial intelligence in the book «Society of Mind» and foresaw that the field would develop through three stages: personal, interactive and practical. Personal AI, which he considered the most promising, would lead to the emergence of human-level intelligence, an intelligent entity capable of realizing its own goals and motives. Interactive AI would develop the ability to interact with the outside world. Practical AI, which he believed was most likely, would develop the ability to perform practical tasks.

      The term artificial intelligence began to appear widely in the late 1960s, when scientists began to make strides in this area. Some scientists believed that in the future computers would take on tasks that were too complex for the human brain, thus achieving intelligence. In 1965, scientists were fascinated by an artificial intelligence problem known as the Stanford problem, in which a computer was asked to find the shortest path on a map between two cities in a given time. Despite many successful attempts, the computer was able to complete the task only 63% of the time. In 1966, Harvard professor John McCarthy stated that this problem «is as close as we can come, in computers, to the problem of brain analysis, at least on a theoretical basis».
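      In modern terms, the task described above is a shortest-path search over a weighted graph. The following is a minimal illustrative sketch in Python of how such a search can be carried out with Dijkstra's algorithm; the road map, city names and distances are invented for illustration only and are not taken from the original Stanford program.

import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted road map.

    graph: dict mapping a city to a dict of {neighbor: distance}.
    Returns (total_distance, list_of_cities), or (float('inf'), []) if no route exists.
    """
    # Priority queue of (distance travelled so far, current city, path taken)
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, city, path = heapq.heappop(queue)
        if city == goal:
            return dist, path
        if city in visited:
            continue
        visited.add(city)
        for neighbor, weight in graph.get(city, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float('inf'), []

# Hypothetical map: the cities and distances below are invented for illustration.
road_map = {
    'A': {'B': 5, 'C': 2},
    'B': {'D': 4},
    'C': {'B': 1, 'D': 7},
    'D': {},
}
print(shortest_path(road_map, 'A', 'D'))  # prints (7, ['A', 'C', 'B', 'D'])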

      In 1966, researchers at IBM, Dartmouth College, the University of Wisconsin-Madison, and Carnegie Mellon completed work on the Whirlwind I, the world’s first computer designed specifically for artificial intelligence research. In the Human Genome Project, computers were used to predict the genetic makeup of a person. In 1968, researchers at the Moore School of Electrical Engineering published an algorithm for artificial neural networks that could potentially be much more powerful than an electronic brain.

      In 1969, Stanford graduate students Seymour Papert and Herbert A. Simon created Logo, a programming language for children. Logo was one of the first programs to use both numbers and symbols, as well as a simple grammar. In 1969, Papert and Simon founded the Center for Interactive Learning, which led to the development of Logo and to further research into artificial intelligence.

      In the 1970s, a number of scientists began experimenting with self-aware systems. In 1972, Yale professor George Zbib introduced the concept of «artificial social intelligence» and coined the term «emotional intelligence», suggesting that such systems might one day understand human emotions. In 1973, Zbib co-authored an article entitled «Natural Aspects of Human Emotional Interaction», in which he argued that artificial intelligence could be combined with emotion recognition technology to create systems capable of understanding emotions. In 1974, Zbib founded Interaction Sciences Corporation to develop and commercialize his research.

      By the late 1960s, several groups were working on artificial intelligence. Some of the most successful researchers in the area came from the MIT Artificial Intelligence Lab, founded by Marvin Minsky and Herbert A. Simon. MIT’s success can be attributed to the diversity of its individual researchers, their dedication, and the group’s ability to find new solutions to important problems. Even so, by the late 1960s most artificial intelligence systems were still not as capable as humans.

      Minsky and Simon envisioned a universe in which the intelligence of a machine is represented by a program, or set of instructions. As the program ran, it produced a series of logical consequences called a «set of affirmative actions». These consequences could be looked up in an answer dictionary, which would create a new set of explanations for the child. In this way, the child could make educated guesses about the state of affairs, creating a feedback loop that, in the right situation, could lead to a fair and useful conclusion. However, there were two problems with the system: the child had to be taught according to the program, and the program had to be perfectly detailed. No programmer could remember all the rules a child had to follow, or the full set of answers a child might give.

      To solve this problem, Minsky and Simon developed what they called the «magician’s apprentice» (later known as the Minsky rule-based thinking system). Instead of memorizing each rule, the system followed a process: the programmer wrote down a statement and identified the «reasons» for the various outcomes based on the words «explain», «confirm», and «deny». If the explanation matched one of the «reasons», the program was tested and given feedback. If it did not, a new explanation had to be developed. If the program succeeded in the second phase, it was allowed to create more and more rules, increasing the breadth of its theories. When faced with a problem, it could be asked to read the entire set of rules in order to re-examine the problem.
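      As a loose illustrative sketch of the rule-matching loop described above (the rules, patterns and data structures below are invented for illustration and are not taken from the Minsky and Simon system), such a process might be organized as follows in Python.

# Hypothetical rule base: each rule pairs a trigger pattern with an outcome,
# keyed by one of the «reasons» — «explain», «confirm» or «deny».
rules = [
    {'reason': 'explain', 'pattern': 'route is longer', 'outcome': 'road closed'},
    {'reason': 'confirm', 'pattern': 'delivery on time', 'outcome': 'plan holds'},
    {'reason': 'deny',    'pattern': 'truck available',  'outcome': 'vehicle in repair'},
]

def evaluate(statement):
    """Return the first rule whose pattern appears in the statement, else None."""
    for rule in rules:
        if rule['pattern'] in statement:
            return rule
    return None

def process(statement):
    rule = evaluate(statement)
    if rule:
        # A matching «reason» is found: give feedback based on its outcome.
        print(f"{rule['reason']}: {rule['outcome']}")
    else:
        # No reason matches: a new rule is written, widening the rule set.
        rules.append({'reason': 'explain', 'pattern': statement, 'outcome': 'unclassified'})
        print('no matching reason; new rule added')

process('the route is longer than planned')   # prints: explain: road closed
process('the warehouse reports a power cut')  # prints: no matching reason; new rule added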

      The Minsky and Simon system was remarkably powerful because the programmer only had to supply a few versions of an explanation. The researcher was not required to go through any procedure other than writing and entering the program requirements. This allowed Minsky and Simon to create more rules and, more importantly, to learn from their mistakes. In 1979, the system was successfully demonstrated on the SAT exam. Although the system had two flaws that prevented it from answering two of the three SAT questions, it scored 82 percent on Group 2 and 3 questions and 75 percent on Group 4 and 5 questions. The system could not cope with complex questions that did not fit the established rules. Processing large amounts of data was also slow, so additional details were discarded to speed the system up.

      The system also had some limitations arising from the rules themselves. Rules could only be defined over a limited number of labels; when rules were given, they had to define what the labels meant, and they could only be applied to positive results. However, as the system’s ability to process information grew, it was shown that the system could make mistakes. In particular, if it had to apply the same label to two different objects (and still detect an error), it could not draw a useful distinction between the two objects and then decide which label should be applied.

      Minsky and Simon focused on applying their system to humans. They developed a system they called a «living program» or

