
      In many applications, the value of the results obtained by machine learning (ML), and more broadly artificial intelligence (AI), especially when implemented by deep neural networks (DNNs), would be significantly higher if users could understand, appropriately trust, and effectively manage those results. This need has generated interest in explainable AI (XAI), and in our case we are especially interested in explainable neural networks (xNN).

      In the past, there have been multiple controversies over AI/ML‐enabled systems yielding biased or discriminatory results [1, 2]. This implies an increasing need for explanations to ensure that AI‐based decisions are not made erroneously. When we talk about an explanation for a decision, we generally mean the reasons or justifications for that particular outcome, rather than a description of the inner workings or the logic of reasoning underlying the decision‐making process in general. XAI systems are expected to provide the required information to justify results, particularly when unexpected decisions are made. They also ensure that there is an auditable and provable way of defending algorithmic decisions as fair and ethical, which helps build trust.

      AI needs to provide justifications in order to comply with legislation, for instance the “right to explanation,” a provision included in the General Data Protection Regulation (GDPR) [3].

      Explainability can also help to prevent things from going wrong. A better understanding of system behavior provides greater visibility of unknown vulnerabilities and flaws, and helps to rapidly identify and correct errors in critical situations, which enables better control.

      Another reason for building explainable models is the need to continuously improve them. A model that can be explained and understood is one that can be more easily improved. Because users know why the system produced specific outputs, they will also know how to make it smarter. Thus, XAI could be the foundation for ongoing iterative improvements in the interaction between human and machine.

      Asking for explanations is a helpful tool to learn new facts, to gather information, and thus to gain knowledge; only explainable systems can be useful for that. For example, if AlphaGo Zero [4] can perform much better than human players at the game of Go, it would be useful if the machine could explain its learned strategy (knowledge) to us. Following this line of thought, we may expect that in the future XAI models will teach us about new and hidden laws in biology, chemistry, and physics. In general, XAI can bring significant benefit to a large range of domains relying on AI systems.

      Health care: A medical diagnosis model is responsible for human lives. How can we be confident enough to treat a patient as instructed by an artificial neural network (ANN) model? In the past, such a model was trained to predict which pneumonia patients should be admitted to hospital and which should be treated as outpatients. Initial findings indicated that neural nets were far more accurate than classical statistical methods. However, after extensive testing, it turned out that the neural net had inferred that pneumonia patients with asthma have a lower risk of dying and should not be admitted. Medically, this is counterintuitive; however, it reflected a real pattern in the training data – asthma patients with pneumonia usually were admitted not only to the hospital but directly to the intensive care unit (ICU), treated aggressively, and survived [1]. It was then decided to abandon the AI system because it was too dangerous to use clinically. Only by interpreting the model can we discover such a crucial problem and avoid it. Recently, researchers have conducted preliminary work aiming to make clinical AI‐based systems explainable [1, 7–9]. The increasing number of these works confirms the challenge of – and the interest in – applying XAI approaches in the healthcare domain.

      Legal: In criminal justice, AI has the potential to improve the assessment of recidivism risk and reduce costs associated with both crime and incarceration. However, when using a criminal decision model to predict the risk of recidivism in court, we have to make sure the model behaves in an equitable, honest, and nondiscriminatory manner. Transparency of how a decision is made is a necessity in this critical domain, yet very few works have investigated making automated decision making in legal systems explainable [10–12].

      Finance: In financial services, the benefits of using AI tools include improvements related to wealth‐management activities, access to investment advice, and customer service. However, these tools also raise questions around data security and fair lending. The financial industry is highly regulated, and loan issuers are required by law to make fair decisions. Thus, one significant challenge of using AI‐based systems in credit scoring is that it is harder to provide borrowers with the required “reason code” – the explanation of why they were denied credit – especially when the basis for denial is the output of an ML algorithm. Some credit bureau agencies are working on promising research projects to generate automated reason codes and make AI‐based credit score decisions more explainable and auditor friendly [13].
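      As a rough, hedged illustration of the reason‐code idea (this is not the approach of the bureau projects cited above; the features, data, and model below are hypothetical), the sketch ranks the per‐applicant feature contributions of a simple logistic‐regression credit model to produce human‐readable denial reasons.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical training data: [debt-to-income ratio, late payments, years of credit history]
      X_train = np.array([[0.2, 0, 10], [0.6, 3, 2], [0.3, 1, 7], [0.8, 5, 1]])
      y_train = np.array([1, 0, 1, 0])   # 1 = credit approved, 0 = denied
      feature_names = ["debt-to-income ratio", "number of late payments", "length of credit history"]

      model = LogisticRegression().fit(X_train, y_train)

      def reason_codes(applicant, top_k=2):
          # Per-feature contribution to the logit; the most negative terms push toward denial.
          contributions = model.coef_[0] * applicant
          order = np.argsort(contributions)      # ascending: denial-driving features first
          return [feature_names[i] for i in order[:top_k]]

      applicant = np.array([0.7, 4, 1])          # a hypothetical applicant likely to be denied
      print("Main reasons for denial:", reason_codes(applicant))

      For a linear model, the contribution of each feature to the score is simply its coefficient times its value; for more complex models, post‐hoc attribution methods would play the same role.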

      Military: The current, well‐known XAI initiative was originally begun by military researchers [14], and the growing visibility of XAI today is due largely to the call for research by the Defense Advanced Research Projects Agency (DARPA) and the solicitation of DARPA projects. AI in the military arena also suffers from the explainability problem. Some of the challenges of relying on autonomous systems for military operations are discussed in [15]. As in the healthcare domain, this often involves life‐and‐death decisions, which again leads to similar types of ethical and legal dilemmas. The academic AI research community is well represented in this application domain through the ambitious DARPA XAI program, along with some research initiatives that study explainability in this domain [16].

      The majority of works classify the methods according to three criteria: (i) the complexity of interpretability, (ii) the scope of interpretability, and (iii) the level of dependency on the ML model used. Next, we will describe the main features of each class and give examples from current research.
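      To make criterion (iii) concrete before proceeding, the following sketch (a generic scikit‐learn example assumed purely for illustration, not a method analyzed in this chapter) contrasts a model‐specific explanation, the feature importances a random forest derives from its own tree structure, with a model‐agnostic one, permutation importance, which interrogates the fitted model only through its predictions and therefore works for any classifier.

      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      X, y = load_breast_cancer(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

      # Model-specific: importances read off the trained tree ensemble itself.
      specific = model.feature_importances_

      # Model-agnostic: importances estimated by shuffling each feature and
      # measuring the drop in test accuracy; only predictions are needed.
      agnostic = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

      print("Most important feature (model-specific):", specific.argmax())
      print("Most important feature (model-agnostic):", agnostic.importances_mean.argmax())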

      4.1.1 The Complexity and Interpretability

      The complexity of an ML model is directly related to its interpretability. In general,

