AI and IoT-Based Intelligent Automation in Robotics. Group of Authors

      13) Vision
      Computer vision is very important to a robot. Vision enables the extraction of images and, if needed, the data captured by the robot can be stored on a server as a record of the tasks performed from the start to the end of the day, which the user can consult for cross-checking purposes [7]. The robot's vision may take many forms; it captures images or records video depending on the settings made by the user. The vision mechanism is based on an image sensor and electromagnetic radiation; the light used is visible light or infrared light.

      14) Manipulation
      Minute manipulations are made to robots from time to time, such as replacing hands and legs for better movement; in other words, it is an ongoing effort.

      15) Mechanical Grippers
      Grippers play a major role in robot design, alongside capabilities such as vision, sensing and responding in a particular manner. Mechanical grippers let a robot grasp an object with its hands and hold it without dropping it. Like hands, grippers rely largely on friction to handle objects [8]. Another type of gripper, the "vacuum gripper," is simple to add as a module to the robot. Vacuum grippers are very effective and are mainly used for handling windscreens.
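Since mechanical grippers hold objects by friction, the grip force they must apply can be estimated from the object's weight and the friction coefficient at the finger contacts. The following is a minimal sketch (not from the chapter) for a two-finger gripper lifting an object vertically, where friction at both contacts must support the weight; the safety factor and sample values are assumptions for illustration.

```python
# Sketch: grip force for a two-finger friction gripper.
# Vertical lift condition: 2 * mu * F_grip >= m * g.

G = 9.81  # gravitational acceleration, m/s^2

def required_grip_force(mass_kg: float, friction_coeff: float,
                        safety_factor: float = 2.0) -> float:
    """Normal force each finger must apply, with a safety margin."""
    if friction_coeff <= 0:
        raise ValueError("friction coefficient must be positive")
    return safety_factor * mass_kg * G / (2 * friction_coeff)

# A 1.5 kg part gripped with rubber pads (mu ~ 0.6):
force = required_grip_force(1.5, 0.6)
print(f"required force per finger: {force:.1f} N")
```

The safety factor covers the uncertainty in the friction coefficient, which varies with surface condition; real gripper sizing would also account for acceleration during the lift.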

      These above components are needed for building an efficient robot.

      Navigation is very important to how the robot works and plays a major role in different tasks, such as locating the robot, determining its position, its condition, etc. A few advanced robots, such as ASIMO, will automatically charge themselves based on their position.

      Humanoid robots are the majority of those used in homes and restaurants for task automation. Once the timetable of when tasks should be done is set, the tasks are assigned to the robot and it automatically performs them as part of its daily routine per the schedule [14]. The robot will not change its tasks until the user alters the existing timetable. When making the schedule or adding a new task for the robot to perform daily, we first have to train the robot by giving instructions such as the step-by-step procedure for performing the task, which is called an "algorithm." The algorithm given to the robot is treated as the training set. Before the task enters the routine, the user should test it to confirm that all the steps work correctly [9]. This is the basic behavior the robot performs. Some types of robots have advanced features such as speech recognition, a robotic voice, gestures, facial expressions, artificial emotions, personality and social intelligence.
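The timetable behavior described above can be sketched as a simple schedule lookup: the robot repeats the same tasks at the same times every day until the user alters the table. The task names and times here are invented for illustration; a real robot would replace the print with the trained procedure.

```python
# Hypothetical sketch of a daily task timetable for a household robot.
from datetime import time

# The schedule maps a time of day to a task name (all names are made up).
schedule = {
    time(7, 0): "vacuum living room",
    time(12, 30): "serve lunch trays",
    time(18, 0): "return to charging dock",
}

def tasks_due(now, done):
    """Return scheduled tasks whose time has passed and are not yet done."""
    return [task for t, task in sorted(schedule.items())
            if t <= now and task not in done]

done = set()
for task in tasks_due(time(13, 0), done):
    print("performing:", task)   # placeholder for the trained procedure
    done.add(task)
```

Altering the `schedule` dictionary is the only way the routine changes, mirroring the text's point that the robot keeps its routine until the user edits the timetable.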

      1.5.2 Control

      The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct stages: perception, processing, and action (mechanical principles). Sensors give data about the environment or the robot itself (for example, the position of its joints or its end effector). This data is then processed, stored, or transmitted to calculate the appropriate signals to the actuators (motors) which move the mechanical structure.
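The three stages can be sketched as a sense-process-act loop. This is a minimal illustration, not any particular robot's controller: the "sensor" reads a simulated joint angle, the "processing" is a simple proportional rule toward a target, and the "actuator" applies the command.

```python
# A minimal sense-process-act control loop for one simulated joint.

def read_sensor(state):
    """Perception: report the joint angle (here, just the true state)."""
    return state["joint_angle"]

def compute_command(measured, target):
    """Processing: simple proportional rule toward the target angle."""
    gain = 0.5
    return gain * (target - measured)

def apply_command(state, command):
    """Action: the actuator moves the joint by the commanded amount."""
    state["joint_angle"] += command

state = {"joint_angle": 0.0}
for _ in range(10):                                     # ten control cycles
    measurement = read_sensor(state)                    # perception
    command = compute_command(measurement, target=1.0)  # processing
    apply_command(state, command)                       # action
print(f"joint angle after 10 cycles: {state['joint_angle']:.3f}")
```

Each cycle closes the loop: the joint moves half of the remaining distance to the target, so the angle converges toward 1.0 over repeated iterations.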

      The processing stage can range in complexity. At a reactive level, it may translate raw sensor data directly into actuator commands. Sensor fusion may first be used to estimate parameters of interest (for example, the position of the robot's gripper) from noisy sensor data. An immediate task (for example, moving the gripper in a particular direction) is inferred from these estimates. Techniques from control theory convert the task into commands that drive the actuators [10].
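This reactive pipeline can be sketched in a few lines: fuse redundant noisy readings into one estimate, then emit a command proportional to the remaining error. Averaging is the simplest possible fusion; the sensor values and gain are made up for the example.

```python
# Sketch of the reactive pipeline: fuse noisy readings, then command.

def fuse(readings):
    """Estimate gripper position by averaging redundant noisy sensors."""
    return sum(readings) / len(readings)

def command_toward(target, estimate, gain=0.8):
    """Control-theoretic step: command proportional to remaining error."""
    return gain * (target - estimate)

readings = [0.98, 1.04, 1.01]        # three noisy position sensors (m)
estimate = fuse(readings)            # fused estimate of gripper position
cmd = command_toward(target=1.5, estimate=estimate)
print(f"estimate={estimate:.3f} m, command={cmd:+.3f} m")
```

In practice the averaging step would be replaced by a proper estimator such as a Kalman filter, which weights each sensor by its noise characteristics rather than treating all readings equally.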

      At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a "cognitive" model. Cognitive models attempt to represent the robot, the world, and how they interact. For example, pattern recognition and computer vision can be used to track objects; mapping techniques can be used to build maps of the world; and finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For instance, a planner may figure out how to accomplish a task without hitting obstacles, falling over, and so on [11].
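An obstacle-avoiding planner of the kind mentioned above can be illustrated with a breadth-first search over a small occupancy grid. The grid, coordinates, and obstacle layout are invented for the example; real motion planners work in continuous space with algorithms such as RRT or A*.

```python
# Illustrative planner: breadth-first search on an occupancy grid.
from collections import deque

def plan(grid, start, goal):
    """Return a list of cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                      # reconstruct path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell    # remember how we got here
                frontier.append((nr, nc))
    return None                               # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 marks an obstacle cell
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 0)))
```

Because breadth-first search expands cells in order of distance, the returned path is the shortest one that detours around the wall of obstacle cells.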

      This mechanism involves several levels of algorithms, which are classified below along with the steps followed for performing a task.

       Direct interaction with the help of telephone or teleoperated devices.

       Specifying the particular position to the robot and where it should move or giving step-by-step instructions from beginning to end until it reaches its destination.

       An autonomous robot performs some tasks beyond user-specified ones; some robots are capable of performing tasks on their own and alerting the user when the robot is in trouble, etc. [12].

       There are a few types of robots which are operated by the user’s instruction via telephone.

       There are a few robots which perform specific moves based on the instructions given upon starting.

       There are a few robots which only perform the tasks specified by one person. Whichever task the instructor specifies first is stored in the robot's memory and performed as the stored task. Such robots are called "task-level autonomous."

       There are a few robots which perform whatever task the user instructs them to do; such robots are called "fully autonomous" [13].
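The control levels listed above can be summarized as a simple mode table. The mode names follow the text; the descriptions of what the user must supply under each mode are paraphrased, and the enum itself is just an illustrative sketch.

```python
# The control levels from the list above, as a simple mode dispatch.
from enum import Enum, auto

class ControlMode(Enum):
    TELEOPERATED = auto()        # every move comes from the user
    SCRIPTED = auto()            # fixed moves given at start-up
    TASK_LEVEL = auto()          # user names a task; robot fills in steps
    FULLY_AUTONOMOUS = auto()    # robot performs whatever it is instructed

def user_must_specify(mode: ControlMode) -> str:
    """How much the user must specify under each control mode."""
    return {
        ControlMode.TELEOPERATED: "every individual motion",
        ControlMode.SCRIPTED: "the full move sequence, once, at start",
        ControlMode.TASK_LEVEL: "only the task name",
        ControlMode.FULLY_AUTONOMOUS: "only high-level instructions",
    }[mode]

print(user_must_specify(ControlMode.TASK_LEVEL))
```

The table makes the progression explicit: each level down the enum shifts more of the step-by-step decision-making from the user to the robot.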

      1. Qin, T., Li, P., Shen, S., VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Rob., 34, 4, 1004–1020, Aug. 2018.

      2. Pequito, S., Khorrami, F., Krishnamurthy, P., Pappas, G.J., Analysis and Design of Actuation–Sensing–Communication Interconnection Structures Toward Secured/Resilient LTI Closed-Loop Systems. IEEE Trans. Control Network Syst., 6, 2, 667–678, June 2019.

      3. Chang, X. and Yang, G., New Results on Output Feedback $H_{\infty}$ Control for Linear Discrete-Time Systems. IEEE Trans. Autom. Control, 59, 5, 1355–1359, May 2014.

      4. Li, Z., Zhang, T., Ma, C., Li, H., Li, X., Robust Passivity Control for 2-D Uncertain Markovian Jump Linear Discrete-Time Systems. IEEE Access, 5, 12176–12184, 2017.

      5. Yang, C., Ge, S.S., Xiang, C., Chai, T., Lee, T.H., Output Feedback NN Control for Two Classes of Discrete-Time Systems with Unknown Control Directions in a

