Probabilistic Search for Tracking Targets: Theory and Modern Applications
Irad Ben-Gal, Eugene Kagan
Presents a probabilistic and information-theoretic framework for the search for static or moving targets in discrete time and space.
Probabilistic Search for Tracking Targets uses an information-theoretic scheme to present known search methods within a unified approach and to enable the development of new search algorithms. The book addresses search methods under different constraints and assumptions, such as search under uncertainty and incomplete information, probabilistic search schemes, observation errors, group testing, search games, distribution of search efforts, single and multiple targets and search agents, and online and offline search schemes. The proposed approach is related to path planning techniques, optimal search algorithms, Markov decision models, decision trees, stochastic local search, artificial intelligence and heuristic information-seeking methods. Furthermore, the book presents novel methods of search for static and moving targets, along with practical algorithms of partitioning and of search and screening.
Probabilistic Search for Tracking Targets includes complete material for undergraduate and graduate courses in modern applications of probabilistic search, decision-making and group testing, and provides several directions for further research in search theory.
• Provide a generalized information-theoretic approach to the problem of real-time search for both static and moving targets over a discrete space.
• Present a theoretical framework that covers known information-theoretic search algorithms and forms a basis for developing and analyzing different search algorithms over a probabilistic space.
• Use numerous examples of group testing, search and path planning algorithms to illustrate direct implementation in the form of running routines.
• Consider the relation of the suggested approach to known search theories and methods, such as search and screening theory, search games, Markov decision process models of search, data mining methods, coding theory and decision trees.
• Discuss relevant search applications, such as quality-control search for nonconforming units in a batch or a military search for a hidden target.
• Provide an accompanying website featuring the algorithms discussed throughout the book, along with practical implementation procedures.
of the trials, the following theorem holds.

Theorem 2.14. Algorithm 2.13 (the LRTA* algorithm) converges with the trials to the optimal path with minimal actual evaluated cost, and such that for each state s on that path it follows that h(s) = h*(s), where h*(s) denotes the minimal cost of reaching the target state from s.

Proof. Let s be a state and s' be its successor that, after the choice by Line 3.3 of the algorithm, follows the state s in the resulting path. Then, after updating, the cost of the current state obtains the value h(s) = c(s, s') + h(s'). Assume that, after termination of a certain trial of the
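The LRTA* trial described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the dictionary-based graph, the names (graph, cost, h) and the single-trial routine are hypothetical, not the book's Algorithm 2.13.

```python
# Minimal LRTA* sketch on a weighted directed graph (illustrative, not
# the book's code). `graph` maps a state to its successors, `cost` maps
# an edge (s, t) to its cost, and `h` is the heuristic, updated in place.
def lrta_star_trial(graph, cost, h, start, goal):
    """Run one LRTA* trial; returns the path taken from start to goal."""
    s, path = start, [start]
    while s != goal:
        # Choose the successor minimizing one-step cost plus heuristic
        # (the role played by Line 3.3 of the algorithm).
        s_next = min(graph[s], key=lambda t: cost[(s, t)] + h[t])
        # Update: the current state's estimate becomes c(s, s') + h(s').
        h[s] = max(h[s], cost[(s, s_next)] + h[s_next])
        s = s_next
        path.append(s)
    return path
```

Repeating such trials from the same start state raises h toward the exact costs h*, which is the convergence behavior the theorem asserts.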
state that represents the node in which the target is located is not defined in advance and depends on the target's movements. In addition to the initial state of the searcher, the MTS algorithm specifies an initial state of the target. It is assumed that the target starts from its initial state and, at each step of the search, either moves one step over the set of states or stays in its current state. The searcher cannot control the target's movements before termination of the algorithm, and the target is not informed of
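The moving-target setting above can be illustrated with a toy pursuit loop. This is a sketch under stated assumptions, not the MTS algorithm itself: the 1-D state space, the greedy searcher policy and the scripted target path are all hypothetical.

```python
# Toy illustration of the moving-target setting: at each step the target
# either stays or moves one step, and the searcher (who cannot control
# or predict the target) reacts greedily. Illustrative only.
def chase(searcher, target_path):
    """Greedy searcher on a line of integer states; target_path gives the
    target's position at each step (staying or moving by one). Returns
    the step at which the searcher reaches the target, or None."""
    for step, target in enumerate(target_path):
        if searcher == target:
            return step
        # Move one step toward the target's currently observed position.
        searcher += 1 if target > searcher else -1
    return None
```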
target has been found at a certain point (a false positive error). Of these two errors, the false negative one is much more common, and we consider several such cases in later chapters. Another version of the incomplete-information search, which is also considered in this book, addresses the situation of side information, where the searcher obtains, during the search, some (incomplete) indication of where the target is located. Another natural extension of all of the above methods is obtained when
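A standard way to handle the false-negative errors mentioned above is a Bayesian update of the location distribution after an unsuccessful search of a cell. The following sketch assumes a discrete probability map and a known detection probability; the names and scenario are illustrative, not taken from the book.

```python
# Bayesian update of a discrete location distribution after searching
# one cell and not finding the target, when a target that is present is
# detected only with probability p_detect (false-negative errors).
def update_after_miss(prob, cell, p_detect):
    """Return the posterior over cells given a negative search of `cell`."""
    posterior = list(prob)
    # The target may have been in the searched cell but missed.
    posterior[cell] *= (1.0 - p_detect)
    total = sum(posterior)          # renormalize over all cells
    return [p / total for p in posterior]
```

After a miss, probability mass flows from the searched cell to the unsearched ones, which is what drives sequential search-effort allocation.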