Article Type: Review Article


Title

A Survey on Vision Navigation Methods for UAV Navigation Applications

Authors

  • Masoud Ebrahimi Kachoie
  • Mohammadvali Arbabmir
  • Mohammad Norouz

Department of Mechanical Engineering, Tarbiat Modares University, Tehran, Iran

Abstract

Inertial navigation system (INS) errors grow with time because of the errors of the inertial sensors. To prevent this error growth, the INS is usually integrated with auxiliary sensors or systems; the most important auxiliary system is the Global Navigation Satellite System (GNSS). Because GNSS signals can be interrupted, or their data can become invalid, other auxiliary sensors are used during GNSS outages to maintain the accuracy of the INS. This article reviews the methods by which an imaging camera has been used, either for navigation in its own right or to improve the accuracy of an INS, across the various types of unmanned aerial vehicles (UAVs). After a review of the literature on vision navigation for UAVs, a classification of vision navigation methods is proposed and the development of these methods is traced. In UAVs, vision navigation has mostly been based on the following techniques: metric maps, optical flow, feature tracking, visual odometry, and simultaneous localization and mapping (SLAM).
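As an illustration of the aiding mechanism described in the abstract, the following minimal one-dimensional sketch (ours, not the authors'; the accelerometer bias, noise levels, and fix rate are all assumed values) propagates a position estimate from a biased accelerometer and corrects it with a scalar Kalman update whenever an external position fix, such as one from GNSS or a vision system, arrives. Without the updates the position error grows without bound; with them it stays bounded.

```python
# A minimal 1-D illustration, not from the paper: a dead-reckoned position
# drifts because of a constant accelerometer bias; an occasional external
# position fix is fused with a scalar Kalman update, which bounds the error.
# All noise values, the bias, and the fix rate below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 5000                 # 50 s of flight at 100 Hz
accel_bias = 0.05                        # assumed accelerometer bias, m/s^2
fix_period = 500                         # an aiding fix every 5 s
fix_sigma = 2.0                          # assumed std of the position fix, m

true_pos, true_vel = 0.0, 0.0
est_pos, est_vel = 0.0, 0.0
P = np.diag([1.0, 0.1])                  # covariance of the [pos, vel] estimate
Q = np.diag([1e-4, 1e-3])                # assumed process noise per step
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
H = np.array([[1.0, 0.0]])               # the fix observes position only

for k in range(1, n_steps + 1):
    true_accel = np.sin(0.002 * k)       # arbitrary true motion profile
    meas_accel = true_accel + accel_bias # biased IMU reading
    true_vel += true_accel * dt
    true_pos += true_vel * dt
    est_vel += meas_accel * dt           # inertial dead reckoning
    est_pos += est_vel * dt
    P = F @ P @ F.T + Q                  # covariance propagation
    if k % fix_period == 0:              # aiding measurement available
        z = true_pos + rng.normal(0.0, fix_sigma)
        S = H @ P @ H.T + fix_sigma**2   # innovation covariance (1x1)
        K = P @ H.T / S                  # Kalman gain, shape (2, 1)
        innovation = z - est_pos
        est_pos += float(K[0, 0]) * innovation
        est_vel += float(K[1, 0]) * innovation
        P = (np.eye(2) - K @ H) @ P

print(f"position error after {n_steps * dt:.0f} s with aiding: "
      f"{abs(est_pos - true_pos):.2f} m")
```

The same structure, with a much larger error state, underlies the loosely coupled INS/GNSS and INS/vision integrations discussed in the survey.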
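Of the techniques listed above, feature tracking and visual odometry are the easiest to make concrete. The sketch below is a generic pipeline built from standard OpenCV calls, not the implementation of any specific surveyed method; the frame file names and the intrinsic matrix K are placeholders. It tracks Shi-Tomasi corners between two consecutive frames with pyramidal Lucas-Kanade optical flow, then recovers the relative camera rotation and up-to-scale translation from a RANSAC-estimated essential matrix.

```python
# A generic monocular visual-odometry step, sketched with standard OpenCV
# calls; it is not the implementation of any specific surveyed method.
# The frame file names and the intrinsic matrix K are placeholders.
import cv2
import numpy as np

K = np.array([[700.0,   0.0, 320.0],     # assumed pinhole intrinsics
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
assert img1 is not None and img2 is not None, "placeholder frames not found"

# Detect Shi-Tomasi corners in the first frame and track them into the
# second frame with pyramidal Lucas-Kanade optical flow.
pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=500,
                               qualityLevel=0.01, minDistance=7)
pts2, status, _err = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
tracked = status.ravel() == 1
pts1, pts2 = pts1[tracked], pts2[tracked]

# Estimate the essential matrix with RANSAC and decompose it into the
# relative rotation R and translation t between the two camera poses.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# t is returned with unit norm: a single camera cannot observe absolute
# scale, which is one reason vision is fused with inertial sensors.
print("relative rotation:\n", R)
print("unit-scale translation:", t.ravel())
```

Chaining such relative poses frame to frame yields visual odometry; adding a persistent map and loop closure turns the same pipeline into SLAM.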

Keywords

  • Inertial Navigation System
  • Vision Navigation
  • Estimator
  • UAV