Appearance-based indoor place recognition for localization of visually impaired people


Indoor localization and mapping is an important problem in computer vision. Many approaches have been proposed to provide accurate localization, but most of them have limitations and cannot precisely recognize places. This challenge involves several issues, such as feature representations that are not grounded in the spatial domain, which leads to mismatches when searching for the corresponding image. In addition, other problems arise from the feature-extraction process itself, such as octave selection and Haar-like features. Hence, indoor place recognition is still regarded as an open problem. The proposed system uses and compares different feature-extraction techniques, namely BOW, HOG, and EOH, for visual place recognition in a way that improves the accuracy and robustness of indoor localization for visually impaired people. We combined several of these approaches, applied them to two international datasets (COLD and IDOL), and obtained more accurate results than when using each method separately.
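To make the EOH descriptor mentioned above concrete, the following is a minimal sketch of the edge-orientation-histogram idea, not the authors' implementation: gradient orientations at strong edge pixels are quantized into a fixed number of bins, and two images can then be compared by the distance between their normalized histograms. The function name, edge threshold, and bin count are illustrative assumptions.

```python
import numpy as np

def edge_orientation_histogram(image, bins=8):
    """Quantize gradient orientations at strong edge pixels into a histogram.

    A simplified EOH sketch: bin count and edge threshold are illustrative.
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx) % np.pi      # fold direction into [0, pi)
    mask = magnitude > magnitude.mean()           # keep only strong edges
    bin_idx = np.minimum((orientation[mask] / np.pi * bins).astype(int),
                         bins - 1)
    hist = np.bincount(bin_idx, weights=magnitude[mask], minlength=bins)
    return hist / (hist.sum() + 1e-9)             # L1-normalize

# Toy usage: compare two synthetic "views" by L1 histogram distance;
# in a real system the histograms would feed a classifier or matcher.
rng = np.random.default_rng(0)
view_a = rng.random((64, 64))
view_b = rng.random((64, 64))
distance = np.abs(edge_orientation_histogram(view_a)
                  - edge_orientation_histogram(view_b)).sum()
```

In a full place-recognition pipeline, such per-image descriptors (EOH, HOG, or BOW histograms) would be stored for each known place, and a query image would be assigned to the place with the smallest descriptor distance or by a trained classifier.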