Sensor fusion combines data from multiple sensors to produce a stable and reliable estimate of the pose, i.e., the position and orientation of a system, typically a robot, relative to its environment. Addressing this challenge requires a sound strategy for extracting sensor data and minimizing sensor errors. Many algorithms have been proposed for this problem in recent years. Despite the tremendous expansion of work in this domain, a precise compilation and comparison of the various methodologies has remained an unexplored subject. This paper presents the current state-of-the-art multi-sensor fusion methods, with a significant focus on techniques that are partially dependent on Global Navigation Satellite Systems (GNSS). We investigate works with various architectures and classify them into two major categories: loosely-coupled and tightly-coupled. These methods are further differentiated based on the optimization technique used to minimize the estimation error.
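To make the loosely-coupled idea concrete, the following is a minimal illustrative sketch (not a method from any surveyed paper): in a loosely-coupled architecture each sensor, e.g., a GNSS receiver and an odometry pipeline, produces its own independent state estimate with an associated uncertainty, and the estimates are then merged. The `fuse` function below is a hypothetical helper showing the standard inverse-variance weighted combination of two scalar estimates, which is the one-dimensional core of such a fusion step.

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Fuse two independent scalar estimates by inverse-variance weighting.

    The more certain estimate (smaller variance) receives the larger
    weight, and the fused variance is always smaller than either input.
    """
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    fused_est = fused_var * (est_a / var_a + est_b / var_b)
    return fused_est, fused_var


# Example: GNSS says position 10.0 m (variance 4.0), odometry says
# 12.0 m (variance 4.0). Equal confidence, so the fused estimate is
# the midpoint, with halved variance.
position, variance = fuse(10.0, 4.0, 12.0, 4.0)
# position == 11.0, variance == 2.0
```

In a tightly-coupled architecture, by contrast, raw measurements (e.g., GNSS pseudoranges) would enter a single joint estimator instead of being reduced to per-sensor estimates first.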