Navigating an autonomous underwater vehicle (AUV) is a difficult task. Dead-reckoning navigation is subject to unbounded error growth due to sensor inaccuracy and is therefore inapplicable for mission durations longer than a few minutes. To bound the estimation error, a global referencing method has to be used. SLAM (Simultaneous Localization And Mapping) is such a method: it uses repeated recognition of significant features of the environment to reduce the estimation error. The sensing modalities used in most terrestrial applications, such as cameras and laser scanners, as well as GNSS signals, are not usable under water: GNSS signals are attenuated very strongly in water, and light propagation is impaired mainly by turbidity. At depths greater than a few hundred meters there is also no sunlight. Sonic waves suffer much less from these problems, which is why sonar is the prevalent sensor type used under water. A main difficulty is extracting three-dimensional information from side-scan sonar images to perform SLAM. This paper gives an overview of existing approaches to underwater SLAM using sonar data. A short outlook on the system to be used in the TIETeK project is also presented.
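The unbounded error growth of dead reckoning can be illustrated with a minimal Monte-Carlo sketch (an illustration only, not part of the surveyed work): integrating velocity measurements corrupted by zero-mean noise turns the position error into a random walk, so its RMS value grows with the square root of elapsed time and never stays bounded. All function and parameter names below are hypothetical.

```python
import math
import random

def dead_reckoning_drift(steps, noise_std, trials=500, seed=0):
    """Monte-Carlo estimate of dead-reckoning position error.

    Each trial integrates per-step velocity errors drawn from a
    zero-mean Gaussian (std = noise_std), i.e. a 1-D random walk.
    Returns the RMS position error across trials after each step.
    """
    rng = random.Random(seed)
    errors = [0.0] * trials       # accumulated position error per trial
    rms = []
    for _ in range(steps):
        for i in range(trials):
            errors[i] += rng.gauss(0.0, noise_std)  # one noisy integration step
        rms.append(math.sqrt(sum(e * e for e in errors) / trials))
    return rms

rms = dead_reckoning_drift(steps=400, noise_std=0.05)
# The RMS error grows roughly as noise_std * sqrt(step count),
# i.e. without bound over mission time.
```

This unbounded growth is exactly what a global referencing method such as SLAM is meant to counteract: re-observing a known feature resets the accumulated drift at that point.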