There has been longstanding interest in multimodality imaging, especially in nuclear medicine, where the lack of structural information in Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) images makes it difficult to assess the precise location of tissue with metabolic uptake, whereas Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) provide great anatomical detail.

The hardware approach to image fusion is to use hybrid scanners (SPECT/CT, PET/CT or PET/MRI). With this approach image fusion is simplified because the two scans are acquired sequentially (or simultaneously with PET/MRI), which minimizes patient motion between them. Hybrid scanner designs nevertheless face several challenges, such as the conversion of CT numbers to attenuation coefficients, artefacts due to the presence of high-Zeff materials, and respiratory motion.

Software approaches to multimodality imaging emerged in the late 1980s and used semi-automated or automatic registration of surface contours or surface points. The major step towards fully automated software-based image fusion came when registration methods began to analyse the image data directly to achieve an optimal registration, often using iterative approaches that maximize the mutual information between the images. Software-based image fusion is complex and involves many decisions regarding fusion tools, reconstruction techniques and the final presentation of the fused data.
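To make the mutual-information criterion concrete, the sketch below (not from the original text; a minimal NumPy illustration, not a production registration routine) estimates the mutual information between two equally sized images from their joint intensity histogram. Registration algorithms iteratively adjust a spatial transform to maximize this quantity; the bin count of 32 is an arbitrary illustrative choice.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two equally sized images
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint intensity probability
    px = pxy.sum(axis=1)               # marginal of img_a
    py = pxy.sum(axis=0)               # marginal of img_b
    nz = pxy > 0                       # sum only non-zero terms, avoids log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# Toy check: mutual information is highest when images are aligned,
# and drops when one image is shifted out of registration.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
aligned = mutual_information(img, img)
misaligned = mutual_information(img, np.roll(img, 8, axis=0))
```

A registration optimizer would wrap such a metric in a loop, updating the transform parameters (translation, rotation, etc.) until the mutual information stops improving.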