Over the last decade a number of different AR devices have been proposed for medical applications, primarily to assist in the guidance of interventions and surgery. Existing technologies are based either on video mixing of real and virtual scenes or on projection of the virtual scene so that it appears to lie within the real scene, so-called see-through AR. This talk will summarise experiences of each in a clinical environment. Either type of visualisation raises complex issues of visual perception, as the human visual system has not evolved to deal effectively with locating objects beneath transparent surfaces. We present initial results of a study of the perceptual issues that affect correct visualisation of virtual scenes in AR, and show that robust systems are indeed feasible. Finally, we speculate on how these technologies will develop and what the main clinical application areas might be, and we show how coupling active models with the visualised scene might revolutionise the application of these technologies.