Laboratory for Computational Sensing and Robotics
Department of Computer Science
Johns Hopkins University
The advent of computer-integrated surgery technologies such as the Intuitive Surgical da Vinci system has created new opportunities to record quantitative motion and video data of the surgical workspace. This data can be used to create descriptive mathematical models to represent and analyze surgical training and performance. These models can then form the basis for evaluating and training surgeons, producing quantitative measures of surgical proficiency, automatically annotating surgical recordings, and providing data for a variety of other applications in medical informatics.
In developing mathematical models to recognize and evaluate surgical dexterity, we must first investigate the underlying structure in surgical motion. We hypothesize that motion during surgery is not a random set of gestures but a deliberate sequence of gestures, each with its own surgical intent. We will present the results of our investigation into the existence of structure in surgical motion. During our research, we made no assumptions about the construct of the structure. We borrowed techniques and ideas from computer vision, image processing, speech processing, language theory, machine learning, and statistical analysis to aid our investigation. Our focus was on the analysis of fundamental surgical tasks, such as suturing, knot tying, and needle hoop passing, across varying surgical skill levels.
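To illustrate the kind of time-series analysis this involves, the sketch below decodes a sequence of discretized motion observations into a most-likely sequence of gesture labels with the Viterbi algorithm for a discrete-observation hidden Markov model. This is a minimal, self-contained illustration, not our actual recognition pipeline; the model parameters and observation alphabet are toy stand-ins for models trained on recorded surgical motion.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most-likely hidden gesture sequence for a discrete-observation HMM.

    obs: sequence of observation indices (discretized motion features)
    pi:  initial state probabilities, shape (n_states,)
    A:   state transition matrix, A[i, j] = P(state j | state i)
    B:   emission matrix, B[j, k] = P(observation k | state j)
    """
    n_states, T = len(pi), len(obs)
    log_delta = np.full((T, n_states), -np.inf)  # best log-prob ending in each state
    back = np.zeros((T, n_states), dtype=int)    # backpointers for path recovery

    log_delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = log_delta[t - 1] + np.log(A[:, j])
            back[t, j] = int(np.argmax(scores))
            log_delta[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])

    # Backtrack from the best final state.
    path = [int(np.argmax(log_delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With a toy two-gesture model whose emissions are sharply distinguished, `viterbi` recovers the block structure of the gesture sequence even though individual observations are noisy; this is the sense in which structure, if present, becomes recoverable from motion data.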
Based on what we have learned, we are now developing motion-planning and control methods for human-machine cooperative task performance in complex domains such as robotic minimally invasive surgery (RMIS). The RMIS domain poses numerous significant challenges for which current motion-planning and control approaches are not well-suited, such as (i) dealing with a complex environment; (ii) solving tasks that involve numerous subtasks, each requiring complex motions determined by physical constraints that are simple neither to elicit nor to model; and (iii) determining when and how to transition from one subtask to another as dictated by the overall task.
To address these challenges, our motion-planning approach takes advantage of learned models of human surgical performance. As noted above, we have successfully trained time series models such as Hidden Markov Models and Gaussian Mixture Models on data recorded from expert surgeons. Our motion-planning approach uses these prior motion models as "hints" for enhancing the effectiveness of motion planning. As the motion planner selectively explores the space of feasible motions, the motion models suggest (i) when to transition from one subtask to another, and (ii) what motions might be appropriate to accomplish a particular subtask. By using the expert models as a guide, the motion planner can focus its search on a smaller subspace and advance the exploration more rapidly. Simulation experiments on suturing tasks commonly used to train novice surgeons provide promising initial validation, demonstrating the capabilities and efficiency of the approach. We are also integrating motion planning with low-level controllers in order to automatically execute the planned motions on the da Vinci system.
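The biasing idea can be pictured with a minimal sketch: instead of sampling candidate configurations uniformly from the workspace, the planner draws them from a Gaussian mixture that concentrates probability mass where expert motion was observed. The mixture parameters, the 2-D workspace, and the circular-obstacle collision check below are all made-up illustration values, not our trained models or our planner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expert" model: a two-component Gaussian mixture over 2-D
# tool-tip positions, standing in for a GMM fit to recorded expert motion.
weights = np.array([0.6, 0.4])
means = np.array([[0.2, 0.8], [0.7, 0.3]])
variances = np.array([0.01, 0.02])  # isotropic variance per component

def sample_expert_biased(n):
    """Draw n candidate configurations from the expert mixture,
    so samples cluster where expert demonstrations concentrated."""
    comps = rng.choice(len(weights), size=n, p=weights)
    noise = rng.normal(size=(n, 2)) * np.sqrt(variances[comps])[:, None]
    return means[comps] + noise

def in_free_space(q):
    """Stub collision check: a single circular obstacle at (0.5, 0.5)."""
    return np.linalg.norm(q - np.array([0.5, 0.5])) > 0.15

# A planner would extend its search tree toward the collision-free candidates.
candidates = sample_expert_biased(200)
free = [q for q in candidates if in_free_space(q)]
```

Because the mixture places its mass away from regions the experts avoided, most samples are immediately useful to the planner, which is the practical effect of using learned models as hints.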
*This abstract describes collaborative work with Sanjeev Khudanpur, Rene Vidal, Rajesh Kumar, Dr. David Yuh, Dr. Grace Chen, Erion Plaku, Nicolas Padoy, Carol Reiley, Henry Lin, and Balakrishnan Varadarajan.
Figure: Intuitive Surgical da Vinci system at Hackerman Hall.