Something I have always wondered about surgical robots is how they make up for the lack of tactile sense during an operation. A sense of touch is still something that separates a human hand from a robotic probe. Sure, there are pressure gauges to measure how much force is being applied, and temperature can easily be detected as well, but these are not the same thing as true tactile sensation. Does this mean that a surgeon is basically working as though their hands are asleep while using a robotic surgical system? Not a pleasant idea. Fortunately, this concern is widely shared, and a great deal of development has gone into addressing it. Until now, doctors have tried to compensate for this lack of touch through pre-operative imaging, using tools like MRI, X-rays, and ultrasound to map out the inside of a patient's body before the actual operation. During surgery, they rely on miniaturized cameras with their own lighting to see what they are doing, track their location within the body via sensors, and plot their desired positions using those maps. While these methods have had some positive results, they are not enough.
A new project, titled the Complementary Situational Awareness for Human-Robot Partnerships, is a collaboration between various research departments set on improving a robot's ability to gather various forms of sensory information as it works and then use that information to guide its actions. The goal: to restore the awareness surgeons have been forced to give up with the rise of minimally invasive surgeries and robot assistants. Nabil Simaan, associate professor of mechanical engineering at Vanderbilt University, is in charge of a team that hopes to develop surgical snake-like robots that will explore the internal organs and tissue of patients prior to surgery. Howie Choset, professor of robotics at Carnegie Mellon University, is in charge of a team that hopes to use a technique called Simultaneous Localization and Mapping (SLAM), which allows robots to navigate unexplored areas without interfering with their surroundings; it will map out the interior of the body and form the foundation of the entire project. Finally, Russell Taylor, the John C. Malone Professor of Computer Science at Johns Hopkins University, will be leading another team that will focus on the interfaces used by the surgeons, building on Johns Hopkins' open-source "Surgical Assistant Workstation" toolkit, such as that used by the Da Vinci system.
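To give a flavor of the SLAM idea mentioned above, here is a deliberately simplified, hypothetical one-dimensional sketch (not the project's actual algorithm): a robot moves along a line, tracking its own pose by dead reckoning while simultaneously building a map of landmark positions from the ranges it measures, refining the map whenever a landmark is seen again.

```python
# Toy 1D illustration of the SLAM concept (hypothetical example):
# the robot localizes itself and maps landmarks at the same time.

def slam_1d(motions, observations):
    """motions: list of step sizes the robot commands.
    observations: per step, a list of (landmark_id, measured_range)
    pairs, where measured_range = landmark position - robot pose."""
    pose = 0.0          # dead-reckoned estimate of the robot's position
    landmark_map = {}   # landmark_id -> estimated position
    for step, obs in zip(motions, observations):
        pose += step    # localization: update pose from motion
        for landmark_id, rng in obs:
            estimate = pose + rng  # mapping: position implied by this sighting
            if landmark_id in landmark_map:
                # average repeated sightings to refine the map entry
                old = landmark_map[landmark_id]
                landmark_map[landmark_id] = (old + estimate) / 2.0
            else:
                landmark_map[landmark_id] = estimate
    return pose, landmark_map

final_pose, final_map = slam_1d(
    motions=[1.0, 1.0, 1.0],
    observations=[[("A", 2.0)], [("A", 1.0)], [("B", 3.0)]],
)
# Landmark "A" is sighted twice (at poses 1.0 and 2.0), and both
# sightings agree on position 3.0, so the map entry stays consistent.
```

A real SLAM system works in three dimensions, models sensor and motion noise probabilistically, and must handle deformable tissue rather than fixed landmarks, which is what makes the surgical setting so much harder than this toy.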
As technology advances, so does our understanding of what is possible. Through the innovations of the Complementary Situational Awareness for Human-Robot Partnerships project, we are looking at an advance in medical technology that will improve existing medical practice. This five-year, $3.6 million project exists to improve the functionality and usability of robotic surgical systems such as the Da Vinci, making them more intuitive, easier to use, and most importantly, safer for patients.
Image Credit: Thinkstock