
Improving Robo-Doc

Oct 30, 2013

Something I have always wondered about surgical robots is how they make up for the lack of tactile sense during an operation. A sense of touch is still something that separates a human hand from a robotic probe. Sure, there are pressure gauges to measure how much force is being applied, and temperature can easily be detected as well, but these are not the same thing as true tactile sensation. Does this mean that a surgeon is basically working as though their hands are asleep while using a robotic surgical system? Not a pleasant idea. Fortunately, this is a shared concern, and a great deal of development has gone into addressing it. Until now, doctors have tried to compensate for the lack of touch through pre-operative imaging, using tools like MRI, X-rays, and ultrasound to map out the inside of a patient’s body before the actual operation. During surgery, they rely on miniaturized cameras with their own lighting to see what they are doing, track their location within the body via sensors, and plot out their desired positions using those maps. While these methods have had some positive results, they are not enough.

A new project, titled Complementary Situational Awareness for Human-Robot Partnerships, is a collaboration between several research departments set on improving a robot’s ability to gather various forms of sensory information as it works and then use that information to guide its actions. The goal: to restore the awareness surgeons have been forced to give up with the move to minimally invasive surgeries and robot assistants. Nabil Simaan, associate professor of mechanical engineering at Vanderbilt University, leads a team that hopes to develop snake-like surgical robots that will explore a patient’s internal organs and tissue prior to surgery. Howie Choset, professor of robotics at Carnegie Mellon University, leads a team applying a technique called Simultaneous Localization and Mapping (SLAM), which allows robots to navigate unexplored areas without disturbing their surroundings; the resulting maps of the body’s interior will form the foundation of the entire project. Finally, Russell Taylor, the John C. Malone Professor of Computer Science at Johns Hopkins University, will lead a team focused on the interfaces used by the surgeons, building on Johns Hopkins’ open-source “Surgical Assistant Workstation” toolkit, which has been used with systems such as the da Vinci.
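To give a feel for what SLAM actually does, here is a minimal sketch of its core idea: the robot jointly estimates its own position and the position of a landmark from noisy odometry and noisy range readings, so each estimate improves the other. This is purely illustrative and is not the project’s code; it assumes a one-dimensional world, a single landmark, and an extended Kalman filter, and every name and noise value below is my own assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# State: [robot position, landmark position]; P is its covariance.
state = np.array([0.0, 0.0])        # initial guesses
P = np.diag([0.0, 1e6])             # robot pose known, landmark unknown

Q = 0.05 ** 2                       # odometry noise variance (assumed)
R = 0.10 ** 2                       # range sensor noise variance (assumed)

true_robot, true_landmark = 0.0, 5.0

for step in range(20):
    # Motion: command a 0.2 m move; the real robot slips a little.
    u = 0.2
    true_robot += u + rng.normal(0.0, np.sqrt(Q))

    # EKF predict: only the robot moves, so only its uncertainty grows.
    state = state + np.array([u, 0.0])
    P = P + np.diag([Q, 0.0])

    # Measurement: noisy range from robot to landmark.
    z = (true_landmark - true_robot) + rng.normal(0.0, np.sqrt(R))
    z_hat = state[1] - state[0]     # predicted range
    H = np.array([[-1.0, 1.0]])     # Jacobian of (m - x) w.r.t. [x, m]

    # EKF update: one measurement corrects robot AND landmark together.
    S = H @ P @ H.T + R             # innovation covariance (1x1)
    K = P @ H.T / S                 # Kalman gain (2x1)
    state = state + (K * (z - z_hat)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"robot:    est {state[0]:+.2f}  true {true_robot:+.2f}")
print(f"landmark: est {state[1]:+.2f}  true {true_landmark:+.2f}")
```

The same joint-estimation idea is what makes SLAM attractive here: instead of relying only on pre-operative images, the map of the body’s interior is built and refined while the probe is actually moving through it.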

As technology advances, so does our understanding of what is possible. Through the innovations of the Complementary Situational Awareness for Human-Robot Partnerships project, we are looking at an advance in medical technology that will improve existing medical practice. This five-year, $3.6 million project exists to improve the functionality and usability of robotic surgical systems such as the da Vinci, making them more intuitive, easier to use, and, most importantly, safer for patients.

Image Credit: Thinkstock


About 

Joshua is a freelance writer, aspiring novelist, and avid table-top gamer who has been in love with the hobby ever since it was first introduced to him by a friend in 1996. Currently he acts as the Gamemaster in three separate games and is also a player in a fourth. When he is not busy rolling dice to save the world or destroying the hopes and dreams of his players, he is usually found either with his nose in a book or working on his own. He has degrees in English, Creative Writing, and Economics.
