Korets 2011
Face and Construct Validity Assessment of 2nd Generation Robotic Surgery Simulator
Ruslan Korets, Joseph A. Graversen, Adam C. Mues, Mantu Gupta, Jaime Landman, Ketan K. Badani
Introduction and Objectives
Simulation is an established component of surgical training and may prove to be a valuable tool in the acquisition of robotic surgery expertise (RSE). Before a surgical simulator is used to train surgeons or assess competence, it must be subjected to rigorous validation testing. The Mimic dV-Trainer® is a stand-alone virtual reality simulator platform for the da Vinci® Surgical System. The aim of this study was to assess the face and construct validity of the 2nd generation Mimic dV-Trainer (MdVT).
Methods
Between July 2010 and October 2010, 8 urology residents and 2 endourology fellows were assigned to a novice or expert group based on their level of RSE. Each trainee completed 15 exercises from 4 domains. Performance was assessed using the simulator's built-in scoring algorithm, with multiple attempts allowed for each exercise until an overall score ≥80% was achieved. At the conclusion of all training modules, trainees completed a questionnaire assessing ease of use and realism. Construct validity was analyzed by comparing the mean performance variables of the two groups (novice and expert) using a 2-tailed t-test.
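As an illustrative aside, the group comparison described above corresponds to a standard two-sample, 2-tailed t-test. The minimal sketch below (in Python, using SciPy) shows how such a comparison could be computed; the score arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of a two-sample, 2-tailed t-test comparing mean performance
# between two groups. The values below are hypothetical placeholders,
# not data from the study.
from scipy import stats

novice_scores = [78.2, 81.5, 74.9, 83.0, 79.6]  # hypothetical per-trainee overall scores
expert_scores = [91.4, 88.7, 94.2, 90.1, 92.8]  # hypothetical per-trainee overall scores

# ttest_ind performs an independent two-sample t-test (2-tailed by default)
t_stat, p_value = stats.ttest_ind(novice_scores, expert_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```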
Results
The median number of prior robotic surgery cases was 8 (range 0–15) for the novice group (NG) and 110 (range 55–170) for the expert group (EG). Compared with the NG, the EG had fewer instrument collisions (2.5 vs 6.3, p<0.01), had better “instrument out of view” scores (90.3 vs 81.0, p=0.02), missed fewer targets within the “needle driving” domain (1.9 vs 6.6, p<0.01), and performed better in the “work-space range” domain (93.9 vs 79.8, p=0.01). There was no statistically significant difference between the groups with respect to the “excessive instrument force” score (p=NS) or the number of “drops” (p=NS). The NG required more attempts to achieve a score ≥80% (6.0 vs 3.9, p=0.03). Both groups rated the MdVT as “easy to use” and “useful” in improving RSE, with the NG reporting improved confidence in robotic surgery skills after completing the training set. The EG rated the MdVT as “somewhat realistic” in the “arm manipulation” and “camera movement” domains, but judged the “needle control” and “needle driving” domains “not realistic.”
Conclusion
The 2nd generation MdVT demonstrates face and construct validity as a stand-alone robotic surgery simulator. Software updates may be necessary to improve the utility of the robotic suturing domains and will require further validation.