
Face, Content, and Construct Validation of the da Vinci Skills Simulator

Douglas C. Kelly, Andrew C. Margules, Chandan R. Kundavaram, Hadley Narins, Leonard G. Gomella, Edouard J. Trabulsi, Costas D. Lallas

Objective

To report on assessments of face, content, and construct validity for the commercially available da Vinci Skills Simulator (dVSS).

Methods

A total of 38 subjects participated in this prospective study. Participants were classified as novice (0 robotic cases performed), intermediate (1-74 robotic cases), or expert (≥75 robotic cases). Each subject completed 5 exercises on the simulator. Using the metrics available in the simulator software, the performance of each group was compared to assess construct validity. Immediately after completing the exercises, each subject filled out a questionnaire to assess face and content validity.

Results

The novice group consisted of 18 medical students and 1 resident. The intermediate group included 6 residents, 1 fellow, and 2 faculty urologists. The expert group consisted of 2 residents, 1 fellow, and 7 faculty surgeons. The mean number of robotic cases performed was 29.2 in the intermediate group and 233.4 in the expert group. An overall significant difference in favor of the more experienced groups was observed in 4 skill sets. When intermediates and experts were combined into a single “experienced” group, they significantly outperformed novices in all 5 exercises. Intermediates and experts rated various elements of the simulator’s realism at an average of 4.1/5 and 4.3/5, respectively. All intermediate and expert participants rated the simulator’s value as a training tool at 4/5 or 5/5.

Conclusions

Our study supports the face, content, and construct validity of the dVSS. These results indicate that the simulator may be most useful to novice surgeons seeking to acquire basic robotic skills.
