
Alzahrani 2013


Validation of the da Vinci Surgical Skill Simulator Across Three Surgical Disciplines: A pilot study

Tarek Alzahrani, MD, Richard Haddad, MD, Abdullah Alkhayal, MD, Josée Delisle, MSc, Laura Drudi, MD, Walter Gotlieb, MD, Shannon Fraser, MD, Simon Bergman, MD, Frank Bladou, MD, Sero Andonian, MD, MSc, FRCSC, and Maurice Anidjar, MD, PhD

Objective

In this study, we evaluate the face, content and construct validity of the da Vinci Surgical Skills Simulator (dVSSS) across 3 surgical disciplines.

Methods

In total, 48 participants from urology, gynecology and general surgery participated in the study as novices (0 robotic cases performed), intermediates (1–74) or experts (≥75). Each participant completed 9 tasks (peg board level 2, match board level 2, needle targeting, ring and rail level 2, dots and needles level 1, suture sponge level 2, energy dissection level 1, ring walk level 3 and tubes). The Mimic Technologies software scored each task from 0 (worst) to 100 (best) using several predetermined metrics. Face and content validity were evaluated by a questionnaire administered after task completion. The Wilcoxon test was used to perform pairwise comparisons.
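
The abstract does not specify which Wilcoxon variant was used; for comparisons between independent groups (e.g., novices vs. experts) this would be the rank-sum (Mann–Whitney) form. A minimal sketch of such a comparison with SciPy follows; the score arrays are hypothetical illustrations, not data from the study.

    from scipy.stats import ranksums

    # Hypothetical 0-100 simulator scores for one task (illustrative only,
    # not the study's data)
    novice_scores = [42, 55, 61, 48, 50, 58, 63, 47]
    expert_scores = [88, 92, 79, 95, 90, 85]

    # Wilcoxon rank-sum test comparing the two independent groups
    stat, p = ranksums(novice_scores, expert_scores)
    print(f"statistic = {stat:.2f}, p = {p:.4f}")
    # p < 0.05 would indicate a significant group difference, as the study
    # reports between experts and novices on all 9 tasks.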

Results

The expert group comprised 6 attending surgeons. The intermediate group included 4 attending surgeons, 3 fellows and 5 residents. The novice group included 1 attending surgeon, 1 fellow, 13 residents, 13 medical students and 2 research assistants. The median number of robotic cases performed was 250 for experts and 9 for intermediates. The median overall realism score (face validity) was 8/10. Experts rated the usefulness of the simulator as a training tool for residents (content validity) as 8.5/10. For construct validity, experts outperformed novices in all 9 tasks (p < 0.05). Intermediates outperformed novices in 7 of 9 tasks (p < 0.05); there were no significant differences in the energy dissection and ring walk tasks. Finally, experts scored significantly better than intermediates in only 3 of 9 tasks (match board, dots and needles, and energy dissection; p < 0.05).

Conclusions

This study confirms the face, content and construct validity of the dVSSS across urology, gynecology and general surgery. A larger sample size and more complex tasks are needed to further differentiate intermediates from experts.
