Abstract

Background: Current methods of surgeon performance assessment are unstructured, labour-intensive and subjective. Machine learning could provide rapid, automated and objective assessment. We designed a custom deep learning model to output performance metrics from videos of laparoscopic sleeve gastrectomy (LSG) and validated these against Objective Structured Assessment of Technical Skills (OSATS) ratings and clinical outcomes.

Methods: A novel dataset of 3210 images from videos of LSG was annotated to train an instrument segmentation framework based on Mask R-CNN, a state-of-the-art deep learning algorithm. Spatial outputs of the detected instruments were used first to determine operative workflow through a Markov chain and second to derive instrument metrics for assessment of surgical performance. Metrics including time spent in stage and instrument time, distance, smoothness, concentration, and efficiency were then validated against OSATS ratings from two independent scorers and clinical outcomes from 35 LSGs.

Results: Grasper and LigaSure efficiency and smoothness were higher in videos rated in the top quartile by OSATS score than in those in the bottom quartile, and higher in consultants than in trainees. Videos in the top quartile for grasper smoothness had higher total OSATS scores (17.31 ± 2.27) than those in the lowest quartile (15.69 ± 2.93), higher scores across all five individual OSATS domains, and a higher percentage of total weight loss at 3, 6, and 12 months.

Conclusions: This study demonstrates how vision-based deep learning can be used for surgeon performance assessment. Future work will aim to validate the approach against a larger number of videos, expand the range of metrics produced, and address the steps needed for clinical translation.
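The abstract does not give the exact formulas behind the instrument metrics or the workflow model, but the general recipe it describes can be sketched. Below is a minimal, hypothetical Python illustration: kinematic metrics (path distance and a jerk-based smoothness proxy, a common choice in motion analysis) computed from per-frame instrument centroids, and a first-order Markov transition matrix estimated from a sequence of detected stage labels. Function names, the jerk-based smoothness definition, and the frame rate are assumptions, not the study's actual implementation.

```python
import numpy as np

def instrument_metrics(centroids, fps=25.0):
    """Illustrative kinematic metrics from per-frame instrument centroids.

    centroids: (N, 2) array of x, y pixel coordinates, one row per frame.
    Path distance and a jerk-based smoothness proxy are assumed here;
    the study's exact definitions are not stated in the abstract.
    """
    c = np.asarray(centroids, dtype=float)
    dt = 1.0 / fps
    steps = np.diff(c, axis=0)                             # frame-to-frame displacement
    distance = float(np.linalg.norm(steps, axis=1).sum())  # total path length (pixels)
    vel = steps / dt                                       # velocity
    acc = np.diff(vel, axis=0) / dt                        # acceleration
    jerk = np.diff(acc, axis=0) / dt                       # jerk
    # Lower mean jerk magnitude means smoother motion; negate so that
    # larger values correspond to smoother movement.
    smoothness = -float(np.linalg.norm(jerk, axis=1).mean()) if len(jerk) else 0.0
    return {"distance": distance, "smoothness": smoothness}

def transition_matrix(stages, n_stages):
    """Row-normalised first-order Markov transition matrix from a
    per-frame sequence of integer stage labels (0..n_stages-1)."""
    counts = np.zeros((n_stages, n_stages))
    for a, b in zip(stages[:-1], stages[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```

For example, a constant-velocity straight-line trajectory has zero jerk, so its smoothness proxy is maximal (zero), while erratic motion yields a more negative value; the transition matrix's dominant diagonal reflects how long the procedure dwells in each stage.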

Original publication

DOI

10.1093/bjs/znad241.136

Type

Journal article

Journal

British Journal of Surgery

Publisher

Oxford University Press (OUP)

Publication Date

21/08/2023

Volume

110