Automated Identification of Key Steps in Robotic-Assisted Radical Prostatectomy Using Artificial Intelligence

Subject areas: Medicine; Gerontology
Authors: Abhinav Khanna, Alenka Antolin, Omri Bar, Danielle Ben-Ayoun, Maya Zohar, Stephen A. Boorjian, Igor Frank, Paras Shah, Vidit Sharma, R. Houston Thompson, Tamir Wolf, Dotan Asselmann, Matthew K. Tollefson
Affiliations: Department of Urology, Mayo Clinic, Rochester, Minnesota (Khanna, Boorjian, Frank, Shah, Sharma, Thompson, Tollefson); Theator, Inc, Palo Alto, California (Antolin, Bar, Ben-Ayoun, Zohar, Wolf, Asselmann)
Corresponding author: Abhinav Khanna, MD, 200 1st St SW, Rochester, MN 55905
Source: The Journal of Urology [Ovid Technologies (Wolters Kluwer)], 211(4): 575-584, April 1, 2024. Cited by: 1
DOI: 10.1097/JU.0000000000003845

Abstract

Purpose: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, thereby limiting its use. We developed a novel computer-vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP).

Materials and Methods: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then used to train a computer-vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparison with manual human annotations as the reference standard.

Results: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence-enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%).

Conclusions: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video.
Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.

Introduction

Robotic surgery has been adopted widely across the globe, with an estimated 1.25 million robotic surgeries performed in 2020.1 This abundance of robotic surgeries generates vast amounts of data in the form of surgical video footage. Intentional and deliberate analysis of this surgical video has the potential to give surgeons and hospitals invaluable insights into their surgical practice. Potential opportunities include benchmarking surgical quality, surgeon teaching and education, medical billing and documentation, and optimizing operating room logistics and operations. Prostate cancer is the most common solid-organ malignancy among men in the US,2 and robotic surgical approaches to radical prostatectomy have largely replaced open surgery over the last few decades.3 Indeed, robotic-assisted radical prostatectomy (RARP) is the most commonly performed robotic urologic surgery in the US.4 Thus, RARP is ripe for systematic and empiric study of surgical video footage. However, manually reviewing and labeling surgical videos is laborious and resource-intensive, which limits routine and widespread use.5 Millions of surgeries are performed and recorded globally each year, yet the majority of that video is not routinely or systematically analyzed, representing a tremendous untapped opportunity. We aimed to develop an artificial intelligence (AI)-enabled computer vision algorithm for automated identification of key surgical steps during RARP to serve as a framework for systematic and automated analysis of surgical video footage.

Materials and Methods

A retrospective review was performed to identify patients undergoing RARP for prostate cancer at a tertiary referral center in the US and a regional hospital in Israel from December 2021 through December 2022. Surgical video was recorded and stored on a secure cloud-based server using a novel AI surgical video platform (Theator, Inc). Key surgical steps, as outlined in Table 1, were defined by expert consensus among fellowship-trained urologic oncologists who routinely perform RARP. Full-length surgical videos were manually labeled with all predefined steps by a team of medical imaging annotators under the supervision of 2 fellowship-trained urologic oncologists. Interrater reliability among the medical imaging annotators was a mean (SD) of 95.82 (3.85; data not shown). The dataset included a variety of RARP techniques, including anterior approach (n = 365), posterior approach (n = 102), "hood-sparing," and numerous other variations in surgical technique. The dataset also included a diversity of technical configurations, including surgeries with either 1 or 2 laparoscopic assistant ports, traditional insufflation or the AirSeal insufflation management system, single- and multiport robotic platforms, and bedside surgical assistants of varying skill levels.

Table 1. Surgical Steps and Visual Triggers for Step Initiation in Robotic-Assisted Radical Prostatectomy
Preparation: Includes obtaining pneumoperitoneum, trocar placement, patient/bed repositioning, robot docking, and initial inspection of the abdominal cavity.
Adhesiolysis: Surgical instruments begin lysing adhesions. This step is not always performed and is annotated only when adhesions are present.
Lymph node dissection: Instruments start dissecting the peritoneum over the iliac vasculature; involves excision of pelvic lymph nodes; includes view of the iliac vessels and obturator nerve.
Retzius space dissection: Instrument starts dissecting peritoneum off the abdominal wall in the central/medial region; dissects the bladder from the abdominal wall anywhere from urachus to pubic bone; includes exposure of the anterior bladder neck. Can include dissection of the endopelvic fascia when performed.
Anterior bladder neck transection: Instrument starts transecting the anterior portion of the bladder neck; continues through exposure of the Foley catheter.
Posterior bladder neck transection: Catheter is elevated and the surgical instrument starts dissecting the posterior portion of the bladder neck.
Seminal vesicles and posterior dissection: Surgeon takes hold of the vas deferens and/or seminal vesicles and begins exposing these accessory structures via dissection. Includes ligation of the vas deferens, as well as posterior dissection and mobilization of the plane between prostate and rectum.
Lateral and apical dissection: Dissection of the prostatic pedicles; involves bilateral dissection of the neurovascular bundle as well as apical dissection and exposure of the urethra.
Urethral transection: Instrument (usually scissors) starts transecting the urethra; includes any additional soft-tissue dissection needed to mobilize the prostate apex completely. Also includes hemostatic control and packaging of the specimen.
Vesicourethral anastomosis: Suture needle enters the bladder neck; may include bladder neck plication sutures. Can include alternative sutures such as the Rocco stitch.
Final inspection and extraction: Needle is removed and the leak test starts; includes inspection of the operating field, additional irrigation and hemostasis as needed, and removal of the trocars.
All steps end when the next step starts. Each surgical video is fully annotated; therefore, every second belongs to 1 (and only 1) step.

The complete dataset of fully labeled videos was randomly divided into separate training, internal validation, and test cohorts using a 60:15:25 split. Manually annotated videos were used to train a computer-vision algorithm to perform automated video labeling of surgical steps, as outlined below. Algorithm development was performed using the training dataset, with periodic benchmarking of model performance on the internal validation dataset. The model was never trained on, nor allowed to learn from, any videos in the test dataset. Accuracy of the AI model was determined by comparison with manual human labels as the reference standard. Human video annotators did not have access to algorithm labels during the manual video review process. Accuracy was measured for the overall surgery, as well as on a per-step basis.
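To make the annotation schema and cohort split concrete, the sketch below shows one way the 11 steps of Table 1 and the 60:15:25 random split could be encoded in Python. This is illustrative only and not the authors' code; the class and function names and the random seed are assumptions, and the exact cohort sizes reported in the study (292/69/113) imply a slightly different splitting procedure than simple rounding.

```python
import random
from enum import IntEnum

class RarpStep(IntEnum):
    """The 11 surgical steps from Table 1; integer values double as class indices."""
    PREPARATION = 0
    ADHESIOLYSIS = 1
    LYMPH_NODE_DISSECTION = 2
    RETZIUS_SPACE_DISSECTION = 3
    ANTERIOR_BLADDER_NECK_TRANSECTION = 4
    POSTERIOR_BLADDER_NECK_TRANSECTION = 5
    SEMINAL_VESICLES_AND_POSTERIOR_DISSECTION = 6
    LATERAL_AND_APICAL_DISSECTION = 7
    URETHRAL_TRANSECTION = 8
    VESICOURETHRAL_ANASTOMOSIS = 9
    FINAL_INSPECTION_AND_EXTRACTION = 10

def split_cases(case_ids, seed=0):
    """Shuffle case IDs and split them 60:15:25 into train/validation/test cohorts."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    n_train = round(0.60 * len(ids))
    n_val = round(0.15 * len(ids))
    return {
        "train": ids[:n_train],
        "val": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }

cohorts = split_cases(range(474))
print({name: len(cases) for name, cases in cohorts.items()})
# With 474 cases this yields roughly 284/71/119; the study reports 292/69/113.
```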
Patient confidentiality was maintained by employing our group's previously developed algorithm for automated video blurring upon removal of the surgical camera from the patient's body.6 Our novel RARP step detection AI tool was structured similarly to an algorithm previously developed by our group for surgical step recognition,7 but the current algorithm is unique and was built de novo in this RARP cohort (Figure 1). This approach includes 2 components: (1) a deep feature extraction model that produces a representation of each second of the surgical video, and (2) a temporal model that learns to predict surgical steps from the learned sequence of features. We applied a transfer-learning approach previously described by our group to reduce training dataset size requirements and to improve step recognition by allowing cross-training of similar steps across different surgery types.8 The current RARP step detection algorithm employs transfer learning with feature extraction from our prior efforts in laparoscopic cholecystectomy, laparoscopic appendectomy, sleeve gastrectomy, and laparoscopic hysterectomy.7

Figure 1. Artificial intelligence algorithm architecture, including video transformer network (VTN) and long short-term memory (LSTM) networks. CLS indicates classification; FV, feature vector; P, positional embedding; PE, positional embedding; ViT, vision transformer.

The first component of algorithm development was a feature extraction model. Here we employed a video transformer network (VTN)9 that processed the entire video from start to end as a sequence of images (frames). VTN uses attention-based modules that learn both the spatial and the temporal information of the input video. The pretraining process started from the original VTN weights.9 Next, we fine-tuned the model for the step recognition task by continuing the training process on the RARP surgical video dataset, again employing transfer learning from other surgery types. After training, the resultant model was used as a feature extractor for the RARP videos, and the extracted features served as input to the subsequent temporal model.

The temporal model was a long short-term memory (LSTM) network.10 This type of recurrent neural network can process long sequences, taking into account the representation of the current second while maintaining a memory of past relevant information that contributes to the model's final predictions. Because videos are processed offline (ie, after the surgery is completed and the entire video is available to serve as model input), we used a bidirectional LSTM that processed the video in both directions (start-to-end and end-to-start). The hidden dimension was set to 128, followed by dropout with a probability of 0.5 and a linear layer that mapped from the hidden LSTM space to the 11 steps of RARP. We used a cross-entropy loss function and trained the network for 100 epochs with a stochastic gradient descent optimizer and a learning rate of 10^-2.

This study was deemed exempt by the Mayo Clinic Institutional Review Board (IRB No. 22-007371). We adhered to the STREAM-URO (Standardized Reporting of Machine Learning Applications in Urology) framework for this manuscript.11 This study was performed using PyTorch version 1.12.1.
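The temporal model described above maps a sequence of per-second feature vectors to one of the 11 steps for every second of video. A minimal PyTorch sketch consistent with the reported hyperparameters (bidirectional LSTM, hidden dimension 128, dropout 0.5, linear layer to 11 classes, cross-entropy loss, SGD with learning rate 10^-2, 100 epochs) follows; the feature dimension, the synthetic data, and the single-video batching are assumptions, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

NUM_STEPS = 11      # the RARP steps listed in Table 1
FEATURE_DIM = 768   # assumed size of each per-second VTN feature vector

class StepLSTM(nn.Module):
    """Bidirectional LSTM head mapping per-second features to per-second step logits."""
    def __init__(self, feature_dim=FEATURE_DIM, hidden_dim=128, num_steps=NUM_STEPS):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(p=0.5)
        self.classifier = nn.Linear(2 * hidden_dim, num_steps)  # 2x hidden: both directions

    def forward(self, features):                      # features: (batch, seconds, feature_dim)
        hidden, _ = self.lstm(features)               # (batch, seconds, 2 * hidden_dim)
        return self.classifier(self.dropout(hidden))  # (batch, seconds, num_steps)

model = StepLSTM()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# Synthetic stand-in for one annotated video: T seconds of features and step labels.
T = 600
features = torch.randn(1, T, FEATURE_DIM)
labels = torch.randint(0, NUM_STEPS, (1, T))

for epoch in range(100):  # 100 epochs, as reported
    logits = model(features)
    loss = criterion(logits.reshape(-1, NUM_STEPS), labels.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```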
Results

A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with sequential steps of surgery. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate test cohort for evaluating algorithm accuracy. In the overall cohort, 371 cases (78.3%) included lymph node dissection, while 103 (21.7%) were RARP alone. Other than lymph node dissection, all steps were present in all surgical videos. As outlined in Table 2, the steps with the longest and shortest median durations were lateral/apical prostate dissection and anterior bladder neck transection, respectively. Overall concordance between AI video analysis and manual human video analysis for the full surgery was 92.8%. As shown in Figure 2, algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%).

Table 2. Median Duration of Individual Surgical Steps in Robotic-Assisted Radical Prostatectomy. Values are median (IQR) minutes for the training cohort (n = 292), validation cohort (n = 69), test cohort (n = 113), and overall cohort (n = 474), in that order.
Preparation: 8.2 (4.52); 7.29 (5.59); 8.5 (5.42); 8.15 (4.95)
Adhesiolysis: 3.7 (4.26); 2.75 (3.7); 3.62 (3.43); 3.5 (4.02)
Lymph node dissection: 25.83 (18.25); 28.98 (23.72); 27.35 (22.65); 27.2 (20.87)
Retzius space dissection: 14.23 (14.84); 13.87 (14.24); 15.33 (16.73); 14.38 (15.47)
Anterior bladder neck transection: 2.98 (2.22); 2.98 (2.55); 2.83 (2.28); 2.97 (2.28)
Posterior bladder neck transection: 6.13 (5.68); 6.47 (6.98); 6.34 (6.53); 6.22 (6.35)
Lateral and apical prostate dissection: 29.94 (21.85); 28.98 (22.63); 31.63 (23.68); 30.42 (22.1)
Seminal vesicles and posterior dissection: 16.95 (15.43); 14.84 (18.92); 14.65 (13.42); 15.74 (15.98)
Urethral transection: 9.79 (8.42); 10.17 (8.05); 9.95 (6.92); 9.88 (8.13)
Vesicourethral anastomosis: 22.23 (16.27); 21.02 (16.45); 21.1 (14.25); 21.9 (15.92)
Final inspection and extraction: 3.86 (4.31); 3.97 (4.96); 4.72 (6.26); 4.02 (5.1)

Figure 2. Confusion matrix demonstrating per-step accuracy of artificial intelligence (AI) step detection vs human step detection in robotic-assisted radical prostatectomy. Dark blue squares indicate the degree of accuracy of AI step detection as compared with human step detection; values closer to 1.000 indicate greater concordance between AI and human step detection (ie, greater AI accuracy). Light blue or white squares indicate the degree of AI model confusion. ABN indicates anterior bladder neck transection; FIE, final inspection and extraction; LAD, lateral and apical dissection; LND, lymph node dissection; Lysis, adhesiolysis; PBN, posterior bladder neck transection; Prep, preparation; RSD, Retzius space dissection; SV, seminal vesicle and posterior dissection; Urethra, urethral transection; VUA, vesicourethral anastomosis.
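The overall concordance and per-step accuracy figures above are second-level comparisons between AI and human labels. As a rough illustration (not the authors' evaluation code; the function names and the toy labels are invented), per-second concordance and a row-normalized confusion matrix along the lines of Figure 2 could be computed as follows:

```python
import numpy as np

def per_second_concordance(ai_steps, human_steps):
    """Fraction of seconds where the AI label matches the human reference label."""
    ai = np.asarray(ai_steps)
    ref = np.asarray(human_steps)
    return float((ai == ref).mean())

def per_step_confusion(ai_steps, human_steps, num_steps=11):
    """Row-normalized confusion matrix: for each reference (human) step, the fraction
    of its seconds assigned to each predicted step. The diagonal corresponds to the
    per-step accuracies plotted in Figure 2."""
    ai = np.asarray(ai_steps)
    ref = np.asarray(human_steps)
    cm = np.zeros((num_steps, num_steps))
    for r, p in zip(ref, ai):
        cm[r, p] += 1
    row_sums = cm.sum(axis=1, keepdims=True)
    return np.divide(cm, row_sums, out=np.zeros_like(cm), where=row_sums > 0)

# Toy usage with made-up labels for a 10-second clip and 3 steps.
human = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
ai    = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]
print(per_second_concordance(ai, human))           # 0.8
print(np.diag(per_step_confusion(ai, human, 3)))   # per-step accuracies
```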
Discussion

We developed an AI tool for fully automated detection of key surgical steps during RARP. This computer vision tool achieved high accuracy when compared with humans as the reference standard. To our knowledge, this represents the first report of a fully automated surgical step detection model for RARP based purely on surgical video alone. Automated identification of surgical steps allows raw surgical video footage to be transformed into systematic and structured data in a form that can be analyzed and studied empirically. Analogous to the Human Genome Project's aim of mapping the full human genome to serve as a foundational framework for future research efforts,12 structuring of surgical video may provide a framework for future efforts to glean actionable insights into minimally invasive surgery. The potential future applications of an automated digital atlas of RARP surgical video are myriad, including in the realms of surgical quality, teaching and education, billing and documentation, and operating room logistics and operations.

With regard to surgical quality, automated detection of intraoperative events facilitates analysis of adherence to established safety milestones. Figure 3 demonstrates rates of successfully viewing the obturator nerve during pelvic lymphadenectomy, a well-established safety maneuver,13 for individual surgeons as well as for the overall study cohort. Similar data were collected to automatically quantify the frequency of acute hemorrhage events during RARP (data not shown). These metrics represent safety analytics powered by our AI-automated step detection algorithm, which can be used to automate surgical quality benchmarking in RARP.

Figure 3. Examples of real-world clinical applications using the robotic-assisted radical prostatectomy automated artificial intelligence step-detection algorithm. A, Streamlined video review using step labels. B, Search of a video library by individual surgical steps. C, Quantification of safety milestones (successfully viewing the obturator nerve during pelvic lymphadenectomy) for individual surgeons as well as for the overall study cohort. D, Benchmarking operative times of individual surgical steps in robotic-assisted radical prostatectomy. E, Predicting estimated time to completion for surgeries as they progress through various steps in real time.

Our AI step detection tool also has potential applications in surgical training. Video review has previously been suggested as a valuable tool for training novice surgeons and refining skills among experienced surgeons.14-16 Structured video with step labeling can help expedite video review among surgeons and trainees, but manually labeling surgical video footage with key steps is laborious, time-intensive, and expensive, which may limit its routine adoption.5 The current study overcomes this obstacle by fully automating the process of video labeling using a novel AI platform, which may facilitate targeted video review of individual surgical steps (Figure 3, B).

The current study results also have potential applications in billing and documentation. Currently, surgical operative reports are often generated by surgeons in narrative form, an inherently subjective process that can be incomplete or inaccurate.17-24 Additionally, surgical operative reports serve as the basis of medical billing, which represents a potential opportunity for increased transparency and improved billing accuracy.25 For example, RARP and pelvic lymphadenectomy constitute 2 different Current Procedural Terminology codes. An AI model capable of automatically detecting lymphadenectomy in RARP surgical video could potentially serve as the basis for video-based medical billing in the future. Similarly, future development of novel algorithms for automatically detecting surgical complexity may help with surgical risk stratification, including AI detection of aberrant anatomy, reoperative surgical fields, or tissue planes distorted by prior radiation or inflammation. The current automated step detection model can transform raw surgical video, which is currently unstructured and largely unusable data, into an objective and transparent "ground truth" to serve as the basis for billing and documentation in a future digital era of surgery.

Finally, automated step detection technology may support operating room logistics and operations. For instance, rather than measuring total operative time for a procedure, surgeons and administrators may be able to examine the average duration of each individual surgical step, which could facilitate benchmarking of operative times with greater granularity than previously possible (Figure 3). Our AI model may also facilitate predicting "estimated time to completion" for surgeries as they progress through various steps in real time, enabling more accurate and data-driven staffing and resource allocation for operating rooms and surgical teams than currently available tools allow (Figure 3).26
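As an illustration of the "estimated time to completion" idea only (this estimator is not described or validated in the study; its design and its use of the Table 2 medians are assumptions), remaining operative time could be approximated from the currently detected step and the median step durations of the overall cohort:

```python
# Illustration only; not the authors' model. Median step durations (minutes) from the
# overall cohort in Table 2, ordered as in Table 1 (adhesiolysis and lymph node
# dissection are not performed in every case).
MEDIAN_MINUTES = {
    "Preparation": 8.15,
    "Adhesiolysis": 3.5,
    "Lymph node dissection": 27.2,
    "Retzius space dissection": 14.38,
    "Anterior bladder neck transection": 2.97,
    "Posterior bladder neck transection": 6.22,
    "Seminal vesicles and posterior dissection": 15.74,
    "Lateral and apical prostate dissection": 30.42,
    "Urethral transection": 9.88,
    "Vesicourethral anastomosis": 21.9,
    "Final inspection and extraction": 4.02,
}

def estimated_minutes_remaining(current_step, minutes_into_step):
    """Naive ETA: time left in the current step (never below zero) plus the median
    durations of all steps that have not yet started."""
    steps = list(MEDIAN_MINUTES)
    idx = steps.index(current_step)
    remaining_in_current = max(MEDIAN_MINUTES[current_step] - minutes_into_step, 0.0)
    remaining_future = sum(MEDIAN_MINUTES[s] for s in steps[idx + 1:])
    return remaining_in_current + remaining_future

# Example: 10 minutes into the vesicourethral anastomosis.
print(round(estimated_minutes_remaining("Vesicourethral anastomosis", 10), 1))
```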
The current study results suggest high accuracy for the overall RARP step detection model, as well as for nearly all individual surgical steps. The only surgical step with accuracy below 80% was the final inspection and extraction step. This may reflect the somewhat subjective nature of the final inspection step, which is defined as occurring after the vesicourethral anastomosis and includes final irrigation and/or hemostasis of the operative field. Unlike most other steps of RARP, this step is perhaps the least clearly defined and lacks a discrete surgical maneuver, resulting in an expected loss of AI model accuracy for this step.

The current AI algorithm was developed using a transfer-learning approach, building on our group's prior experience with automated annotation in laparoscopic cholecystectomy, laparoscopic appendectomy, laparoscopic sleeve gastrectomy, and laparoscopic hysterectomy.7,8 By applying experience from prior step detection efforts, the learning curve for new AI models is greatly reduced. This facilitates training novel AI step detection models for new surgical procedures without requiring excessively large training datasets, a requisite step toward scaling this technology to various types of minimally invasive surgery across disciplines.

Prior authors have developed step-detection algorithms in RARP.27 However, these models have incorporated data from the robotic surgical platform, including "events" (eg, button presses, instrument clutching, energy activation, instrument swapping) and "kinematics" (eg, angles of instrument joints, economy of motion, instrument speed). Others have similarly correlated automated performance metrics with clinical outcomes in RARP.28-31 However, many of these studies also incorporate data from the robotic surgical platform, often using the Intuitive Surgical dVLogger to capture relevant instrument and system data. While the aforementioned studies are of tremendous value in establishing links between intraoperative events and postoperative outcomes, their generalizability to other surgical modalities, such as laparoscopy, endoscopy, or novel surgical platforms, is limited. In contrast, our step-detection algorithm is purely video based and thus platform agnostic, offering the ability to translate this step-detection tool to a variety of surgical modalities. Additionally, our model can capture data visible only on video and not provided by the surgical platform, such as acute hemorrhage events, adherence to intraoperative safety maneuvers, tumor spillage events, and anatomic case complexity. Furthermore, our AI model moves beyond microgestures and instead assesses entire phases and steps of surgery, taking global anatomic and temporospatial relationships into consideration to provide meaningful predictions of sequential surgical phase during RARP.
Study results should be interpreted in the context of methodological limitations. This is a 2-center study that requires external validation at other centers. However, the current dataset includes RARP video from several robotic surgeons and captures a variety of surgical techniques, including posterior approach, anterior approach, "hood-sparing," and other technical modifications. Notably, step detection was performed irrespective of the temporal sequencing of individual surgical steps. Thus, the training dataset includes notable heterogeneity in surgical technique, and we anticipate this algorithm will generalize well to new datasets. Additionally, the model was trained to detect RARP steps using current surgical techniques; as novel surgical approaches are developed and refined (eg, Retzius-sparing, transvesical), the model will need to be updated and retrained as RARP techniques continue to evolve over time. Finally, given that the primary focus of this study was surgical video footage, individual patient clinical and demographic characteristics were not captured, which is a potential source of unidentified bias in the data.

Conclusions

We developed an AI tool capable of automatically detecting key steps during RARP with high accuracy. This technology has potential applications in surgical quality, teaching and education, billing and documentation, and operating room logistics and operations, and warrants further evaluation.

References

1. Intuitive Surgical Inc. Annual report. 2020. https://isrg.intuitive.com/static-files/80b10bf5-c1da-4ad3-bb0e-8c595e2c712c
2. Cancer statistics, 2022. CA Cancer J Clin. 2022;72(1):7-33.
3. Facility-level analysis of robot utilization across disciplines in the National Cancer Database. J Robot Surg. 2019;13(2):293-299.
4. Variation in the utilization of robotic surgical operations. J Robot Surg. 2020;14(4):593-599.
5. Video labelling robot-assisted radical prostatectomy and the role of artificial intelligence (AI): training a novice. J Robot Surg. 2023;17(2):695-701.
6. Accurate detection of out of body segments in surgical video using semi-supervised learning. PMLR. 2020;121:923-936.
7. Impact of data on generalization of AI for surgical intelligence applications. Sci Rep. 2020;10(1):22208.
8. "Train one, classify one, teach one" - cross-surgery transfer learning for surgical step recognition. PMLR. 2021;143:532-535.
9. Video transformer network. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops. 2021:3163-3172.
10. Deep Learning. MIT Press; 2016.
11. Standardized reporting of machine learning applications in urology: the STREAM-URO framework. Eur Urol Focus. 2021;7(4):672-682.
12. Implications of the human genome project for medical science. JAMA. 2001;285(5):540-544.
13. Obturator nerve injury in robotic pelvic surgery: scenarios and management strategies. Eur Urol. 2023;83(4):361-368.
14. Comparison of effective teaching methods to achieve skill acquisition using a robotic virtual reality simulator: expert proctoring versus an educational video versus independent training. Medicine (Baltimore). 2018;97(51):e13569.
15. Video review for measuring and improving skill in urological surgery. Nat Rev Urol. 2019;16(4):261-267.
16. A novel expert coaching model in urology, aimed at accelerating the learning curve in robotic prostatectomy. J Surg Educ. 2022;79(6):1480-1488.
17. Prospective, blinded evaluation of accuracy of operative reports dictated by surgical residents. Am Surg. 2005;71(8):627-632.
18. Differences between attendings' and residents' operative notes for laparoscopic cholecystectomy. World J Surg. 2013;37(8):1841-1850.
19. Comparison of systematic video documentation with narrative operative report in colorectal cancer surgery. JAMA Surg. 2019;154(5):381-389.
20. Association of video completed by audio in laparoscopic cholecystectomy with improvements in operative reporting. JAMA Surg. 2020;155(7):617-623.
21. Operative notes do not reflect reality in laparoscopic cholecystectomy. Br J Surg. 2011;98(10):1431-1436.
22. Comparison of operative notes with real-time observation of adhesiolysis-related complications during surgery. Br J Surg. 2013;100(3):426-432.
23. Quality of narrative operative reports in pancreatic surgery. Can J Surg. 2013;56(5):E121-E127.
24. Quality of neck dissection operative reports. Am J Otolaryngol. 2016;37(4):330-333.
25. The operative note as billing documentation: a preliminary report. Am Surg. 2004;70(7):570-575.
26. The accuracy of surgeons' provided estimates for the duration of hysterectomies: a pilot study. J Minim Invasive Gynecol. 2015;22(1):57-65.
27. Novel evaluation of surgical activity recognition models using task-based efficiency metrics. Int J Comput Assist Radiol Surg. 2019;14(12):2155-2163.
28. A deep-learning model using automated performance metrics and clinical features to predict urinary continence recovery after robot-assisted radical prostatectomy. BJU Int. 2019;124(3):487-495.
29. Utilizing machine learning and automated performance metrics to evaluate robot-assisted radical prostatectomy performance and predict outcomes. J Endourol. 2018;32(5):438-444.
30. Development and validation of objective performance metrics for robot-assisted radical prostatectomy: a pilot study. J Urol. 2018;199(1):296-304.
31. Automated performance metrics and machine learning algorithms to measure surgeon performance and anticipate clinical outcomes in robotic surgery. JAMA Surg. 2018;153(8):770-771.

Funding/Support: None.

Conflict of Interest Disclosures: Dr Boorjian reported receiving grants from Artara, Ferring, FerGene, and Prokarium outside the submitted work. Authors from Mayo Clinic have no financial relationships with the commercial entity (Theator) whatsoever. Authors from Theator, Inc (Drs Antolin, Ben-Ayoun, Zohar, Wolf, and Asselmann) are all employed by the commercial entity.

Ethics Statement: This study was deemed exempt from Institutional Review Board approval (IRB No. 22-007371).
Author Contributions: Conception and design: Khanna, Antolin, Bar, Ben-Ayoun, Shah, Sharma, Thompson, Wolf, Asselmann, Tollefson. Data analysis and interpretation: Khanna, Antolin, Bar, Zohar, Boorjian, Frank, Wolf, Asselmann, Tollefson. Data acquisition: Khanna, Antolin, Bar, Asselmann, Tollefson. Drafting the manuscript: Khanna, Bar, Ben-Ayoun, Tollefson. Critical revision of the manuscript for scientific and factual content: Khanna, Antolin, Zohar, Boorjian, Frank, Shah, Sharma, Thompson, Wolf, Asselmann, Tollefson. Statistical analysis: Khanna, Bar, Zohar, Asselmann. Supervision: Khanna, Antolin, Bar, Boorjian, Frank, Shah, Sharma, Thompson, Wolf, Asselmann, Tollefson.

Data Availability: The datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.

Keywords: prostatectomy; robotic surgical procedures; artificial intelligence; computer vision system; computer-aided surgery

© 2024 by American Urological Association Education and Research, Inc.