EURAXESS

FMAS: Postdoctoral Position in Utilizing Foundational Models for Analyzing Surgical Videos

INSERM U1099 Laboratoire du Traitement du Signal et de l'Image (LTSI)
21 Mar 2024

Job Information

Organisation/Company
INSERM U1099 Laboratoire du Traitement du Signal et de l'Image (LTSI)
Research Field
Computer science » Other
Researcher Profile
Recognised Researcher (R2)
Country
France
Application Deadline
Type of Contract
Temporary
Job Status
Full-time
Hours Per Week
35
Offer Starting Date
Is the job funded through the EU Research Framework Programme?
Not funded by an EU programme
Is the Job related to staff position within a Research Infrastructure?
No

Offer Description

The Marie S. Curie Postdoctoral Fellowship (MSCA-PF) programme is a highly prestigious EU-funded scheme. It offers talented scientists a unique chance to set up 2-year research and training projects with the support of a supervising team. Besides providing an attractive grant, it represents a major opportunity to boost the career of promising researchers.

The LTSI - UMR 1099 MediCIS Team, INSERM / Université de Rennes, is thus looking for excellent postdoctoral researchers with an international profile to write a persuasive proposal to apply for a Marie S. Curie Postdoctoral Fellowship grant in 2024 (deadline of the EU call set on 11 September 2024). The topic and research team presented below have been identified in this regard.

Research field: Information Science and Engineering (ENG), Artificial Intelligence, Computer Vision, Surgical Data Science

Keywords: Artificial Intelligence, Computer Vision, Surgical Data Science

Research project description:

Context:

The operating room (OR) of the future will require the seamless collaboration of human actors with advanced technology such as surgical robots and artificial intelligence. This has been fully identified in the recent international initiative regarding Surgical Data Science (Maier-Hein L, et al. 2017). However, in order for this level of collaboration to take place, the technology must be imbued with situational awareness (SA). This requires technology to be actively aware of the intentions of its human users, their behaviours, as well as the identities and positions of all surgical tools. Systematic monitoring of procedural surgical aspects, associated with clinical data and implemented at large scale in clinical setups, will allow for the rigorous data collection and analysis necessary to train these artificial intelligence algorithms for SA. Crucially, it will also allow the research community to optimise surgical processes by improving hospital logistics, reducing surgical errors, and precisely calibrating how innovative digital devices are used in the OR.

Objectives:

We believe that such a level of automatic procedural situation awareness is crucial across all intelligent devices in the OR and requires three basic components: perception of the OR, comprehension of the current surgical state, and projection of said state into potential future states. For the first, the artificial intelligence needs to be aware of the state of all effectors (humans such as the surgeon and surgical staff, as well as devices such as robotic systems, intraoperative imaging devices or image-guided systems), including their locations, displacements, movements and interactions. For the second, this information needs to be augmented with the activity of each effector and the intent of said activity. Lastly, in order to project information about the surgery into the future, the artificial intelligence must also be aware of how the sequence already observed coheres with that particular type of surgery and the various paths it could soon take.

Perception is given by sensors available during surgery. We distinguish two levels: the micro level, focusing on the operative field of view, and the macro level, focusing on the OR field of view. At the micro level, in minimally invasive surgery, all surgical activities are performed through an endoscope giving direct access to video data. With surgical robots such as the da Vinci system, all micro-level surgical activities are applied through a robotic interface, giving additional direct access to kinematic data and surgical videos, which will be collected and annotated. At the macro level, we will rely on larger views of the OR, including PTZ, 360° and depth cameras, that will allow capturing the 3D structure of the OR and understanding the spatial layout of objects and people in it. This information will rely on surgical data collected during clinical cases. The videos recorded with the 360° and depth cameras will be annotated by expert surgeons and expert OR staff, including the OR areas and elements: structure, devices and human actors.

Comprehension of the procedural situation can be conceptualized as the augmentation of the perception of each effector with information about its intention and activity. At the micro level, automatic comprehension of the surgeon's activities is usually called “surgical workflow recognition”. Such recognition will be performed using data-driven approaches with machine learning. We will develop real-time surgical phase and step recognition approaches from video and kinematic data, relying on expert annotations of the aforementioned videos of the surgical field of view with regard to surgical phases, steps and events, and on dedicated terminologies built with surgeons and compatible with the OntoSPM ontology. At the macro level, image segmentation will allow the classification and segmentation of the OR areas and elements: structure, equipment and personnel. Computer vision-based object detection will allow the identification and localization of key elements in the OR such as surgical instruments, medical equipment and staff. Pose estimation and tracking algorithms applied to depth-camera data will determine the position and orientation of surgical instruments, medical devices, equipment and personnel, monitor the movement of objects and staff in real time, and track staff posture.
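To give a flavour of one common ingredient of surgical workflow recognition (a minimal sketch with hypothetical phase labels, not the team's actual pipeline): frame-level phase predictions from a video model are typically noisy, and a sliding-window majority vote is a standard post-processing step that enforces temporal coherence.

```python
from collections import Counter

def smooth_phase_predictions(frame_phases, window=5):
    """Temporally smooth per-frame surgical-phase labels with a
    sliding-window majority vote. `frame_phases` is a list of phase
    labels, one per video frame; `window` is the (odd) vote width."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_phases)):
        lo, hi = max(0, i - half), min(len(frame_phases), i + half + 1)
        votes = Counter(frame_phases[lo:hi])
        smoothed.append(votes.most_common(1)[0][0])
    return smoothed

# Example with illustrative labels: isolated mispredictions are removed.
noisy = ["incision", "incision", "suturing", "incision", "incision"]
cleaned = smooth_phase_predictions(noisy, window=5)
```

In practice the per-frame labels would come from a learned video model and the smoothing would be one of several temporal-consistency mechanisms; this sketch only illustrates the principle.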

Projection completes this approach and allows surgical devices to also exhibit predictive intelligence, anticipating future activities and configuring themselves accordingly. This will require models such as ontologies, graphs, and first-order logic in order to represent the current state (i.e. knowledge gained through perception and comprehension) as well as the other states that could evolve from it. These models require knowledge elicitation and analysis of the literature, for instance by means of Generic Surgical Process Models (gSPM). With these models, intelligent tools can predict future states such as upcoming surgical steps and phases, evaluate and refine uncertainties in their current and past comprehension of the surgical state, and anticipate possible adverse events or technical errors. Such information could then be communicated to the devices in the OR for optimal assistance. Lastly, these models also structure surgical data in a way that allows for the evaluation and optimization of surgical processes as a whole, helping to make the best use of hospital resources and ensure patient quality of care.
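As a simple illustration of the projection idea (a sketch only: the phase names and transition probabilities below are invented for illustration, not taken from any real gSPM), a process model can be encoded as a weighted phase-transition graph from which the most probable next phases are ranked.

```python
# Toy generic Surgical Process Model (gSPM) encoded as a phase-transition
# graph. All phase names and probabilities here are purely illustrative.
TRANSITIONS = {
    "preparation": {"incision": 0.9, "preparation": 0.1},
    "incision":    {"dissection": 0.8, "haemostasis": 0.2},
    "dissection":  {"haemostasis": 0.6, "suturing": 0.4},
    "haemostasis": {"suturing": 0.7, "dissection": 0.3},
    "suturing":    {"closure": 1.0},
    "closure":     {},
}

def predict_next_phases(current_phase, top_k=2):
    """Rank the most probable next surgical phases given the current one,
    as a device anticipating upcoming activities might do."""
    successors = TRANSITIONS.get(current_phase, {})
    ranked = sorted(successors.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]
```

A real system would learn such transition statistics from annotated procedures and combine them with richer representations (ontologies, temporal context, uncertainty estimates); the graph above only shows the shape of the idea.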

In the context of the project, we will focus on collecting and annotating data from robot-assisted hysterectomy. We will use the existing OntoSPM ontology. Foundational models will be studied both for analyzing surgical videos and data for procedural recognition and, in a second step, for predicting future states and best surgical strategies from the recognized workflow.

1. Maier-Hein L, Eisenmann M, Sarikaya D, Marz K, Collins T, Malpani A, …, Jannin P (last co-author). Surgical data science - from concepts toward clinical translation. Med Image Anal. 2022;76:102306.

2. Huaulme A, Harada K, Nguyen QM, Park B, Hong S, Choi MK, …, Jannin P. PEg TRAnsfer Workflow recognition challenge report: Do multimodal data improve recognition? Comput Methods Programs Biomed. 2023;236:107561.

3. Nyangoh Timoh K, Huaulme A, Cleary K, Zaheer MA, Lavoue V, Donoho D, Jannin P. A systematic review of annotation for surgical process model analysis in minimally invasive surgery based on video. Surg Endosc. 2023;37(6):4298-314.

4. Lalys F, Jannin P. Surgical process modelling: a review. Int J Comput Assist Radiol Surg. 2014;9(3):495-511.

5. Gibaud B, Forestier G, Feldmann C, Ferrigno G, Goncalves P, Haidegger T, et al. Toward a standard ontology of surgical process models. Int J Comput Assist Radiol Surg. 2018;13(9):1397-408.

Supervisor

The Postdoctoral Fellow will be supervised by Pierre Jannin, INSERM Research Director at the Medical School of the University of Rennes (France). He is the director of the MediCIS research group within UMR 1099 LTSI, jointly run by the INSERM research institute and the University of Rennes. He has more than 30 years of experience in designing and developing computer-assisted surgery systems. His research topics include surgical data science, surgical robotics, image-guided surgery, augmented and virtual reality, modeling of surgical procedures and processes, analysis of surgical expertise, and surgical training. He has authored or co-authored more than 150 peer-reviewed international journal papers. He was President of the International Society for Computer Aided Surgery (ISCAS) from 2014 to 2018. He is Editor-in-Chief of the journal Computer Assisted Surgery (Taylor & Francis).

https://medicis.univ-rennes1.fr/

https://scholar.google.com/citations?user=yr_qKA0AAAAJ&hl=fr

Department

The Laboratory of Signal and Image Processing (LTSI) is an INSERM laboratory at the University of Rennes with about 150 researchers dedicated to biomedical engineering research.

The MediCIS team, located at the medical university, is part of the LTSI and focuses on the study of surgical data science for different applications such as assistance in the OR, surgical robotics and evaluation and training, with the participation of surgeons from the Rennes University Hospital. The primary surgical applications of MediCIS include functional neurosurgery, ob/gyn, urology, and orthopaedics. This research team has published pioneering work in surgical skill assessment, augmented reality in surgery, surgical workflow analysis and procedure modeling, and ontologies for medical imaging and surgery.

https://medicis.univ-rennes1.fr/

 

Requirements

Research Field
Computer science
Education Level
PhD or equivalent
Skills/Qualifications

- A Ph.D. degree in computer science, biomedical engineering, or a related field.

- Strong background in machine learning, computer vision, and image/video processing.

- Proficiency in programming languages such as Python, C++, or MATLAB.

- Experience with deep learning frameworks (e.g., TensorFlow, PyTorch) and relevant libraries.

- Prior exposure to medical imaging or surgical data analysis is advantageous but not mandatory.

- Excellent communication skills and ability to work collaboratively in a multidisciplinary team environment.

 

Languages
ENGLISH
Level
Excellent

Additional Information

Eligibility criteria

Academic qualification: By 11 September 2024, applicants must be in possession of a doctoral degree, defined as a successfully defended doctoral thesis, even if the doctoral degree has yet to be awarded.

Research experience: Applicants must have a maximum of 8 years full-time equivalent experience in research, measured from the date applicants were in possession of a doctoral degree. Years of experience outside research and career breaks (e.g. due to parental leave), will not be taken into account.

Nationality & Mobility rules: Applicants can be of any nationality but must not have resided more than 12 months in France in the 36 months immediately prior to the MSCA-PF call deadline on 11 September 2024.

Selection process

We encourage all motivated and eligible postdoctoral researchers to send their expressions of interest through the EU Survey application form (https://ec.europa.eu/eusurvey/runner/2024-Formulaire-Candidature-Demarche-MSCA-PF) before 5 May 2024. Your application shall include:

• a CV specifying: (i) the exact dates for each position and its location (country) and (ii) a list of publications;

• a cover letter including a research outline (up to 2 pages) identifying the research synergies with the project supervisor(s) and proposed research topics described above.

 

Estimated timetable

Deadline for sending an expression of interest

5 May 2024

Selection of the most promising application(s)

May – June 2024

Writing the MSCA-PF proposal with the support of the above-mentioned supervisor(s)

June – September 2024

MSCA-PF 2024 call deadline

11 September 2024

Publication of the MSCA-PF evaluation results

February 2025

Start of the MSCA-PF project (if funded)

May 2025 (at the earliest)

Website for additional job details

Work Location(s)

Number of offers available
1
Company/Institute
INSERM U1099 Laboratoire du Traitement du Signal et de l'Image (LTSI)
Country
France
Geofield

Contact

City
RENNES
Website
Street
LTSI, Université de Rennes 1, Campus de Beaulieu, Bât 22. 35042 Cedex - Rennes - FRANCE.
Postal Code
35042
E-Mail
contact@2PE-bretagne.eu