Project: Neural network-based object detection of anatomical structures and medical artifacts in Virtual Reality

Many procedures in surgery and interventional radiology are recorded as videos that are used for review, training, and quality monitoring. These videos contain at least three artifacts of interest: (1) anatomical structures such as organs, tumors, and tissues; (2) medical equipment; and (3) medical information about the patient that is overlaid or described. It would be immensely helpful for review and search if these could be identified and automatically labeled in the videos.

In parallel, there is a need to scale the apprenticeship model of being present in a procedure room. Virtual reality and live video streams have gained traction in recent years as ways to provide an immersive experience of participating in such procedures. This project will therefore use deep learning to scale the apprenticeship model of training future providers by performing object detection and then automatically labeling the artifacts of interest.

The following will mean successful completion of the project:

  1. Train a model that can perform object detection on the Kvasir dataset (see the sketch after this list)
  2. Convert the Kvasir video to an immersive experience on a VR headset such as Google Cardboard (or another mobile VR platform) or Oculus
  3. Implement inference of the object detection model from Step #1 in the VR experience.
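For step 1, a minimal training sketch is given below. It assumes a PyTorch/torchvision Faster R-CNN detector and Kvasir frames with bounding-box annotations; the architecture, class count, and annotation format are illustrative assumptions, not project requirements.

```python
# Hypothetical sketch of step 1: fine-tune a COCO-pretrained Faster R-CNN
# on Kvasir-style frames. Class count and annotation format are assumed.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # assumed: background + anatomy + instrument + overlay text

def build_model(num_classes):
    # Start from a COCO-pretrained detector and replace its box-predictor head.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_model(NUM_CLASSES)
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One illustrative training step with a dummy frame and box; a real run would
# iterate over a DataLoader built from Kvasir frames and their annotations.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 100.0, 300.0, 300.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```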

Mentors: @judywawira @pri2si17

Skills required: Python or ML.Net, and C# programming for the Unity SDK

Intro tasks for this project:

  1. Build a cross-platform object detection model on the Cityscapes dataset that can run on mobile phones (an export sketch follows this list)
  2. Implement this as an APK (Android mobile app) built through the GitLab CI
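As an illustration of the mobile-deployment aspect of intro task 1, the snippet below sketches exporting a trained detector to TensorFlow Lite so it can run on a phone. The SavedModel directory and output filename are placeholders, and TensorFlow Lite is just one possible route.

```python
# A minimal sketch (assumptions noted) of making a Cityscapes-trained detector
# phone-friendly by exporting it to TensorFlow Lite.
import tensorflow as tf

saved_model_dir = "exported_cityscapes_detector"  # assumed output of training

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# Post-training quantization keeps the model small enough for mobile inference.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("cityscapes_detector.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file can then be bundled into the Android app and built into the APK by the CI pipeline.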

Hello everyone, I am Shivam Agarwal, currently in my 3rd year at BITS Pilani, pursuing an integrated course in CS and Economics. LibreHealth is a one-of-a-kind organization, and I hope to contribute to and learn from it in GSoC 2021. I have one question: are the intro tasks given to gauge the ability of the contributor? (Because the model trained on the Cityscapes dataset can't be used for transfer learning to the actual project on the Kvasir dataset.) Is this the only intention, or am I missing something here?

Yes, it is to gauge the ability of the contributor, like a prerequisite. Cityscapes is a good example of the kind of architecture that needs to be selected for this project, and being able to deploy it to an app (even better, a VR app) is another such prerequisite.

Hi, my name is Shivaditya. I am studying at VIT Chennai, and my major is Computer Science. I would love to contribute to this project. I have completed the prerequisite. I wanted to know whether doing the prerequisite as a non-VR app would satisfy it. Also, I would like to know whether transfer learning is allowed for the prerequisite project.

Hi,

I have implemented the trained model as a virtual reality app using Unity. For model inference I used a DeepLab model for object detection, together with TensorFlow Lite. I have written a few unit tests for the project and used a local runner for building.
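For anyone who wants to exercise a TensorFlow Lite model like this outside Unity, a rough Python sketch of the inference path is shown below; the model path, input shape, and float32 dtype are placeholders, not the repo's exact code.

```python
# Rough sketch of driving a TensorFlow Lite detector from Python, e.g. for
# quick checks alongside the Unity app. File name and input dtype are assumed.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detector.tflite")  # assumed path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A dummy frame matching the model's expected input shape (assumes float32).
frame = np.random.random_sample(tuple(input_details[0]["shape"])).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)
```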

Eager to receive your feedback,

Shivaditya

Links: GitLab link for the repo

Hello everyone, my name is Milind Thakur. I am currently in my 3rd year pursuing a BTech in Electronics and Telecommunication at IIIT Naya Raipur. I have good experience in Python, machine learning, deep learning, OpenCV, Flask, TensorFlow, and PyTorch. LibreHealth is a great organization, and I would like to contribute to it in GSoC 2021.

Hello everyone, I am Rohan Gupta!

I am a CS sophomore at NIT Durgapur. I have good experience in deep learning and computer vision, and have published conference papers. I have previously worked extensively with CNNs on classification and regression tasks in various domains, including medical diagnosis. I also have sound knowledge of Kotlin and Android app development, having built good UIs and deployable apps. I have been a regular contributor to the OrcaSound organization in the past, and I have worked with Keras and TensorFlow to build deployable ML apps.

I am interested in developing the project: Neural network-based object detection of anatomical structures and medical artifacts in Virtual Reality. If I happen to get the chance, I assure you of my fullest dedication and commitment.

Looking forward to an awesome GSoC experience.

Thanks and Regards

Rohan Gupta

GitHub link: rohankrgupta (Rohan Kumar Gupta) · GitHub

Yes, a non-VR app is fine for the starter task, but a VR app would be a nice way to showcase your skills. We will need to do transfer learning for the model, but just training on Cityscapes is fine for the intro task.
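For illustration only, that transfer-learning step could look roughly like the sketch below: load the detector checkpoint from the Cityscapes intro task, swap the classification head for Kvasir classes, and freeze the backbone. The checkpoint path, class counts, and the Faster R-CNN choice are all assumptions, not project decisions.

```python
# Hedged sketch: reuse a Cityscapes-trained Faster R-CNN for Kvasir classes.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CITYSCAPES_CKPT = "cityscapes_detector.pth"   # assumed intro-task checkpoint
CITYSCAPES_NUM_CLASSES = 9                    # assumed class count from the intro task
KVASIR_NUM_CLASSES = 4                        # assumed: background + 3 Kvasir classes

# Rebuild the intro-task architecture and load its weights.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=CITYSCAPES_NUM_CLASSES)
model.load_state_dict(torch.load(CITYSCAPES_CKPT, map_location="cpu"))

# Replace the head so the detector predicts Kvasir classes instead.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, KVASIR_NUM_CLASSES)

# Freeze the backbone so only the new head is trained on Kvasir frames.
for param in model.backbone.parameters():
    param.requires_grad = False
```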

Hello I am Chandra Irugalbandara :blush:

I am a final-year engineering student at the University of Moratuwa. I have good experience in deep learning and computer vision. I have worked with the Kvasir datasets (Kvasir v2, Kvasir-Capsule, Hyper-Kvasir) over the past couple of months (reach out if you want access to the GitHub repo), and we found that an ImageNet-pretrained DenseNet201 works really well for Kvasir dataset classification. At the moment I am working on a GUI with Grad-CAM to work with the models we built on the Kvasir datasets. I am also familiar with React and Flutter development.
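A rough sketch of that kind of DenseNet201 transfer-learning setup is shown below; the 8-class head and 224x224 input size are illustrative assumptions (Kvasir v2 has 8 classes), not the exact configuration from the repo.

```python
# Sketch of an ImageNet-pretrained DenseNet201 with a new head for Kvasir
# classification. Class count and input size are assumptions.
import tensorflow as tf

NUM_CLASSES = 8  # assumed, matching the 8 classes of Kvasir v2

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone initially

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```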

I am interested in contributing to this project. If I happen to get the chance, I assure you of my fullest dedication and commitment.

Looking forward to an awesome GSoC experience. @sunbiz @judywawira

github : https://github.com/chandralegend

linkedin : https://www.linkedin.com/in/chandralegend/

Great to hear that you have experience with the Kvasir dataset. Please complete the starter tasks for this project and write a strong application describing how you'll complete the project. Feel free to share drafts with us and allow us to comment on them.

Hi,

I have implemented a trained model for vehicle detection and counting using OpenCV. For inference I used a DeepLab model for object detection, together with TensorFlow and CUDA. For more accurate detection I used YOLO, and I have previously worked with R-CNN to perform regression. I have implemented this object detection model, which is very helpful in daily traffic surveillance.

I am interested in developing the project: Neural network-based object detection of anatomical structures and medical artifacts in Virtual Reality. If I happen to get the chance, I assure you of my fullest dedication and commitment.

Looking forward to an awesome GSoC experience.

Eager to receive your feedback. Thank you, sir.

@Soumokanti123 @chandralegend @shivam-7500, please send a proposal for us to review your plan to implement this project. Please also show us a repo where you have implemented the intro tasks.

@sshivaditya your app codebase looks interesting, but I wasn’t able to see the classes marked correctly, or their labels showing up. Maybe it’s a VR headset configuration issue? Please also send your draft proposal for review.

@rohankrgupta I couldn’t find a repo with the intro tasks in your GitHub.

Hi,

I have shared the draft proposal through the GSoC dashboard. The classes are visible, but due to limitations of the screen resolution it appears that way.

Eager to receive your feedback

Shivaditya

I feel you are so strong

I urge you to focus on your proposal – there’s not much time left.

@sshivaditya, I provided some comments on your proposal.

To everyone writing proposals for this project: please provide as much implementation detail as possible. Think about what your UI in VR would look like and provide mockups in the proposal. Also provide a screen-by-screen breakdown of the flow of the app, including how the labels, once shown, can be turned off and how additional comments can be added to the annotations. You can think of this as the interaction design; please include it in your proposals.
