My initial idea was to build a TF.js app so that our servers wouldn’t bear too much load. But after further research I realized that models such as CheXNet are very large and would be infeasible to run client-side. So I made a Django backend with a DRF API around a lightly trained MobileNet, for now trained on chest X-rays. I am yet to make the frontend; I plan to do so in React.js. I’ll be hosting it in 3-4 days, and I’ll share the GitHub link for the codebase shortly. Also, before the final proposal submission I’ll implement the full CheXNet and host it. If you have any suggestions, they’ll be much appreciated.
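For reference, the kind of JSON payload such a prediction endpoint might return can be sketched framework-agnostically. Everything below (field names, the image id, the disease label) is illustrative and not taken from the actual repo:

```python
import json

def build_prediction_payload(image_id, predictions):
    """Serialize model output as JSON.

    predictions: list of (disease_label, probability) pairs,
    as produced by a classifier such as the MobileNet above.
    """
    return json.dumps({
        "image": image_id,
        "predictions": [
            {"label": label, "probability": round(float(p), 4)}
            for label, p in predictions
        ],
    })

# Hypothetical example output for one X-ray
print(build_prediction_payload("xray_001.png", [("Pneumonia", 0.8132)]))
```

Inside the Django app, a DRF view would presumably just wrap a dict like this in a `Response`.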
Hello @pri2si17, please take a look at my GitHub repo. I have made a Django app as described above with one trained MobileNet model. I plan to improve the model (train it from scratch, since there are no pre-trained models for grayscale input at 224×224 on TF Hub, so I had to use some temporary workarounds to make it work). Please do let me know if you have any suggestions.
Last summer, I worked on the creation of Spatial Analysis Technology. In this project, I was responsible for data analysis, dataset creation, research, and object detection. After researching 22 real-time object detection and segmentation algorithms, we chose RetinaNet as it was best suited for our problem; it achieved an mAP of 0.72. Have a look here: https://milestonezero.net/index.php/edifice-2/ Here is one of the dataset converters I curated at that time. Currently, I am working on talking-face generation using GANs.
Given my prior experience, I could complete this project easily. I am wondering what my chances of getting selected are. Thank you.
Hi @pri2si17, I am really excited to work on this project. I would like to ask one thing: you mentioned it should automatically label specified diseases, so do we know the number of diseases beforehand? How many?
Hello Abhishek, it depends on what disease classification you are trying to do and what dataset you are using. For example, CheXNet is trained on the recently released ChestX-ray14 dataset, which contains 112,120 frontal-view chest X-ray images, individually labeled with up to 14 different thoracic diseases. I hope this helps.
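To make the “14 different thoracic diseases” part concrete: CheXNet treats this as multi-label classification, with an independent sigmoid score per disease rather than a single softmax over all of them, since one X-ray can show several findings at once. A minimal sketch (the three label names below are from ChestX-ray14; the logit values are made up):

```python
import math

def sigmoid(x):
    """Standard logistic function, applied independently per label."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, names, threshold=0.5):
    """Return every disease whose sigmoid score clears the threshold."""
    return [n for n, z in zip(names, logits) if sigmoid(z) > threshold]

names = ["Pneumonia", "Pneumothorax", "Effusion"]
logits = [1.5, -2.0, 0.3]  # invented model outputs for one image
print(predict_labels(logits, names))  # ['Pneumonia', 'Effusion']
```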
Thanks for the reply @SinghKislay. My question was mainly about the final tool we’ll be developing. Do we add the functionality of adding diseases to the web application? And what is the expected outcome of this project? @pri2si17
Hello, I am Gautham P Krishnan. I am a developer working with machine learning, deep learning, Flutter, Python, and JS, and I am interested in this project.
Hello @pri2si17, should I start working on a POC?
Yeah, that’s how we’re evaluating your coding, as well as research skills.
That’s up to you. The code however must be yours entirely.
Hi @ZER-0-NE, well, the labelling is not per disease. It’s either segmentation or bounding boxes, so it is generic across diseases. As @r0bby said, it’s up to you to add the functionality, but the features mentioned should be there.
Hi @gauthampkrishnan, you are welcome to contribute to this project. Please read the deliverables. We are expecting a GitHub repo from your side.
Hi @manideep2510, welcome to the community. You should focus on both. We need comprehensive research (not a research paper, but it should show that you have done proper research before starting) and a code POC.
Hi @SinghKislay, I will look at it and let you know. Btw, you can preprocess the grayscale images to 3 channels and use them in the model. Till then, enhance your code and explore more.
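A minimal sketch of that preprocessing step, assuming NumPy arrays: repeat the single grayscale channel three times so the image matches the 3-channel input a TF Hub MobileNet expects (the array values here are placeholders, not real X-ray data).

```python
import numpy as np

def gray_to_rgb(img):
    """Stack a (H, W) grayscale image into (H, W, 3) by repeating the channel."""
    return np.repeat(img[..., np.newaxis], 3, axis=-1)

xray = np.random.rand(224, 224).astype(np.float32)  # placeholder image
rgb = gray_to_rgb(xray)
print(rgb.shape)  # (224, 224, 3)
```

All three channels carry the same information, so the pretrained RGB filters still get sensible inputs; training from scratch on true single-channel inputs, as proposed above, avoids the redundancy.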
Hi @MrAsimZahid, welcome to the community. Well, you need to do what is mentioned in the project before we can make any decision about selection. Also, you will have to submit a proposal for it. We need quality work, so work on it, and all the best.
Hi @pri2si17. I have a quick question.
Should the images predicted by a particular model have automated bounding boxes on them? Or is it okay just to get predictions on the X-ray images, e.g. that a particular image might show Pneumonia or Pneumothorax (without a bounding box defining the exact region)?
This might come in handy when the expert chooses to label the images according to the model’s predictions.
Can you let me know your views on this?
Hi @ZER-0-NE, yes, there should be bounding boxes on the predictions if bounding box is the selected labelling type. The whole purpose is to get as many labels as possible from the deep models themselves, with the radiologist/user correcting them or adding missing ones.
I guess your data needs to be labeled accordingly to get bounding box predictions, afaik?
Yes, there will be a training set with bounding boxes or segmentation, but you will need to adapt it for different datasets.
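As one example of the kind of adaptation that comes up: datasets store boxes in different conventions, e.g. pixel-space `(x, y, width, height)` versus normalized corner coordinates. A hedged sketch of such a converter (the function name and conventions are illustrative, not from any specific dataset here):

```python
def xywh_to_normalized_corners(box, img_w, img_h):
    """Convert a pixel-space (x, y, width, height) box to
    normalized (xmin, ymin, xmax, ymax) in [0, 1]."""
    x, y, w, h = box
    return (x / img_w, y / img_h, (x + w) / img_w, (y + h) / img_h)

# A 256x256-pixel box at (128, 128) in a hypothetical 1024x1024 X-ray
print(xywh_to_normalized_corners((128, 128, 256, 256), 1024, 1024))
# (0.125, 0.125, 0.375, 0.375)
```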