The LH-Radiology AI model service is a web service that allows communication with different AI models. The LibreHealth RIS uses this service to perform two-way communication (i.e., training and inference) with different types of AI models for radiology. The outputs of the models are meant to be shown in the DICOM Viewer embedded within lh-radiology, but they may also be used by other parts of the system, for example to prioritize a study list, assign a specific type of study list to a specific radiologist user, or schedule different modalities for the PACS to capture.
This project is to implement a hook in the OHIF extension so that, based on the imaging modality (CXR, Mammo, Head CT, Abdomen CT), the appropriate AI model is called and its output is shown. The main types of model that will need to be implemented in the AI model service as part of the project are as follows:
- object detection (multilabel outputs like COCO)
- object detection (bounding box) - CheXNet already integrated
- segmentation (surrounding the region of interest)
- study list filtering based on binary classification (change the study list) - #79 already integrated
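Since the hook has to pick the right model for a study's modality, the core of the service is a modality-to-model dispatch. A minimal sketch of that idea is below; the registry contents, function names, and prediction payloads are illustrative assumptions, not the actual lh-radiology-aimodel-service API:

```python
from typing import Callable, Dict, List

# A model takes raw image bytes and returns JSON-serializable predictions
# that the embedded DICOM Viewer could render (hypothetical shapes).
ModelFn = Callable[[bytes], List[dict]]

# Illustrative registry: modality string -> model callable (assumption).
MODEL_REGISTRY: Dict[str, ModelFn] = {
    "CXR": lambda img: [{"type": "bbox", "label": "opacity", "box": [10, 20, 100, 120]}],
    "MAMMO": lambda img: [{"type": "mask", "label": "mass", "mask": [[0, 1], [1, 0]]}],
}

def run_inference(modality: str, image_bytes: bytes) -> List[dict]:
    """Select the model registered for the study's modality and run it."""
    model = MODEL_REGISTRY.get(modality.upper())
    if model is None:
        raise KeyError(f"no model registered for modality {modality!r}")
    return model(image_bytes)
```

In the real service this dispatch would sit behind a Flask route, and the registry would hold the actual trained models (e.g., CheXNet for CXR).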
Preparatory tasks:
- Add a new AI model for an existing modality and show its output as JSON from the AI model service.
- Fix one of the 30+ open issues in the lh-radiology module.
Skills:
- Python deep learning (PyTorch or TensorFlow) (critical skill)
- Flask API development (required skill)
- ReactJS, to be able to modify the OHIF extension (good to have)
- Java and the Spring web framework (good to have)
@judywawira and @r0bby
Hello, I would like to contribute to this project. I have set up the Flask backend for lh-radiology-aimodel-service locally and have started on the API web interface with redoc-cli.
Regarding the preparatory tasks, is there a limit on the number of AI models that can be added? I would like to add a YOLO model trained on the COCO dataset, or should I use a different model?
There is no limit, but we want models trained on radiology images; COCO was just given as an example. For the preliminary task, to show your skills, it's fine to use any number of models and show that the correct model gets selected for a given imaging study.
Thank you for your kind reply. I would first like to add a model trained on the COCO dataset to this service to familiarise myself with the project.
Hello, I have added a new YOLOX model, trained on the COCO dataset, to the lh-radiology-aimodel-service project. I added two interfaces in Flask, ran the service, and tested it with swagger-ui as shown here.
The GET method returns information about the model.
The POST method runs inference on the model and returns the result as JSON.
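A minimal sketch of what the two handlers might return; the function names, route descriptions, and payload fields are illustrative placeholders, not the actual endpoints of my branch:

```python
def model_info() -> dict:
    """Handler behind the GET route: static metadata about the loaded model."""
    # Hypothetical metadata payload (assumed fields).
    return {"name": "YOLOX", "dataset": "COCO", "task": "object-detection"}

def infer(image_bytes: bytes) -> dict:
    """Handler behind the POST route: run detection and return JSON.

    A real implementation would decode image_bytes and run the network;
    the detection below is a stand-in so the response shape is visible.
    """
    return {"detections": [{"class": "person", "score": 0.88, "bbox": [34, 50, 200, 310]}]}
```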
Does this approach meet the requirements, or should I make changes in certain areas?
Hello, my name is Ryan Silu. Regarding the AI model, I tried to open “DenseNet121_aug4_pretrain_WeightBelow1_1_0.829766922537.pkl” with pickle to inspect the data, but the output I got was: 119547037146038801333356. How do I get to see the full contents of the file, or should I just use the sample training data? Also, for object detection and segmentation, where would I find the images to use?
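For reference, a plain `pickle` round-trip on an ordinary Python object looks like the sketch below. The DenseNet121 `.pkl` file is likely a PyTorch checkpoint (an assumption based on the filename), which embeds tensor storages and would normally be loaded with `torch.load(path, map_location="cpu")` rather than raw `pickle.load`; reading it byte-by-byte yields opaque numbers like the one above.

```python
import io
import pickle

# Hypothetical stand-in for a checkpoint: a small dict of "weights".
checkpoint = {"layer1.weight": [0.1, 0.2], "epoch": 12}

# Serialize to an in-memory buffer, then load it back.
buf = io.BytesIO()
pickle.dump(checkpoint, buf)
buf.seek(0)

restored = pickle.load(buf)
print(sorted(restored))  # prints ['epoch', 'layer1.weight']
```

If `pickle.load` on the real file fails or prints garbage, that usually means the file needs the library that wrote it (here, presumably PyTorch) to deserialize correctly.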