The LH-Radiology AI model service is a web service that allows communication with different AI models. The LibreHealth RIS uses this service for two-way communication (i.e., training and inference) with different types of AI models for radiology. The outputs of the models are primarily meant to be shown in the DICOM viewer embedded within lh-radiology, but they may also be used by other parts of the system, for example to prioritize a study list, assign a specific type of study list to a specific radiologist user, or schedule different modalities for the PACS to capture.
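To make the interaction concrete, here is a minimal sketch of how a client such as lh-radiology might request inference from the model service over HTTP. The base URL, endpoint path, and payload fields are placeholders for illustration only, not the service's actual API.

```python
# Hypothetical inference request to the AI model service.
# Endpoint path and payload fields are illustrative placeholders.
import requests

MODEL_SERVICE_URL = "http://localhost:8080"  # assumed local deployment

payload = {
    "study_uid": "1.2.840.113619.2.55.3",      # hypothetical DICOM study UID
    "modality": "CXR",                          # lets the service pick the right model
    "image_urls": ["http://pacs.local/wado?objectUID=example"],
}

response = requests.post(f"{MODEL_SERVICE_URL}/inference", json=payload, timeout=60)
response.raise_for_status()

# The predictions returned here would be rendered as overlays in the embedded
# DICOM viewer, or used elsewhere (e.g., to reprioritize the study worklist).
print(response.json())
```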
This project is to implement a hook in the OHIF extension so that, based on the imaging modality (CXR, Mammo, Head CT, Abdomen CT), the appropriate AI model is called and its type of output is shown. The main types of model that will need to be implemented in the AI model service as part of the project are as follows:
There is no limit, but we want to use a model trained on radiology images; COCO was just given as an example. For the preliminary task, to show your skills, it's fine to use any number of models and show that the correct model gets selected for a given imaging study (see the sketch below).
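A minimal sketch of what "selecting the correct model for a given imaging study" could look like on the model-service side, assuming the request carries the study's modality or description. The model names and the select_model helper are hypothetical placeholders, not part of the existing lh-radiology code.

```python
# Hypothetical modality-to-model dispatch for the AI model service.
from typing import Dict

# Map an imaging study type to the model that should handle it.
MODEL_REGISTRY: Dict[str, str] = {
    "CXR": "densenet121-chest-xray",
    "MAMMO": "mammography-classifier",
    "HEAD CT": "head-ct-segmentation",
    "ABDOMEN CT": "abdomen-ct-detection",
}

def select_model(study_type: str) -> str:
    """Return the registered model for a study type, raising if none is configured."""
    key = study_type.strip().upper()
    if key not in MODEL_REGISTRY:
        raise ValueError(f"No AI model registered for study type '{study_type}'")
    return MODEL_REGISTRY[key]

# The OHIF hook would pass the study's modality along with the image references;
# the service then routes the request to the matching model.
print(select_model("CXR"))          # densenet121-chest-xray
print(select_model("Abdomen CT"))   # abdomen-ct-detection
```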
Hello, my name is Ryan Silu. Regarding the AI model, I have tried to open “DenseNet121_aug4_pretrain_WeightBelow1_1_0.829766922537.pkl” using pickle to inspect the data, but the output I got is: 119547037146038801333356. How do I see the full content of the data, or should I just use the sample training data? And for object detection and segmentation, where would I find the images to use?
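A likely explanation: that number matches the magic number PyTorch writes at the start of its legacy serialization format, which suggests the file was saved with torch.save (or exported via fastai) rather than the plain pickle module, so pickle.load only returns the first object in the stream. A minimal sketch for inspecting it, assuming it really is a torch/fastai checkpoint (if it was exported as a full fastai Learner, fastai would also need to be installed for unpickling to succeed):

```python
# Inspect a .pkl file assumed to be saved with torch.save / fastai.
import torch

path = "DenseNet121_aug4_pretrain_WeightBelow1_1_0.829766922537.pkl"

# torch.load understands the full torch serialization format;
# map_location="cpu" avoids needing a GPU just to look at the file.
# On recent PyTorch versions you may need weights_only=False to unpickle full objects.
obj = torch.load(path, map_location="cpu")

print(type(obj))
# If it is a plain state_dict, list the layer names and tensor shapes.
if isinstance(obj, dict):
    for name, value in obj.items():
        print(name, getattr(value, "shape", None))
```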