The LibreHealth Radiology Artifact Detection project aims to create an intelligent system that can identify and annotate similar image artifacts across multiple radiological studies based on user-selected regions of interest (ROI). This tool will enhance radiologist workflow by automatically detecting and marking similar artifacts across an entire worklist, reducing manual annotation time and improving consistency in artifact identification.
Core Functionality:
ROI Selection and Analysis
Interactive region selection tools
Feature extraction from selected artifacts
Artifact characteristic profiling
Annotation metadata storage
Multi-slice artifact tracking
Similarity Detection
Deep learning-based feature matching
Artifact pattern recognition
Cross-image similarity scoring
Confidence level calculation
False positive reduction
Automated Annotation
Consistent annotation styling
Automatic segmentation
Annotation propagation
Metadata synchronization
Version control for annotations
The deliverables of the project are as follows:
Develop an interactive ROI selection interface integrated with OHIF viewer
Create a deep learning model for artifact similarity detection
Implement automated annotation propagation across images
Build a review and validation interface for radiologists
Provide performance analytics and quality metrics
Create comprehensive documentation and training materials
The project will significantly improve radiological workflow by automating the tedious process of identifying and annotating similar artifacts across multiple images. The intelligent system will learn from radiologist-selected examples and propagate annotations consistently, maintaining the same style and metadata across all identified instances.
The integration with the existing LibreHealth Radiology viewer ensures seamless workflow incorporation while providing powerful new capabilities for artifact management. The system’s ability to learn from user selections and improve over time makes it an invaluable tool for maintaining consistent artifact documentation across large image sets.
This project will enhance the quality of radiological analysis by ensuring consistent artifact identification and documentation while significantly reducing the time required for manual annotation. The automated system will serve as a powerful assistant to radiologists, allowing them to focus more on diagnosis and less on repetitive annotation tasks.
This is an interesting project, given my research background in medical AI at one of India's premier institutes. I would love to explore the details a bit more.
Firstly, do we have access to any major proprietary radiology dataset, or are we interested in working with publicly available datasets such as MIMIC or CheXpert?
Secondly, which modalities are we interested in working with: MRI, CT, X-ray, etc.?
Furthermore, which anatomies and pathologies are we targeting: chest, brain, or GI tract scans, etc.?
Hi,
I’m Anushka Dudhe, a first-year B.Tech CSE (AI/ML) student, and I’m interested in contributing to the LibreHealth Radiology Artifact Detection project. I find the idea of automatically identifying and annotating similar image artifacts across multiple radiological studies particularly impactful, especially in terms of reducing repetitive manual work for radiologists and improving consistency in artifact documentation.
The workflow of selecting a region of interest (ROI) and propagating consistent annotations across related images, along with radiologist review and validation, seems like a strong and practical approach. I’m especially interested in understanding how the ROI-based artifact profiling, similarity detection, and annotation propagation are planned to integrate with the existing LibreHealth Radiology viewer and OHIF workflow.
I’m exploring this project as a prospective Google Summer of Code 2026 contributor and would love to start by understanding the current codebase, viewer integration points, or any well-scoped initial tasks that would help move this project forward. Looking forward to collaborating with the community and mentors on this.
I am very interested in contributing to the LibreHealth Radiology Artifact Detection project. Given my experience building PyTorch-based tracking and mapping tools, the interactive, dynamic nature of this system really stands out to me.
Before drafting my formal proposal, I’d love to get your feedback on a few architectural and data strategies I’ve been outlining:
1. Model Architecture (Similarity vs. Classification): Because the system hinges on tracking user-selected ROIs, I am planning the backend around a one-shot/few-shot similarity matching architecture (such as a Siamese Network trained with contrastive loss), rather than a standard static classifier. Does this align with your vision for the model?
2. Dataset Strategy & Synthetic Generation: Since publicly available datasets with explicitly annotated artifacts are quite rare, my proposed approach is to use clean public datasets (like MIMIC-CXR or TCIA) and synthetically inject artifacts (e.g., motion blur, noise, or simulated hardware overlays). This would allow us to generate perfect bounding-box ground truths to pre-train the feature extractor. Would you support this approach for the training phase?
3. Project Scope and Phasing: To ensure we establish a stable, seamless integration between the PyTorch backend and the OHIF viewer frontend, would you recommend building the MVP around 2D radiographs first, and then scaling the multi-slice tracking to volumetric data (CT/MRI) in the second half of the timeline?
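To make point 1 concrete, here is a minimal sketch of the contrastive loss a Siamese network would be trained with (plain NumPy; the toy embeddings, `margin` value, and function name are illustrative placeholders, not an agreed design):

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """Contrastive loss for a pair of ROI embeddings.

    same=1: pull embeddings of matching artifacts together;
    same=0: push non-matching embeddings at least `margin` apart.
    """
    d = np.linalg.norm(emb_a - emb_b)      # Euclidean distance in embedding space
    if same:
        return d ** 2                      # penalize any distance between matches
    return max(0.0, margin - d) ** 2       # penalize non-matches only if too close

# Toy embeddings; a real system would produce these with a shared CNN encoder.
anchor   = np.array([0.9, 0.1, 0.0])
positive = np.array([0.8, 0.2, 0.1])       # same artifact type as anchor
negative = np.array([0.0, 0.1, 0.9])       # different artifact type

loss_pos = contrastive_loss(anchor, positive, same=1)   # small: pair is close
loss_neg = contrastive_loss(anchor, negative, same=0)   # zero: pair is beyond margin
```

At inference time, the same encoder would embed the user-selected ROI and candidate patches, and matches would be ranked by embedding distance rather than by a fixed class label.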
I look forward to hearing your thoughts and am excited about the possibility of collaborating on this!
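To illustrate the synthetic-injection idea from point 2, here is a minimal sketch assuming grayscale images as NumPy arrays in [0, 1]; the overlay shape, noise level, and function name are hypothetical choices, not a fixed specification:

```python
import numpy as np

def inject_artifact(image, rng, size=8, intensity=0.8):
    """Paste a bright square 'hardware overlay' artifact at a random
    location and return the image plus its bounding box (x, y, w, h)."""
    h, w = image.shape
    x = int(rng.integers(0, w - size))
    y = int(rng.integers(0, h - size))
    out = image.copy()
    out[y:y + size, x:x + size] = intensity        # simulated overlay artifact
    out += rng.normal(0.0, 0.02, image.shape)      # mild acquisition noise
    return np.clip(out, 0.0, 1.0), (x, y, size, size)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))                          # stand-in for a clean radiograph
noisy, bbox = inject_artifact(clean, rng)
x, y, s, _ = bbox
# The injected region is bright, and the ground-truth box is known exactly,
# so the feature extractor can be pre-trained with perfect labels.
```

Real corruption models (motion blur kernels, ring artifacts, metal streaks) would replace the toy square, but the training-data plumbing stays the same.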
The solution I have planned uses a MedSAM model, since it can classify scan regions based on the visual signature of the user's selection. For 2.5D volumetric tracking, once an artifact is identified on slice N, the algorithm prioritizes searching the same spatial coordinates on slices N-1 and N+1, which reduces the required computation. I also plan a human-in-the-loop step: if 8 out of 10 predictions are correct and the user deletes the 2 wrong ones, the model learns that those two were wrong. For data management I plan to use DICOM.
The structure: frontend → data input → MedSAM model → 2.5D volumetric tracker → loss calculation and scoring → human in the loop → data management.
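The slice-neighbor search described above could be sketched roughly as follows (plain NumPy; the similarity metric, threshold, and function names are placeholder assumptions, not the actual MedSAM pipeline):

```python
import numpy as np

def match_score(patch_a, patch_b):
    """Simple similarity: negative mean squared difference (higher is better)."""
    return -float(np.mean((patch_a - patch_b) ** 2))

def track_to_neighbors(volume, n, y, x, size, threshold=-0.05):
    """Given an artifact patch on slice n at (y, x), check the SAME
    coordinates on slices n-1 and n+1 first, instead of searching the
    whole volume, and return the neighboring slices where it persists."""
    ref = volume[n, y:y + size, x:x + size]
    hits = []
    for k in (n - 1, n + 1):
        if 0 <= k < volume.shape[0]:
            cand = volume[k, y:y + size, x:x + size]
            if match_score(ref, cand) >= threshold:
                hits.append(k)                 # artifact persists on slice k
    return hits

# Toy volume: a bright artifact patch spans slices 1 and 2 only.
vol = np.zeros((4, 32, 32))
vol[1:3, 10:14, 10:14] = 1.0
print(track_to_neighbors(vol, 1, 10, 10, 4))   # prints [2]
```

A production version would allow a small spatial search window around (y, x) to follow artifacts that drift between slices, but the priority ordering of slices stays the same.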
My name is Adil Sayyed, and I am a final-year B.Sc. Computer Science student from Mumbai, India. I am very interested in the Artifact Detection project for Radiology. I have been reading through the project description and looking at the GitLab repositories.
I am currently finalizing my proposal draft, so I wanted to mention that my university exams are in April 2026; I will use that time to bond with the community and study the codebase, and I will be ready for full-time coding in May 2026. I am looking forward to learning from mentors like you and to contributing value through my imperfect but passionate efforts over time. I'm here to learn and make changes.
Ensure your proposal does not use any published templates, as many of them were generated by an LLM, and that is a fast way to have your proposal discarded without consideration. I check all proposals.
Hey Robby, I'm Adil Sayyed, a third-year CS student at Mumbai University. I honestly apologize for the draft I posted. I am currently preparing for my semester 6 final exams and was worried about the deadline, so I used an LLM just to help structure my thoughts and make my English clearer. I now realize this was a mistake. The idea of automated artifact detection is something I genuinely want to build, and I am the one who will be doing the coding. I am not a bot; I am a student trying to break into open source. I am working on a manual revision of my proposal now to show you my real voice and technical plan, and I hope you will give me a chance to prove my skills during the coding period.