Project: Automated Security Assessment Infrastructure for LibreHealth EHR Workflows

The LibreHealth EHR Security Assessment Infrastructure project aims to create an automated CVSS scoring system integrated with GitLab CI/CD to continuously evaluate and monitor security vulnerabilities across critical Meaningful Use (MU) certification workflows. This infrastructure will provide systematic security assessment, enabling early detection and mitigation of potential vulnerabilities.

The project focuses on implementing comprehensive integration testing for key clinical workflows while incorporating CVSS scoring to quantify potential security risks. This automated approach will ensure consistent security evaluation with each code change or release, maintaining a robust security posture for the EHR system.

The deliverables of the project are as follows:

  • Develop automated integration tests covering the specified MU workflows
  • Create a CVSS scoring framework that evaluates security aspects of each workflow
  • Implement GitLab CI/CD pipeline integration for automated security assessment
  • Generate comprehensive security reports with detailed CVSS metrics
  • Provide documentation for maintaining and extending the security assessment infrastructure
  • Create remediation guidelines for common vulnerability patterns

CVSS Scoring Implementation:

  • Implement automated scanning for common vulnerability patterns
  • Create custom scoring rules for healthcare-specific security concerns
  • Generate CVSS v3.1 scores for identified vulnerabilities
  • Track security metrics over time
  • Provide trend analysis for security posture

CI/CD Integration:

  • Configure GitLab CI/CD pipelines for automated testing
  • Implement security gates based on CVSS scores
  • Create notification systems for security issues
  • Generate security reports for each build/release
  • Archive security metrics for trend analysis
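To make the "security gates based on CVSS scores" item concrete, a gate can be a small script the pipeline runs after scanning, failing the job when any finding reaches a threshold. The sketch below is illustrative only: the finding shape, field names, and the 7.0 threshold are assumptions, not an existing LibreHealth convention.

```python
GATE_THRESHOLD = 7.0  # assumption: block merges on High/Critical findings

def gate(findings: list[dict], threshold: float = GATE_THRESHOLD) -> int:
    """Return a CI exit code: 0 = merge allowed, nonzero = merge blocked."""
    blocking = [f for f in findings if f["cvss_base"] >= threshold]
    for f in blocking:
        print(f"BLOCKED: {f['id']} (CVSS {f['cvss_base']}) - {f['title']}")
    return 1 if blocking else 0

# Illustrative findings as a scanner might report them (shape is assumed).
findings = [
    {"id": "ZAP-40018", "cvss_base": 9.8, "title": "SQL injection"},
    {"id": "STAN-001", "cvss_base": 3.1, "title": "verbose error output"},
]
print(gate(findings))  # prints the blocked finding, then 1
```

In GitLab CI, the script's nonzero exit code is what actually fails the job and blocks the merge.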

Preliminary tasks:

  • Map all target workflows and their components
  • Create initial integration test framework
  • Implement basic CVSS scoring for a sample workflow
  • Set up GitLab CI/CD pipeline structure

A developer working on this project needs to have skills in:

  • PHP and JavaScript testing frameworks
  • Security testing and vulnerability assessment
  • CVSS scoring methodology and implementation
  • GitLab CI/CD configuration and pipeline development
  • Healthcare workflow analysis and testing
  • Integration testing methodologies
  • Documentation and technical writing

Project size: Large (~350 hours)
Mentors: @muarachmann and @sunbiz

Hi @sunbiz, I have a few questions:

  1. Are there existing integration tests or security scanners already in use within LibreHealth EHR that this project will extend?

  2. Which MU workflows should be prioritized first (e.g., patient registration, prescription management, lab results, billing)?

  3. Is there a preferred vulnerability scanning tool (e.g., OWASP ZAP, PHPStan security rules, custom static analysis, etc.), or is tool selection part of the project scope?

  4. Would the CVSS scoring engine be fully custom-built, or should it integrate with existing vulnerability databases (like NVD feeds)?


Hi @kishansinghifs1 please review the code here - LibreHealth / LibreHealth EHR / LibreHealth EHR Laravel · GitLab, and we want you to answer some of your questions in the proposal. There aren’t any security tests or integration tests in there at the moment. You should propose the main MU workflows that are present in the Laravel port. Please propose which vulnerability scanning tool you think is appropriate. Similarly, propose some CVSS scoring libraries (if external) or, if you want to build a custom one, justify why any of the existing ones aren’t good enough.

Hi @sunbiz thank you for the response!

I’ve explored the LibreHealth EHR Laravel codebase and here are my initial observations:

  • The test suite currently has only the default scaffolding (ExampleTest.php in both Feature/ and Unit/), so there’s a clean slate for building the security test infrastructure.

  • The CI pipeline (run-tests.yml in GitHub Actions) runs basic tests — I can extend this to add security scanning stages.

  • I’ve identified the key models that map to MU workflows — Patients/, Encounter, Prescription, Immunization, Insurance/, Forms/, AuditMaster/AuditDetail — which gives me a clear starting point for mapping workflows to test coverage.

My preliminary approach would be:

  1. Map MU workflows to controllers/models, starting with Patient Registration and Encounter management. Write integration tests covering authentication, authorization, input validation, and PHI data handling for these flows.

  2. Integrate static analysis (PHPStan with security rules + composer audit) and dynamic scanning (OWASP ZAP against test endpoints) into the CI pipeline.

  3. Build the CVSS v3.1 scoring engine that maps scan findings + test results to CVSS vectors, with custom rules for healthcare-specific concerns (e.g., PHI exposure, missing audit trails, broken access controls).

  4. Dashboard/reporting with trend analysis, security gates, and remediation guidelines.
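As a rough illustration of the arithmetic behind step 3, here is a minimal base-score calculator. The metric weights and the Roundup rule come from the CVSS v3.1 specification; the function names are mine, and Scope:Changed is deliberately left out to keep the sketch short.

```python
import math

# Metric weights from the CVSS v3.1 specification (Scope: Unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}  # PR values differ when Scope is Changed
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest number, to one decimal place, >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

def base_score(vector: str) -> float:
    """Compute the base score for a Scope:Unchanged CVSS:3.1 vector string."""
    m = dict(part.split(":") for part in vector.split("/")[1:])
    if m["S"] != "U":
        raise NotImplementedError("Scope:Changed weights omitted in this sketch")
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
print(base_score("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N"))  # 6.5
```

A real engine would also need the Scope:Changed branch and the temporal/environmental groups, but this shows the core mapping from vector to number.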

For the preliminary tasks, I’d like to start contributing right away. Could you point me to:

  • Any specific branch I should work off of?

  • Guidelines for submitting PRs (I see the CONTRIBUTING.md but want to confirm the workflow)?

I'd be happy to learn more from you; that would be really helpful!

Hi,

I’m Anushka Dudhe, a first-year B.Tech CSE (AI/ML) student, and I’m excited to start contributing to LibreHealth EHR. I’m particularly interested in understanding how large-scale healthcare systems are built and maintained through open-source collaboration.

I’m exploring LibreHealth EHR as an open-source contributor and as a prospective participant in Google Summer of Code 2026, and I wanted to get involved early by contributing meaningfully and learning the project’s architecture, workflows, and development practices. Over the past few days, I’ve been setting up the project locally and reviewing the repository, documentation, and open issues to build context around the codebase.

My current focus is on starting with well-scoped contributions - such as setup improvements, documentation, issue investigation, or initial test coverage - so I can align with existing patterns and gradually take on more complex tasks as I gain deeper understanding. I value clear communication, code quality, and constructive feedback, and I’m looking forward to collaborating with the community and learning from experienced contributors.

Thanks for maintaining such an impactful project and for supporting new contributors. I’m excited to contribute and grow with the LH-EHR community.

Hi! @sunbiz @muarachmann I’ve prepared a proposal after going through the codebase. I’d love to share it and discuss a few sections. Should I post it here, or would it be better to send it via email?

Also, regarding the preliminary tasks @sunbiz @muarachmann, could you please guide me on the best way to demonstrate my implementation? Should I open a PR/merge request, or is there another preferred way to share the work?

Thank you!

You should post here. Don’t worry about people stealing your proposal, if we catch that, those who steal it will not be selected.

The idea here is to work in the open.

Hi @r0bby, thank you for letting me know about working in the open.

One more question: could you tell me the best way to share what I have done for the preliminary task?

Hi @kishansinghifs1 and welcome to LH,

The best way to share your preliminary task is by opening an MR. You can work off the develop branch.

Thank you, @muarachmann, for the response!

Hi @muarachmann @sunbiz and @r0bby — I’ve been working through the architecture for the project and wanted to share my thinking visually before finalizing the proposal.

The attached diagram maps out my planned four-layer pipeline: static analysis (PHPStan L5+L8 + Enlightn + composer audit) → dynamic scanning (OWASP ZAP in three phases: seeded auth session → spider all pages → active scan) → CVSS v3.1 scoring engine (findings → vector → PHI-tier multiplier → base score) → security gate (pass = merge allowed, fail = merge blocked) → report generated.

A few design decisions I’d love your feedback on:

  1. PHPStan dual-level (L5 + L8) — L5 as a baseline across all controllers/models, L8 only on PHI-handling controllers to avoid false positive overload. Does this feel right, or would you prefer a single level?

  2. PHI-tier multipliers — I’m classifying models into PHI_CRITICAL / PHI_MODERATE / NON_PHI and adjusting CVSS Confidentiality Impact accordingly, since not all data carries equal risk (e.g., insurance policy numbers vs. category labels). Open to feedback on the tier boundaries.

  3. ZAP phase structure — planning seeded-auth session first so ZAP scans authenticated endpoints. Any specific MU workflow endpoints you want prioritized in the active scan policy?
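To make decision 2 concrete, here is how the tier classification might feed into the CVSS Confidentiality metric. The tier names are from my description above; the model-to-tier mapping and the tier-to-metric rule are placeholder assumptions I'd want to refine with mentors.

```python
from enum import Enum

class PhiTier(Enum):
    PHI_CRITICAL = "critical"  # e.g. prescriptions, insurance policy numbers
    PHI_MODERATE = "moderate"  # e.g. demographics, appointment metadata
    NON_PHI = "none"           # e.g. category labels, lookup tables

# Placeholder mapping of Laravel models to tiers (to be refined with mentors).
MODEL_TIERS = {
    "Prescription": PhiTier.PHI_CRITICAL,
    "Insurance": PhiTier.PHI_CRITICAL,
    "Encounter": PhiTier.PHI_MODERATE,
    "Forms": PhiTier.NON_PHI,
}

def confidentiality_metric(model: str) -> str:
    """Map the PHI tier of the affected model to a CVSS Confidentiality value.

    Assumed rule of thumb: a leak from a PHI_CRITICAL model scores C:H,
    PHI_MODERATE is floored at C:L, NON_PHI stays at C:N.
    """
    tier = MODEL_TIERS.get(model, PhiTier.PHI_MODERATE)  # unknown: be cautious
    return {"critical": "H", "moderate": "L", "none": "N"}[tier.value]

print(confidentiality_metric("Prescription"))  # H
```

Defaulting unknown models to PHI_MODERATE rather than NON_PHI is a deliberate fail-safe choice, so a newly added model is never silently treated as risk-free.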

Happy to walk through any part of this in more detail!

Hi @muarachmann, I have opened the MR for the preliminary task: GSoC 2026 preliminary tasks — Integration tests, CVSS v3.1 engine and CI/CD pipeline (!24) · Merge requests · LibreHealth / LibreHealth EHR / LibreHealth EHR Laravel · GitLab

Hi @r0bby @sunbiz @muarachmann, I wanted to ask about the structure of my GSoC proposal. Currently I have included the following sections:

  • About Me

  • Project Overview

  • Proposed System Architecture

  • Project Goals and Deliverables

  • Technology Stack and Tools

  • Implementation Plan and Timeline

Do you think this is sufficient, or should I include additional sections such as Testing and Evaluation, Conclusion, or Future Work?

Could you please help me out?

Pay attention to the posts marked with the prefix README: on our ideas list; they have information you will potentially need.

Hi @r0bby @sunbiz @muarachmann I have now finalized my GSoC 2026 proposal and would really appreciate your review before the submission deadline.

Proposal link: GSoC2026_LibreHealth_KishanSingh_Automated_Security_Assessment - Google Docs

My current structure (Overview, Architecture, Deliverables, Tech Stack, Timeline) covers the core requirements. Also, can I submit this proposal on the GSoC website?

Looking forward to your feedback!

Hi @r0bby, I would really appreciate a quick review of my proposal: GSoC2026_LibreHealth_KishanSingh_Automated_Security_Assessment - Google Docs. If anything seems unclear or over-engineered, please let me know; I want it aligned with the mentors' thinking before I finalize it.

Rewrite this so it doesn’t use AI. You knew our policy before you submitted this.

No LLM usage whatsoever. None. I caught your little boilerplate stuff prior. That needs to go. Redesign this so no LLM-generated text is used.

You’re good now. Good luck.


@r0bby I had one question too: everyone is adding an About Me or introduction section to their proposals. Is it actually necessary? Does it really help mentors understand the applicant better, and should I add one too?