Project: FHIR Analytics Using Apache Spark and Cassandra

@yashdsaraf I can confirm that the data is being fetched correctly. I added the following code to the search method to check this, and it returns the expected results. The only issue is that the response isn't being returned.

List<CPatient> patients = patientRepository.findAll().collectList().block();

I changed the search method as follows, and it started working fine. I hope we can use this approach.

@GetMapping
public ResponseEntity<?> search(@RequestParam Map<String, String> params) {
    if (params.isEmpty()) {
        // Return all entities if no search parameters are specified
        List<CPatient> data = patientRepository.findAll().collectList().block();
        return new ResponseEntity<>(data, new HttpHeaders(), HttpStatus.OK);
    }

    try {
        List<CriteriaDefinition> criteriaDefinitions = new ArrayList<>();
        for (Map.Entry<String, String> entry : params.entrySet()) {
            if (entry.getValue() == null) {
                throw new IllegalStateException("Search parameter value cannot be null");
            }
            criteriaDefinitions.add(getCriteriaDefinition(entry.getKey(), entry.getValue()));
        }

        // Debug output: dump all rows to verify the data is reachable
        reactiveCassandraOperations.select(Query.empty(), CPatient.class).subscribe(System.out::println);

        Flux<CPatient> patients = reactiveCassandraOperations
            .select(Query.query(criteriaDefinitions).withAllowFiltering(), CPatient.class);
        return new ResponseEntity<>(patients.collectList().block(), new HttpHeaders(), HttpStatus.OK);
    } catch (IllegalStateException ex) {
        // BodyInserters is for functional WebFlux endpoints; return the message directly here
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
    }
}

@yashdsaraf One thing I want to know: is there any possibility of saving the entire resource to a single column? Then, when it comes to analytics, we could simply use the contents of that column.

By a single column, I assume you mean the entire resource would be saved as a JSON object?
If that's the case, instead of using the existing FHIR structures as models, we'll need to create new models with just the id and value properties (much like in your project) and use them to save the real structures.
CRUD operations could be adjusted for this format easily enough, but the search operation using CQL wouldn't work, as it would compare the entire object's JSON string against the given value.
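
A tiny plain-Java illustration (with hand-written, hypothetical JSON) of why a CQL equality match against a serialized-resource column can't do field-level search:

```java
public class JsonColumnSearch {
    public static void main(String[] args) {
        // Hypothetical serialized resource, as it would sit in a single column
        String storedCell = "{\"id\":\"p1\",\"gender\":\"female\"}";
        String searchValue = "female"; // what a search request would send

        // CQL "WHERE value = ?" compares the whole cell against the parameter:
        System.out.println(storedCell.equals(searchValue)); // never matches a field
        // Matching a field would require looking inside the JSON instead:
        System.out.println(storedCell.contains("\"gender\":\"" + searchValue + "\""));
    }
}
```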

That's nice. I'll make the necessary changes in my project as well, so you can directly merge the resources other than Patient.


@yashdsaraf I meant we could keep the existing columns. Is there any possibility of adding an extra column, e.g. resource, which could store the complete resource?

Yes, we can do that; only the CREATE and UPDATE operations would have to be modified to handle the extra column. What do you think about this, @sunbiz?
I'll start working on a merge request for this.

I don't suggest you do that. Instead, write a helper method in the service that gives you a resource. I would have suggested putting this in the analytics module, but this is a generic feature that needs to be in the platform.
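
A minimal, dependency-free sketch of what such a helper could look like; PatientService, getResource, and the stand-in Patient/CPatient classes are hypothetical names, not the project's actual API:

```java
public class PatientService {
    // Simplified stand-ins for the HAPI FHIR Patient and the Cassandra-mapped subclass
    static class Patient { final String id; Patient(String id) { this.id = id; } }
    static class CPatient extends Patient { CPatient(String id) { super(id); } }

    // Stand-in for the Spring Data repository lookup
    private CPatient findById(String id) { return new CPatient(id); }

    // The generic helper: callers receive a plain FHIR resource,
    // so no extra serialized column is needed in the table
    public Patient getResource(String id) {
        return findById(id);
    }

    public static void main(String[] args) {
        Patient p = new PatientService().getResource("p1");
        System.out.println(p.id);
    }
}
```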


@sunbiz The reason for asking is that the analytics module reads data through the Spark Cassandra Connector, which doesn't use the Spring Data APIs to query data, because the connector handles data partitioning when processing data across the Spark cluster. If there is no single column that holds the data, the analytics module needs to implement a mapping for each resource when fetching resource data into Spark. For Patient, for example, we would need a mapper that creates a FHIR Patient resource by going through each piece of data available in the Cassandra columns. Having a single column with the entire resource would be useful for converting it into a Java Patient resource. @sunbiz, do you think we should handle this at the analytics-module level?


In addition, since we created CPatient by extending the Patient resource in HAPI FHIR, things get a bit complex, as the FHIR encoders only work with base HAPI FHIR resources.

@yashdsaraf I have merged your changes into the master branch. Let me know when you complete the other resources as well, so we can combine them and merge them into a single repository.


Yes @prashadi, the analytics module should have a way to convert the resource into a Bundle or whatever other form it needs, instead of having a combined column that holds entire resources.


OK @sunbiz. So when the analytics module fetches data, it will need to convert the Cassandra table data into the relevant FHIR resources by taking the column data and creating new HAPI FHIR resource objects from it. I'll look into writing a mapper, or check whether the Spark Cassandra Connector has functionality to map a database row into a FHIR resource.


@sunbiz @namratanehete @judywawira @yashdsaraf I have gone through the possible approaches for mapping a Cassandra table to an object via the Spark Cassandra Connector. Since @yashdsaraf's patient representation contains complex attributes, the only way we can map the patient table data to a FHIR Patient object is by going through the data in each column and mapping it to the relevant attributes. It will be a time-consuming task, but it is our only option. Any other thoughts on this matter?
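
A dependency-free sketch of that column-by-column mapping; the Patient class and column names here are illustrative stand-ins for the real HAPI FHIR types and table schema (in practice, the connector's row object would replace the plain Map):

```java
import java.util.Map;

public class RowToPatientMapper {
    // Simplified stand-in for HAPI FHIR's Patient, with complex attributes flattened
    static class Patient {
        String id, gender, birthDate;
    }

    // One explicit assignment per column: this is the time-consuming part,
    // repeated for every attribute of every resource
    static Patient map(Map<String, String> row) {
        Patient p = new Patient();
        p.id = row.get("id");
        p.gender = row.get("gender");
        p.birthDate = row.get("birth_date");
        return p;
    }

    public static void main(String[] args) {
        Patient p = map(Map.of("id", "p1", "gender", "female", "birth_date", "1980-01-01"));
        System.out.println(p.id + " " + p.gender + " " + p.birthDate);
    }
}
```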

@prashadi When you say mapping from table to object, is the object in this case a HAPI FHIR structure, or some other Spark-specific object?

It's the HAPI FHIR Patient resource. But the data is fetched via the Spark Cassandra Connector, which returns rows containing the patient data.

If you ultimately need the HAPI FHIR Patient resource, why don't you try using my project to retrieve the objects? For example, say you need to retrieve a Patient structure from the database; you can rely on the implicit widening conversion to use a CPatient as a Patient, something like so:

Patient patient = patientRepository.find(<identifier>);

Although the find function returns a CPatient object, it gets implicitly cast to Patient, since CPatient is a direct subclass of Patient.
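
A plain-Java illustration of that widening conversion, with stand-in classes in place of the real HAPI FHIR and repository types:

```java
public class UpcastDemo {
    static class Patient { }
    static class CPatient extends Patient { } // direct subclass, as in the project

    // Stand-in for patientRepository.find(...)
    static CPatient find(String id) {
        return new CPatient();
    }

    public static void main(String[] args) {
        Patient patient = find("some-id"); // widens to Patient with no explicit cast
        System.out.println(patient instanceof CPatient); // the object itself is unchanged
    }
}
```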

Update:

I just realized that I'm assuming Spark performs the analytical processing after all the data has been retrieved from the database. If that assumption is wrong, the solution above won't work.

@yashdsaraf The Spark Cassandra Connector internally handles data distribution across the entire Spark cluster, so it won't load all the data onto a single node. That's why we need to go through the Spark Cassandra API to retrieve the data.


I think we should go with the column-to-column approach until we find an alternative. What do you all think? @sunbiz @yashdsaraf @prashadi


Thank you for the response. I'll be looking into the mapper implementation.

@sunbiz @namratanehete @judywawira I have added a blog post on the work accomplished during GSoC at https://medium.com/@prkpbandara/gsoc-librehealth-work-accomplished-on-fhir-analytics-during-gsoc-2018-c3b0fded975e. Since I was away for a few days after my university started, I'll continue working on integrating the newest changes from Yash's module and combining the data sources. I tried to regenerate the JSON by iterating over the columns, and it gives me a parsing error; I'll be looking into that. Once that's sorted, the integration will be complete. @yashdsaraf, let's test your resources with the Google dataset, which contains different kinds of resources and attributes.
