GSoC Project: Database Upgrade and Laravel Integration

Hello @aethelwulffe @teryhill, let’s discuss and refine the project:

Your input would be valuable in making it more concise.

My work is to upgrade the database queries from MySQLi to PDO and to integrate the Laravel framework.

That link is not accessible to the public @pri2si17

Sorry @r0bby. I have updated the link.

I’d also recommend retitling the document to be more relevant – your project has since been redefined…your proposal is now a project plan – which (I hope) will be honed by May 30th, so you can begin work.

The calendar features are really the “core issues” to be addressed. Taking inventory of the “existing functionality” and figuring out how to implement it with your “new API” is going to require the most attention. “Recurring appointments” and “find available timeslots” are two pieces at hand that may not be that simple.

Regarding the database changes, the issue of switching to PDO for compatibility with other databases besides MySQL isn’t really going to matter, as there are many other non-compatible queries in the system, such that migration to something like Postgres is not going to be seamless. Also, one of the core features in LibreEHR (activity logging) is handled as an extension in the ADODB code. The claim that “performance” will be better with PDO isn’t valid without some quantitative analysis to support it. Optimizing for speed without understanding the baseline is often a recipe for disaster. I suggest this piece should be lower priority.
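
To make the point concrete, here is a minimal sketch (not taken from the LibreEHR codebase) showing that PDO abstracts the connection API but not the SQL dialect, so switching drivers alone does not buy portability:

```php
<?php
// Illustration only: PDO gives a uniform API, but dialect-specific SQL
// still fails when the driver changes. SQLite stands in for "another DB".
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE t (id INTEGER PRIMARY KEY, n INTEGER)');
$pdo->exec('INSERT INTO t (id, n) VALUES (1, 1)');

try {
    // MySQL-specific upsert syntax; SQLite rejects it with a syntax error.
    $pdo->exec('INSERT INTO t (id, n) VALUES (1, 2)
                ON DUPLICATE KEY UPDATE n = n + 1');
    $portable = true;
} catch (PDOException $e) {
    $portable = false;  // the query is tied to the MySQL dialect
}
echo $portable ? "query is portable\n" : "query is MySQL-specific\n";
```

Every MySQL-only construct like this would need rewriting before a Postgres (or other) migration could work, regardless of the driver layer.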

Using Knockout (an MVVM framework) instead of Laravel would have been my preference, since it is already part of the codebase (it’s the framework for the most recent major revision of the interface, which drives the tabs). I’m a little late to the game providing that feedback, so if you aren’t wedded to Laravel, please consider this suggestion.

As I mentioned, I’m late to the game regarding what’s going on with GSOC and the EHR, so I’m not sure if this project was accepted primarily as a “calendar refactor” or a “framework/database” refactor. If it’s the latter, then my critique is a lot less valid.

What Kevin said about having PDO as a gateway to an agnostic database scheme: that isn’t going to happen. What can happen with standardized PDO code is the ability to touch a non-MySQL database at the same time and run queries (designed for that DB engine) to bring in and process that data. This (despite some very good arguments from @sunbiz about sticking to FHIR middleware) can be useful. PDO, by the way, is NOT a faster engine. It is just a less picky and more general-use one, with advantages of its own. Myself, I already use PDO and MySQLi both. I have never gone and tried to validate that the mysql extension is not actually in use (or at least has code lurking around).

Database renovation, meaning cleaning it up and doing all those smart back-end sorts of things that cool kids like Priyanshu know all about, while possibly making things a little more amenable to a Laravel framework in the future, should be kept in mind at all times. Unless Pri can say “look dudes, I can get us all the way right now, and here is how”, I don’t think any immediate move to Laravel is actually popular with us old fogey developers, mostly because there are a number of prerequisites that should happen first. Frankly, I am of the opinion that a Laravel-based version should start with a clean framework in a repo, then be built up with pre-cleaned modules pulled out of the nasty oldness…

@pri2si17 I am NOT suggesting that you are not on a good chain of thought; I am just reacting to my current belief (not validated) that moving a large application to a framework is a very large project. I DO think there is an amazing ton of DB work to be done before actually attempting anything else major. Being the lead on the DB schema itself during this period puts you in a leadership hotseat, because you have to coordinate a great number of things while construction is going on all around you.
I, for one, believe you would be very busy in a very prominent role in the near future just playing DB Master. There is a huge amount of work involved in eyeballing all those table structures, documenting, optimizing the data types, testing, and on and on. There are query optimizations to be made all over the place, functions to get rid of, documentation (traditional, table comments, or via a relational database)…just…a lot.

@yehster Your critique may be less valid, but it is not off-target. Any of us who have tried to make decent forms for the EHR have pretty much decided that Knockout is the way to go much of the time. The results are evident.

I realize that the students have made introductions, but where I’m a little confused is that I thought there was only going to be one student working on the EHR, yet it looks like there are 3?

  1. This project/proposal: database/framework update
  2. Smarty Template Removal
  3. PostNuke Calendar Removal (which also uses Smarty)

I’ve tried to gather what I can from the GSOC chat channel, but I still don’t get it.

Yeah, originally they all applied for the same project; we then wound up selecting all 3 since they were very strong students, with proposals and community activity to match…that said, we need to ensure that the work can be merged together.

We don’t want to make it impossible to merge the code because the code structure changed drastically…I’m not gonna lie, making drastic changes to the project structure will make merging code a nightmare. Since we have 3 students working on the codebase, there should be no major changes to the structure once coding has begun without a clear, concise plan on how to reconcile differences among the 3 students’ codebases.

…what Robby said.

The main strength here is that while we don’t have the three projects coordinated yet, the three interns are very good at collaboration, and we have very good people here at LibreHealth to help guide our intern team’s collaborations. There is, frankly, a ton of project management experience here, a whole community of de facto mentors, and interns with wide-open ears and the ability to communicate.

If Laravel is or isn’t a ready pathway, let us have that debate. I think it boils down to finding out whether the interns’ plans are all dependent on that framework. If not, we need the strangle-vine plan. If so, then we need to make that framework available in the least intrusive fashion possible (contained, directory-wise) and let them work.

I am less scared of this than I should be perhaps, but I don’t consider my own opinions fully formed. I also know that we had a Laravel framework lurking in a repo started by Sam Likins (github slikins).


@yehster Definitely, using PDO just for broader database support is not a valid point. But its performance is certainly better than the ADODB drivers.

I have created two test programs, one for insert and one for select queries, comparing the ADODB PDO driver with the native PHP PDO driver. The insert test inserts 10,000 random rows and the select test just reads them back.

I haven’t checked it with mysqli.

The system I’m using is Ubuntu 16.04, PHP 7.0, and MySQL 5.7 on an Intel i5.

The output I got is as follows:

  • Insert query (10,000 rows): ADODB (using PDO): ~708.5 ms; native PHP PDO: ~658.91 ms
  • Select query: ADODB (using PDO): ~0.3353 ms; native PHP PDO: ~0.3181 ms
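
For reference, the shape of such a micro-benchmark looks roughly like this (a sketch, not the actual test program; SQLite in-memory is used here so it runs anywhere, whereas the original used MySQL and the ADODB PDO driver):

```php
<?php
// Sketch of the insert/select timing described above, using native PDO.
// The table name "bench" and the SQLite backend are assumptions.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE bench (id INTEGER PRIMARY KEY, val TEXT)');

$stmt = $pdo->prepare('INSERT INTO bench (val) VALUES (:val)');

// Time 10,000 inserts of random data inside one transaction.
$start = microtime(true);
$pdo->beginTransaction();
for ($i = 0; $i < 10000; $i++) {
    $stmt->execute([':val' => bin2hex(random_bytes(8))]);
}
$pdo->commit();
$insertMs = (microtime(true) - $start) * 1000;

// Time a single select over the inserted rows.
$start = microtime(true);
$rows = $pdo->query('SELECT COUNT(*) FROM bench')->fetchColumn();
$selectMs = (microtime(true) - $start) * 1000;

printf("inserted %d rows in %.2f ms; count query took %.4f ms\n",
       $rows, $insertMs, $selectMs);
```

Note that wrapping the inserts in a single transaction (as above) can dominate the timing far more than the choice of driver, which is one reason micro-benchmarks like this need care.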

Here is the link to the GitHub repo with screenshots:

The current use of ADODB in the EHR has even more overhead than native database drivers would because of the logging mechanism, and a roughly 50 millisecond gain over 10,000 inserts hardly seems worth it, since most pages do only one insert at a time. Making the switch might make sense in specific areas like the CDR engine. Performance tests are meaningful in specific contexts, and this benchmark doesn’t demonstrate what will happen when actually using the EHR.

There is code which uses the connection currently established by ADODB in sql.inc.php but that doesn’t go through the abstraction layer sql.inc.php provides. Those pages are very likely to break. Identifying those and other potential problems introduced by changes to sql.inc.php would need to be part of the process.

Of course, certainly in respect to the CDR engine, the problem isn’t so much the driver, as the very serious issues with the queries, business logic et cetera.

Priyanshu, you hear what we are saying on a number of points. Whatever and wherever it may get used, having the protocols and functions in place for using PDO costs us nothing, and most feel it would certainly be a very useful tool.

I would say after all 3 students’ code is merged – THEN work on migrating to Laravel. This isn’t a GSoC where students can avoid coordinating their work – you are all working off of the same source tree – so we need to be sure that it’s not diverging too badly. Massive code structure changes – that’s not good. This work would be AFTER GSoC is over. We have PLENTY of work to do around here.

To reduce the need to coordinate the students’ work, one possible approach would be to have each of them work in a distinct functional area of the application. The “common goal” that’s been suggested is the removal of Smarty, so three areas in the system that use Smarty are: appointments/calendar, clinical forms (vitals/ROS), and administrative (X12, insurance).

Each student would then be able to work independently and explore tools/frameworks as they see fit. This would minimize merge conflicts, but also means we might need to bring in three new sets of dependencies.

I have no opinions on HOW this happens – and I don’t view merge conflicts as a bad thing – just so long as they are resolved.

Of course it isn’t that we believe that the template system is simply old and clunky, it is more that we believe it is morally destitute and inherently evil. :slight_smile:

On dependencies:

A directory/maintenance goal of mine has been to move us (slowly) out of dependency hell by doing the following:

  • Use /libraries for native/homegrown common functions we know we are totally responsible for.
  • Use /assets for externally maintained libraries (lots of JS stuff in this).
  • Require large packages of dependencies, from either home-grown optional packages or core features (like the new calendar), to be located in /modules, where, for expediency or necessity, it is best to house all the module’s associated libraries.

Ideally, when someone is using a current common external library (like jQuery etc.), they should place that library under /assets, then if possible migrate the rest of the codebase to using that new location. When there is time, upgrading things that use older library versions is also a great project, not because they are bad code or anything, but because we need to reduce the number of versions for application efficiency (more efficient is always better in the long run), and we may actually become capable of knowing what security issues we have by getting rid of the 82 custom.min.js sort of things we have going on.

There is a pack of global variables that already contain the new directory locations, which should make using them as includes pretty simple, and I have been looking at making a function that calls lists of common headers (libraries) for code simplification. I may be a library scrooge, but if nothing else, keeping footprints and complexity low really helps your daily grep. Being able to easily exclude /fonts and minimized libraries when looking up native code is also really convenient. Who knows, one day we may actually start to get some use out of our L3 cache…

Hello @yehster @aethelwulffe @teryhill @tony
This is regarding the refactor of the users table. I originally posted it on GitHub.

Here is the users table break-down. Please have a look at it and let me know if any corrections are needed.

**Notes:**

  • All address-related fields’ info will be saved in the address table.
  • Users’ security credentials have been moved entirely to the user_secure table.
  1. **user_residential_link**: id int(11) auto-increment primary key, user_id foreign key references users.id, address_id foreign key references address.id, type int(1) not null

Note: type: 0 → primary address, 1 → alternate address

  2. **user_secure**: id int(11) auto-increment primary key, username varchar(100) not null unique, password varchar(256) not null, active boolean not null, authorized boolean not null, pwd_expiration_date datetime not null, user_id foreign key references users.id
  3. **user_password_history**: id int(11) auto-increment primary key, password varchar(256) not null, last_update datetime not null, user_id foreign key references users.id

Note:

  • I dropped the salt field because I plan to use the password_hash and password_verify functions going forward. I will include it if you say it is really needed; while everything is in flux I think it can be removed, but you are the one to decide, so it’s totally up to you.
  • The intention behind the new user_password_history table is that it is dedicated to storing only password history and plays no part in authentication, i.e., while logging in we don’t need to look at these fields; they only come into use when a change is made. Regarding how it works: the first password entered is saved in the password field with a timestamp; the second time, another password and timestamp are saved. Each user then has a count of 2 passwords (enforced in code), and any further change overwrites the entry with the older timestamp. This way we can raise the limit to 2, 3, 4, …, and it is also properly relational. What do you say?

  4. **user_communication**: id int(11) auto-increment primary key, number varchar(10), type int(1), user_id foreign key references users.id

Note:

  • type: 0 → home_phone, 1 → work_phone, 2 → mobile, 3 → fax, 4 → alternate_work_phone
  • In my view this table removes the need to store phone numbers in 5 separate columns (including fax). We can store one number per row and mark its type accordingly. If a user enters all 5 phone numbers, there will be 5 rows linked to that user, but if he/she enters only 1 or 2, we easily avoid null values in the database. Moreover, we can extend this to store contact numbers for patients and contacts, which would remove some columns from those tables. What do you say?

  5. **users**: id int(11) auto-increment primary key, fname varchar(100) not null, mname varchar(100) null, lname varchar(100) not null, federalTaxId varchar(100) unique, federalDrugId varchar(100) unique, see_auth int(1) not null, npi varchar(15) not null, suffix varchar(10), taxonomy varchar(100), calendar varchar(100), info text null, access_control_id foreign key references access_control.id, user_role foreign key references role.id
  6. **user_addr_book**: id int(11) auto-increment primary key, title varchar(10), email varchar(100) unique, url varchar(100), assistant varchar(100), organization varchar(100), valedictory varchar(100), speciality varchar(100), notes text null, abook_type varchar(10), user_id foreign key references users.id
  7. **user_facility_link**: id int(11) auto-increment primary key, user_id foreign key references users.id, facility_id foreign key references facility.id, is_default boolean not null
  8. **access_control**: id int(11) auto-increment primary key, access_type varchar(15)
  9. **role**: id int(11) auto-increment primary key, role_name varchar(20)

Note:

  • Tables 8 & 9 are static tables. Instead of storing the name, we can store a reference to it. They may be needed in the future, but could also be merged into the users table; I will do whatever you suggest.
  • Table 7 creates a many-to-many relationship, and marking a facility as a user’s default will be set accordingly. We can check in code to make sure that only one facility is marked as default.
  • Some fields, like source, default_warehouse, and irnpool, do not get their info from either users or addr_book. Is there anything in the UI that supplies these fields?
  • You told me that password_expire_duration is set from the global settings. Can’t we do it in the code itself, where we store the info at the time of the user’s registration? Say the duration is 180 days; we could set the expiration date in the table from the current timestamp.
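
To pin the proposal down, tables 2 and 4 above could be sketched in MySQL DDL roughly like this (column names and sizes are taken from the lists above; the constraint names and NOT NULL choices on the link columns are illustrative, not decided):

```sql
-- Sketch only: a possible rendering of the proposed user_secure and
-- user_communication tables. Constraint names are invented for illustration.
CREATE TABLE user_secure (
    id                  INT(11)      NOT NULL AUTO_INCREMENT PRIMARY KEY,
    username            VARCHAR(100) NOT NULL UNIQUE,
    password            VARCHAR(256) NOT NULL,
    active              BOOLEAN      NOT NULL,
    authorized          BOOLEAN      NOT NULL,
    pwd_expiration_date DATETIME     NOT NULL,
    user_id             INT(11)      NOT NULL,
    CONSTRAINT fk_user_secure_user FOREIGN KEY (user_id) REFERENCES users (id)
);

CREATE TABLE user_communication (
    id      INT(11)     NOT NULL AUTO_INCREMENT PRIMARY KEY,
    number  VARCHAR(10) NOT NULL,
    type    INT(1)      NOT NULL, -- 0=home, 1=work, 2=mobile, 3=fax, 4=alt work
    user_id INT(11)     NOT NULL,
    CONSTRAINT fk_user_comm_user FOREIGN KEY (user_id) REFERENCES users (id)
);
```

Writing the DDL out like this may also make it easier for reviewers to spot missing indexes or constraints before the migration scripts are written.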

These are my suggestions. Please look them over and let me know if I’m missing anything or if there is any wrong info. I will surely improve it.

I don’t have a clear understanding of the goals/scope of the project to provide comprehensive feedback on these proposed schema changes.

However, regarding removing the salt column from user_secure: [quote=“pri2si17, post:18, topic:808”] I am taking password_hash and password_verify function into account for future. [/quote] Does “for future” mean something you are handling, or are you deferring to someone else to handle the changeover to the built-in PHP functions? Also, if the column is removed, what is the plan for migrating existing user accounts?

How many existing queries/PHP pages would be impacted by all of these schema changes?

@yehster [quote=“yehster, post:19, topic:808”] Does “for future” mean something you are handling or are you deferring to someone else to handle the change over to the built in PHP functions? Also if the column is removed, what is the plan for migrating existing user account? [/quote]

Well, re-writing all the queries to make them compatible with the new schema is also part of my project, so yes, I will surely handle this. Regarding migration of existing users, I haven’t planned it yet, as I don’t know the present process. But I can write a script that takes the information of existing users from the existing schema and migrates it to the new schema. I can’t give an exact answer as I haven’t made a plan for it yet; right now I’m focused on refactoring the tables’ structure.

If you say that it is necessary to include the salt field, then I will merge it into the user_password_history table.
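
For what it’s worth, the reason the separate salt column becomes unnecessary is that PHP’s password_hash() generates a salt itself and embeds it in the stored string, so password_verify() needs nothing else to check a login. A minimal sketch (the literal password is of course just an example):

```php
<?php
// password_hash() generates and embeds its own salt, cost and algorithm
// identifier in the returned string, so no separate salt column is needed.
$hash = password_hash('s3cret', PASSWORD_DEFAULT);

// The stored value looks like "$2y$10$<salt><digest>" for bcrypt.
$fits     = strlen($hash) <= 256;              // fits the proposed varchar(256)
$matches  = password_verify('s3cret', $hash);  // correct password accepted
$rejected = !password_verify('wrong', $hash);  // wrong password refused

var_dump($fits, $matches, $rejected);
```

A migration script would still have to handle existing accounts, e.g. by re-hashing each password with password_hash() on the user’s next successful login, since the old hashes cannot be converted offline.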

Almost all the queries will change after the new schema.