Reconsider whether a wiki is necessary

I know the EHR team is gung-ho about using a wiki, feeling it’s the only way to lower the barrier to entry for those writing docs, but GitLab has upped their game with the editor on their site. I have voiced my objection to the wiki, as have @judywawira, @downey, and @sunbiz. I will deploy the wiki (in fact, it’s mostly configured; I just have to put it up on the server), but I want to make one last-ditch effort to get the EHR team to reconsider, in case they are rejecting the idea outright without even considering alternatives. I already have an alternative use in mind for the server I created.

The alternative that most of you are rejecting, rendering the site using Sphinx with the ReadTheDocs theme, would suffice in my opinion. We do not need a wiki, and this eliminates the need for us to deploy and configure one.
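To show how little setup that alternative needs, here is a minimal sketch of a Sphinx `conf.py` using the ReadTheDocs theme. The project name and author are placeholders, not anything the project has decided on:

```python
# conf.py -- minimal Sphinx configuration (project name and author are hypothetical)
project = "LibreHealth EHR Documentation"
author = "LibreHealth contributors"

# Use the ReadTheDocs theme; install it first with: pip install sphinx-rtd-theme
html_theme = "sphinx_rtd_theme"

# index.rst is the conventional root document of the doc tree
master_doc = "index"
```

Running `sphinx-quickstart` generates a fuller version of this file, and `sphinx-build -b html . _build/html` (or `make html`) renders the site.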

Hi @r0bby -- Please help me understand what the issue is with lowering the barrier to entry for contributing documentation to the LibreHealth EHR project. I am aware of your objections to a wiki server as the documentation repository, which from my vantage point seem to be based on anticipating problems that others have committed to dealing with. This project already has so many separate resources (for chat, for the forum, for the public demo(s), for the code repo, for the Libre Project itself) that having one more for the doc repo doesn’t seem like a problem, especially if others have offered to take it off your plate and be responsible for maintaining it.

In the 5 years that I’ve been involved with FOSS projects, I’ve seen and read plenty about how they tend to be run with a developer’s sensibilities. That’s great, and how it should be for the coding aspect of the project, where using developer-oriented tools is the only way to go. However, maybe you’ve noticed how documentation is always the critical weakness in FOSS projects? I believe that’s a direct result of the developer orientation.

When you’re trying to interact with non-developers, which most documentarians are, you need to use human-oriented tools. Cloning a repo and then pushing a document to it is an alien activity for non-devs, to say nothing of the interns we’ve asked to write docs for the project. Most of the ones I am supervising right now (5 at latest count) have some web coding experience and that’s it. They are not devs, but they’re writing very usable docs because they can simply learn the EHR module, write what they know, and convert it with easy tools into MediaWiki format.

It’s not that the “EHR team” is not considering the alternatives, because they are. It is that the alternatives are unsuitable for non-dev contributions. I really am not trying to be antagonistic here. If you want to write off non-dev contributions and welcome only those made by people who git (sic) dev tools, you’re going the right way about it. But if you do that, you might as well shut down the Outreachy internship program and whatever other appeals to plain healthcare users’ contributions you have planned. Best, Harley


Just give it to @tony and be done with this issue. Is LibreHealth EHR necessary?

I’ve got little skin in this game, but wanted to point out a few things.

The “docs-as-code” movement has been growing in popularity over the past several years. Most new startup FOSS projects don’t use wikis. If they do, they use the built-in wiki in the repo service they’re using. And even then, they stop if it becomes more than a few pages and switch to a docs repo built on Markdown or RST formatted text files, using CI to deploy to a static site.
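As a sketch of what that CI step can look like, here is a hypothetical GitLab Pages job for a Sphinx docs repo (shown for GitLab CI since the project uses GitLab; the job name, image, and paths are assumptions, not the project’s actual config):

```yaml
# .gitlab-ci.yml -- hypothetical Pages job for a Sphinx docs repo
pages:
  image: python:3
  script:
    - pip install sphinx sphinx-rtd-theme
    - sphinx-build -b html docs/ public/   # GitLab Pages serves the public/ directory
  artifacts:
    paths:
      - public
  only:
    - master
```

On every push to master, the site is rebuilt and deployed automatically; contributors never touch the server.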

There are probably lots of articles about this, but one that came to my mind from the “corporate world” is https://bloomfire.com/blog/corporate-wikis-are-dead/, and it is just as applicable to FOSS communities. The reasons wikis have fallen out of favor for documentation have led to services like ReadTheDocs becoming nearly as ubiquitous as GitHub.

Also in open source projects, it’s important to think critically (and realistically!) about documentation, especially how it will be maintained and by whom. Spoiler alert: Users never come in meaningful numbers to maintain and edit documentation beyond occasional typo fixes. This is another reason why “docs as code” has become the dominant strategy: The workflow fits in well with existing development practices – even if there are “dedicated resources” doing documentation, the process of code review and releases alongside software releases “just makes sense” for development projects. The Write The Docs community of documentarians has a good guide on this here: http://www.writethedocs.org/guide/writing/beginners-guide-to-docs/.

Some examples of well-planned static site documentation for FOSS projects:


To be accurate: The tools already available to the LH community currently support static docs sites through the GitHub/GitLab Pages feature. It is not necessary, per se, to roll out a separate server and/or tools; documentation can be accomplished using tools that are generally accepted by the FOSS world as easy to write and easy to maintain. Which reminds me:

The above behavior is not the normal workflow for maintaining most static documentation sites, unless someone wants to do a massive structural rework of the doc site. Each page offers a link to its repository home, and the user can propose edits right in the browser. A real-world example:

  1. Go to https://kotlinlang.org/docs/reference/. Find something you want to change.
  2. Click the Edit Page button.
  3. Sign in to GitHub if necessary, create an account if need be.
  4. Click the big green Fork this repository and propose changes button.
  5. Edit the page in the in-browser editor.
  6. Click the green Propose file change button.

This process does not require any knowledge of git, checkouts, pull requests, anything beyond what someone would need to know to edit any popular wiki platform’s page.

The doc site maintainers then have a pull request to review and merge if they think the changes are correct. Upon commit, the site is automatically re-rendered and deployed. Putting the review process here (which does require a bit of advanced knowledge, by those that already have it) is actually a good thing, because it reduces spam and other vandalism and/or changes that disrupt the planned structure of the documentation.


Generally, my professional recommendation for anyone wanting to build infrastructure is to be conservative and only deploy new tools when one is certain that they can’t do what they need with what they currently have. This is especially true in open source projects, where resources are scarce. Here’s a good explanation of why:

So, I’d definitely encourage folks to explore existing/available tools before adding more complexity.

We discussed this at length. We decided on the wiki model.


@tony – Please consider @downey’s very logical argument. I’m actually following the conservative philosophy outlined in the blog post he linked. I don’t create/deploy servers unless I absolutely have to. Check the docs he linked; this is how everyone is doing it now, and very few projects use wikis nowadays.

https://www.docslikecode.com/articles/when-to-wiki/

Please consider these resources as well as @downey’s argument. The biggest win for docs-as-code is that you can review every change before it goes live, as opposed to a wiki, where changes just happen.

Some more:

Hey, I personally find no use in the wiki format. I have a 14-year history of evading and protesting its use. That said, the folks willing to do the writing say wiki, and if I contribute something, those folks are willing to work with what I send them. There we are.

There are lots of ways and means: With .ods you can track changes, export to whatever you like, and have everyone working off of the same set of files (if you break them up into small files). This is easy for people to swallow. Unfortunately, you need file sharing in place for that. Using a git repo, you can easily collaborate on documents and track everything without dealing with the whole word-processor “track changes” mess, but then there is the review process, where stuff gets totally bogged down. Additionally, the true killer is that you simply TOTALLY CUT OFF INPUT from non code-monkeys. We do not really want to limit our selection of Liberal Arts contributors to those who can also put forth the effort to overcome the Git learning curve. Face it: Either Git terminology and process descriptions are written so poorly with the intent to rule out use by anyone not already an UberGeek, OR they are written so poorly because the folks who CAN describe it have zero technical documentation skills. My own reaction, were I joining the project as a tech writer, would be to take one look at the Git docs and go “Oh heeeeell nooo! These people need WAAAY too much help!”

I’m not just talking about non-tech people who think of a text file as a miniature piece of paper in a hard drive, and I am not just talking about nerds who can’t even look at someone else’s shoes when they are talking to them. The documentation needs to be nearly immediately accessible to a wide range of clinical and clerical folks who have some technical know-how. I think the wiki format absolutely stinks, and is the worst possible final format you could wish for as a multi-topic piece of documentation, but it is (reportedly… I wouldn’t know) an accepted content management system in large-scale use.

Personally, I would rather we had both options in a NoSQL-type document database that either approach could interact with. I am ready to assist with MediaWiki stuff. I just hope we can use the data to generate something we can put out as an updateable e-pub or the like.

I had no intention of taking away the wiki; what I was hoping to do was see if people might change their minds. But there are bigger battles to be fought, so let’s get work done; we can always revisit later if we must. There are ways to extract the information from MediaWiki.
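For what it’s worth, MediaWiki’s standard `api.php` endpoint can return a page’s raw wikitext as JSON, so the content is recoverable later if we migrate. A sketch using only the Python standard library; the wiki URL and page title below are placeholders, not our actual deployment:

```python
import json
import urllib.parse
import urllib.request

def revision_url(api_base, title):
    """Build an api.php query URL for the latest wikitext of one page."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "titles": title,
        "format": "json",
    }
    return api_base + "?" + urllib.parse.urlencode(params)

def extract_wikitext(response_json):
    """Pull the wikitext string out of the API's nested JSON response."""
    pages = response_json["query"]["pages"]
    page = next(iter(pages.values()))   # only one page was requested
    return page["revisions"][0]["*"]    # '*' holds the raw wikitext (legacy format)

# Fetching a page would look like this (placeholder URL, not run here):
# url = revision_url("https://wiki.example.org/w/api.php", "Main_Page")
# data = json.load(urllib.request.urlopen(url))
# print(extract_wikitext(data))
```

From the recovered wikitext, tools like Pandoc can convert to Markdown or reStructuredText, so nothing written in the wiki is locked in.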