HealthMate

Personal Health Record Aggregation Web Service

HealthMate automates the work of contacting medical providers and collecting and storing a patient's up-to-date medical history. This lets users manage their data independently, without the risk of losing it, and leads to greater patient awareness and better health service.

This project was completed for the class 18-749: Building Reliable Distributed Systems.


Motivation

Currently, patients rely on their medical providers to manage their health data. If a patient needs to switch hospitals or doctors, they must ask their old provider to send their medical history to the new one. Anyone who has been through this knows that the process is rarely easy.

HealthMate gives the user direct control of their medical history across all of their providers and automates the process of aggregating past medical records to one central location.


Functional Requirements

We narrowed the scope of our project to enable two key features for users:

  1. Submitting Requests to Medical Providers for Information

    • Select a provider from a list of hospitals

    • Enter account credentials for that medical provider's patient portal website

    • Automatically parse patient portal website to retrieve medical records

    • Store medical records in DB

  2. Interacting with Medical Information

    • Users can view past medical records across multiple hospitals or organizations

    • Users can update their medical records


Initial Architecture

We chose the MEAN stack (MongoDB, Express, AngularJS, and NodeJS) and hosted the service on AWS.

Our initial system architecture consisted of four key components:

  • Front-End Web Server

  • Master Server

  • Worker Servers

  • Database


Minimum Viable Product

We quickly implemented the minimum viable product for our service. This consisted of a website with a basic user interface created using Bootstrap.

Users could create and manage accounts. Once logged in, they could provide their UPMC credentials and request that their medical records be scraped from the UPMC patient portal. Their records would then be scraped automatically and displayed on their home page for future viewing.

The Master Server manages the state of user requests and forwards each request to one of the Worker Servers. The Worker Server performs the scraping and notifies the Master Server when the scrape is complete.
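
A minimal sketch of that exchange, assuming the servers talk over HTTP; the endpoint names are hypothetical, a global fetch (Node 18+) is assumed, and scrapeRecords is the scraping routine sketched in the next example:

    // Master side: forward a scrape request to a Worker Server.
    // The /scrape endpoint name is an illustrative assumption.
    async function dispatchScrape(workerUrl, request) {
      const res = await fetch(workerUrl + '/scrape', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(request),
      });
      if (!res.ok) throw new Error('worker rejected job: ' + res.status);
    }

    // Worker side: accept the job, scrape, then notify the Master Server.
    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/scrape', async (req, res) => {
      res.sendStatus(202); // acknowledge immediately; scrape continues async
      const records = await scrapeRecords(req.body.username, req.body.password);
      await fetch(req.body.masterUrl + '/scrape-complete', { // hypothetical callback
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ requestId: req.body.requestId, records }),
      });
    });

    app.listen(4000);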

We implemented the automatic scraping of health records from UPMC's web portal using Nightmare.js, a headless browser automation library for Node.js.
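
A minimal sketch of the scraping routine; the login URL and CSS selectors are placeholders, since the real values depend on UPMC's portal markup:

    const Nightmare = require('nightmare');

    // Hypothetical URL and selectors for illustration only.
    function scrapeRecords(username, password) {
      const nightmare = Nightmare({ show: false });
      return nightmare
        .goto('https://portal.example-hospital.com/login')
        .type('#username', username)
        .type('#password', password)
        .click('#login-button')
        .wait('.medical-records') // wait for the records page to render
        .evaluate(() =>
          // Runs in the page context: collect the visible record rows.
          Array.from(document.querySelectorAll('.medical-records .record'))
            .map(row => row.textContent.trim())
        )
        .end(); // close the browser; the chain resolves with the records
    }

    scrapeRecords('alice', 'secret')
      .then(records => console.log(records))
      .catch(err => console.error('scrape failed:', err));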

From here, we shifted our focus towards designing our system to be fault tolerant, which was the main goal of this course project.


Fault Tolerance

The next step was to update our architecture so that the system could tolerate message loss and OS/process crashes.

We focused on ensuring fault tolerance for three of the four key components in our initial architecture:

  • Passively Replicated Master Server

  • Stateless Worker Servers

  • Passively Replicated Database

Our updated final architecture diagram:

Our final system has two single points of failure: the Web Server and the Coordinator Server. Given that the project was completed in a single semester, we decided not to replicate either of these components.


Passively Replicated Master Server

We ensured fault tolerance for our Master Server through passive replication: one primary Master Server actively receives requests from users, while two backup Master Servers stand by without receiving any. A Request Log stored in the DB tracks the state of every user request.

When the primary Master Server receives a new request to scrape information, it adds an entry to the Request Log with information about the request and a state of "started". When it receives confirmation that the scrape is complete, it updates the entry's state to "finished".
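
In terms of the Node.js MongoDB driver, the two transitions might look like this (the collection and field names are our illustrative guesses):

    const { MongoClient } = require('mongodb');

    async function main() {
      const client = await MongoClient.connect('mongodb://localhost:27017');
      const log = client.db('healthmate').collection('requestLog');

      // A new scrape request arrives at the primary Master Server.
      const { insertedId } = await log.insertOne({
        userId: 'user-123', // hypothetical fields
        provider: 'UPMC',
        state: 'started',
        createdAt: new Date(),
      });

      // ... the scrape is dispatched to a Worker Server ...

      // The worker reports completion; mark the entry finished.
      await log.updateOne(
        { _id: insertedId },
        { $set: { state: 'finished', finishedAt: new Date() } }
      );

      await client.close();
    }

    main().catch(console.error);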

We added a Coordinator Server that manages the primary and backup Master Servers. It receives requests from the Web Server and forwards them to the primary Master Server. Every 5 seconds, the Coordinator checks the health status of the primary Master Server. If it detects that the primary has crashed, it promotes one of the backup Master Servers to primary and notifies both backups of the election result.
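
A sketch of the Coordinator's failure detector, assuming each Master Server exposes hypothetical /health and /election endpoints and that a global fetch is available:

    const masters = ['http://master1:3001', 'http://master2:3001', 'http://master3:3001'];
    let primary = masters[0];

    setInterval(async () => {
      try {
        const res = await fetch(primary + '/health'); // hypothetical endpoint
        if (!res.ok) throw new Error('status ' + res.status);
      } catch (err) {
        // Primary unreachable: promote a backup and announce the result.
        const backups = masters.filter(m => m !== primary);
        primary = backups[0];
        await Promise.all(backups.map(b =>
          fetch(b + '/election', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ primary }),
          }).catch(() => {}) // best-effort notification
        ));
      }
    }, 5000);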

The new primary Master Server reads the Request Log and retries all requests that are not in the "finished" state. The Coordinator Server then forwards all requests to the new primary Master Server, and the system is back in a consistent state.
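
The recovery step on the new primary reduces to one query over the Request Log; dispatchScrape is the helper from the earlier sketch, and pickWorker stands in for the worker-selection logic shown in the next section:

    // New primary: re-issue every request the old primary never finished.
    async function recoverPendingRequests(log) {
      const pending = await log.find({ state: { $ne: 'finished' } }).toArray();
      for (const request of pending) {
        await dispatchScrape(pickWorker(), request); // retry the scrape
      }
    }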


Stateless Worker Servers

We kept our Worker Servers completely stateless to ensure fault tolerance. All Worker Servers are identical, and their sole function is to perform the scraping of medical records. If a Worker Server crashes during a scrape request, the Master Server detects this and retries the request on another, randomly selected Worker Server. Because the workers hold no state, we can easily spin Worker Servers up or down as demand changes.
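
A sketch of the retry logic on the Master Server, with hypothetical worker addresses; any failure simply triggers another random pick:

    const workers = ['http://worker1:4000', 'http://worker2:4000', 'http://worker3:4000'];

    // Any worker can serve any request, so pick one uniformly at random.
    function pickWorker() {
      return workers[Math.floor(Math.random() * workers.length)];
    }

    // Retry the scrape on a different randomly chosen worker if one fails.
    async function scrapeWithRetry(request, attempts = 3) {
      for (let i = 0; i < attempts; i++) {
        try {
          return await dispatchScrape(pickWorker(), request);
        } catch (err) {
          // Worker crashed or timed out; the loop picks another worker.
        }
      }
      throw new Error('all scrape attempts failed');
    }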


Passively Replicated Database

We have a passively replicated database with one Primary Node and two Backup Nodes in a MongoDB replica set, which gives us redundancy and higher data availability. The Primary Node records all changes in an operation log (the oplog). The Backup Nodes replicate this log asynchronously and apply the same operations so that their data matches the Primary Node's.
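
For reference, a three-member replica set like ours is initialized from the mongo shell roughly as follows (the hostnames and set name are placeholders):

    // Run once against the node that should start as Primary; each mongod
    // must have been launched with --replSet healthmate-rs.
    rs.initiate({
      _id: 'healthmate-rs',
      members: [
        { _id: 0, host: 'db-primary.example.com:27017' },
        { _id: 1, host: 'db-backup1.example.com:27017' },
        { _id: 2, host: 'db-backup2.example.com:27017' },
      ]
    })

Clients then connect with a connection string that lists all members and names the set (replicaSet=healthmate-rs), so the driver follows a failover automatically.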

If the Primary Node crashes, the remaining members of the replica set detect the failure, trigger an election, and promote a new node to Primary.


More Information

Sorry, but we are not actively running the web service due to AWS costs.

GitHub

Homepage image taken from here.