MorpheusQ Diagnosis System

A digital pipeline for Fetal Alcohol Syndrome diagnosis and research

Collaborators

Rady Children's Hospital

Ganz Chockalingam

Ryuen Le

Robin Xu
 

Role

iOS Developer

UX Designer

Timeline

May 2019 - Aug 2020

Languages & Tools

Swift

AWS

Axure

Problem Space

"Fetal alcohol syndrome (FAS) is a condition in a child that results from alcohol exposure during the mother's pregnancy." Early diagnosis relies on measurements of physiological features such as lip shape and palpebral fissure length. But the lack of consensus on measurement methods among physicians, together with the scarcity of physicians with such expertise, makes both diagnosis and further research difficult.

Our Approach

We conducted field research with expert physicians and identified the problem in the current diagnosis method. We then proposed and prototyped a 3D scan based tool for measuring facial feature length as well as a video frame based continuous scrolling scale for lip shape matching.

My Contribution

I designed the app's UX and built the iOS app all the way through its v1.0 App Store distribution. With my PM, I conducted field research and walked physicians through our prototype onsite. With the VR team, I coordinated the cloud data format synchronization and set up the AWS S3 storage.


MorpheusQ

 
/ Background

// About Fetal Alcohol Syndrome (FAS)

Fetal Alcohol Syndrome is a condition among children whose mothers consumed alcohol during pregnancy. The syndrome may cause irreversible birth defects and other cognitive-development challenges. Diagnosis primarily relies on identifying common FAS-related features, such as short palpebral fissures, a smooth philtrum, and a thin upper lip. An early diagnosis can help parents better support a child's physical and mental development.

Identifying Problems

 

// Our Methods

We conducted user research with pioneering physicians and researchers in the domain from Rady Children's Hospital and San Diego State University. Simulated diagnoses were carried out to evaluate consistency and error rates in the existing method. The following problems were identified.

// Problems Identified

  1. Measuring the palpebral fissure length with a ruler is prone to errors.

    • Children's movement, and safety concerns about placing a ruler close to the eye, lead to alignment errors.

    • Rulers do not measure the true 3D distance between eye corners.

    • The angle and position at which a ruler is placed are not consistent among physicians.

  2. Current lip shape reference guide lacks precision and accuracy.

    • The reference scale, made up of 5 discrete pictures, lacks intermediate steps and thus precision.

    • Children of different ethnicities naturally have different physiological characteristics, making a single reference guide inaccurate.

  3. As a result, the diagnosis relies on experience, leaving medical practitioners with less expertise unable to perform it.

  4. The lack of descriptions for measurement methods makes it difficult for researchers to use clinical data.

  5. Field researchers and physicians do not have a convenient way to collect and record data on the go.

 
 
Design Requirements

 

// Functional

  • Measure the true 3D distance between the eye corners (accuracy).

  • Achieve higher precision of measurements for both distance and lip ranking.

  • Standardize results among physicians.

// Non-functional

  • Minimal learning curve for physicians with varying levels of experience with technology.

  • Highly reliable: even a few crashes would be enough to get the project rejected.

  • Easy setup and fast runtime. Children cannot sit still for very long.

 
The Product (beta)

The app is not intended for public use. In the next stage, it will allow institutional access. You may still check out the current store page here

/ Later Features Not Shown In the Demo

// Anti-bias Slider Control

For the continuous lip-rank video scale, I replaced the video progress bar with forward/backward buttons to reduce physicians' bias from guessing the rank based on the video's play time.


// Hands-free Control for Improved Stability

To combat potential scan inaccuracies from hand shakes as well as to free users from having to reach the capture button by hand, I added two more modalities to trigger the capture:

  1. I added a voice trigger using the Speech framework, detecting a range of phrases similar to "measure eyes" (e.g., "measure eye", "measure ice") once the face is within the appropriate distance range.

  2. I allowed Bluetooth remote controls to trigger the capture, the same way some selfie sticks work.
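The voice trigger above can be sketched as follows. This is a minimal sketch, not the app's actual code: `onTrigger` is a hypothetical callback, the accepted phrase list is illustrative, and microphone/speech permissions are assumed to be granted already.

```swift
import Speech
import AVFoundation

// Sketch of a voice trigger: listen for phrases similar to "measure eyes"
// and fire a callback. Assumes speech/microphone authorization is handled
// elsewhere; names here are illustrative, not the app's real API.
final class VoiceTrigger {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    // Accept near-homophones so slight mis-recognitions still trigger.
    private let phrases = ["measure eyes", "measure eye", "measure ice"]

    var onTrigger: (() -> Void)?

    func start() throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true   // react as soon as the phrase appears
        self.request = request

        // Feed microphone buffers into the recognition request.
        let node = audioEngine.inputNode
        let format = node.outputFormat(forBus: 0)
        node.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer?.recognitionTask(with: request) { [weak self] result, _ in
            guard let self = self,
                  let text = result?.bestTranscription.formattedString.lowercased()
            else { return }
            if self.phrases.contains(where: text.contains) {
                self.onTrigger?()   // e.g., start the 3D capture
            }
        }
    }
}
```

In the app, `onTrigger` would be gated on the face-distance check described above, so speech alone cannot fire a capture when the child is out of range.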

// Improved Consistency

On the result saving page, I replaced the small popup box design with a page sheet, consistent with the lip shape page design as well as the new iOS navigation standard.
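In UIKit this change is a one-line presentation-style switch. A minimal sketch, with `SaveResultViewController` as a hypothetical stand-in for the real result-saving screen:

```swift
import UIKit

// Hypothetical result-saving screen, standing in for the app's real one.
final class SaveResultViewController: UIViewController {}

// Present the save screen as a page sheet (the default card-style sheet
// since iOS 13) instead of a small popup box.
func presentSaveSheet(from host: UIViewController) {
    let saveVC = SaveResultViewController()
    saveVC.modalPresentationStyle = .pageSheet
    host.present(saveVC, animated: true)
}
```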

/ Two Key Functions Explained

 

// Eye (palpebral fissure length measurement):

  1. The child goes through a 10-second 3D face scan.

  2. A 3D face model is created, and two measurement markers are initialized at approximate eye-corner positions using facial-landmarking algorithms.

  3. The physician adjusts the markers to the desired eye-corner positions.

  4. Results are saved securely to the cloud. (The other input boxes in the demo are for research purposes; we are comparing the digital result with the physical ruler-based result.)
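The measurement in step 3 reduces to the straight-line 3D distance between the two marker anchors. A minimal sketch with plain coordinate triples (the real app would use SceneKit's vector types); the example values are illustrative:

```swift
// A 3D point in the face model's coordinate space (meters).
struct Point3D {
    var x, y, z: Double
}

// True 3D distance between the two eye-corner markers -- unlike a ruler,
// which only approximates a straight-line chord near the eye.
func palpebralFissureLength(inner: Point3D, outer: Point3D) -> Double {
    let dx = outer.x - inner.x
    let dy = outer.y - inner.y
    let dz = outer.z - inner.z
    return (dx * dx + dy * dy + dz * dz).squareRoot()
}

// Example: markers 24 mm apart horizontally with a 7 mm depth offset.
let inner = Point3D(x: 0.000, y: 0.0, z: 0.000)
let outer = Point3D(x: 0.024, y: 0.0, z: 0.007)
let lengthMM = palpebralFissureLength(inner: inner, outer: outer) * 1000
// lengthMM is 25.0 (sqrt(24^2 + 7^2) = 25)
```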

 

// Mouth (lip and philtrum shapes):

  1. The physician selects the video frame whose lip shape most closely resembles the subject's by:

    • Clicking either the forward or backward button for fine-tuning.

    • Holding either button to jump many frames quickly.

  2. The selected frame index is converted to a score from 1.00 to 5.00. The result is hidden from the physician and stored securely in the cloud.
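The frame-to-score conversion in step 2 can be sketched as a linear mapping. The frame count below is an assumption for illustration; the real scale's length is not specified here.

```swift
// Map a selected frame index onto a lip-rank score in [1.00, 5.00].
// totalFrames is illustrative; the actual video length is not specified here.
func lipRankScore(frameIndex: Int, totalFrames: Int) -> Double {
    precondition(totalFrames > 1 && (0..<totalFrames).contains(frameIndex))
    let fraction = Double(frameIndex) / Double(totalFrames - 1)   // 0.0 ... 1.0
    let score = 1.0 + fraction * 4.0                              // 1.00 ... 5.00
    return (score * 100).rounded() / 100                          // two decimal places
}

// First, middle, and last frames of a hypothetical 201-frame scale:
let low  = lipRankScore(frameIndex: 0,   totalFrames: 201)   // 1.0
let mid  = lipRankScore(frameIndex: 100, totalFrames: 201)   // 3.0
let high = lipRankScore(frameIndex: 200, totalFrames: 201)   // 5.0
```

Because the score is derived only from the frame index, it can be computed and uploaded without ever being shown to the physician.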

 
/ Design Choices

// Measurement Tool

I tested various libraries against various existing iPhone cameras and settled on the front TrueDepth camera for maximum precision.

 

The iOS default AR ruler app (left), which uses the back cameras, offers only about half-inch precision. Our product, which processes the front TrueDepth camera's point cloud with a 3D mesh library, achieves sub-millimeter precision (the true diameter of a 1 cent coin is 19.05 mm; our measurements consistently produce an error within ±0.5 mm).


A previous version


Latest version

// 3D Measurement Marker Design

Design considerations for the 3D measurement marker:

  • Easy to grab the marker

  • The finger should not block the view of the anchor point

  • Anchor point is small enough so that precision is not lost.

  • User should move the point as if it is on a 2D surface (User should not worry about the z-axis positioning).

  • The marker should always be fully visible (never partially blocked by the head model).

Marker anatomy: the visual anchor (the actual measurement is taken from the center of the anchor), the visual handle, and the invisible handle that the user actually "grabs" onto.

To accommodate all these requirements, I wrapped the marker in an invisible cylindrical handle and detect whether the user's touch intersects with that handle.

In addition, the measurement marker's anchor is half-buried in the face model surface. The handle projects 45 degrees upward away from the face surface and 45 degrees to the right away from the nose, so it is never visually blocked by the face model.
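In SceneKit terms, the marker described above could be built roughly like this. This is a sketch, not the app's actual implementation: sizes, offsets, and colors are illustrative, and making fully transparent geometry hit-testable may require tuning hit-test options.

```swift
import SceneKit
import UIKit

// Sketch of the two-part measurement marker: a small visible anchor sphere
// plus a larger invisible cylindrical handle the finger actually grabs.
// All dimensions are illustrative, not the app's real values.
func makeMeasurementMarker() -> SCNNode {
    let marker = SCNNode()

    // Visible anchor: small so precision isn't lost; in the app it sits
    // half-buried in the face mesh.
    let anchor = SCNNode(geometry: SCNSphere(radius: 0.0008))
    anchor.geometry?.firstMaterial?.diffuse.contents = UIColor.red
    marker.addChildNode(anchor)

    // Invisible handle: big enough to grab, offset 45 degrees up from the face
    // surface and 45 degrees toward the right so neither the finger nor the
    // head model occludes the anchor.
    let handle = SCNNode(geometry: SCNCylinder(radius: 0.004, height: 0.02))
    handle.opacity = 0.0   // invisible; hit-test options may need adjusting
                           // so transparent geometry is still returned
    handle.position = SCNVector3(0.007, 0.007, 0.01)
    handle.eulerAngles = SCNVector3(Float.pi / 4, 0, -Float.pi / 4)
    marker.addChildNode(handle)

    return marker
}
```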

// Measurement Page Gesture Design

  1. Get the user's touch location in the screen view.

  2. Use ray casting to find the intersection of the ray extending from that point with any 3D geometry in the scene.

  3. If the ray intersects the invisible marker handle:

    • Get the 2D position of the marker's anchor point on the screen using reverse ray casting.

    • Translate that 2D position by the amount of the finger's panning gesture.

    • Ray cast from the translated 2D position to find the intersection with the face model, and move the marker's anchor point to that position.

  4. Otherwise, pan the viewing camera:

    • A vertical pan moves the viewing frame vertically.

    • A horizontal pan rotates the head model around the vertical axis.
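The gesture flow above can be sketched as a single pan-gesture handler. This is a sketch under assumptions: an `SCNView` hosting the face model, node references wired in elsewhere, and illustrative gain constants for the camera pan.

```swift
import SceneKit
import UIKit

// Sketch of the measurement-page pan gesture. Node references and gain
// constants are illustrative; the real app's values are not shown here.
final class MeasureGestureHandler {
    let sceneView: SCNView
    let markerAnchor: SCNNode   // visible anchor whose center is measured
    let handleNode: SCNNode     // invisible cylindrical grab handle
    let headNode: SCNNode       // the 3D face model
    private var draggingMarker = false

    init(sceneView: SCNView, markerAnchor: SCNNode,
         handleNode: SCNNode, headNode: SCNNode) {
        self.sceneView = sceneView
        self.markerAnchor = markerAnchor
        self.handleNode = handleNode
        self.headNode = headNode
    }

    @objc func handlePan(_ pan: UIPanGestureRecognizer) {
        let location = pan.location(in: sceneView)
        let translation = pan.translation(in: sceneView)
        defer { pan.setTranslation(.zero, in: sceneView) }  // consume per-event delta

        if pan.state == .began {
            // Steps 1-2: ray cast from the touch point into the scene.
            let hits = sceneView.hitTest(location, options: nil)
            draggingMarker = hits.contains { $0.node === handleNode }
        }

        if draggingMarker {
            // Step 3: project the anchor to 2D ("reverse ray casting") ...
            let projected = sceneView.projectPoint(markerAnchor.worldPosition)
            // ... translate that 2D position by the pan delta ...
            let moved = CGPoint(x: CGFloat(projected.x) + translation.x,
                                y: CGFloat(projected.y) + translation.y)
            // ... then ray cast back onto the face mesh and move the anchor,
            // which keeps the marker gliding along the surface (no z-axis worry).
            if let hit = sceneView.hitTest(moved, options: nil)
                .first(where: { $0.node === headNode }) {
                markerAnchor.worldPosition = hit.worldCoordinates
            }
        } else {
            // Step 4: no handle hit, so pan the view instead.
            headNode.eulerAngles.y += Float(translation.x) * 0.01            // rotate head
            sceneView.pointOfView?.position.y += Float(translation.y) * -0.001 // move frame
        }
    }
}
```

Checking the handle hit only at `.began` keeps the drag stable: once grabbed, the marker stays grabbed even if the finger momentarily slides off the thin handle geometry.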