
Validation

Updated: Mar 31, 2022

How we approached our validation work, with the white paper attached. Peer-reviewed publications are linked as well.


Link to white paper:


Link to validation papers, Journal of Biomechanics:




Anyone who is familiar with the concept of “the burden of proof” understands that the individual presenting a thesis must also provide the supporting evidence. For instance, someone could say “Hey, did you know that the earth is flat?”, and I could respond with “No, it’s not”. As the presenter, they would have to prove that the earth is flat; since I am not presenting the thesis, my response is just as good as any (and in this case, entirely correct).


Though I digress a bit, I consider measurement systems in general to follow a similar principle. When a new measurement system is presented, a series of validations typically ensues to provide reassurance that the system meets the standards of the intended user. The system, in this case, is our thesis. In markerless motion capture, this is particularly important: historically, many promises have been made, and unfortunately, very few have been delivered.


When initially researching this technology, we discussed this regularly, and because of those negative connotations, we knew that we would need to provide not just an adequate amount of validation work but an extensive body of validation in order to change opinions within the community.


Accordingly, we partnered with a premier research institution to run third-party validation. The objective was to determine the capabilities of the system through a sequence of tests that gets progressively more difficult. If one test didn’t meet our standards (or theirs), there wouldn’t be much point in running the next. I’m not going to discuss them in detail here; for that, review the white paper or the validation papers themselves. My aim is to explain, more generally, the approach that was taken.


A natural starting point was a spatiotemporal validation. These are fairly basic gait parameters, but they are genuinely important for assessing function (also, the simplest experiments are my favorite!), and they are very commonly measured. Once this was collected and analyzed, we continued to the next step: comparing our measurement modality to a marker-based measurement. Walking was the main focus, as the partner institute measures a lot of walking, but we also looked at running and many functional movements (I believe these results are coming out shortly). The last test was repeatability: effectively, if we bring the same people in on multiple days, do we measure the same thing? This is a sore subject in biomechanics, because repeatability is actually very difficult.
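To make the comparison step concrete, here is a minimal sketch of two agreement metrics commonly used when comparing a markerless signal against a marker-based reference: root-mean-square error, and Bland–Altman bias with 95% limits of agreement. This is an illustration, not our actual analysis pipeline; the signals below are short synthetic stand-ins for a joint-angle time series, and the real studies and their metrics are described in the white paper.

```python
import math

def rmse(a, b):
    """Root-mean-square error between two equal-length signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic example: a markerless signal offset from the marker-based
# reference by roughly one degree.
marker = [10.0, 12.0, 15.0, 14.0, 11.0]
markerless = [11.2, 12.8, 16.1, 14.9, 12.0]

print(round(rmse(marker, markerless), 2))   # → 1.01
bias, (lo, hi) = bland_altman(marker, markerless)
print(round(bias, 2))                       # → -1.0
```

RMSE summarizes overall disagreement, while the Bland–Altman bias separates a systematic offset from random variation, which is why both tend to be reported together in method-comparison studies.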


As a final aside (because this should be about the white paper, not this blog post), we agree pretty much universally internally that validation work is an essential aspect of the technology, which is why we continue to evolve the studies and encourage people to reproduce them. While we will not redo another marker-to-markerless comparison (I am currently aware of over 10 labs that have done this), we will always continue to research and test different hypotheses in an effort to improve the accuracy of our system. For me, this is pretty cool: as long as our input data is just calibrated video, everything we do in the future to improve the system’s accuracy applies to all data that has already been collected. Because I personally know so many of our customers, I really like this concept.


If this type of thinking resonates with you, get in contact with us and we can tell you more.



