Two of our most used studies at the moment are about measuring bias or difference between methods or instruments producing quantitative results. This is something practically all laboratories should be doing from time to time, at least to some extent. The Measurement Procedure Comparison study is the more flexible one, enabling many kinds of method comparisons. For cases where only parallel instruments are compared, we offer a slightly simpler option, the Instrument Comparison study.
Here’s a brief introduction to these studies. We’ll briefly discuss what you need to consider when conducting them and what tools we offer to make high-quality validations easy.
What’s it all about?
Even though we’ve tried to make these studies super easy, there are many things you need to understand to properly interpret the comparison results given by Validation Manager. So let’s briefly go through some fundamental questions you need to consider in your validations and verifications.
What are you measuring?
Do you want to find out the difference between two methods, or do you want to estimate the bias of a new method? Are you comparing against a reference method, or against a somewhat biased method? Do you want to measure only systematic error, or do you want results that represent real use, so that random error is also present in the results?
The answers to these questions affect what kind of analysis rules you should use in your study. With Validation Manager’s default settings, you are estimating bias in a setup that represents real use, using a comparative method that is assumed to have some bias. If you want to do something else, just adjust these settings in the analysis rules of your study plan.
How to evaluate bias?
If the difference plot shows a constant (absolute or relative) bias throughout the measurement range, and the results seem to follow a normal distribution around the mean difference, the (absolute or relative) mean difference gives a good estimate of the bias. In this case the number of samples does not need to be very large. This is often the situation when comparing parallel instruments.
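To make this concrete, here is a minimal sketch of estimating bias as a mean difference in Python. The paired results and all numbers are hypothetical, and the sketch only illustrates the idea; Validation Manager does this kind of analysis for you.

```python
import numpy as np
from scipy import stats

# Hypothetical paired results (candidate vs. comparative method)
comparative = np.array([2.1, 3.4, 5.0, 7.8, 10.2, 12.9, 15.5, 18.1])
candidate   = np.array([2.3, 3.5, 5.2, 8.1, 10.4, 13.3, 15.9, 18.6])

# Absolute and relative differences (candidate minus comparative)
abs_diff = candidate - comparative
rel_diff = abs_diff / comparative * 100  # in percent

# Mean difference as the bias estimate, with a 95 % confidence interval
mean_bias = abs_diff.mean()
ci_low, ci_high = stats.t.interval(0.95, len(abs_diff) - 1,
                                   loc=mean_bias, scale=stats.sem(abs_diff))

# A normality check supports using the mean difference as the bias estimate
shapiro_p = stats.shapiro(abs_diff).pvalue

print(f"Mean bias: {mean_bias:.3f} (95 % CI {ci_low:.3f} to {ci_high:.3f})")
print(f"Mean relative bias: {rel_diff.mean():.1f} %")
print(f"Shapiro-Wilk p-value for differences: {shapiro_p:.2f}")
```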
When comparing, for example, methods from different manufacturers, the measurement procedures may differ in so many ways that a larger number of samples is needed. Bias is then evaluated at different concentration levels using regression analysis. In most cases, the Passing-Bablok regression model (the default selection in Validation Manager) is the best choice, as it copes with statistical imperfections that are commonly present in medical data (mixed variability, non-symmetrical distribution of data, skewed data sets). With data sets smaller than 40 samples, other regression models may produce more accurate results, provided that certain assumptions about the statistical properties of the data are met.
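For the curious, the core idea of Passing-Bablok regression is that the slope is a shifted median of all pairwise slopes, which is what makes it robust to outliers and tolerant of non-normal data. Below is a minimal Python sketch of that idea with hypothetical data; it omits the confidence intervals that a full implementation (and Validation Manager’s report) would provide.

```python
import numpy as np

def passing_bablok(x, y):
    """Estimate slope and intercept with Passing-Bablok regression.

    Minimal sketch: slope = shifted median of all pairwise slopes,
    intercept = median of y - slope * x. Confidence intervals omitted.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)

    slopes = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            dx, dy = x[j] - x[i], y[j] - y[i]
            if dx == 0 and dy == 0:
                continue                      # identical points carry no information
            s = np.sign(dy) * np.inf if dx == 0 else dy / dx
            if s != -1:                       # slopes of exactly -1 are discarded
                slopes.append(s)

    slopes = np.sort(slopes)
    n_s = len(slopes)
    k = int(np.sum(slopes < -1))              # offset that keeps the estimate unbiased

    if n_s % 2:                               # odd count: shifted median
        slope = slopes[(n_s + 1) // 2 + k - 1]
    else:                                     # even count: geometric mean of the two middle values
        slope = np.sqrt(slopes[n_s // 2 + k - 1] * slopes[n_s // 2 + k])

    intercept = np.median(y - slope * x)
    return slope, intercept

# Hypothetical method comparison data
x = [1.0, 2.1, 3.0, 4.2, 5.1, 6.3, 7.0, 8.2]
y = [1.1, 2.0, 3.2, 4.1, 5.3, 6.2, 7.3, 8.1]
b, a = passing_bablok(x, y)
print(f"y = {b:.3f} * x + {a:.3f}")
```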
Would you like to learn more? We also provide statistical training. Contact us to order a training session for your team!
Of all the options that you have
Validation Manager offers quite a range of tools for analyzing your comparison data.
As soon as you have some data in Validation Manager, your results are shown on the report. This gives you a tool to evaluate whether your samples span the measurement range well enough, or whether you should look for samples at certain concentration levels to supplement the data set. You may also consider whether you should use more samples to get more reliable results.
To get an idea of your results at a glance, we offer many options for setting goals for your results. You can define range-specific goals, so that you can, for example, have absolute goals for low values and relative goals for high values. You can also set goals for medical decision points, i.e. exact concentration values that are crucial for clinical interpretation. Your report’s overview table shows whether these goals are met. The overview table can also show warnings, e.g. about outliers, depending on your plan selections. This makes it quite easy to see which results need your attention.
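As an illustration of the idea behind range-specific goals (not of how Validation Manager itself is configured), a check combining an absolute goal for low concentrations with a relative goal for high concentrations could look like this; the cutoff and goal values are hypothetical:

```python
def bias_within_goal(concentration, bias, cutoff=5.0,
                     abs_goal=0.3, rel_goal_pct=6.0):
    """Apply an absolute bias goal below the cutoff concentration and
    a relative (%) goal above it. All values are hypothetical."""
    if concentration < cutoff:
        return abs(bias) <= abs_goal
    return abs(bias) / concentration * 100 <= rel_goal_pct

print(bias_within_goal(2.0, 0.25))   # True: 0.25 is within the 0.3 absolute goal
print(bias_within_goal(20.0, 1.6))   # False: 8 % exceeds the 6 % relative goal
```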
When digging deeper into the results, there are many ways to get more out of the data analysis. In addition to the more familiar difference plots and regression plots, Validation Manager draws a bias plot, which shows how the bias behaves at different concentration levels in relation to your bias goals. You can also define reference intervals for your reference tests to be shown on all study graphs, so you can easily examine behavior within the relevant value ranges.
It is also possible to examine your data piecewise. This can be handy, for example, if your results span a wider range than the measurement range of your instrument. If you want, you can give these measurement ranges names.
What is your favorite feature in quantitative comparison studies?
For a more detailed explanation of what it’s all about, please see our more recent articles.