Validation Manager has been built to make it easy for you to do your validations and verifications according to the latest protocols and guidelines. But over the many years we’ve worked with laboratories across Europe, we’ve learned that our users also need simpler protocols. For example, in periodic parallel instrument comparisons and reagent lot changes, there’s no point in running an extensive comparison study with tens of samples spread evenly across the measuring range: you can get the information you need from fewer samples, and if something looks suspicious in the results, you can always add more samples for analysis later.
That is to say, although our users have been able to do such verifications in Validation Manager, the tools available to them haven’t been quite optimal. We’ve therefore released some improvements to make these kinds of verifications simpler and more effective.
So, what has changed?
Lot-to-lot comparison easier than ever
In Validation Manager, you don’t just compare some sets of data and later wonder where they came from. For better traceability and objectivity, we start each verification by planning it: what we are verifying, and what kinds of samples and protocols we will use. The plan also enables an automated data flow, so all your reports are built as soon as you’ve imported your data, and you can view verification reports for multiple instruments and analytes simultaneously.
For example, when doing comparisons, you need to plan comparison pairs to tell Validation Manager which sets of measurement results are to be compared with each other. We have now improved how these pairs are created: in the new Comparison study, if you have reagent lots defined on your test, you can build comparison pairs between those reagent lots.
We also provide two options for attaching reagent lot information to your data: include it in your import file, or add it in the new Test Runs view.
Small verifications more effective than ever
Validation Manager has always calculated bias for you using a regression model. But with small data sets, regression doesn’t necessarily make sense. That’s why we now give you new options when planning your goals: in the new Comparison study, you can choose to see the mean difference instead, or to leave bias out of your result overview table entirely.
We have also added two new parameters that may be more useful in small verifications. First, you can now set a separate goal for the difference observed in each individual sample. Second, if you measure replicates, we now calculate the SD and %CV of the results for each sample, so you also get precision estimates at each sample level.
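To make these parameters concrete, here is a minimal sketch of the kind of calculation involved, written in plain Python. The data, the goal value, and the structure are illustrative assumptions for this post, not Validation Manager’s actual implementation.

```python
from statistics import mean, stdev

# Hypothetical replicate results for three samples measured with the
# candidate and the comparative method (illustrative numbers only).
samples = {
    "sample 1": {"candidate": [4.1, 4.3, 4.2], "comparative": [4.0, 4.1, 4.0]},
    "sample 2": {"candidate": [9.8, 9.6, 9.9], "comparative": [9.5, 9.4, 9.6]},
    "sample 3": {"candidate": [20.4, 20.1, 20.6], "comparative": [19.8, 20.0, 19.9]},
}

DIFFERENCE_GOAL = 0.5  # assumed acceptance goal for each sample-specific difference

all_diffs = []
for name, r in samples.items():
    diff = mean(r["candidate"]) - mean(r["comparative"])  # difference of sample means
    sd = stdev(r["candidate"])                            # SD of candidate replicates
    cv = 100 * sd / mean(r["candidate"])                  # %CV at this sample level
    verdict = "OK" if abs(diff) <= DIFFERENCE_GOAL else "exceeds goal"
    print(f"{name}: diff={diff:+.2f} ({verdict}), SD={sd:.2f}, %CV={cv:.1f}")
    all_diffs.append(diff)

# Mean difference across all samples, as shown in a result overview table:
print(f"mean difference: {mean(all_diffs):+.2f}")
```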
Now you might be wondering: why would I want to use replicates? In extensive comparisons with tens of samples, replicates are not commonly used. But if you make a comparison using only a few sample levels, random error easily makes the results difficult to interpret. To avoid that, you’d want to measure some replicates of each sample and compare the sample means instead of individual results. This way, the effect of random error diminishes, and your results better describe the bias between the compared measurement procedures. And by comparing the sample-specific differences with your precision estimates, you get a sense of how significant your bias estimate is, as the sketch below illustrates.
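As a back-of-the-envelope illustration of why replicates help: the standard error of a sample mean is SD/√n, so averaging n replicates shrinks the random error by a factor of √n. The sketch below uses an assumed, purely illustrative SD and observed difference; it is an informal noise check, not the study’s formal evaluation.

```python
from math import sqrt

sd_single = 0.30  # assumed SD of a single measurement (illustrative value)

# The standard error of a sample mean is SD / sqrt(n), so averaging
# n replicates shrinks the random error by a factor of sqrt(n):
for n in (1, 3, 5):
    print(f"n={n}: SE of the sample mean = {sd_single / sqrt(n):.3f}")

# Rough check of whether an observed difference of sample means stands
# out from measurement noise, assuming both methods have a similar SD
# and n replicates per sample:
n = 3
observed_diff = 0.45
se_diff = sd_single * sqrt(2 / n)  # SE of the difference of two means
print(f"difference = {observed_diff:.2f}, ~95 % noise band = ±{1.96 * se_diff:.2f}")
```

If the observed difference sits well inside the noise band, it may be mostly random error; if it clearly exceeds the band, it more likely reflects a genuine bias.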
Why a new way to combine bias and precision?
Some of our users may wonder why we wanted to combine bias and precision in the same study once again. We already did that with the ANOVA protocol in our Quantitative Accuracy study, which is specifically meant for cases where you measure replicates of only a couple of sample levels.
The thing is, estimating bias and precision is at the core of verifications, but different situations call for different approaches. The ANOVA protocol shines when you have large numbers of replicates and can divide the measurements over multiple days to separate different precision components. But if you don’t expect anything to change and mostly use replicates to make the difference calculation more reliable, the new study gives you a more useful view of the results.
For example, when estimating bias based on EQA results, the new Comparison study makes it easier for you to get an overview of the results, examine them, and draw your conclusions. You simply use your own measurement results as the candidate results and compare them to the true values reported by the EQA provider. You don’t need replicate measurements to do this, but if you can divide the EQA sample into a few replicates, you can compare the mean of those replicates to the true value, reducing the effect of random error in your bias estimate. Just remember that this is a separate examination from the actual EQA round, and you should always follow the EQA round’s guidance when measuring and reporting your EQA results.
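As a simple illustration (with made-up numbers, not a real EQA round), the bias estimate is just the difference between your replicate mean and the assigned value:

```python
from statistics import mean

true_value = 7.2              # assigned value reported by the EQA provider (illustrative)
replicates = [7.5, 7.3, 7.6]  # your own replicate results for the EQA sample

bias = mean(replicates) - true_value
bias_pct = 100 * bias / true_value
print(f"bias = {bias:+.2f} ({bias_pct:+.1f} %) vs. assigned value {true_value}")
```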
There you have it: a simpler and more effective way to do small verifications. We hope you enjoy these new improvements. And as always, feel free to drop us a line at [email protected] if you have any comments or questions!
Many improvements in Validation Manager have been developed together with our customers to meet their needs, and these improvements are no exception. We’d therefore like to take this opportunity to thank the Validation Manager users who have given us feedback. The input from some of our Norwegian customers, in particular, has been crucial in designing these improvements.