
Introducing Validation Manager’s new study for small-scale comparisons

September 5, 2022


Validation Manager has been built to make it easy for you to do your validations and verifications according to the latest protocols and guidelines. But during the many years we’ve worked with laboratories across Europe, we’ve learned that our users also need simpler protocols. For example, in periodic parallel instrument comparisons and reagent lot changes, there’s no point in running an extensive comparison study with tens of samples spread evenly across the measuring range: you can get the information you need with a smaller number of samples, and if something looks suspicious in the results, you can always add more samples for analysis later.

That is to say, although our users have been able to do such verifications in Validation Manager, the tools they’ve had haven’t been quite optimal. Therefore, we’ve released some improvements to make these kinds of verifications simpler and more effective for you.

So, what has changed?

Lot-to-lot comparison easier than ever

In Validation Manager, you don’t just compare some sets of data and later wonder where they came from. For better traceability and objectivity, each verification starts with a plan: what we are verifying, and what kinds of samples and protocols we will use. The plan also enables automating the data flow, building all your reports automatically as soon as you’ve imported your data, and viewing verification reports for multiple instruments and analytes simultaneously.

For example, when doing comparisons, you need to plan comparison pairs to tell Validation Manager which measurement result sets are to be compared with each other. We have now improved the functionality related to creating these pairs. In our new Comparison study, if you have reagent lots defined on your test, you can build comparison pairs between these reagent lots.

Planning reagent lot comparison pairs in Validation Manager.

We also provide two options for supplying reagent lot information with your data: include the lot information in your import, or add it in the new Test Runs view.

Setting reagent lots for Test Runs in Validation Manager.

Small verifications more effective than ever

Validation Manager has always calculated bias for you using a regression model. But with small data sets, a regression estimate doesn’t necessarily make sense. That’s why we now give you new options when planning your goals: in the new comparison study, you can choose whether your result overview table shows the mean difference, the regression-based bias, or neither.

We have also added two new parameters that may be more useful in small verifications. First, you can now set a separate goal for the difference between individual samples. Second, if you measure replicates, we can now calculate the SD and %CV of the sample-specific results, so you also get precision estimates at each sample level.
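To make the arithmetic concrete, here is a minimal sketch in plain Python of the sample-specific statistics described above. The replicate values are made up for illustration; this is not Validation Manager’s code, just the standard definitions of mean difference, SD, and %CV.

```python
# Illustrative only: per-sample statistics from replicate measurements.
from statistics import mean, stdev

# Hypothetical replicate results for one sample, measured with both methods.
candidate_replicates = [5.1, 5.3, 5.0]
comparative_replicates = [4.8, 5.0, 4.9]

candidate_mean = mean(candidate_replicates)
comparative_mean = mean(comparative_replicates)

# Sample-specific difference between the two measurement procedures.
difference = candidate_mean - comparative_mean
percent_difference = 100 * difference / comparative_mean

# Sample-level precision estimates from the candidate replicates.
sd = stdev(candidate_replicates)
cv_percent = 100 * sd / candidate_mean

print(f"difference: {difference:.2f} ({percent_difference:.1f} %)")
print(f"SD: {sd:.2f}, CV: {cv_percent:.1f} %")
```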

Now you might be wondering: why would I want to use replicates? In extensive comparisons with tens of samples, replicates are not commonly used. But if you make a comparison using only a few sample levels, the random error in the results easily makes them difficult to interpret. To avoid that, you’d want to measure some replicates of each sample and compare the mean values of each sample. Since the standard deviation of a mean of n replicates is only 1/√n of the standard deviation of a single result, the effect of random error diminishes, and your results will better describe the bias between the compared measurement procedures. And by comparing the sample-specific differences with estimates of precision, you can get some sense of the significance of your bias estimate.

Bias estimated using five samples. On the left (graph a), there are no replicates, so there’s a lot of variance in the results. On the right (graph b), each sample is measured three times with both methods, and the mean values are used for comparison. The data set in graph a is a subset of the data set in graph b. Comparing the scales and confidence intervals of the two graphs, we can clearly see that there is less variance in graph b.
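If you want to see the 1/√n effect behind graphs a and b in numbers, here is a small self-contained simulation. The true value and analytical SD are assumed values chosen for illustration, not taken from the figure’s data.

```python
# Simulation of why replicates help: the SD of a mean of n replicates
# is sigma / sqrt(n), so averaging shrinks the random error.
import random
import statistics

random.seed(1)
true_value = 10.0
sigma = 0.5        # assumed analytical SD of a single measurement
n_replicates = 3

def measure(n):
    """Mean of n simulated measurements with Gaussian random error."""
    return statistics.mean(random.gauss(true_value, sigma) for _ in range(n))

singles = [measure(1) for _ in range(1000)]
means_of_3 = [measure(n_replicates) for _ in range(1000)]

print(f"SD of single results:     {statistics.stdev(singles):.3f}")
print(f"SD of means of 3 results: {statistics.stdev(means_of_3):.3f}")
# The second SD is roughly sigma / sqrt(3), i.e. about 42 % smaller.
```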

Why a new way to combine bias and precision?

Some of our users may be puzzled about why we wanted to combine bias and precision in the same study again. We already did that with the ANOVA protocol in our Quantitative Accuracy study, which is specifically meant for cases where you measure replicates of only a couple of sample levels.

The thing is, estimating bias and precision is at the core of verifications, but different situations call for different approaches. The ANOVA protocol is at its best with large numbers of replicates and with measurements divided over multiple days to separate different precision components. But if you don’t expect anything to change and mostly use the replicates to make the difference calculation more reliable, the new study gives you a more useful view of the results.

For example, when estimating bias based on EQA results, the new comparison study makes it easier to get an overview of the results, examine them, and draw your conclusions. You simply use your own measurement results as the candidate results and compare them to the true values reported by the EQA provider. You don’t need replicated measurements to do this, but if you can divide the EQA sample into a few replicates, you can compare the mean of these replicates to the true value to reduce the effect of random error on your bias estimate. Just remember that this is a separate examination from the actual EQA round, and you should always follow the guidance of the EQA round for measuring and reporting your EQA results.

Comparing results of two EQA samples to their true values reported by the EQA provider. On the left, each sample has been measured only once. On the right, three replicates are used to get some information about the effect of imprecision in the measurement.
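As a rough illustration of that calculation (with hypothetical sample IDs, target values, and replicate results, not an actual EQA data set), the per-sample bias against an EQA target could be computed like this:

```python
# Illustrative only: bias of replicate means against EQA target values.
from statistics import mean

eqa_samples = {
    # sample id: (true value from EQA provider, own replicate results)
    "EQA-1": (6.2, [6.4, 6.3, 6.5]),
    "EQA-2": (11.8, [11.5, 11.6, 11.4]),
}

for sample_id, (true_value, replicates) in eqa_samples.items():
    bias = mean(replicates) - true_value
    bias_percent = 100 * bias / true_value
    print(f"{sample_id}: bias {bias:+.2f} ({bias_percent:+.1f} %)")
```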

There you have it, a simpler and more effective way to do small verifications. We hope you enjoy these new improvements. And as always, feel free to drop us a line at [email protected] if you have any comments or questions!


Many improvements in Validation Manager have been developed together with our customers to meet their needs, and these improvements are no exception. We’d therefore like to use this opportunity to thank the Validation Manager users who have given us feedback. The input from some of our Norwegian customers, in particular, has been crucial in designing these improvements.
