Instrument calibration is an essential phase in most measurement procedures. Calibration is the set of operations that establishes the relationship between the output of a measurement system (for example, an instrument's response) and the accepted values of calibration standards (for instance, their known analyte content). Understanding the calibration curve is important because it underlies a large number of analytical methods.
How is a calibration curve constructed?
At its most basic, constructing a calibration curve begins with preparing a group of standards that contain known amounts of the analyte of interest. The instrument response for each standard is then measured, and the relationship between instrument response and analyte concentration is established. This relationship is then used to transform measurements made on test samples into estimates of the amount of analyte present.
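As a minimal sketch of this idea, the following Python snippet fits a straight-line calibration function to hypothetical standards (the concentrations and responses are made-up illustrative values, not data from any real instrument) and then inverts it to estimate the concentration of a test sample:

```python
import numpy as np

# Hypothetical calibration standards: known analyte concentrations (e.g. mg/L)
# and the corresponding instrument responses (e.g. absorbance units).
concentrations = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
responses = np.array([0.002, 0.105, 0.208, 0.310, 0.415, 0.518])

# Fit a straight-line calibration function: response = slope * conc + intercept.
slope, intercept = np.polyfit(concentrations, responses, deg=1)

# Invert the calibration function to turn a test sample's measured response
# into an estimate of its analyte concentration.
test_response = 0.260
estimated_conc = (test_response - intercept) / slope

print(f"slope = {slope:.4f}, intercept = {intercept:.4f}")
print(f"estimated concentration: {estimated_conc:.2f}")
```

A straight line is assumed here for simplicity; many methods show linear response over a limited working range, and a different model may be needed outside it.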
The calibration process
A number of stages go into calibrating an analytical instrument. A logical sequence of steps looks like this:
- Planning of the experiments;
- Making the relevant measurements;
- Plotting the results;
- Carrying out regression analysis on the data, which will help obtain the calibration function;
- Assessing the results of this regression analysis;
- Using the calibration function to estimate the values of the test samples;
- Evaluating the uncertainty of the values obtained.
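The regression, assessment, estimation, and uncertainty steps above can be sketched as follows. The data are again hypothetical, and the uncertainty estimate uses the standard inverse-prediction formula for a straight-line calibration (residual standard deviation propagated through the fitted line):

```python
import numpy as np

# Hypothetical standards (concentrations) and instrument responses.
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.003, 0.107, 0.204, 0.312, 0.413, 0.520])
n = len(x)

# Regression analysis: obtain the calibration function.
slope, intercept = np.polyfit(x, y, deg=1)

# Assess the regression: residual standard deviation s_y/x.
residuals = y - (slope * x + intercept)
s_yx = np.sqrt(np.sum(residuals**2) / (n - 2))

# Use the calibration function: estimate a test-sample concentration
# from the mean of m replicate responses.
y0 = 0.259  # mean response of the test sample (hypothetical)
m = 3       # number of replicate measurements of the test sample
x0 = (y0 - intercept) / slope

# Evaluate the uncertainty of the estimate (standard deviation of x0).
sxx = np.sum((x - x.mean())**2)
s_x0 = (s_yx / slope) * np.sqrt(
    1/m + 1/n + (y0 - y.mean())**2 / (slope**2 * sxx)
)

print(f"estimated concentration: {x0:.2f} +/- {s_x0:.2f} (1 s.d.)")
```

The uncertainty term shows why well-spaced standards matter: a larger spread of standard concentrations increases `sxx` and so shrinks the uncertainty of the predicted value.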