The Complete Guide to Percentage Error
A percentage error calculator shows how far an experimental or observed measurement deviates from a known, accepted, or true value. Percentage error is one of the most essential measures in science, engineering, and quality control, letting you quantify the accuracy of any measurement.
How to Calculate Percentage Error Step by Step
- Find the absolute error: Subtract the true value from the observed value, then take the absolute value: |Observed - True|.
- Divide by the absolute true value: This normalises the error relative to the expected measurement.
- Multiply by 100: Convert the decimal to a percentage.
The result is always positive because the formula uses absolute value. It does not matter whether you over-measured or under-measured; the percentage error reflects the magnitude of the deviation.
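The three steps above can be sketched as a small Python function (the name percentage_error is ours, chosen for illustration):

```python
def percentage_error(observed, true_value):
    """Return the percentage error of an observed measurement.

    Implements: |observed - true| / |true| * 100.
    Raises ValueError when the true value is zero, since the
    formula divides by its absolute value.
    """
    if true_value == 0:
        raise ValueError("percentage error is undefined for a true value of 0")
    return abs(observed - true_value) / abs(true_value) * 100
```

Because of the absolute value, percentage_error(95, 100) and percentage_error(105, 100) both return 5.0: the direction of the deviation is discarded, only its magnitude remains.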
Worked Example: Chemistry Lab
A student measures the boiling point of water and records 101.3 degrees Celsius. The accepted true value is 100 degrees Celsius.
- Step 1: |101.3 - 100| = 1.3
- Step 2: 1.3 / |100| = 0.013
- Step 3: 0.013 x 100 = 1.3%
A 1.3% error is excellent for a school laboratory experiment.
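The arithmetic above can be reproduced in a few lines of Python:

```python
# Chemistry example: observed 101.3 degrees C vs. accepted 100 degrees C
observed, true_value = 101.3, 100.0
error_pct = abs(observed - true_value) / abs(true_value) * 100
print(round(error_pct, 1))  # prints 1.3
```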
Worked Example: Engineering Measurement
A machined part should measure exactly 50.000 mm. The inspector measures 49.82 mm.
- Step 1: |49.82 - 50.000| = 0.18
- Step 2: 0.18 / |50.000| = 0.0036
- Step 3: 0.0036 x 100 = 0.36%
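The same three steps apply to the machined part:

```python
# Engineering example: measured 49.82 mm vs. nominal 50.000 mm
measured, nominal = 49.82, 50.000
error_pct = abs(measured - nominal) / abs(nominal) * 100
print(round(error_pct, 2))  # prints 0.36
```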
Understanding the Three Types of Error
- Absolute Error: The raw numerical difference between observed and true values (e.g. 0.18 mm). Useful for understanding the physical magnitude of the deviation.
- Relative Error: The absolute error divided by the absolute value of the true value, expressed as a decimal (e.g. 0.0036). Useful in mathematical contexts.
- Percentage Error: The relative error multiplied by 100 (e.g. 0.36%). The most intuitive and widely used format for reporting experimental accuracy.
Our calculator displays all three so you can choose the most appropriate measure for your context.
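The relationship between the three measures can be sketched in one function (error_measures is an illustrative name, not our calculator's actual code):

```python
def error_measures(observed, true_value):
    """Return (absolute, relative, percentage) error for a measurement.

    absolute   = |observed - true|            e.g. 0.18 mm
    relative   = absolute / |true|            e.g. 0.0036
    percentage = relative * 100               e.g. 0.36%
    """
    if true_value == 0:
        raise ValueError("relative and percentage error are undefined for a true value of 0")
    absolute = abs(observed - true_value)
    relative = absolute / abs(true_value)
    return absolute, relative, relative * 100
```

Calling error_measures(49.82, 50.000) reproduces the engineering example: an absolute error of 0.18 mm, a relative error of 0.0036, and a percentage error of 0.36%.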
Common Sources of Error
- Systematic errors: Consistent biases in equipment calibration or methodology that push all measurements in the same direction.
- Random errors: Unpredictable fluctuations caused by environmental conditions, human judgement, or instrument precision limits.
- Gross errors: Outright mistakes such as misreading a scale, recording data incorrectly, or using damaged equipment.
Understanding the source of your error is just as important as quantifying it.
Real-World Applications
- Science: Validating experimental results against theoretical predictions in physics, chemistry, and biology.
- Manufacturing: Ensuring machined parts fall within specified tolerance bands.
- Finance: Comparing forecasted revenue against actual figures.
- Medicine: Verifying the accuracy of diagnostic equipment against calibrated standards.