Let's start with an example. The length of a pencil measured with a ruler is \(5.4\text{cm}\), and the measurement has an uncertainty of \(\pm 0.1\text{cm}\). This means the true length lies somewhere between \(5.3\text{cm}\) and \(5.5\text{cm}\).
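As a quick sketch (using the pencil numbers above), the range a measurement allows is just the value plus or minus its uncertainty:

```python
# Bounds implied by a measurement and its uncertainty (pencil example).
value = 5.4        # measured length in cm
uncertainty = 0.1  # ruler's uncertainty in cm

lower = round(value - uncertainty, 1)  # 5.3 cm
upper = round(value + uncertainty, 1)  # 5.5 cm
print(f"The length lies between {lower} cm and {upper} cm")
```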

Because you cannot be certain that the length is exactly \(5.4\text{cm}\) — it could be up to \(0.1\text{cm}\) more or less — we say the measurement has an uncertainty. This uncertainty depends on how finely your scale can measure.

In this case your measurement has an **uncertainty** of \(\pm 0.1\text{cm}\), which is the maximum expected difference between the measured value and the actual value. You cannot state the answer as exactly \(5.4\text{cm}\) because your measuring device is only so accurate and precise.

The uncertainty depends on the measuring device you use — specifically, on how finely it can measure, that is, its resolution.

For example, suppose you measure the thickness of a book with a more precise vernier calliper that can read to \(0.01\text{cm}\) and find the value to be \(4.34\text{cm}\). Your value then has an uncertainty of \(\pm 0.01\text{cm}\): it lies somewhere between \(4.33\text{cm}\) and \(4.35\text{cm}\).

The value together with its uncertainty is written as \(4.34\pm 0.01\text{cm}\). If your measuring device could read to \(0.001\text{cm}\), the uncertainty would be \(\pm 0.001\text{cm}\). The greater the accuracy and precision of the measuring device, the smaller the uncertainty.
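The idea that the uncertainty follows from the device's smallest readable division can be sketched as a small helper (the function name `report` is just an illustration, not a standard routine):

```python
# Hypothetical helper: report a measurement with the uncertainty taken as
# +/- one step of the device's smallest readable division, as in the text.
def report(value_cm: float, resolution_cm: float) -> str:
    return f"{value_cm} ± {resolution_cm} cm"

print(report(4.34, 0.01))    # vernier calliper reading: 4.34 ± 0.01 cm
print(report(4.342, 0.001))  # a finer device reading to 0.001 cm
```

A finer resolution shrinks the interval the true value can lie in, which is exactly why the calliper's \(\pm 0.01\text{cm}\) beats the ruler's \(\pm 0.1\text{cm}\).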