# Why do we pretend we are 100% certain of everything?

A statistical view of human overconfidence.

Start reading any book on statistics or scientific methods and you’ll quickly run into concepts such as the **mean** (the average value of a quantity), **variance** (a measure of how much a value spreads around its mean), and **95% confidence intervals** (a range that, with 95% confidence, contains the true value).
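All three are easy to compute. Here's a minimal Python sketch using only the standard library; the battery-life measurements are made up for illustration, and the interval uses the normal approximation (a t-distribution would be slightly more accurate for a sample this small):

```python
import statistics
import math

# Hypothetical battery-life measurements in hours (made-up data)
samples = [9.8, 10.2, 10.1, 9.6, 10.4, 9.9, 10.0, 10.3]

n = len(samples)
mean = statistics.mean(samples)          # the average value
variance = statistics.variance(samples)  # sample variance: spread around the mean
std_err = math.sqrt(variance / n)        # standard error of the mean

# Rough 95% interval for the mean using the normal approximation (z = 1.96)
low, high = mean - 1.96 * std_err, mean + 1.96 * std_err

print(f"mean = {mean:.2f} h, variance = {variance:.3f}")
print(f"95% CI for the mean: [{low:.2f}, {high:.2f}] h")
```

Reporting "[9.85, 10.22] hours over 8 trials" carries far more information than a bare "10 hours".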

However, jump into a so-called *“intellectual source”* of information like the Financial Times, Apple's spec pages, or Wikipedia, and you'll see absolutely no trace of the variability behind the numbers.

Whether it's because it's too much work or because we don't understand it yet, for some strange reason we still tend to treat numbers as absolute values.

Yet any number inherently carries a degree of *variability* (natural change from one measurement to the next) and *uncertainty* (possible error from imprecise measurement).

# Why is it that we still don’t use 95% confidence intervals to represent all numbers?

If we talk about fake news, it's no wonder so many articles and research papers are hard to replicate or source figures for, because we can't see:

a.) the number of trials/measurements behind a figure (often just one), and

b.) the inherent range within that number.
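To see why (a) and (b) matter, here's a small Python simulation; the true value, noise level, and trial count are all invented for illustration. A single trial looks like an absolute number, while repeated trials reveal the range:

```python
import random
import statistics
import math

random.seed(42)  # fixed seed so the simulation is reproducible

# Hypothetical quantity with true value 100, measured with noise (sd = 5)
def measure():
    return random.gauss(100, 5)

one_off = measure()  # a single trial, reported as if it were the truth

trials = [measure() for _ in range(30)]
mean = statistics.mean(trials)
half_width = 1.96 * math.sqrt(statistics.variance(trials) / len(trials))

print(f"single measurement:          {one_off:.1f}")
print(f"30 trials, mean with 95% CI: {mean:.1f} ± {half_width:.1f}")
```

The single measurement can easily land several units away from the true value, with nothing in the reported number to warn you of that.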

## This is as bad as saying the price of bitcoin will forever equal the average of its ups and downs over the past 5 years.

I really think there is huge potential to advance not just the way we use numbers (as ranges rather than absolute values) but also our perception of them. I still see people treat a single number as a universal truth, a 100% confidence interval, which is beyond silly.

Perhaps what's needed is better tools and standards to encourage and ease the transition from absolute numbers to ranges.

Would love to know your thoughts in the comments.