A common question when analyzing large amounts of data is how much exceptional values (outliers) influence the results. Statistics addresses this with two standard ways of summarizing a "normal" value across many data points: the median and the average.
The median is the middle value of the data points once they are sorted. Because it depends only on rank order, not on magnitude, the median is unaffected by exceptionally high or low data.
The average, also known as the "mean," instead sums all of the data points and divides by their count. Because every value contributes to the sum, the average is pulled toward exceptionally high or low data.
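A minimal sketch makes the difference concrete. The salary figures below are hypothetical, chosen so one outlier dominates; Python's standard-library `statistics` module supplies both measures.

```python
from statistics import mean, median

# Illustrative data: four typical salaries plus one exceptionally high one.
salaries = [42_000, 45_000, 48_000, 50_000, 250_000]

# The mean sums everything, so the single outlier pulls it up sharply.
print(mean(salaries))    # 87000

# The median is the middle of the sorted values, untouched by the outlier.
print(median(salaries))  # 48000
```

Here the average (87,000) sits well above what any of the four typical employees earn, while the median (48,000) still describes a representative salary.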