I am going to explain what probability distributions are, why they are important, and how they can help you when estimating measurement uncertainty. Simply put, a probability distribution is a function, table, or equation that shows the relationship between the outcome of an event and its frequency of occurrence. Probability distributions are helpful because they give you a graphical representation of your measurement functions and how they behave.
When you know how your measurement function has performed in the past, you can more appropriately analyze it and predict future outcomes. In the next few paragraphs, I am going to explain some characteristics that you should know. A histogram is a graphical representation used to understand how numerical data is distributed. Take a look below at the histogram of a Gaussian distribution. Notice how the majority of the data collected is grouped at the center; this is called central tendency. Now look at the height of each bar in the histogram.
The height of each bar indicates how frequently the outcome it represents occurs: the taller the bar, the more frequent the outcome. Skewness is a measure of a probability distribution's symmetry. Look at the chart below to visually understand how probability distributions can skew to the left or the right. Kurtosis is a measure of tailedness and peakedness relative to a normal distribution.
As you can see from the image below, distributions with wider tails have smaller peaks, while distributions with taller peaks have narrower tails. Do you see the relationship? I know it seems like I am making you read more information than you want to know, but it is important to know these details so you can select the probability distribution that best characterizes your data. If you are uncertain how your data is distributed, create a histogram and compare it to the following probability distributions.
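If you want to try this yourself, here is a minimal sketch of building such a histogram with nothing but the Python standard library. The readings are simulated for illustration; substitute your own measurement samples.

```python
# A rough sketch of characterizing data with a histogram, using only the
# standard library. The readings are simulated for illustration.
import random
from collections import Counter

random.seed(42)
# Pretend these are 300 repeated voltage readings centered on 10.0 V
readings = [random.gauss(10.0, 0.05) for _ in range(300)]

# Sort the readings into 9 equal-width bins
lo, hi = min(readings), max(readings)
width = (hi - lo) / 9
bins = Counter(min(int((r - lo) / width), 8) for r in readings)

# Print a crude text histogram; the tallest bars cluster near the center,
# showing the central tendency described above
for b in range(9):
    print(f"bin {b}: {'#' * bins[b]}")
```

If the tallest bars sit near the middle and the shape is roughly symmetric, a normal distribution is a reasonable characterization.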
Below you will find a list of the most common probability distributions used in uncertainty analysis. After reading this article, you should be able to identify which probability distributions to use and how to reduce your uncertainty contributors to standard deviation equivalents. The normal distribution is a function that represents the distribution of many random variables as a symmetrical, bell-shaped graph whose peak is centered about the mean, with values spread symmetrically in accordance with the standard deviation.
The normal distribution is the most commonly used probability distribution for evaluating Type A data. If you do not know what Type A data is, it is the data that you collect from experimental testing, such as repeatability, reproducibility, and stability testing. To get a better understanding, imagine you are going to collect measurement samples and create a histogram of your results. The histogram for your data should resemble a shape close to a normal distribution, and the more data you collect, the closer your histogram will come to resembling one.
Now, I do not expect you to collect samples every time you perform repeatability and reproducibility tests. Instead, I recommend that you begin by collecting 20 to 30 samples for each test. This should give you a good baseline to begin with and allow you to characterize your data with a normal distribution. To reduce normally distributed data to a standard deviation equivalent, use the following equation. For example, if you collect 20 samples for a repeatability experiment and calculate the standard deviation, the value of k is 1.
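As a sketch of that reduction, the rule is simply to divide the value by its coverage factor k (u = U / k). The function name below is my own, chosen for illustration:

```python
# Sketch: reduce a normally distributed contributor to a standard deviation
# equivalent by dividing by the coverage factor k (u = U / k).
def standard_uncertainty_normal(value: float, k: float) -> float:
    """Return the 1-sigma (standard) uncertainty."""
    return value / k

# A repeatability standard deviation is already at 1 sigma, so k = 1
print(standard_uncertainty_normal(0.25, k=1.0))  # 0.25

# A reported calibration uncertainty of 1 ppm stated at k = 2
print(standard_uncertainty_normal(1.0, k=2.0))  # 0.5
```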
If you are wondering, k is equal to 1 because your standard deviation is already at the 1-sigma level (i.e., a coverage factor of 1). For the next example, imagine you are evaluating the measurement uncertainty from your calibration report. If your reported uncertainty is 1 ppm, divide it by the coverage factor stated on the report (typically k = 2) to get the standard uncertainty. The rectangular distribution is a function that represents a continuous uniform distribution with constant probability. In a rectangular distribution, all outcomes are equally likely to occur. The rectangular distribution is the most commonly used probability distribution in uncertainty analysis.
When you are not confident how your data is distributed, it is best to evaluate it conservatively.
So, make sure to pay attention; you will be using this probability distribution a lot. To reduce your uncertainty contributors to standard deviation equivalents, divide your values by the square root of 3. For example, if you are performing a measurement uncertainty analysis and evaluating a factor with an influence of 1 part-per-million, and you propose that its data is uniformly distributed, then divide 1 ppm by the square root of 3, which gives approximately 0.577 ppm.
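A minimal sketch of that rectangular conversion (the function name is illustrative, not from any library):

```python
import math

# Sketch: a rectangular (uniform) contributor is reduced to standard
# uncertainty by dividing its half-width by the square root of 3.
def standard_uncertainty_rectangular(half_width: float) -> float:
    return half_width / math.sqrt(3)

# A 1 ppm influence, assumed uniformly distributed
print(round(standard_uncertainty_rectangular(1.0), 3))  # 0.577
```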
The U-shaped Distribution is a function that represents outcomes that are most likely to occur at the extremes of the range. The u-shaped distribution is helpful where events frequently occur at the extremes of the range. Consider the thermostat that controls the temperature of your laboratory.
If you are not using a PID controller, your thermostat controller only attempts to control temperature by activating at the extremes. For this reason, it is best to characterize your laboratory temperature data using a u-shaped distribution.
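The u-shaped reduction works the same way as the rectangular one, except the divisor is the square root of 2. A quick sketch (the function name is my own):

```python
import math

# Sketch: a u-shaped contributor is reduced to standard uncertainty by
# dividing its half-width by the square root of 2.
def standard_uncertainty_u_shaped(half_width: float) -> float:
    return half_width / math.sqrt(2)

# A 1 ppm influence, assumed u-shaped (e.g., thermostat-controlled temperature)
print(round(standard_uncertainty_u_shaped(1.0), 3))  # 0.707
```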
To reduce your uncertainty contributors to standard deviation equivalents, divide your values by the square root of 2. So, if you are performing a measurement uncertainty analysis and evaluating a factor with an influence of 1 part-per-million, and you propose that its data is u-shaped, then divide 1 ppm by the square root of 2, which gives approximately 0.707 ppm. The triangle distribution is a function that represents a known minimum, maximum, and estimated central value.
Additionally, the triangle distribution is commonly used where data collection is difficult or expensive. For a real-world example, imagine your laboratory is temperature controlled using a PID thermostat controller. The PID controller is constantly trying to achieve the target temperature set-point. Therefore, the temperature is best characterized by a triangular distribution: we know the limits and the estimated mean, but we are unsure how the data is distributed between these points.
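For the triangular distribution, the divisor is the square root of 6. A minimal sketch (illustrative function name):

```python
import math

# Sketch: a triangular contributor is reduced to standard uncertainty by
# dividing its half-width by the square root of 6.
def standard_uncertainty_triangular(half_width: float) -> float:
    return half_width / math.sqrt(6)

# A 1 ppm influence, assumed triangular (e.g., PID-controlled temperature)
print(round(standard_uncertainty_triangular(1.0), 3))  # 0.408
```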
To reduce your uncertainty contributors to standard deviation equivalents, divide your values by the square root of 6. The log-normal distribution is a distribution that is commonly encountered but rarely used. Most of the time this is the result of a lack of knowledge, or of failing to develop a histogram for your data. For example, measurements of quantities that are bounded, such as length, height, and weight, are often log-normally distributed. It is most common in dimensional and mechanical metrology.
To give a better understanding, think of calibrating a gage block. Before you begin calibration, you know the target length. If you perform repeated measurements at a single point on the gage block, the majority of your measurement results will be centered around the actual length of the gage block. Some results will be larger than the actual value, and far fewer results will be less than it.
The reason this happens is that your measurement results are limited by the length of the gage block. Realistically, you cannot measure less than the length of the block, so your measurement results are finite, or limited. Make sure to consider the log-normal distribution next time you are performing measurements that are bounded in this way. It may prevent you from encountering measurement errors and miscalculated uncertainties.
To reduce your uncertainty contributors to standard deviation equivalents, you will want to use the following equation. When directional components are orthogonal and normally distributed, the resulting vector will be Rayleigh distributed. Rayleigh distributions are commonly used in electrical metrology for RF and microwave functions. Additionally, they are commonly used in mechanical metrology where vectors are involved.
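You can verify this relationship yourself with a quick simulation. The sketch below assumes two independent, zero-mean normal components with an arbitrary sigma and checks that the resulting vector magnitudes average to the Rayleigh mean, sigma × sqrt(pi / 2):

```python
import math
import random
import statistics

# Sketch: simulate two orthogonal, normally distributed components and show
# that the vector magnitude sqrt(x^2 + y^2) behaves like a Rayleigh
# distribution, whose mean is sigma * sqrt(pi / 2).
random.seed(7)
sigma = 2.0  # assumed standard deviation of each component
magnitudes = [
    math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
    for _ in range(50_000)
]

observed = statistics.fmean(magnitudes)
expected = sigma * math.sqrt(math.pi / 2)
print(observed, expected)  # the two values should be close
```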
For example, when wind velocity is analyzed by its two-dimensional vector components, x and y, the resulting vector is Rayleigh distributed. For this to happen, x and y must be orthogonal and normally distributed, so you may have to make some assumptions. Most credible manufacturers publish specifications with an associated confidence interval. Looking at the 1-year absolute uncertainty specification for the 11 volt range, the uncertainty for 10 volts is approximately 38 micro-volts.
Feel free to use a coverage factor of 2 (or the more precise 1.96) for a 95 % confidence interval. When evaluating Type B uncertainty, you are not always going to have the convenience of using your own data. Most laboratories do not have the time or resources required to test every factor that contributes to uncertainty in measurement. Therefore, you are going to use data from other laboratories that have already done the work for you. The biggest challenge is finding the data! You must put some time and effort into conducting research. To make life easier, I have already created a list of 15 places you can find sources of uncertainty.
Once you find the data and deem it applicable for your measurement process, you can evaluate it for your uncertainty analysis. Now, you can evaluate Type B uncertainty data in many ways. Typically, you are going to find information in a guide, conference paper, or journal article that gives you data with no background information about it. Therefore, you are most likely to characterize the data with a rectangular distribution and use the following equation to evaluate the uncertainty component.
For example, imagine that you are estimating uncertainty for measuring voltage with a digital multimeter. You are performing research and stumble upon a paper published by Keysight Technologies that contains very good information relevant to the measurement process you are estimating uncertainty for.
It contains information on Thermal EMF errors that you want to include in your uncertainty budget. The table in the image has some great information to help you quantify thermal EMF errors, but provides very little information on the origin of the data. Therefore, it would be best to assume that the data has a rectangular distribution. To convert your uncertainty component to standard uncertainty, you would divide the uncertainty component by the square-root of three. On the other hand, you may find data in a guide, conference paper, or journal article that is normally distributed or has been already converted to standard uncertainty.
Look for clues to help you find the right method to evaluate it. For example, imagine that you are performing research and stumble upon a paper published in the NIST Journal of Research. The study you found has information that is relatable to the measurement process you are estimating uncertainty for. It contains data for the elastic deformation of gage blocks calibrated by mechanical comparison that you want to include in your uncertainty budget.
Assuming that the data has a normal distribution and a coverage factor of one, use the equation below to evaluate the Type B uncertainty. Your evaluation should be approximately 2 micrometers, since the coverage factor k is one.
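To pull the conversions in this article together, here is a hypothetical helper (the names and structure are my own, not from any standard or library) that maps each distribution assumption to its standard-uncertainty divisor:

```python
import math

# A hypothetical helper collecting the divisors discussed in this article
# for reducing uncertainty contributors to standard deviation equivalents.
DIVISORS = {
    "normal (k = 2)": 2.0,          # expanded uncertainty reported at k = 2
    "rectangular": math.sqrt(3),
    "u-shaped": math.sqrt(2),
    "triangular": math.sqrt(6),
}

def to_standard_uncertainty(value: float, distribution: str) -> float:
    """Divide the contributor by the divisor for its assumed distribution."""
    return value / DIVISORS[distribution]

# Reduce a 1 ppm contributor under each assumption
for name in DIVISORS:
    print(name, round(to_standard_uncertainty(1.0, name), 3))
```

Note how the choice of distribution changes the standard uncertainty by nearly a factor of 2.5, which is why characterizing your data correctly matters.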