
Error analysis and uncertainty

In making physical measurements, one needs to keep in mind that measurements are never completely accurate. Each measurement will have some number of significant figures and should also carry some indication of how much we can trust it (i.e. error bars). Thus, in order to interpret experimental data reliably, we need some idea of the nature of the errors associated with the measurements.

A systematic error is an error that makes measurements incorrect in some consistent fashion.


The consistency of the error is the key point: there has to be a system to what has gone wrong. (This is the opposite of a random error.) Systematic errors are caused by some consistent fault, which will be either:
- a fault with the apparatus, or
- a faulty technique (but one that is consistently applied).

Apparatus faults
A common fault with certain apparatus is a zero error, where the instrument does not measure zero correctly. E.g. a newton meter spring has slipped a little, so that the meter reads 0.3N before any force is applied. Every reading that you take with this newton meter will be 0.3N too large. A similar fault might be if the spring in the newton meter had started to weaken with age. If the spring was (for example) twice as easy to stretch as it should be, every force reading would be twice the true value.

Technique faults
If you consistently do the wrong thing, this may cause a systematic error. E.g. A student is investigating how the length of a pendulum affects the time for one swing. The length should be measured to the centre of the pendulum bob at the end of the string (the centre of gravity of the bob). The student didn't read the instructions, so always measured to the top of the bob. All length results will be too short, but by the same amount.


E.g. In the same experiment, another student measures a swing from one end of the swing to the other, rather than there and back (a complete swing ends at its starting place). Every time value measured would be half the true value.

E.g. A student is conducting an electrical experiment. The ammeter is measuring in milliamps (mA) but the student doesn't notice and writes down all the values as amps (A). Every value is 1000 times too large.

Spotting systematic errors


Zero errors can often be spotted easily by checking the equipment before starting to take readings. They can also be noticed when a graph that should go through the origin turns out not to do so. E.g. a graph of friction force against mass should pass through the origin: no mass should produce no friction, so a non-zero intercept means there must be a zero error somewhere.
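This intercept check is easy to perform numerically once the data is tabulated. A minimal sketch in Python, where the data values are hypothetical and chosen so that every reading carries the 0.3 N zero error from the newton meter example:

```python
import numpy as np

# Hypothetical friction-vs-mass data: a 0.3 N zero error
# has been added to every force reading.
mass = np.array([0.1, 0.2, 0.3, 0.4, 0.5])           # kg
friction = np.array([0.85, 1.23, 1.82, 2.28, 2.76])  # N

slope, intercept = np.polyfit(mass, friction, 1)     # least-squares straight line
print(f"slope = {slope:.2f} N/kg, intercept = {intercept:.2f} N")

# The intercept comes out close to +0.3 N rather than zero,
# which is the signature of a zero error.
```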

The other types of error, such as the misreading of the ammeter, or the weakened newton meter spring, won't show up on the graph and are harder to notice. The only way to spot these is to calibrate the equipment using known values.

Removing zero errors


Averaging won't help here at all! If all the results are 2 cm too large, the average is going to be 2 cm too large. Averaging helps reduce random errors, not systematic errors. Calibration is the key here. If you check the measuring instrument against known values then you will spot that the instrument is reading incorrectly. The good news is that systematic errors are easy to fix once you spot them. E.g. If all your length measurements are 2 cm too large (due to a 2 cm zero error), simply take 2 cm off every reading.

E.g. If all your current values are 1000 times too large (due to mis-reading the ammeter scale), simply divide every reading by 1000.
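Both corrections amount to a one-line adjustment once the systematic error is known. A minimal sketch in Python, using hypothetical readings:

```python
# Hypothetical readings illustrating the two corrections above.
lengths_cm = [34.0, 51.5, 78.2]                    # each 2 cm too large
corrected_lengths = [x - 2.0 for x in lengths_cm]  # subtract the zero error
print(corrected_lengths)                           # [32.0, 49.5, 76.2]

currents_written = [1500.0, 2300.0]                # recorded as A, really mA
corrected_currents = [i / 1000 for i in currents_written]  # undo the scale error
print(corrected_currents)                          # [1.5, 2.3]
```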

Random errors are unpredictable inaccuracies that occur when an experiment is repeated.


The classic example of a random error is timing the oscillations of a pendulum with a stopwatch.
- Your reaction time means that you cannot start and stop the watch at exactly the right instants, and so your measured time value will be inaccurate. It will have an error.
- However, you will sometimes measure a time that is too small (stopping the watch early) and sometimes too large (stopping it late). Your results may be a little bit wrong (you started and stopped almost perfectly), or more wrong. The error is unpredictable.
It is the second part that makes this a random error. There is an element of chance in your results. (The opposite of this is a systematic error.) Random errors are usually down to:
- human limitations
- not being able to control all variables properly.

Human limitations
Never, ever, refer to this as a "human error". This means nothing. If it is a human limitation, explain exactly what is wrong with being human in this case, otherwise "human error" is meaningless. If, for example, you were measuring the length of the lab with a single metre ruler, there would be a random error in your results. This would be because you need to move the ruler along the lab, as the distance you are measuring is bigger than one metre. When you move the ruler, it would not necessarily move exactly to the correct new position - you might overlap the previous metre, or leave a gap. Your result could be too large, or too small, by an unknown amount.

Uncontrolled variables
In a perfect experiment, we control all variables except the one that we wish to investigate. This makes the test fair. It may be hard to do this in practice.

If, for example, we were trying to measure how acid concentration affects the time for chalk to dissolve, we would vary the acid concentration but try to use exactly the same mass of chalk. In reality, whenever you repeat a measurement, the mass of chalk won't be quite the same as before. You will also need fresh acid for every new time measurement. The acid concentration will probably be slightly different from the original attempt. These effects will mean that the repeated time may be different from the original time. It could be a larger or smaller value of time, and could be out by very little or quite a lot, depending on how different the mass or acid concentrations were from each other. The error is random.

Spotting random errors


When you take two sets of results in order to take an average, you can easily compare them in the table. If the two sets of results are very similar (i.e. they are reliable), then there is little random error in the experiment. If they are very different, there is a lot of random error. You will also spot random errors on a graph. If you have drawn a good line of best fit, your points should be scattered either side of the line, and by different amounts. This variation implies a random error. If your points are all close to the line, there is little random error.

Reducing random errors


You can do one of three things:
- Take more repeat readings to improve the average.
- Replace human beings with electronic equipment.
- Improve the control of variables or other problems with the method.
If you repeat the reading more times, the results still have random errors in them, but the average should become more accurate. This is because random errors will result in some values that are too large and some that are too small. If you have enough results, they will be split (more or less) evenly around the true value, and so the calculated average in the middle of your results will hopefully be true and accurate. If you only have two results, they might both be too large, resulting in an average that is still too high. Similarly, they might both be too small. But if you have a lot of results, you are more likely to end up with a mix of too large and too small, resulting in an accurate average in the middle.
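The effect of averaging can be illustrated with a short simulation. A minimal sketch in Python, assuming the random error is equally likely to be positive or negative (the period and error size are made up for illustration):

```python
import random

random.seed(1)
TRUE_PERIOD = 2.50   # hypothetical true pendulum period, in seconds

def stopwatch_reading():
    # Simulate up to 0.2 s of random reaction-time error, equally
    # likely to make the reading too small or too large.
    return TRUE_PERIOD + random.uniform(-0.2, 0.2)

for n in (2, 10, 100):
    readings = [stopwatch_reading() for _ in range(n)]
    print(f"{n:3d} readings: mean = {sum(readings) / n:.3f} s")

# The mean tends to settle closer to 2.500 s as n grows; note that this
# only works for random errors, not for a systematic offset.
```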


Human limitations can sometimes be solved by replacing the human with a data logger. This will work well for time measurements of a moving object, where light gates or pressure pads could be used to stop and start a timer. Electronics are not always the answer. There is no point replacing an ordinary thermometer with an electronic thermometer if the problems lie with inconsistent stirring of a liquid. The human element lies elsewhere in the experiment, not in the thermometer. Improving the method may involve controlling variables more exactly, or could involve reducing the human inconsistency in some fashion.

A control variable is any variable that is kept constant as it would otherwise affect the outcome of the experiment.
In any experiment, you choose to change your independent variable in order to see what it does. If you want to be able to spot a pattern in your results, you mustn't let any other factor affect the results as well, or it will be impossible to judge what is happening. E.g. If you were trying to find out how rapidly different materials dissolve in acid, your independent variable is the material - that's what you want to investigate. The time taken to dissolve would also depend upon the mass of material used, the type of acid, the concentration of the acid and the temperature of the acid. All of these must be controlled and kept constant, or they would affect the time measurements.

Fair test
Making sure that variables are controlled properly is often referred to (especially in primary school) as making the test fair. The Examiner has made it quite clear that they think this is a bit limited, and would expect a GCSE student to be able to explain themselves properly. By all means start off an answer about controlling variables with a reference to a fair test, but then expand the answer to say that your specific dependent variable would otherwise be affected.
E.g. "I kept the concentration of the acid constant to make the test fair, as otherwise it would also have affected the time taken for the materials to dissolve."
Even better: "Strong acids would dissolve the materials faster than weak acids, and so I kept the concentration the same in order to make the test fair."

Unacceptable "control variables"


"The equipment" is an unacceptable answer. It isn't a variable - you can't measure "the equipment" in any way.

The control variable must be something that could be investigated in a separate experiment. If you think that something about your equipment will affect the outcome, then be specific. Is it the mass of one component of the equipment? Is it the temperature? Mass and temperature are proper variables. Another common incorrect answer is "the same measuring device, as another device may have a zero error or be wrongly calibrated". Again, the measuring device is not a variable: you could not investigate the effect of changing the measuring device as a separate experiment. The last common error is to claim that "I had to increase my independent variable by the same amount every time." This is incorrect. Keeping the measurement interval constant is a convenience, as it makes the graph easier to plot; that's all. It won't change the pattern in your results if you decide to jump around a bit. There are even times when it is good practice to deliberately alter the measurement interval.

The independent variable is that which you deliberately change in order to see its effect in the experiment.
In other words, it's what you want to investigate. E.g. if you were trying to find out how temperature affects the rate of a reaction, then temperature is your independent variable. Other independent variables are usually possible in an experiment, i.e. there are other variables that would affect your measurements (the dependent variable). You control all of these other variables so that you can spot a pattern. This makes your results valid.

The dependent variable is the one which you measure each time you make a change in the experiment.
This variable depends upon what you do and what changes you make to the method or the independent variable. It is always plotted on the vertical axis of the graph. E.g. if you were hanging masses on a spring and measuring its length, the spring length is the dependent variable. The spring length depends upon the mass used (and the spring).


Valid data has been obtained from a fair test and is relevant to the investigation.
Valid data is of value. If the data is useless in some fashion, then it is invalid. If the test isn't fair because there are uncontrolled variables, then the data is meaningless. You cannot draw any conclusions from such data, because you cannot identify which variable has caused any pattern in the results. The data is not valid.

It is also possible to gather data that has no relevance to the problem (although you would have to be quite daft to do this). In this case, the data is not valid as it does not help answer the original problem, even if the data is accurate, reliable, precise and has been obtained in a fair test.

Any bias in the observer gathering the data will also invalidate the results. Observer bias makes the data worthless. (You can also argue that the data must be reliable in order for it to be valid. If there is a large amount of random error, then the data could be regarded as worthless.)

The precision of a measuring device is the smallest scale division on the device.
E.g. A standard 30 cm ruler has 1 mm markings all along it. It has a precision of 1 mm. A standard 30 m tape measure has no mm markings; it is only marked at 1 cm intervals. It has a precision of 1 cm. This means that the ruler is more precise than the tape measure.

E.g. A small thermometer usually has the scale marked in 1 °C divisions. Its precision is 1 °C. We do have some much longer thermometers that are more precise, as they have markings every half degree. (We don't often use these as they are more likely to get broken.)

Precision is really about detail. It has nothing to do with accuracy. Accuracy is about giving true readings, not detailed readings. For example, if one of the very long thermometers has been damaged, it might give bad readings. It might measure my body temperature as 56.5 °C. This is precise (detailed, to half a degree), but inaccurate (untrue). Anyone with a temperature that high is very dead. In the same way, if I measured the height of a Year 9 student and claimed that the answer was 3.756 m, then my measurement is obviously inaccurate (this student would be over 12 feet tall; you'd probably have noticed such a student before now!) but the measurement is precise (measured to the nearest mm).

An accurate measurement is one close to the true value.



The "true value" is the value that would be measured with no errors, so another way of saying this would be An accurate measurement has little or no error. We can improve accuracy by trying to reduce random errors and systematic errors. - This might involve repeating more often to reduce the random error in the average. - It might involve improving the technique, or controlling other variables in a better fashion, in order to prevent the random errors from occurring in the first place - It might involve re-calibrating the equipment in order to get rid of systematic errors. It does not have anything to do with precision. A scale with more detail is not more accurate. The answers it gives could be badly wrong, just very detailed. It also is not necessarily the case that digital devices or electronic equipment are more accurate. They might be badly calibrated. It may be that the experimental technique creates random errors; if so, the electronic device will record these errors. Example: Turbine efficiency ISA (usable up to June 2008) Question: What could have been done to produce a more accurate mean? Answer: Take more repeated readings Also acceptable: Discard any anomalies before calculating the mean. Explanation: Both answers will reduce the error in the calculated mean. The first answer reduces the effects of random errors. The second discards obvious errors (anomalies) from the calculation.

Reliable results are ones in which you have confidence.


You need to consider how you can judge and improve the confidence in your results.

Judging reliability
There are lots of ways of doing this. If your table has repeated readings for each measurement, then the variation in the results can give an indication of random error.
- If your repeated results are usually very similar, there must be little random error and you will feel confident that your average is good. This indicates reliable results.
- If your repeated results are usually very different, you probably won't have much confidence in your average value either. Your results are not reliable.

You will always draw a best-fit line on your graph.



- If your graph has points that are usually close to the best-fit line, this is another indication that there is little random error. You can be confident that the best-fit line is in the right place. Your results are reliable.

If someone else has also performed the experiment, you can compare your results with the other student's.
- If your results match those of the other student, you can be more confident that you haven't used a bad technique, or failed to follow the instructions. Your results will be more reliable.
- If your results don't match up, one or the other of you has some errors. These might be random errors, but you would probably spot those with the repeated readings or graph. This comparison may allow you to spot a systematic error in the method or apparatus (either yours or the other student's).

If you repeat the experiment with different equipment, you can compare the two sets of results from each piece of equipment. This will allow you to spot systematic errors due to badly calibrated or otherwise faulty equipment. If you repeat the experiment using a different technique, you can again compare the two sets of results. This might reveal systematic errors due to technique, or faulty equipment if you use different equipment as well.

Note that checking reliability does not always mean checking accuracy. If your equipment has a systematic error, you won't spot this in your table, and may not spot it in your graph. For example, if I measured someone's height using a tape measure, I might use the side marked in feet and inches but think that I am measuring in metres. I might repeatedly measure a student's height as 6 m tall (when they are actually 6 feet tall), making the same mistake every time. My results are reliable, but not accurate.

Improving reliability
This is about minimising random errors or spotting systematic errors.
- Taking more repeat readings will improve the average measurement. If you have more results, your average is more likely to be correct, as the random variations will (hopefully) cancel each other out.
- Repeating the experiment with a different technique or different equipment, or getting someone else to repeat it, will hopefully highlight a systematic error and hence improve the reliability.

To calibrate a device is to create a scale on it.



The classic example of calibration usually involves a thermometer. If you were given a thermometer that had no scale on it (and we have some in Physics), it is easy to add a scale:
- Place the thermometer in melting ice. Mark this point on the thermometer as zero degrees centigrade.
- Place the thermometer in boiling water. Mark this as 100 degrees centigrade.
- Construct a scale between these two points by dividing it up equally. (This presumes a linear response between temperature and the length of the mercury column.)

Calibration usually involves two known points, as above.
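The two-point idea translates directly into a linear conversion from a raw reading to a temperature. A minimal sketch in Python; the mercury column lengths are hypothetical, and a linear response is assumed:

```python
# Hypothetical mercury column lengths at the two fixed points.
raw_at_0 = 12.0     # mm of mercury in melting ice (0 °C)
raw_at_100 = 92.0   # mm of mercury in boiling water (100 °C)

def to_celsius(raw_mm):
    # Linear interpolation between the two calibration points.
    return 100.0 * (raw_mm - raw_at_0) / (raw_at_100 - raw_at_0)

print(to_celsius(52.0))   # 50.0 -- halfway up the column is 50 °C
```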


Calibration in this sense - creating a scale from nothing - is something that you are unlikely to do at GCSE. It is something you would be more likely to do at A level and university. In the GCSE ISA, however, a question involving this meaning of calibrate could easily occur in section B.

The term calibrate can also be used to describe checking an existing scale to make sure that it is correct (but the examiner does not seem to use it in this context very often). If you wanted to improve the accuracy of some results involving a newton meter, you might be worried that the meter had a systematic error. This might occur because the spring inside the newton meter could get damaged after some years of use and no longer stretch as intended. If the newton meter was a 0-10 N meter:
- Check that the newton meter reads zero when no force is applied (i.e. check for a possible zero error).
- Apply a known force to the meter, preferably one that is 10 N or close to it. You could probably do this most easily by hanging a known mass from the meter, knowing that a 1 kg mass weighs 9.8 N on Earth.
Hopefully, both readings (0 N and 9.8 N) would match the scale. If you wanted to be thorough, you might check one or two values in the middle as well.

This might lead to some section A answers. If you were asked how to improve the accuracy of your results, you could suggest checking the calibration of your equipment (although other responses might be easier).

Note that you can also calibrate one device against another. E.g. If you have some potentially dodgy old ammeters, but you also have one very good ammeter that you know is accurate, you can set up an electric circuit and use each ammeter in turn. If the "dodgy" meters give the same readings as the "good" meter, they are correctly calibrated.

Ideally, you would do this test with at least two different current values at both ends of the scale, and maybe one or two in the middle for thoroughness.

Uncertainties
An uncertainty is sometimes called a probable error. No practical measurement will be perfect, and an uncertainty is an attempt to estimate how wrong the measurement might reasonably be. An uncertainty estimate is usually shown using ± notation. A measurement recorded as (4.8 ± 0.2) N means that the best estimate of the value is 4.8 N, but it could reasonably lie within 0.2 N of this. This means that the actual value is (probably) between 4.6 N and 5.0 N. A time measurement of (10.3 ± 0.2) seconds means that the actual value is somewhere between 10.1 seconds and 10.5 seconds, and is hopefully 10.3 seconds. Note that an uncertainty is useless without a unit. If I told you that I had measured a length and that my uncertainty is 5, then this is unhelpful: does this mean 5 mm or 5 miles?

Estimating uncertainties
This involves one of two techniques.

Method 1 - Repeated readings
Take an average to find the best estimate, then look at the spread of readings. The estimated uncertainty will be half the range (highest value − lowest value). E.g. if you measure the length of a room twice and get measurements of 12.65 m and 12.73 m, you have an average value of 12.69 m and a range of 0.08 m. You would write down (12.69 ± 0.04) m. If you have a lot of results, you can probably discard the most extreme values when estimating the range. E.g. you record time measurements of 2.6 s, 2.2 s, 2.3 s, 2.6 s, 2.4 s, 2.4 s, 2.8 s, 2.5 s, 2.5 s, 2.7 s.

Ignoring the extreme values of 2.2 and 2.8, the range of the remainder is 0.4 seconds. You could reasonably claim a value of (2.5 ± 0.2) seconds. The majority of the values lie within 0.2 seconds of the average of 2.5 s.
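The half-range estimate is straightforward to compute. A minimal sketch in Python, using the ten time measurements above:

```python
times = [2.6, 2.2, 2.3, 2.6, 2.4, 2.4, 2.8, 2.5, 2.5, 2.7]   # seconds

mean = sum(times) / len(times)
half_range = (max(times) - min(times)) / 2
print(f"({mean:.1f} ± {half_range:.1f}) s")   # (2.5 ± 0.3) s from the full range

# Discarding the single highest and lowest values, as in the text:
trimmed = sorted(times)[1:-1]
half_range = (max(trimmed) - min(trimmed)) / 2
print(f"({mean:.1f} ± {half_range:.1f}) s")   # (2.5 ± 0.2) s
```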


Method 2 - Equipment limitations
You cannot hope to measure more accurately than the smallest reading you can take from your equipment (its precision). If you are using a standard mm-division ruler, then your uncertainty is probably at least 1 mm. Method 1 may allow you to assign a larger uncertainty than this, but equipment precision gives a minimum value that reflects the manufacturing error of the equipment and your ability to read the scale. If the scale divisions are larger, you may be able to judge between divisions. You can usually read a thermometer to 0.5 °C by judging between the marked 1 °C divisions. This 0.5 °C then becomes your minimum uncertainty. This method is also useful when using digital equipment. If you have an ammeter that measures to 0.01 A, then this will be a reasonable estimate of the uncertainty. It is also the method used if your repeated values are identical and method 1 gives you an uncertainty of 0. Under those conditions, the uncertainty is taken to be equal to the precision of the measuring device.
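The two methods can be combined by taking whichever estimate is larger. A minimal sketch in Python; the 0.01 A precision is the hypothetical digital ammeter mentioned above:

```python
def uncertainty(readings, precision):
    # Half the range of the repeats, but never less than the
    # smallest scale division (precision) of the instrument.
    half_range = (max(readings) - min(readings)) / 2
    return max(half_range, precision)

print(round(uncertainty([1.25, 1.25, 1.25], 0.01), 2))  # 0.01 -- identical repeats
print(round(uncertainty([1.21, 1.25, 1.29], 0.01), 2))  # 0.04 -- spread dominates
```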

Absolute and percentage uncertainties


An absolute uncertainty is what we have been quoting so far: an actual value with a unit. A percentage uncertainty converts the absolute uncertainty into a percentage of the value. E.g. A force of (5.0 ± 0.4) N becomes 5.0 N ± 8%. The 0.4 N uncertainty is 8% of 5.0 N. Percentage uncertainties can be used to see which errors have the biggest effect in calculations. If you have two measurements of (2.00 ± 0.05) cm and (0.50 ± 0.03) cm, then the biggest percentage uncertainty is the 0.03 cm (6% as opposed to 2.5%). This measurement is the one that needs to be improved.
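A minimal sketch in Python of the conversion, using the values from the examples above:

```python
def percent_uncertainty(value, absolute):
    # Convert an absolute uncertainty to a percentage of the value.
    return round(100 * absolute / value, 2)   # rounded for display

print(percent_uncertainty(5.0, 0.4))     # 8.0  -> (5.0 ± 0.4) N is 5.0 N ± 8%
print(percent_uncertainty(2.00, 0.05))   # 2.5
print(percent_uncertainty(0.50, 0.03))   # 6.0
```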

Calculations
If you are adding or subtracting values, then you always add the absolute uncertainties. You assume that the errors reinforce each other. E.g.
X = (10 ± 1) cm, i.e. X lies between 9 and 11 cm, probably at 10 cm
Y = (5 ± 2) cm, i.e. Y lies between 3 and 7 cm, probably at 5 cm
Calculating X + Y = (15 ± 3) cm
This means that our total could be as large as 18 cm (if X = 11 cm and Y = 7 cm).

It could be as small as 12 cm (if X = 9 cm and Y = 3 cm).
Calculating X − Y = (5 ± 3) cm
This means that our difference could be as large as 8 cm (if X = 11 cm and Y = 3 cm). It could be as small as 2 cm (if X = 9 cm and Y = 7 cm).

If you are multiplying or dividing, you add the percentage uncertainties. E.g.
X = 10 cm ± 10%
Y = 5 cm ± 40%
Calculating XY = 50 cm² ± 50%
We often convert this back into an absolute uncertainty, i.e. XY = (50 ± 25) cm²
Calculating X/Y = 2 ± 50%
Converting back to absolute uncertainties, X/Y = 2 ± 1

Note that squaring a number involves multiplying the number by itself. This will double the percentage uncertainty. Similar things happen with cubes etc. E.g. A = (10 ± 0.5) cm. Calculate A³ and its associated uncertainty. A = 10 cm ± 5%, and A³ = A × A × A = 1000 cm³ ± 15% = (1000 ± 150) cm³.
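These simplified rules are easy to express in code. A minimal sketch in Python of the add-absolute and add-percentage rules (not the full statistical treatment mentioned in the note that follows):

```python
def add(x, dx, y, dy):
    # Adding (or subtracting) values: add the absolute uncertainties.
    return x + y, dx + dy

def multiply(x, dx, y, dy):
    # Multiplying (or dividing) values: add the fractional (percentage)
    # uncertainties, then convert back to an absolute uncertainty.
    value = x * y
    return value, value * (dx / x + dy / y)

print(add(10, 1, 5, 2))        # (15, 3)    i.e. X + Y = (15 ± 3) cm
print(multiply(10, 1, 5, 2))   # (50, 25.0) i.e. XY = (50 ± 25) cm²
```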
(The treatment of errors in calculations shown above is not strictly correct, but it is a very handy first step towards the proper way of doing it. If you wish to know the real way, come and talk to me about standard deviation, variance and partial differentiation. Or just trust us at this stage. ADH)

References: kent.sch.uk website; Cambridge University website; Oxford University website; Wikipedia.

