Physics
In making physical measurements, one needs to keep in mind that measurements are never completely accurate. Each measurement will have some number of significant figures and should also carry some indication of how much we can trust it (i.e. error bars). Thus, in order to reliably interpret experimental data, we need some idea of the nature of the errors associated with the measurements.
Apparatus faults
A common fault with certain apparatus is a zero error, where the instrument does not measure zero correctly. E.g. a newton meter spring has slipped a little, so that the meter reads 0.3N before any force is applied. Every reading that you take with this newton meter will be 0.3N too large. A similar fault might be if the spring in the newton meter had started to weaken with age. If the spring was (for example) twice as easy to stretch as it should be, every force reading would be twice the true value.
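A zero error like this can be corrected arithmetically once its size is known. A minimal sketch (the 0.3N offset is the one described above; the readings are made up for illustration):

```python
# Sketch: correcting readings from a newton meter with a known zero error.
# The meter reads 0.3 N before any force is applied, so every raw reading
# is 0.3 N too large (illustrative values, not real data).
ZERO_ERROR_N = 0.3

def correct_zero_error(reading_n):
    """Subtract the constant zero error from a raw reading (in newtons)."""
    return reading_n - ZERO_ERROR_N

raw_readings = [0.3, 1.5, 2.8]                      # what the faulty meter shows
true_values = [correct_zero_error(r) for r in raw_readings]
print(true_values)                                  # each reading reduced by 0.3 N
```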
Technique faults
If you consistently do the wrong thing, this may cause a systematic error. E.g. A student is investigating how the length of a pendulum affects the time for one swing. The length should be measured to the centre of the pendulum bob at the end of the string (the centre of gravity of the bob). The student didn't read the instructions, so always measured to the top of the bob. All length results will be too short, but by the same amount.
Please send comments or requests for other information to sudip10in@gmail.com or www.royphysics.netau.net. Information is collected and compiled from the websites mentioned at the end.
E.g. In the same experiment, another student measures a swing from one end of the swing to the other, rather than there and back (a complete swing that ends at its starting place). Every time value measured would be half the true value. E.g. A student is conducting an electrical experiment. The ammeter is measuring in milliamps (mA), but the student doesn't notice and writes down all the values as amps (A). Every value is 1000 times too large.
Other types of error, such as the misreading of the ammeter or the weakened newton meter spring, won't show up on a graph and are harder to notice. The only way to spot these is to calibrate the equipment using known values.
E.g. If all your current values are 1000 times too large (due to mis-reading the ammeter scale), simply divide every reading by 1000.
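This kind of correction is easy to apply to a whole table of results. A hypothetical sketch of the mA/A scenario above (the numbers are invented):

```python
# Sketch: recalibrating recorded values after spotting a systematic scale
# error. Here every current was logged as amps but was really milliamps,
# so each recorded value is 1000x too large.
def recalibrate(values, factor):
    """Divide each recorded value by the known scale factor."""
    return [v / factor for v in values]

recorded = [2000.0, 3500.0, 1200.0]        # mistakenly written down as amps
corrected = recalibrate(recorded, 1000)
print(corrected)                           # the true currents in amps
```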
Human limitations
Never, ever, refer to this as a "human error". This means nothing. If it is a human limitation, explain exactly what is wrong with being human in this case, otherwise "human error" is meaningless. If, for example, you were measuring the length of the lab with a single metre ruler, there would be a random error in your results. This would be because you need to move the ruler along the lab, as the distance you are measuring is bigger than one metre. When you move the ruler, it would not necessarily move exactly to the correct new position - you might overlap the previous metre, or leave a gap. Your result could be too large, or too small, by an unknown amount.
Uncontrolled variables
In a perfect experiment, we control all variables except the one that we wish to investigate. This makes the test fair. It may be hard to do this in practice.
If, for example, we were trying to measure how acid concentration affects the time for chalk to dissolve, we would vary the acid concentration but try to use exactly the same mass of chalk. In reality, whenever you repeat a measurement, the mass of chalk won't be quite the same as before. You will also need fresh acid for every new time measurement. The acid concentration will probably be slightly different from the original attempt. These effects will mean that the repeated time may be different from the original time. It could be a larger or smaller value of time, and could be out by very little or quite a lot, depending on how different the mass or acid concentrations were from each other. The error is random.
Human limitations can sometimes be solved by replacing the human with a data logger. This will work well for time measurements of a moving object, where light gates or pressure pads could be used to stop and start a timer. Electronics are not always the answer. There is no point replacing an ordinary thermometer with an electronic thermometer if the problems lie with inconsistent stirring of a liquid. The human element lies elsewhere in the experiment, not in the thermometer. Improving the method may involve controlling variables more exactly, or could involve reducing the human inconsistency in some fashion.
A control variable is any variable that is kept constant as it would otherwise affect the outcome of the experiment.
In any experiment, you choose to change your independent variable in order to see what it does. If you want to be able to spot a pattern in your results, you mustn't let any other factor affect the results as well, or it will be impossible to judge what is happening. E.g. If you were trying to find out how rapidly different materials dissolve in acid, your independent variable is the material - that's what you want to investigate. The time taken to dissolve would also depend upon the mass of material used, and the type of acid, the concentration of the acid and the temperature of the acid. All of these must be controlled and kept constant, or they would affect the time measurements.
Fair test
Making sure that variables are controlled properly is often referred to (especially in primary school) as making the test fair. The Examiner has made it quite clear that they think this is a bit limited, and would expect a GCSE student to be able to explain themselves properly. By all means start off an answer about controlling variables with a reference to a fair test, but then expand the answer to say that your specific dependent variable would otherwise be affected. E.g. I kept the concentration of the acid constant to make the test fair, as otherwise it would also have affected the time taken for the materials to dissolve. Even better: Strong acids would dissolve the materials faster than weak acids, and so I kept the concentration the same in order to make the test fair.
The control variable must be something that could be investigated in a separate experiment. If you think that something about your equipment will affect the outcome, then be specific. Is it the mass of one component of the equipment? Is it the temperature? Mass and temperature are proper variables. Another common incorrect answer is "the same measuring device, as another device may have a zero error or be wrongly calibrated". Again, the measuring device is not a variable. You could not investigate the effect of changing the measuring device as a separate experiment. The last common error is to claim that "I had to increase my independent variable by the same amount every time." This is incorrect. Keeping the measurement interval constant is a convenience, as it makes the graph easier to plot, that's all. It won't change the pattern in your results if you decide to jump around a bit. There are even times when it is good practice to deliberately alter the measurement interval.
The independent variable is that which you deliberately change in order to see its effect in the experiment.
In other words, it's what you want to investigate. E.g. if you were trying to find out how temperature affects the rate of a reaction, then temperature becomes your independent variable. Other independent variables are often possible in an experiment i.e. there are other variables that would affect your measurements (the dependent variable). You control all the other independent variables so that you can spot a pattern. This makes your results valid.
The dependent variable is the one which you measure each time you make a change in the experiment.
This variable depends upon what you do and what changes you make to the method or the independent variable. It is always plotted on the vertical axis of the graph. E.g. if you were hanging masses on a spring and measuring its length, the spring length is the dependent variable. The spring length depends upon the mass used (and the spring).
Valid data has been obtained from a fair test and is relevant to the investigation.
Valid data is of value. If the data is useless in some fashion, then it is invalid. If the test isn't fair because there are uncontrolled variables then the data is meaningless. You cannot draw any conclusions from such data because you cannot identify which variable has caused any pattern in the results. The data is not valid. It is also possible to gather data that has no relevance to the problem (although you would have to be quite daft to do this). In this case, the data is not valid as it does not help answer the original problem, even if the data is accurate, reliable, precise and has been obtained in a fair test. It is also the case that any bias in the observer gathering the data will also invalidate the results. Observer bias makes the data worthless. (You can also argue that the data must be reliable in order for it to be valid. If there is a large amount of random error then the data could be regarded as worthless.)
The precision of a measuring device is the smallest scale division on the device.
E.g. A standard 30cm ruler has 1mm markings all along it. It has a precision of 1mm. A standard 30m tape measure has no mm marking. It is only marked at 1cm intervals. It has a precision of 1cm. This means that the ruler is more precise than the tape measure. E.g. A small thermometer usually has the scale marked in 1 degree centigrade divisions. Its precision is 1 degree centigrade. We do have some much longer thermometers that are more precise, as they have markings every half degree. (We don't often use these as they are more likely to get broken.) Precision is really about detail. It has nothing to do with accuracy. Accuracy is about giving true readings, not detailed readings. For example, if one of the very long thermometers has been damaged then it might give bad readings. It might measure my body temperature as 56.5 degrees centigrade. This is precise (detailed, to half a degree centigrade), but inaccurate (untrue). Anyone with a temperature that high is very dead. In the same way, if I measured the height of a Year 9 student and claimed that the answer was 3.756m tall, then my measurement is obviously inaccurate (this student would be over 12 feet tall; you'd probably have noticed such a student before now!) but the measurement is precise (measured to the nearest mm).
The "true value" is the value that would be measured with no errors, so another way of saying this would be: an accurate measurement has little or no error. We can improve accuracy by trying to reduce random errors and systematic errors.
- This might involve repeating more often to reduce the random error in the average.
- It might involve improving the technique, or controlling other variables in a better fashion, in order to prevent the random errors from occurring in the first place.
- It might involve re-calibrating the equipment in order to get rid of systematic errors.
Accuracy does not have anything to do with precision. A scale with more detail is not more accurate. The answers it gives could be badly wrong, just very detailed. It is also not necessarily the case that digital devices or electronic equipment are more accurate. They might be badly calibrated. It may be that the experimental technique creates random errors; if so, the electronic device will record these errors.
Example: Turbine efficiency ISA (usable up to June 2008)
Question: What could have been done to produce a more accurate mean?
Answer: Take more repeated readings.
Also acceptable: Discard any anomalies before calculating the mean.
Explanation: Both answers will reduce the error in the calculated mean. The first answer reduces the effects of random errors. The second discards obvious errors (anomalies) from the calculation.
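The "also acceptable" answer can be sketched in code. This is a hypothetical illustration only: the use of the median and the choice of threshold for "obvious anomaly" are my own, not from the mark scheme.

```python
# Sketch: discard obvious anomalies before calculating the mean.
# Readings far from the median are treated as anomalies (the 2.0 s
# threshold is an assumption for illustration).
def mean_without_anomalies(readings, threshold=2.0):
    """Drop readings far from the median, then average the rest."""
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    kept = [r for r in readings if abs(r - median) < threshold]
    return sum(kept) / len(kept)

times = [12.1, 12.3, 11.9, 25.0, 12.2]    # 25.0 s is an obvious anomaly
print(mean_without_anomalies(times))      # mean of the four sensible readings
```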
- If your graph has points that are usually close to the best-fit line, this is another indication that there is little random error. You can be confident that the best-fit line is in the right place. Your results are reliable.
If someone else has also performed the experiment, you can compare your results with the other student.
- If your results match those of the other student, you can be more confident that you haven't used a bad technique, or failed to follow the instructions. Your results will be more reliable.
- If your results don't match up, one or the other of you has some errors. These might be random errors, but you would probably spot those with the repeated readings or the graph. This comparison may allow you to spot a systematic error in the method or apparatus (either yours or that of the other student).
If you repeat the experiment with different equipment, you can compare the two sets of results from each piece of equipment. This will allow you to spot systematic errors due to badly calibrated or otherwise faulty equipment.
If you repeat the experiment using a different technique, you can again compare the two sets of results. This might spot systematic errors due to technique, or due to faulty equipment if you use different equipment as well.
Note that checking reliability does not always mean checking accuracy. If your equipment has a systematic error, you won't spot this in your table, and may not spot it in your graph. For example, if I measured someone's height using a tape measure, I might use the side with the markings in feet and inches but think that I am measuring in metres. I might repeatedly measure a student's height as 6m tall (when they are actually 6 feet tall), making the same mistake every time. My results are reliable, but not accurate.
Improving reliability
This is about minimising random errors or spotting systematic errors.
- Taking more repeat readings will improve the average measurement. If you have more results, your average is more likely to be correct, as the random variations will (hopefully) cancel each other out.
- Repeating the experiment with a different technique or different equipment, or getting someone else to repeat it, will hopefully highlight any systematic error and hence improve the reliability.
The classic example of calibration usually involves a thermometer. If you were given a thermometer that had no scale on it (and we have some in Physics), then it is easy to add a scale to the thermometer:
- Place the thermometer in melting ice. Mark this on the thermometer scale as zero degrees centigrade.
- Place the thermometer in boiling water. Mark this as 100 degrees centigrade.
- Construct a scale between these two points by dividing it up equally. (This does presume a linear response between temperature and the length of the mercury column.)
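The three calibration steps can be sketched as a small function. As the text notes, this assumes a linear response; the column lengths used here are invented for illustration.

```python
# Sketch: two-point calibration of an unmarked thermometer. Given the
# mercury column lengths at 0 and 100 degrees C, interpolate linearly
# between them (assumes a linear response; lengths are made up).
def calibrate(length_at_0, length_at_100):
    """Return a function mapping column length (mm) to temperature (deg C)."""
    def to_temperature(length):
        return 100 * (length - length_at_0) / (length_at_100 - length_at_0)
    return to_temperature

thermometer = calibrate(20.0, 220.0)   # marks at 20 mm (ice), 220 mm (boiling)
print(thermometer(120.0))              # halfway up the scale -> 50.0 deg C
```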
Ideally, you would do this test with at least two different current values at both ends of the scale, and maybe one or two in the middle for thoroughness.
Uncertainties
An uncertainty is sometimes called a probable error. No practical measurement will be perfect, and an uncertainty is an attempt to estimate how wrong the measurement might reasonably be. An uncertainty estimate is usually shown using ± notation. A measurement recorded as (4.8 ± 0.2)N means that the best estimate of the value is 4.8N, but it could reasonably lie within 0.2N of this. This means that the actual value is (probably) between 4.6N and 5.0N. A time measurement of (10.3 ± 0.2) seconds means that the actual value is somewhere between 10.1 seconds and 10.5 seconds, and is hopefully 10.3 seconds. Note that an uncertainty is useless without a unit. If I told you that I had measured a length and that my uncertainty is 5, then this is unhelpful: does this mean 5mm or 5 miles?
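The ± notation can be turned into an interval mechanically. A minimal sketch using the two examples above:

```python
# Sketch: turn a "value +/- uncertainty" pair into the interval it implies.
def interval(value, uncertainty):
    """Return the (lowest, highest) reasonable values."""
    return (value - uncertainty, value + uncertainty)

print(interval(4.8, 0.2))    # the force example: between 4.6 N and 5.0 N
print(interval(10.3, 0.2))   # the time example: between 10.1 s and 10.5 s
```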
Estimating uncertainties
This involves one of two techniques.
Method 1 - Repeated readings. Take an average to find the best estimate, then look at the spread of readings. The estimated uncertainty will be half the range (highest value − lowest value). E.g. if you measure the length of a room twice, and get measurements of 12.65m and 12.73m, you have an average value of 12.69m and a range of 0.08m. You would write down (12.69 ± 0.04) m. If you have a lot of results, you can probably discard the most extreme values when estimating the range. E.g. you record time measurements of 2.6s, 2.2s, 2.3s, 2.6s, 2.4s, 2.4s, 2.8s, 2.5s, 2.5s, 2.7s.
Ignoring the extreme values of 2.2s and 2.8s, the range of the remainder is 0.4 seconds. You could reasonably claim a value of (2.5 ± 0.2) seconds. The majority of the values lie within 0.2 seconds of the average of 2.5s.
Method 2 - Equipment limitations.
You cannot hope to measure more accurately than the smallest reading you could take from your equipment (its precision). If you are using a standard mm-division ruler, then your uncertainty is probably at least 1mm. Method 1 may allow you to assign a larger uncertainty than this, but this method gives a minimum value that reflects the manufacturing error of the equipment and your ability to read the scale. If the scale divisions are larger, you may be able to judge between divisions. You can usually read a thermometer to 0.5°C by judging between the marked 1°C divisions. This 0.5°C then becomes your minimum uncertainty. This method is useful when using digital equipment. If you have an ammeter that measures to 0.01A, then this will be a reasonable estimate of the uncertainty. It is also the method used if your repeated values are identical and Method 1 gives you an uncertainty of 0. Under those conditions, the uncertainty is taken to be equal to the precision of the measuring device.
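The two methods can be sketched together: take half the range of the repeated readings (Method 1), but never claim an uncertainty smaller than the precision of the device (Method 2). The numbers below are the examples from this section.

```python
# Sketch: estimate an uncertainty as half the range of repeated readings
# (Method 1), floored at the device precision (Method 2).
def estimated_uncertainty(readings, precision):
    """Half the spread of the readings, but never below the precision."""
    half_range = (max(readings) - min(readings)) / 2
    return max(half_range, precision)

# Room length measured twice with a mm-division ruler (precision 0.001 m):
# the half-range (0.04 m) dominates.
print(estimated_uncertainty([12.65, 12.73], 0.001))
# Identical ammeter repeats (precision 0.01 A): Method 1 gives 0,
# so the precision of the device takes over.
print(estimated_uncertainty([0.25, 0.25, 0.25], 0.01))
```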
Calculations
If you are adding or subtracting values, then you always add the absolute uncertainties. You assume that the errors reinforce each other.
E.g. X = (10 ± 1) cm, i.e. X lies between 9 and 11 cm, probably at 10 cm.
Y = (5 ± 2) cm, i.e. Y lies between 3 and 7 cm, probably at 5 cm.
Calculating X + Y = (15 ± 3) cm.
This means that our total could be as large as 18 cm (if X = 11 cm and Y = 7 cm).
It could be as small as 12 cm (if X = 9 cm and Y = 3 cm).
Calculating X − Y = (5 ± 3) cm.
This means that our difference could be as large as 8 cm (if X = 11 cm and Y = 3 cm). It could be as small as 2 cm (if X = 9 cm and Y = 7 cm).
If you are multiplying or dividing, you add the percentage uncertainties.
E.g. X = 10 cm ± 10%
Y = 5 cm ± 40%
Calculating XY = 50 cm² ± 50%
We often convert this back into an absolute uncertainty, i.e. XY = (50 ± 25) cm².
Calculating X/Y = 2 ± 50%
Converting back to absolute uncertainties, X/Y = 2 ± 1.
Note that squaring a number involves multiplying the number by itself. This will double the percentage uncertainty. Similar things happen with cubes etc.
E.g. A = (10 ± 0.5) cm. Calculate A³ and its associated uncertainty.
A = 10 cm ± 5%, and A³ = A × A × A = 1000 cm³ ± 15% = (1000 ± 150) cm³.
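The two rules above (add absolute uncertainties for addition and subtraction, add percentage uncertainties for multiplication and division) can be sketched as helper functions. This follows the simple treatment used here, not the full statistical one.

```python
# Sketch of the simple rules above: add absolute uncertainties when
# adding/subtracting, add percentage (fractional) uncertainties when
# multiplying/dividing.
def add_values(x, dx, y, dy):
    """(x +/- dx) + (y +/- dy): add the values, add the absolute uncertainties."""
    return x + y, dx + dy

def multiply_values(x, dx, y, dy):
    """(x +/- dx) * (y +/- dy): multiply the values, add the fractional
    uncertainties, then convert back to an absolute uncertainty."""
    fractional = dx / x + dy / y      # 10% + 40% = 50% in the text's example
    value = x * y
    return value, value * fractional

print(add_values(10, 1, 5, 2))        # X + Y = (15 +/- 3) cm, as in the text
print(multiply_values(10, 1, 5, 2))   # XY = 50 cm^2 with uncertainty 25 cm^2
```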
(The treatment of errors in calculations shown above is not strictly correct, but it is a very handy first step towards the proper way of doing it. If you wish to know the real way, come and talk to me about standard deviation, variance and partial differentiation. Or just trust us at this stage. ADH)
References: kent.sch.uk website / Cambridge University website / Oxford University website / Wikipedia / Google search