That is research design and statistics in a nutshell. Let me elaborate.
1.4 THE BASICS OF RESEARCH DESIGN
According to the Dodecahedron, the basic elements of research are as shown in Box 1.1.
He may be a little confused, but trust me, all the elements are there.
1.4.1 Developing the Hypothesis
The Dodecahedron: ‘As long as the answer is right, who cares if the question is wrong?’
The Dodecahedron has clearly lost the plot here. Formulating the question correctly is the key starting point. If the question is wrong, no amount of experimentation or measuring will provide you with an answer.
The purpose of most research is to try and provide evidence in support of a general statement of what one believes to be true. The first step in this process is to establish a hypothesis. A hypothesis is a clear statement of what one believes to be true. The way in which the hypothesis is stated will also have an impact on which measurements are needed. The formulation of a clear hypothesis is the critical first step in the development of research. Even if we can’t make measurements that reflect the truth, the hypothesis should always be a statement of what you believe to be true. Coping with the difference between what the hypothesis says is true and what we can measure is at the heart of research design and statistics.
BOX 1.1 The four key elements of research
Hypothesis
Design
Statistics
Interpretation
TIP
Your first attempts at formulating hypotheses may not be very good. Always discuss your ideas with fellow students or researchers, or your tutor, or your friendly neighbourhood statistician. Then be prepared to make changes until your hypothesis is a clear statement of what you believe to be true. It takes practice – and don’t think you should be able to do it on your own, or get it right first time. The best research is collaborative, and developing a clear hypothesis is a group activity.
We can test a hypothesis using both inductive and deductive logic. Inductive logic says that if we can demonstrate that something is true in a particular individual or group, we might argue that it is true generally in the population from which the individual or group was drawn. The evidence will always be relatively weak, however, and the truth of the hypothesis hard to test. Because we started with the individual or group, rather than the population, we are less certain that the person or group that we studied is representative of the population with similar characteristics. Generalizability remains an issue.
Deductive logic requires us to draw a sample from a defined population. It argues that if the sample in which we carry out our measurements can be shown to be representative of the population, then we can generalize our findings from our sample to the population as a whole. This is a much more powerful model for testing hypotheses.
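To make the idea of sampling from a defined population concrete, here is a minimal Python sketch (the 'population' of serum cholesterol values and the sample size are invented purely for illustration, and it assumes numpy is available). It draws a simple random sample from a defined population and compares the sample mean with the population mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical 'defined population': serum cholesterol (mmol/L) for
# 10,000 adults. The values are simulated purely for illustration.
population = rng.normal(loc=5.5, scale=1.0, size=10_000)

# Deductive approach: draw a simple random sample from that population.
sample = rng.choice(population, size=100, replace=False)

print(f"Population mean: {population.mean():.2f} mmol/L")
print(f"Sample mean:     {sample.mean():.2f} mmol/L")
# Because the sample was drawn at random from the defined population,
# what we find in the sample can reasonably be generalized back to it.
```

If the sample had instead been a convenient group of volunteers, nothing in the arithmetic would change, but the final step (generalizing back to the population) would no longer be justified.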
As we shall see, these distinctions become important when we consider the generalizability of our findings and how we go about testing our hypothesis.
1.4.2 Developing the ‘Null’ Hypothesis
In thinking about how to establish the ‘truth’6 of a hypothesis, Ronald Fisher considered a series of statements:
No amount of experimentation can ‘prove’ an inexact hypothesis.
The first task is to get the question right! Formulating a hypothesis takes time. It needs to be a clear, concise statement of what we believe to be true,7 with no ambiguity. If our aim is to evaluate the effect of a new diet on reducing cholesterol levels in serum, we need to say specifically that the new diet will ‘lower’ cholesterol, not simply that it will ‘affect’ or ‘change’ it. If we are comparing growth in two groups of children living in different circumstances, we need to say in which group we think growth will be better, not simply that it will be ‘different’ between the two groups.
The hypothesis that we formulate will determine what we choose to measure. If we take the time to discuss the formulation of our hypothesis with colleagues, we are more likely to develop a robust hypothesis and to choose the appropriate measurements. Failure to get the hypothesis right may result in the wrong measurements being taken, in which case all your efforts will be wasted. For example, if the hypothesis relates to the effect of diet on serum cholesterol, there may be a particular cholesterol fraction that is altered. If this is stated clearly in the hypothesis, then we must measure the relevant cholesterol fraction in order to provide appropriate evidence to test the hypothesis.
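As a rough illustration of why the wording matters, the sketch below (simulated cholesterol values and a t-test from scipy, used purely for illustration; significance testing is covered properly in later chapters) compares the vague hypothesis that the new diet will 'change' cholesterol with the specific hypothesis that it will 'lower' cholesterol. The directional hypothesis leads to a one-sided rather than a two-sided test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical end-of-study serum cholesterol values (mmol/L);
# simulated solely to illustrate the point.
control_diet = rng.normal(loc=5.8, scale=0.9, size=40)
new_diet = rng.normal(loc=5.4, scale=0.9, size=40)

# 'The new diet will CHANGE cholesterol'  -> two-sided test
two_sided = stats.ttest_ind(new_diet, control_diet, alternative="two-sided")

# 'The new diet will LOWER cholesterol'   -> one-sided test
one_sided = stats.ttest_ind(new_diet, control_diet, alternative="less")

print(f"Two-sided p = {two_sided.pvalue:.3f}")
print(f"One-sided p = {one_sided.pvalue:.3f}")
# The one-sided p is about half the two-sided value whenever the observed
# difference lies in the hypothesized direction.
```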
No finite amount of experimentation can ‘prove’ an exact hypothesis.
Suppose that we carry out a series of four studies with different samples, and we find that in each case our hypothesis is ‘proven’ (our findings are consistent with our beliefs). But what do we do if in a fifth study we get a different result which does not support the hypothesis? Do we ignore the unusual finding? Do we say, ‘It is the exception that proves the rule?’ Do we abandon the hypothesis? What would we have done if the first study which was carried out appeared not to support our hypothesis? Would we have abandoned the hypothesis, when all the subsequent studies would have suggested that it was true?
There are no simple answers to these questions. We can conclude that any system that we use to evaluate a hypothesis must take into account the possibility that there may be times when our hypothesis appears to be false when in fact it is true (and conversely, that it may appear to be true when in fact it is false). These potentially contradictory results may arise because of sampling variation (every sample drawn from the population will differ from the next, so not every set of observations will necessarily support a true hypothesis) and because our measurements can never be 100% accurate.
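A small simulation can illustrate the first of these problems. In the sketch below (all figures are invented, and numpy and scipy are used purely for illustration), a new diet genuinely lowers mean cholesterol by 0.3 mmol/L, yet when five small studies are simulated, typically only some of them reach statistical significance, simply because of sampling variation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

TRUE_EFFECT = -0.3   # in this simulation the diet really does lower cholesterol
N_PER_GROUP = 30

for study in range(1, 6):
    control = rng.normal(loc=5.8, scale=1.0, size=N_PER_GROUP)
    treated = rng.normal(loc=5.8 + TRUE_EFFECT, scale=1.0, size=N_PER_GROUP)
    result = stats.ttest_ind(treated, control, alternative="less")
    verdict = "supports" if result.pvalue < 0.05 else "does not support"
    print(f"Study {study}: difference = {treated.mean() - control.mean():+.2f} "
          f"mmol/L, p = {result.pvalue:.3f} -> {verdict} the hypothesis")
```

Running the loop with a different random seed gives a different mix of 'supporting' and 'non-supporting' studies, even though the underlying effect is always real. That is exactly the difficulty described above.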
A finite amount of experimentation can disprove an exact hypothesis.
It is easier to disprove something than prove it. If we can devise a hypothesis which is the negation of what we believe to be true (rather than its opposite), and then disprove it, we could reasonably conclude that our hypothesis was true (that what we observe, for the moment, seems to be consistent with what we believe).
This negation of the hypothesis is called the ‘null’ hypothesis. The ability to refute the null hypothesis lies at the heart of our ability to develop knowledge. A good null hypothesis, therefore, is one which can be tested and refuted. If I can refute (disprove) my null hypothesis, then I will accept my hypothesis.
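In practice, this decision is usually made with a significance test applied to the null hypothesis. A minimal sketch of the logic (invented cholesterol figures, the conventional 5% significance level, and scipy's t-test, used purely for illustration) might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# H1: the new diet LOWERS serum cholesterol.
# H0 (the null hypothesis): the new diet does NOT lower serum cholesterol.
control = rng.normal(loc=5.8, scale=0.9, size=50)
new_diet = rng.normal(loc=5.3, scale=0.9, size=50)

result = stats.ttest_ind(new_diet, control, alternative="less")

ALPHA = 0.05  # the conventional 5% significance level
if result.pvalue < ALPHA:
    print(f"p = {result.pvalue:.3f}: reject H0 and accept H1")
else:
    print(f"p = {result.pvalue:.3f}: H0 cannot be rejected on these data")
```

Note that the test never 'proves' H1 directly; it only tells us whether the evidence is strong enough to refute H0.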
A theory which is not refutable by any conceivable event is non‐scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice. [1, p. 36]
Let us take an example. Suppose we want to know whether giving a mixture of three anti‐oxidant vitamins (β‐carotene, vitamin C, and vitamin E) will improve walking distance in patients with Peripheral Artery Disease (PAD), an atherosclerotic disease of the lower limbs. The hypothesis (which we denote by the symbol H1) would be: