15. Inspiration and perspiration
what every researcher needs to know before they start
Andrew J. Vickers
Introduction
Art, we are often told, is 10% inspiration and 90% perspiration. The moment of brilliance on stage, or on the art gallery wall, is the result of thousands of hours of learning, planning, rehearsal and laborious attention to fine detail.
A similar adage for science might be: ‘10% statistics, 90% logistics’. As a clinical researcher, I spend less time devising protocols, analysing results and writing papers than I do organizing staff, checking work that I have delegated, sorting out financial administration and programming databases.
Research is a practical business, and unless you get the practicalities right, your research will fail, no matter how brilliant its design or conception. Unfortunately, few methodology books will tell you this. This chapter will review some of the practical aspects of research. My aim is to give some simple advice and guidelines based on my own personal experience as a researcher specializing in quantitative research. Some, though not all, of these considerations will also be relevant to qualitative research.
Questions
Defining the question
Research is a tool for answering questions. Unless you know what your question is, you will be unable to design your research. Novice researchers are often unable to frame a question to illustrate their research interest, even after repeated prompting. For example, a researcher at a conference reported that his question was ‘to demonstrate the effectiveness of herbal medicine for cystitis’; another cited a wish ‘to investigate massage for cancer patients’. These are clearly not questions.
There are three general guidelines for defining a research question: (1) questions should be in four parts; (2) questions should be focused; (3) questions should be explicit.
Questions should be in four parts
Research questions are often too general at first to be useful, and it is helpful to make them more precise. When analysed carefully, many questions in health research can be formulated in four parts:
1. an intervention
2. a comparison
3. an outcome measure
4. a population.
Each of these four parts can be seen easily in this question for a clinical trial: ‘What are the effects of acupuncture compared to no acupuncture on headache, health status, days off sick and resource use in patients with headache in primary care?’ The four-part question can be applied to many other types of research, including prognosis (e.g. ‘What proportion of coronary heart disease patients who develop heart failure will die compared to those who do not develop heart failure?’) and diagnosis (‘What is the reproducibility of IgG/IgE testing for food intolerance in patients with chronic disease?’). In the last case, the ‘intervention’ is IgG/IgE testing and the ‘comparison’ is a second test.
Not all forms of research involve a comparison and in these cases the four-part question becomes a three-part question. Examples include case series (‘What is the average reduction in pain scores [outcome measure] in pain clinic patients [population] undertaking an integrated package of care [intervention]?’) or surveys (‘What proportion of UK adults [population] have seen a practitioner [outcome measure] of complementary medicine [intervention]?’). Some research questions, particularly qualitative studies, are difficult to put in the three- or four-part format; nonetheless, the format remains a useful rule of thumb.
Questions should be focused
A colleague of mine was once asked to advise a researcher who wanted to know ‘What forms of discourse were used by doctors in discussing cancer with their patients, what did patients think of this and what were the effects on outcome of the different styles of discourse?’ A vague question of this sort will often produce a vague answer and, accordingly, research which benefits no one. Your question must be focused. A quick test of focus: give an imaginary answer to your question; the shorter the answer, the more focused your question (note that a possible exception to this rule of thumb is qualitative research, the results of which can sometimes be difficult to summarize).
Questions should be explicit
A quick test of explicitness: a research question is explicit if it immediately suggests a research design. The four-part acupuncture question above, for example, immediately suggests a randomized trial in primary care.
One question at a time
Research, like a ‘journey of a thousand miles’, goes one step at a time. Broad questions such as ‘Does hypnosis work?’ or ‘What are the effects of patient expectations on outcome?’ will not be resolved by a single research study. The researcher needs to break down large, global questions into manageable stages: a series of questions each associated with a single study.
Build on existing research
Science is a cumulative enterprise. A review of the background can help you define a study topic, identify appropriate research designs and avoid the mistakes of previous workers. Locate as many studies as possible on both the therapy and the condition under investigation and look up other examples of the type of research that you would like to conduct (have a look at some classic surveys if you want to conduct a survey, for instance). In-depth reading of the research literature is one of the most important preparatory steps for a prospective researcher.
Keep things simple
Given that research will almost inevitably turn out much more complicated than you could possibly imagine, it is a good idea to keep things simple to start with. Studies involving multiple endpoints, complex designs or large numbers of participants should be avoided by all but the most experienced researchers.
One ‘red flag’ to watch out for: if anyone comes up to you while you are planning your research and says ‘wouldn’t it be nice to know …’, panic! Of course it would be ‘nice to know’ all sorts of things. The point is that you cannot answer all of them in your research. This seems a particular problem in questionnaire surveys, where the temptation is always to add just one more question. The problem is that the longer a questionnaire is, the less likely you are to get good-quality data from any particular respondent.
The importance of keeping things simple is often inadequately recognized in the medical research community. For example, a first-time researcher was recently told by a funding committee to change a simple two-arm comparison of a physical therapy technique versus no extra treatment into a complex three-way trial: the technique itself, contact with the physical therapist but no use of the technique, and no contact. This was on the grounds that ‘it would be interesting to know’ which components of the technique were of value: was it the actual technique or just the time and touch of the therapist? The change more than doubled the cost of the trial, and because the funding committee was unwilling to provide the extra money, the trial was never conducted. My own view is that the initial two-arm trial should have been funded, with further research questions addressed subsequently.
Protocols
What is a study protocol?
A study protocol is a precise description of all methodologically pertinent features of a study. An example of a protocol is given in the further reading section, below. The point of a protocol is that it should provide a complete guide to all aspects of trial management and analysis. It should be extremely detailed and explicit: a protocol I recently wrote, for instance, was nearly 8000 words long. As an illustration, here is a short section, chosen more or less at random, which describes the rules for data entry if data are ambiguous:
• chronicity: rounded to nearest year; if a range is given, the highest number will be taken
• age: rounded to nearest year on date of recruitment
• severity: if two tick boxes are marked, take the higher.
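To show how mechanical such rules can be, here is a minimal sketch of how the three rules above might be coded. It is written in Python for concreteness; the function names and parsing details are my own illustrations, not part of any actual protocol:

```python
import re
from datetime import date

def clean_chronicity(raw: str) -> int:
    """Chronicity: round to the nearest year; if a range is given,
    take the highest number (e.g. '2-5 years' -> 5, '3.6' -> 4)."""
    numbers = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", raw)]
    if not numbers:
        raise ValueError(f"no number found in {raw!r}")
    return round(max(numbers))

def age_at_recruitment(dob: date, recruited: date) -> int:
    """Age: rounded to the nearest year on the date of recruitment."""
    return round((recruited - dob).days / 365.25)

def clean_severity(ticked: list[int]) -> int:
    """Severity: if two tick boxes are marked, take the higher."""
    return max(ticked)
```

The point is not the code itself but that every ambiguity in the data has a single, predetermined resolution.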
Many protocols are inadequately detailed, particularly in the plan for statistical analysis. Examples I have seen recently include: ‘we expect to measure the proportion experiencing pain relief of at least 35–50%’ and ‘the main outcome measure will be forced expiratory volume’. Compare these with ‘the primary outcome measure will be the change in mean daily headache score between baseline and the 1-year follow-up’. This is an absolutely explicit guide: you do not have to guess whether to choose 35% or 50% as a cut-off, nor at which follow-up point forced expiratory volume should be assessed.
In centres conducting many studies, certain aspects of methodology can be defined by standard operating procedures (SOPs). SOPs describe certain aspects of research, such as how to chase up missing data by telephone, or how to deal with ambiguous data. The SOPs can be summarized in a single, standalone document that is then referred to in protocols.
Checklists
A good way to make sure that you have incorporated all appropriate detail in your protocol is to use a methodological checklist. An example of a checklist, in this case for clinical trials, is given in Box 15.1. One useful trick is to go through the checklist a second time, this time asking ‘who?’ rather than ‘what?’ (e.g. ‘What measurements will be taken before treatment?’ becomes ‘Who will take measurements on participants before treatment?’).
BOX 15.1
▪ From where will patients be recruited?
▪ How many patients will be recruited and what is the justification for this sample size?
▪ What are the criteria for including patients?
▪ What are the criteria for excluding patients?
▪ How will ethical approval for the study be obtained?
▪ How will informed consent be obtained?
▪ What control group will be used?
▪ How will patients be assigned to treatment and control groups?
▪ For a randomized trial: What method will be used to generate the randomization schedule? What method will be used to conceal allocation until a participant is entered into the trial? What method will be used to ensure that treatment allocation cannot be changed after a participant has been entered into the trial?
▪ What measurements will be taken before treatment?
▪ What other information will be taken before treatment?
▪ What is the treatment to be given to patients in each group?
▪ Will treatment be standardized, or might it vary?
▪ How many treatments can be given over what length of time?
▪ Who will give the treatment? Are providers of treatment sufficiently trained and qualified?
▪ How will the quality of any medicines used be ensured?
▪ Is there evidence that the treatment used will be effective in the patients studied?
▪ Will patients be blinded to their treatment allocation? If so, how?
▪ How will blinding be checked?
▪ Will statistical analysis be conducted blind?
▪ Will the researcher assessing outcome be blinded to patient treatment allocation? If so, how?
▪ What outcome measures will be used?
▪ When will the outcome measurement be made?
▪ What is the primary outcome measure?
▪ How will you monitor the flow of participants through the trial: number eligible; number randomized; number receiving the intended intervention; number not receiving the intended intervention; number providing data for each outcome measure; number withdrawing for each of the following reasons: intervention thought harmful or ineffective, lost to follow-up (e.g. moved away), other; number completing the trial?
▪ What will you do about missing or illegible data?
▪ What statistical methods will be used to test each of the study hypotheses?
Getting the basics right
There has been a good deal of discussion about the importance of adapting research methodologies to the ‘special needs’ of the therapy under consideration. Though I do believe that this is important, I also believe the foundations of any research study have to be absolutely sound. Research with basic methodological flaws is as good as worthless: if you are undertaking a survey, make sure you get a high response rate; if you conduct a randomized trial, make sure you randomize properly; if you analyse any numerical data, make sure that you have a good statistician on hand. You should have a thorough understanding of the criteria used to assess methodological rigour for the type of research you wish to undertake well before you start.
People
Collaboration
A wide range of skills is needed for research
Research requires specialist skills and knowledge. More often than not, you will need skills or knowledge that you do not have. Here are some of the areas of skill or knowledge that might be needed for a typical acupuncture clinical trial:
1. searching bibliographic databases: to find relevant background material
2. clinical trial design: to write the protocol
3. acupuncture: to advise on the acupuncture content of the protocol
4. primary care: to understand the perspective of the doctors in the trial
5. practice management: to access practice databases for recruitment
6. consumer perspective: to ensure that the trial meets consumer needs
7. statistics: to aid protocol design and analyse data
8. typing and word-processing: to put together the funding application
9. financial administration: to prepare estimates of costs for the funding application
10. database programming: to design the databases to manage trial data
11. questionnaire design: to design or choose from existing questionnaires
12. typesetting: to produce questionnaires
13. general practice computer systems: to search for data needed for the trial.
If we did not have these skills on the study team, we would not be able to undertake the trial. The most difficult thing for most first-time researchers is that they have few of these skills. For example, few, if any, of the researchers I have advised have had a good knowledge and experience of statistics; similarly, very few have ever programmed a database. How are you going to analyse your results if you are not confident with statistics? How are you going to manage the huge amounts of data which a study produces without a database? The answer is that you need to collaborate, particularly with experienced researchers.
Indepth knowledge is needed for research
I am often asked to talk at ‘research days’ which aim to teach clinicians how to do research. I sometimes start by saying: ‘I am a researcher but I have become interested in [say] surgery. Could you quickly show me in a day or so how to do surgery?’ Just as it takes many years of training and practice to become a good clinician, so it takes many years to become a good researcher.
I would expect any researcher to have formal training in research design and statistics; a research-related higher degree; a track record of publishing original research in health-related, peer-reviewed journals; and at least 3 years’ experience of original research. This seems a fairly basic requirement: as an analogy, would it be unreasonable to expect a surgeon to have formal training in medical diagnosis and treatment, surgical board certification, a track record of satisfied patients and at least 3 years’ experience of giving treatments?
Anyone considering research should either fulfil these criteria or work with someone who does.
Study manual
A study manual describes, in detail, the practical procedures for a trial. It includes information on issues such as how to run the study databases, file and check study materials, and interview participants. Box 15.2 includes some example sections from the study manual of a trial I recently completed. To give an idea of the sort of detail you need to go into, the study manual of this relatively simple trial was nearly 5000 words long.
BOX 15.2
Preparing for interviews
1. Add data from the ‘patients agreeing to interview’ form to the database. As data for each participant are added, tick by the patient’s name. Double-check that the data on screen match the data on the form and then write the date added and your initials on each sheet. File in the ‘pain trial recruitment pre-interview’ folder.
2. On Friday of every week, run the macro ‘information letters needed’ on the database (the green icon). This finds recruits who have not been sent confirmation of their interview date, prints out a cover letter and an audit sheet of letters sent, and changes the trial status field to ‘information letters sent (date)’. Send these letters on hospital headed notepaper with a patient information sheet (blue) and the map showing how to get to the hospital. File the audit sheet in the ‘pain trial recruitment’ folder, stapled to the appropriate ‘patients agreeing to interview’ form.
3. On Friday of each week, run macro ‘print out study appointment sheets’ (the blue icon). This prints out the appointment details and contact details for each patient scheduled to attend a recruitment interview in the next week. Give these to the study nurse.
4. On Friday of every week, run the macro ‘reminder calls to make’ (the red icon). This prints out a list of forthcoming appointments with participant names and numbers.
5. Every day, check whether there are any reminder calls to make. The day before each recruitment interview, call relevant participants and say:
‘My name is … and I am calling from the pain trial. You may remember that you have an appointment tomorrow at …. a.m. / p.m. This is partly a reminder call and partly a chance to check if you have any questions or difficulties.’
If the patient is unable to come, or wants to come at a different time, write a note to the study nurse specifying the person’s name and the time of the appointment. Do not make changes to the database (do this when you get the study appointment sheets back).
Management
Research requires careful management. There should be regular team meetings where problems are discussed and progress reviewed. The principal investigator needs to keep careful track of financial expenditure and recruitment. There is also a need for monitoring of a trial, to check that what should be happening is happening. The following is an excerpt from a study protocol about trial monitoring:
The principal investigator will conduct an internal audit at the study centre every 3 months to ensure: confidentiality and integrity of databases; effectiveness of database backup systems; confidentiality and integrity of paper records; reconciliation of enquiries with enquiry outcome; number of treatments received by each patient; data entry procedures; minimization algorithm and numbers allocated to each group; comparison of paper records and electronic records. Each month, the principal investigator will review the progress of recruitment by recording number of letters sent, number of enquiries received, number of calls made, number of patients entered, number of migraine patients randomized, number of patients without migraine randomized.
The dry run
I always undertake a ‘dry run’ of a study to test all procedures before involving any patients. I get the study team to go through the trial day by day (‘OK, it is now Monday 15 November 2010’) with imaginary participants (‘Joe Blow has now completed treatment’) and imaginary events (‘Jo Schmo has telephoned to say that she has decided not to take part’). We add appropriate data to the study databases, print off letters and forms, fill in questionnaires and role-play interviews. One particularly important aspect of the dry run is to play devil’s advocate. We imagine situations such as: What do you do if a patient withdraws after baseline but before randomization? What if a patient telephones to say that he will be on holiday during follow-up? Or: What happens if a patient forgets to send back a questionnaire? We then see what, if any, problems this causes our procedures.
Data
Databases
Research involves managing large amounts of data. This applies not only to results and outcomes but also to details of participants’ names and addresses, the stage they are at in the study, their doctor’s name and so on. It is absolutely essential to manage these data with a computer database. You need a good computer, good software (I personally recommend Filemaker Pro) and someone experienced at database design.
One of the best things about databases is that they can be used to maintain study quality. For example, you can program a database to prompt you to send out follow-up questionnaires at the right time or print out the telephone numbers of participants who are late returning data. You can also program a database to check your data for you (see section on data checking, below).
A word of warning, however: it is all too easy to end up with a database containing hundreds of fields, dozens of macros and scores of layouts, and to forget what many of them do and why you added them. When you are working on a database you should document your design in what is known as a ‘codebook’. This contains the name of each field, macro or layout and a description of its function. For example: field name: ‘date q2 late’; type of field: calculation date field; description: created by calculation from the field ‘date q2 sent’ plus 16 days; contains the date on which questionnaire 2 should be received by the office; used in the macro ‘find all late questionnaires’.
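To show how a codebook entry maps on to working logic, here is a minimal sketch of the ‘date q2 late’ field and the ‘find all late questionnaires’ macro described above. It is written in Python purely for illustration (the database in my own studies is Filemaker Pro), and the record structure is an assumption:

```python
from datetime import date, timedelta

Q2_WINDOW = timedelta(days=16)  # per the codebook: 'date q2 sent' plus 16 days

def date_q2_late(date_q2_sent: date) -> date:
    """Calculated field: the date on which questionnaire 2 should
    have been received by the office."""
    return date_q2_sent + Q2_WINDOW

def find_all_late_questionnaires(records: list[dict]) -> list[dict]:
    """Macro: participants whose questionnaire 2 is now overdue."""
    today = date.today()
    return [r for r in records
            if r["date_q2_returned"] is None
            and date_q2_late(r["date_q2_sent"]) < today]
```

Run weekly, such a macro gives you the list of participants to chase for late data.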
Don’t take too much data
Researchers often design studies without sufficient consideration of how they will analyse the information once it has been collected. An easy trap is to ask too many questions and take too much data. For example, an eight-item questionnaire completed daily for 10 weeks by 30 subjects will generate 8 × 70 × 30 = 16 800 data points. Who will type in all this information? How will it be analysed?
Randomization
If you wish to conduct a randomized trial, do make sure that you get randomization right. One of the most important but least understood functions of randomization is to prevent triallists being able to influence treatment assignment. To achieve this, it is important that treatment allocation is concealed. ‘Concealment of allocation’ means that the researchers and clinicians should not be able to guess the group to which a patient will be randomized before he or she is entered into the trial. If allocation is not concealed, it is possible that researchers may interfere with the randomization process, consciously or otherwise. For example, a surgeon who knew that the next patient to be entered into the trial was to be randomized to surgery might try to avoid recruiting a patient thought to have a poor prognosis. A typical example of unconcealed allocation is when a statistician produces a randomized list using a sophisticated computer program that is then posted in the research office for all to see. There is now good evidence that clinicians involved in a trial often subvert randomization unless it is properly concealed, and that trials without adequate allocation concealment are subject to bias.
Randomization should also be designed in such a way that it should be impossible to change treatment assignments once patients have been entered into the trial. This can prevent the sort of problem reported in one case, where randomization took place by removing a coloured or plain marble from a black bag. The principal investigators found that the study nurses were replacing marbles and choosing again if they felt that patients needed a different treatment to the one assigned.
One common method of randomization is the ‘sealed envelope’ method. You take 100 pieces of card, write ‘treatment group’ on 50 and ‘control’ on the remainder, then place them in 100 opaque envelopes and shuffle. The problem with this method is that there is nothing to stop a researcher opening and resealing an envelope or opening a second envelope if the first contained the ‘wrong’ allocation.
The best method of randomization is to use a secure database system. In brief, you have two databases: a registration database that records patient details and a randomization database that conducts the random allocation. Investigators can access the registration database, adding the name and details of a patient after informed consent. After these data are entered, the registration database automatically accesses an assignment from the randomization database and reveals the assignment to the triallist. Only the computer programmer and a backup programmer, in case of emergency, have the password to access the randomization database or the ability to make changes to the registration database. Investigators cannot guess an allocation before registration, because they have no access to the randomization database, and cannot change an allocation subsequently, because they cannot modify data on the registration database after entry.
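The essential point is that registration is committed before the allocation is revealed, and neither step can be undone by the investigator. Here is a minimal sketch of the idea in Python; the SQLite backend, table layout and field names are illustrative assumptions, and in a real trial the access control would be enforced by the database server, with the schedule prepared in advance by the trial statistician:

```python
import sqlite3
from datetime import date

# Sketch of the two-database design described above. The
# pre-generated randomization schedule lives in a table that
# investigators cannot read or modify.
conn = sqlite3.connect("trial.db")
conn.execute("""CREATE TABLE IF NOT EXISTS registration (
    patient_id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    consent_date TEXT NOT NULL,
    allocation TEXT)""")
conn.execute("""CREATE TABLE IF NOT EXISTS randomization (
    slot INTEGER PRIMARY KEY,
    allocation TEXT NOT NULL,
    used INTEGER NOT NULL DEFAULT 0)""")

def register_and_randomize(name: str) -> str:
    """Register a consented patient, then reveal the next allocation.

    The allocation is drawn only *after* the registration row exists,
    so an investigator cannot see it in advance; and this interface
    offers no way to edit or delete a registration afterwards.
    """
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO registration (name, consent_date) VALUES (?, ?)",
        (name, date.today().isoformat()),
    )
    patient_id = cur.lastrowid
    # Take the next unused slot from the concealed schedule.
    row = cur.execute(
        "SELECT slot, allocation FROM randomization "
        "WHERE used = 0 ORDER BY slot LIMIT 1"
    ).fetchone()
    if row is None:
        raise RuntimeError("randomization schedule exhausted")
    slot, allocation = row
    cur.execute("UPDATE randomization SET used = 1 WHERE slot = ?", (slot,))
    cur.execute(
        "UPDATE registration SET allocation = ? WHERE patient_id = ?",
        (allocation, patient_id),
    )
    conn.commit()
    return allocation
```

The code enforces the ordering; the concealment itself depends on investigators being unable to read the randomization table, which is a matter of access control rather than programming.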
Data checking
You need to check, double-check and recheck your data. A few hints and tips:
• Sign off on data. When a researcher completes a form, standard procedure should be to check that it is complete, accurate and legible and then date and initial or sign. The signature means: ‘I have checked these data’. Similarly, when data (e.g. the name and address of a new participant) are added from a form on to a database, the data entry should be visually checked and the form dated and initialled.
• Double-check critical steps. Certain parts of a study are absolutely critical to its quality. For example, if a researcher slightly misspells a participant’s surname on the study database, this is sloppy but not disastrous. If, however, a researcher needs to record that a certain participant has withdrawn from a study, but calls up the wrong participant on the database, the result will be that a participant who should be in the trial is never contacted again, reducing sample size and statistical power. A typical procedure to avoid this type of error might be to add the results of all patient interviews at one time, have the database print out a list of the data added, check the list against the interview sheets, and sign and date the sheet to confirm that a check has been made.
• Guard your code numbers. In most studies, code numbers rather than names are used to identify participants. Given the importance of code numbers, it is surprising to find them treated in a cavalier fashion by some researchers. In a trial I was asked to advise on, the code numbers were written by hand on to the top sheet of a questionnaire. Once the questionnaires had been photocopied it was impossible to check which participant had filled out any given page 2 of the questionnaire. Moreover, some code numbers were difficult to read. Make sure that code numbers are clearly stamped on every sheet of paper and that no patient’s code number can be changed on the database.
• Automated consistency checks. It is possible to program a database to find data which are illogical. What I have done in the past is to get the study team to think of all possible inconsistencies in the data (e.g. patients recorded as having returned data but who are not yet in the trial, or patients with study appointments pending who are recorded as having withdrawn) and then program suitable searches into the study database. These searches are then run every day to alert the study team to possible errors (a sketch of this approach is given after this list).
• Automate data entry if possible. In place of written questionnaires, forms can be designed to be read by optical scanners. Patients can also complete questionnaires online or via automatic telephone systems (‘Enter 0 for no pain, 1 for mild pain, 2 for moderate pain and 3 for severe pain’).
• Data entry rules. Provide those entering data with a complete set of rules for missing, illegible or ambiguous data. You should also provide rules for data which might be presented in different ways. In one study, for instance, we asked runners to report how long it had taken them to complete a marathon. Because I didn’t give those entering data sufficient guidance, race time was variously entered as ‘3 hours and 15 minutes’, ‘3.15’ and ‘315’.
• Double-data entry. Any data (e.g. from questionnaires) that you will analyse and report in the published paper of your research should be entered twice, on two separate databases, with automatic checking set up between the two.
• Make extra checks on group assignments. In controlled trials, make extensive checks that every patient code number has been given the correct allocation (treatment or control) and that any codes you use for treatment assignments (e.g. 1 for treatment, 0 for control) are correct.
• Check your finished paper against the original output from your statistical software. It is not uncommon for an experienced researcher to go to extreme lengths to ensure accuracy of data during a study but then to make a typographical error when typing up the final report.
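As promised above, here is a minimal sketch of an automated consistency check. The status values and field names are invented for illustration; in practice these checks would be programmed as saved searches in the study database itself:

```python
def find_inconsistencies(participants: list[dict]) -> list[str]:
    """Flag records whose fields cannot all be true at once."""
    problems = []
    for p in participants:
        # Returned data, but not yet entered into the trial.
        if p["data_returned"] and p["status"] == "not yet in trial":
            problems.append(f"{p['code']}: data returned but not yet in trial")
        # Appointment pending, but recorded as withdrawn.
        if p["appointment_pending"] and p["status"] == "withdrawn":
            problems.append(f"{p['code']}: appointment pending but withdrawn")
    return problems

# Example: run daily and alert the study team to anything found.
participants = [{"code": "P001", "status": "withdrawn",
                 "data_returned": False, "appointment_pending": True}]
for problem in find_inconsistencies(participants):
    print("CHECK:", problem)
```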
Analysis
Many statistical software packages are programmable: you can load in a set of analyses and run them on whatever data are in memory. I recommend generating a set of simulated data, programming in your analyses as specified in your study protocol, checking that the analyses do what you want them to do and then saving your work. When it comes to adding the real data from your trial, all you have to do is load up your analysis and press the ‘go’ button. Incidentally, if you hear anyone say ‘it took me ages to analyse my results’ it is probable that they were not following a predefined protocol.
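A minimal sketch of this workflow, assuming Python with NumPy and SciPy and using the protocol example above; the sample size and effect sizes are arbitrary values invented purely to test that the code runs:

```python
import numpy as np
from scipy import stats

# Pre-programmed analysis, tested on simulated data before the
# trial closes. The endpoint follows the protocol example above
# (change in mean daily headache score).
rng = np.random.default_rng(0)
n = 100
simulated = {
    "treatment": rng.normal(loc=-1.5, scale=2.0, size=n),  # change scores
    "control": rng.normal(loc=-0.5, scale=2.0, size=n),
}

def primary_analysis(data):
    """Protocol-specified analysis: two-sample t-test on the change in
    mean daily headache score between baseline and 1-year follow-up."""
    t, p = stats.ttest_ind(data["treatment"], data["control"])
    diff = data["treatment"].mean() - data["control"].mean()
    print(f"difference in change scores: {diff:.2f}, t = {t:.2f}, p = {p:.3f}")

primary_analysis(simulated)  # check the analysis runs and makes sense
# When the real data arrive, replace `simulated` and rerun unchanged.
```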
Some researchers suggest that you write up the results section of your paper, including tables and graphs, before you have any data, for example: ‘Mean pain scores (x, SD y) in the treatment/control group were z points (95% confidence interval a, b) lower than those in the treatment/control group (c, SD d).’ This helps to ensure that you are doing the analyses correctly and not merely ‘data-dredging’ (more colourfully known as ‘torturing the data until it confesses’).
Patients
Recruitment is generally cited as the number one problem experienced by researchers. A large number of studies run into difficulties because insufficient numbers of participants are recruited. Given below are a few brief hints and tips on study recruitment.
Getting participants into a study
• Motivating gatekeepers. Many trials rely on a ‘gatekeeper’, such as a patient’s doctor, to refer patients to a study. An extremely common problem is that gatekeepers lose motivation and refer far fewer patients than expected. They forget to tell appropriate patients, fail to do so through lack of time, or simply decide not to because the study is not a priority. It is essential to keep gatekeepers involved and to help them feel ownership of a project: arrange regular face-to-face meetings, send a ‘thank you’ letter when a patient is referred and send a regular newsletter with information about the progress of the trial.
• Keep things as simple as possible for gatekeepers. Use clear and simple forms with tick boxes and a place for a signature at the bottom.
• Remind gatekeepers about the study. Some studies have used complex reminder systems to prompt doctors to refer patients. These include computerized reminders or placing a reminder sticker on the notes of patients who could be eligible for a study. Other studies have used simpler approaches, using cups, pens and posters with the trial logo (e.g. ‘MI? < 6 hours? Think ISIS trial’).
• Recruitment materials should be attractive and professional. Remember that ‘the medium is the message’: sending poorly photocopied or badly laid out information will not help recruitment.
• Recruitment materials should be personalized. Participants normally prefer a personalized letter, with their name and address.
• Recruitment materials should be simple. In one trial I worked on, we improved recruitment rates significantly by changing the recruitment letter. We replaced a letter which gave a detailed and complex description of the trial with one which just said: ‘we are doing some research and it might interest you; give us a call if you want to find out more.’
• Use the media. A study can be a newsworthy event (‘Though arthritis patients have used herbs for thousands of years, it is not really known if they can help. Scientists at the University of London are starting to look into this question…’). Call around journalists and see if any are interested in the story. Ask them to add details of the study telephone line or website.
Keeping participants in a study and getting good-quality data
• Make participants feel important. Study materials should carefully explain why you are doing a study and why the information you will get is important. Recruitment interviews and study materials should include phrases such as: ‘Your responses will be very helpful to us’ or ‘Your responses are very important for the study’.
• Keep in contact with participants. Calling participants (‘we just wanted to check you were getting on OK’) and sending thank you letters or newsletters are good ways of keeping participants motivated, particularly in longer studies. A hint: in any extended study, ask participants to give you the telephone number of a close friend or relative. If you cannot get hold of the participant (e.g. he or she has moved), you can contact the friend or relative for new contact details.
Conclusion
My initial response to someone asking for advice on research is often: ‘Think about whether you really want this career change.’ Research has to be done properly, or not at all. Research cannot be done properly on an amateur basis, without an impact on the rest of a person’s professional life. So if you want to do research, bear in mind first the immediate practical implications for yourself and your work. Often, a good solution is to team up with someone experienced in research.
I believe that research is important and that it has practical benefits which improve the lives of those suffering ill health. That means that all health professionals should support the conduct of research and use the results of research in making decisions. It does not mean, however, that everyone should do research. If you are considering research, think first not about methodology, but about the day-to-day practicalities of managing the complex system of data-gathering and analysis that comprises health research.
Further reading
For reporting of randomized trials, see www.consort-statement.org.
Vickers, A.J., How to measure quality of life in integrative oncology research, J. Soc. Integr. Oncol. 4 (2006) 100–103.
Vickers, A.J., How to randomize, J. Soc. Integr. Oncol. 4 (2006) 194–198.
Vickers, A.J., Multiple assessment in quality of life trials: how many questionnaires? How often should they be given? J. Soc. Integr. Oncol. 4 (2006) 135–139.
Vickers, A.J., How to improve accrual to clinical trials of symptom control 1: recruitment strategies, J. Soc. Integr. Oncol. 5 (2007) 38–42.
Vickers, A.J., How to improve accrual to clinical trials of symptom control 2: design issues, J. Soc. Integr. Oncol. 5 (2007) 61–64.
Vickers, A.J., Basic introduction to research: how not to do research, J. Soc. Integr. Oncol. 6 (2008) 82–85.