
CHAPTER 3 Research Foundations, Methods, and Issues in Developmental-Behavioral Pediatrics

THE UNIQUE NATURE OF DEVELOPMENTAL-BEHAVIORAL PEDIATRIC RESEARCH

The scope of research in the field of developmental-behavioral pediatrics (DBP) is as diverse and rich as the clinical field itself. A wide range of research methods and analytical techniques accounts for both its depth and its complexity. The same characteristics that give the field's findings their practical significance and relevance, however, often pose critical challenges to ensuring their scientific validity.

The research and the associated research teams are often multidisciplinary, permitting the application of various methodological approaches. The field of DBP permits integration of complementary theoretical perspectives and methods, such as the blending or juxtaposition of quantitative methods characteristic of medical science with qualitative approaches more typical of social science research. Research training in the field is therefore more eclectic and broader than in subspecialties that rely almost exclusively on basic science techniques. The field does not have one well-circumscribed set of research methods that can be mastered in a relatively short time. For quality research in DBP, multidisciplinary teams must consist of individuals who can each contribute their own perspective and skills, and each team member must be adequately informed of the basic principles inherent in the research approaches of the other disciplines.

DBP research often aims to study the full spectrum of child development and behavior: from normal variations to concerns or problems to clinical disorders. One of the driving forces for establishing the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders was to standardize the diagnostic criteria for disorders to foster consistency in research on mental illness.1 Research that incorporates the continuum of developmental and behavioral difficulties must establish reliable and valid outcome measures for subthreshold or problem conditions, or criteria for identifying where on the bell-shaped curve of behavior or development the appropriate cutoff lies for defining a concern or a problem. Although achieving reliability in delineating the diagnostic criteria for a mental illness may be challenging, it is often even more elusive for a behavioral problem or personality trait. One common approach is to inquire whether the characteristic of interest (e.g., attention) is believed to occur significantly more often in one person than in typical peers of the same age or developmental level and to require an association with some perceived impairment (e.g., attention-deficit/hyperactivity disorder [ADHD]). This approach often introduces a reliance on subjective, self-reported measures of perceived impairment or relative deviation from perceived norms that can compromise validity and produce a reporting bias.
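
To make the cutoff problem concrete, the following minimal sketch shows one way such a rule might be operationalized: a child is flagged as having a "concern" only when a rating score exceeds the age-norm mean by some number of standard deviations and a caregiver also reports impairment. The 1.5 SD threshold, the 0-30 rating scale, and the normative sample are hypothetical illustrations, not values taken from this chapter.

```python
# Minimal sketch: operationalizing a "concern" cutoff on a continuous
# behavior rating. Threshold, scale, and data are illustrative, not normative.
from statistics import mean, stdev

def flag_concern(score, norm_scores, impairment_reported, sd_cutoff=1.5):
    """Flag a child whose score exceeds the same-age norm mean by more than
    `sd_cutoff` standard deviations AND whose caregiver reports impairment."""
    mu = mean(norm_scores)
    sigma = stdev(norm_scores)
    deviates = score > mu + sd_cutoff * sigma
    return deviates and impairment_reported

# Hypothetical same-age normative sample of inattention ratings (0-30 scale)
norms = [8, 10, 12, 9, 11, 13, 10, 7, 12, 11]
print(flag_concern(score=21, norm_scores=norms, impairment_reported=True))  # True
```

Note how both criteria in the sketch are themselves judgment calls: the choice of cutoff and the source of the impairment report are exactly the points at which subjectivity and reporting bias can enter.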

Research in DBP often addresses more abstract issues, such as community support or adjustment to illness. Because much of the research addresses such common topics, the researcher may assume that the methodology is therefore “simple.” But, in fact, operationalizing these variables and developing and validating relevant measures are difficult. Much of the research in DBP involves measuring constructs for which validated measures do not already exist and for which objective, concrete biological outcome measures are not feasible.

Because DBP often assumes an ecological perspective, researchers are more apt to look critically at sociocultural influences on child development and behavior. Such factors are difficult to measure, even harder to report accurately, and far more difficult to interpret or explain. The use of race and ethnicity as explanatory variables illustrates the complexity of this issue.2 Researchers who understand the complexity of social and cultural influences appreciate the futility of controlling for all relevant influences within an ecological model.

Despite these challenges, the complexity of research design issues in DBP fosters its richness. The multiple perspectives and theories and the diversity of available methodological approaches enable the construction of rich, multidimensional theoretical models. Researchers must necessarily explore not only outcome measures but also mediators and moderators (see Chapter 2). The complexity is increased by the factor of time and the challenges inherent in measuring one construct in the context of a child’s developmental trajectory. For example, in studies of the influences of early childhood experiences on later language outcomes, investigators need to consider not only the multiple environmental, familial, cultural, and community factors that may influence language development but also the reality that developmental processes are not static in the individual child. Parsing out how much of the change in language development is attributable to the normative process of child development, to inherent deficits in the child, to social, environmental, family, or community factors, or to the unanticipated effect of uncontrolled historical events (such as changes in preschool policy or educational interventions) can be daunting.
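
As a simplified illustration of one common analytic strategy, the sketch below uses simulated data and ordinary least squares to adjust a group comparison (e.g., program vs. comparison children) for age at testing, so that normative age-related growth is not mistaken for a program effect. The variable names, data, and effect sizes are hypothetical and serve only to show the partitioning idea; a real study would require a far richer model of mediators, moderators, and time.

```python
# Minimal sketch: separating normative age-related growth from a group
# (e.g., intervention) effect with ordinary least squares on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
age_months = rng.uniform(24, 60, n)            # child age at testing
group = rng.integers(0, 2, n)                  # 0 = comparison, 1 = program
# Simulated vocabulary score: grows with age plus a modest group effect
vocab = 5 + 1.2 * age_months + 8 * group + rng.normal(0, 10, n)

# Design matrix: intercept, age, group membership
X = np.column_stack([np.ones(n), age_months, group])
coef, *_ = np.linalg.lstsq(X, vocab, rcond=None)
intercept, age_slope, group_effect = coef
print(f"growth per month of age:            {age_slope:.2f}")
print(f"group effect after adjusting for age: {group_effect:.2f}")
```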

CROSS-CUTTING METHODOLOGICAL AND THEORETICAL ISSUES

The nature of DBP research introduces a range of cross-cutting methodological and theoretical concerns that must be addressed to ensure the validity of the findings. This section highlights select examples that illustrate the complexity of the issues that are involved.

Incorporating Child Development within Child Development Research

Central to any research in the area of child development is an appreciation that children’s capabilities and behavior change over time as a result of developmental processes, independent of other factors or interventions. Measures of skills or capabilities therefore need to be adjusted and compared with norms for different ages/stages, introducing analytical concerns for cross-sectional studies involving children of different ages or developmental stages. Measurements of the effect of interventions provided over time may also be compromised by analytical concerns inherent in measuring the same domain at different developmental stages, which may necessitate the use of different age/stage-appropriate instruments or, at the very least, correction for age/stage. In addition, measurement of children’s abilities may be confounded by the child’s developmental capacity to understand instructions and communicate comprehension. For example, young children have been described as having difficulty appreciating the perspective of someone else. It is possible that such difficulty may result, at least in part, from limitations in their ability to comprehend the task requested, their language ability to communicate their understanding, or the researcher’s ability to communicate the task required. Research on young children’s understanding of the concepts of human immunodeficiency virus (HIV) and acquired immunodeficiency syndrome (AIDS) initially suggested that young children’s understanding of core concepts of illness was significantly limited developmentally, which seriously constrained their capacity to benefit from educational interventions; however, subsequent research demonstrated that a developmentally based educational intervention could result in dramatic gains in young children’s conceptual understanding in this area.3 In other words, what appeared at first to be a limitation in children’s ability to learn was subsequently found to represent limitations in adults’ understanding of how to teach effectively and/or in researchers’ ability to validly measure children’s underlying comprehension.
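
The age/stage-correction idea can be illustrated with a minimal sketch that converts raw scores to z-scores (and percentile ranks) within age bands before children tested at different ages are compared. The age bands and normative means and standard deviations below are hypothetical placeholders, not published norms.

```python
# Minimal sketch: converting raw scores to z-scores within age bands so that
# children tested at different ages can be compared on a common scale.
from statistics import NormalDist

# Hypothetical age-band norms: (mean, standard deviation) of the raw score
AGE_NORMS = {
    "3y": (20.0, 4.0),
    "4y": (26.0, 5.0),
    "5y": (31.0, 5.5),
}

def age_adjusted_z(raw_score, age_band):
    """Express a raw score as deviation from the norm for the child's age band."""
    mu, sd = AGE_NORMS[age_band]
    return (raw_score - mu) / sd

def percentile(z):
    """Percentile rank of the z-score under a normal assumption."""
    return 100 * NormalDist().cdf(z)

z = age_adjusted_z(raw_score=24, age_band="3y")
print(f"z = {z:.2f}, percentile = {percentile(z):.0f}")
```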

Qualitative Methods

Qualitative research methods are most appropriate in situations in which little is known about a phenomenon or when attempts are being made to generate new theories or revise preexisting theories. Qualitative research is inductive rather than deductive and is used to describe phenomena in detail, without answering questions of causality or demonstrating clear relationships among variables. Researchers in DBP should be familiar with common ethnographic methods, such as participant observation (useful for studying interactions and behavior), ethnographic interviewing (useful for studying personal experiences and perspectives), and focus groups (involving moderated discussion to glean information about a specific area of interest relatively rapidly). In comparison with quantitative research, qualitative methods entail different sampling procedures (e.g., purposive rather than random or consecutive sampling; snowball sampling, which involves identifying cases through their connections to other cases), different sample size requirements (e.g., the researcher may sample and analyze in an iterative manner until data saturation occurs, so that no new themes or hypotheses are generated on subsequent analysis), different data management and analytic techniques (e.g., reduction of data to key themes and ideas, which are then coded and organized into domains that yield tentative impressions and hypotheses, which serve as the basis of the next set of data collection, continuing until data saturation occurs and final concepts are generated), and different conventions for writing up and presenting data and analyses. The strength of the findings is maximized through triangulation of data sources, investigators (e.g., use of researchers from different disciplines and perspectives, or several researchers independently coding the same data), theories (i.e., use of multiple perspectives), or methods (e.g., use of focus groups and individual interviews to obtain complementary data).
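
The iterative sample-until-saturation logic can be sketched as a simple stopping rule: keep coding transcripts until some number of consecutive transcripts yields no new themes. Representing coded transcripts as sets of theme labels, and requiring three stable transcripts in a row, are simplifications for illustration only; in practice saturation is a judgment made by the analytic team, not a mechanical rule.

```python
# Minimal sketch of a data-saturation stopping rule: stop collecting/coding
# when several consecutive transcripts contribute no new themes.
def code_until_saturation(coded_transcripts, stable_runs_needed=3):
    """coded_transcripts: iterable of sets of theme labels, in collection order."""
    themes, stable_runs = set(), 0
    for i, new_codes in enumerate(coded_transcripts, start=1):
        novel = new_codes - themes          # themes not seen in earlier transcripts
        themes |= novel
        stable_runs = 0 if novel else stable_runs + 1
        if stable_runs >= stable_runs_needed:
            return i, themes                # saturation reached at transcript i
    return None, themes                     # saturation not reached

# Hypothetical coded interviews
interviews = [{"stigma", "access"}, {"access", "cost"}, {"cost"},
              {"stigma"}, {"access"}, {"cost", "stigma"}]
print(code_until_saturation(interviews))
```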

Intervention Fidelity and Treatment Dose

Interventions are often delivered in naturalistic and group settings by individuals who are not part of the research team, such as teachers, parents, and home visitors. Although this allows for the testing of interventions under conditions much more likely to generalize to the broader population, distortions in the delivery of the intervention may occur. Such research requires measures of intervention fidelity (i.e., the degree to which the intervention is delivered in the manner intended by the researcher) and treatment dose (the extent to which the subject participates in or receives the full intervention). A study of a school-based intervention delivered by regular classroom teachers needs not only a strong method for teacher training and monitoring but also explicit measures of how the teachers delivered the intervention and the degree to which students attended and/or received the full intervention. Such monitoring may include a mix of quantitative measures (e.g., curriculum checklists, student attendance records, self-reports of teacher satisfaction with the intervention) and qualitative assessments (e.g., ethnographic observations of classrooms while lessons are being taught, focus groups of teachers, or individual interviews). Other measures (i.e., triangulation) may be used to confirm teacher reports of intervention fidelity or treatment dose, such as asking students to complete a questionnaire about simple concepts or facts from the intervention, to test whether children were exposed to the relevant lessons.
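
At their simplest, the two quantitative summaries described above reduce to proportions, as in the sketch below: fidelity as the proportion of curriculum components delivered as intended (from a checklist) and dose as the proportion of offered sessions a student actually attended. The field names and data are hypothetical; real fidelity measures are usually multidimensional and combined with the qualitative checks noted above.

```python
# Minimal sketch: summarizing intervention fidelity (proportion of curriculum
# components delivered as intended) and treatment dose (proportion of sessions
# attended). Data and field names are hypothetical.
def fidelity(checklist):
    """checklist: dict mapping component -> True if delivered as intended."""
    return sum(checklist.values()) / len(checklist)

def dose(sessions_attended, sessions_offered):
    """Proportion of offered sessions the student actually received."""
    return sessions_attended / sessions_offered

classroom_checklist = {"lesson1": True, "lesson2": True,
                       "lesson3": False, "lesson4": True}
print(f"fidelity: {fidelity(classroom_checklist):.2f}")                     # 0.75
print(f"dose:     {dose(sessions_attended=10, sessions_offered=12):.2f}")   # 0.83
```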
