
CHAPTER 4 Distal Radius Study Group: Purpose, Method, Results, and Implications for Reporting Outcomes

Distal radius fracture care has undergone substantial changes in tools and methods since the first commercial introduction of dorsal and volar fixed-angle subchondral fixation plates in 1996.1 Are these changes good? Is plating “good,” both in general and in specific circumstances?

What is “quality care” as related to distal radius fractures? In Zen and the Art of Motorcycle Maintenance, Pirsig2 examines the concept of Quality. His argument cannot be restated within the context of this chapter; however, his premise of Quality as a unification of all values implies that the true “Quality” of distal radius fracture care can be found only in understanding the whole problem. Some participants in the care process (insurers, hospital administrators, equipment manufacturers) measure and assign value to only a segment or segments of the “results pie.” For the purpose of this chapter, we contend that the physician is the participant with sufficiently broad and specific training to ask the big question: does the addition of new technology matter to the patient, and where should the technology be delivered? We posit that asking this big question is the physician’s obligation, and that doing so requires ongoing comparative examination of outcomes.

In assessing the utility of a new method or technology as it relates to care of a diagnosed condition, we hypothesize the following:

The above-listed hypotheses assume the reader is familiar with basic concepts of outcomes medicine and some concepts related to manufacturing. Specifically, the reader should be aware that a primary outcome can be selected from three broad categories: (1) general health, (2) condition-specific health, and (3) satisfaction.3 It is also assumed that the reader accepts the validity of structured patient questionnaires to measure outcomes within these three broad categories. As related to manufacturing, the second hypothesis examines the generalized utility of the method/technology; this is crucial when a tool requires a particular manipulation or skill set. Physicians are generally familiar with the need to include patient compliance as a factor when testing medication effectiveness. In physician-administered, “tool-based” medicine, it is equally essential to examine the ability of physicians to “comply” with a method or device requirement to achieve a desired outcome, and not to assume that success can be generalized outward from a motivated cohort (the surgeon designer’s experience). The need for providers to prove treatment efficacy can be assumed to vary directly with the potential variation in skill and with treatment risk. Although large outcome differences cannot be categorically stated to occur in all of medicine, significant differences in treatment response to specific musculoskeletal diagnoses are known to occur throughout the United States, and so it seems likely that physician- and region-specific differences in outcomes already exist.4

Traditionally, the best medical experiments manipulate a single variable, with the experiment’s designer separated from the introduction of the variable and from the data collection process. This type of randomized clinical trial (RCT) has many qualities to recommend its use whenever possible. Such studies are expensive, however, at a level that cannot practically be sustained as a means of reconfirming and monitoring surgeon and system efficacy. Also, randomization requires patient enrollment predicated on the physician’s belief that the selected treatments are equivalent. Often, it is impossible to find providers without treatment prejudice, and prejudiced physicians cannot follow the principle of clinical equipoise (a state of genuine uncertainty on the part of the clinical investigator regarding the comparative therapeutic merits of each arm in a trial).5 Taken together, cost and the need for clinical equipoise argue for an alternative method of prospective enrollment and clinical data acquisition.

“Practical clinical trials” (PCT) is a phrase coined by Tunis and colleagues.6 By the authors’ definition, such trials (1) select relevant information/interventions to compare, (2) include a diverse population of study participants (providers), (3) include patients from a heterogeneous mix of practice settings, and (4) collect a broad range of health outcomes.

For the purpose of studying the effects of distal radius fracture treatment, these instructions are particularly useful. They enable study participants to choose the treatment that they believe, based on their skill and experience, best meets a specific patient’s needs. A high percentage of eligible patients can also be expected to enroll, because the relative risk to their health is clinically unchanged compared with not enrolling. Most important in the context of understanding the results of clinical care, such a study can still yield meaningful future treatment guidance, provided that data input is structured and standardized and that treatment selection bias is low.

Conclusions reached from observational studies have been shown to resemble those of RCTs when the available treatments carry similar indications and risk.7 RCTs are valuable, but, assuming that the patients who “fall” by their own choice or by physician recommendation into specific treatment groups in the PCT are similar (equivalent propensity after analysis), the PCT can provide equal or better guidance regarding treatment effectiveness (real-world applicability) than the efficacy guidance gained from the RCT (the result in an expert setting within a specified set of patients willing to be randomly assigned).8
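As a concrete illustration of what “equivalent propensity after analysis” involves, the following is a minimal sketch, assuming Python with numpy and scikit-learn, of estimating propensity scores for an observational two-arm comparison. The covariates, sample sizes, and variable names are entirely hypothetical; this is not the DRSG’s actual analysis.

```python
# Hypothetical sketch of a propensity check for an observational (PCT-style)
# study: model treatment assignment from baseline covariates, then compare the
# fitted propensity scores between groups. Not the DRSG's actual analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400

# Hypothetical baseline covariates: age, baseline DASH, fracture severity (0-2).
age = rng.normal(60, 12, n)
baseline_dash = rng.normal(35, 15, n)
severity = rng.integers(0, 3, n)
X = np.column_stack([age, baseline_dash, severity])

# Hypothetical assignment (True = operative fixation), mildly driven by
# severity, as it would be when surgeons select the treatment themselves.
p_assign = 1 / (1 + np.exp(-(-0.5 + 0.8 * severity)))
treated = rng.random(n) < p_assign

# The fitted probability of treatment given covariates is the propensity score.
model = LogisticRegression().fit(X, treated)
ps = model.predict_proba(X)[:, 1]

# Good overlap between the two propensity distributions supports matched or
# stratified comparisons of the primary outcome (e.g., follow-up DASH).
print(f"mean propensity, treated:   {ps[treated].mean():.2f}")
print(f"mean propensity, untreated: {ps[~treated].mean():.2f}")
```

If the two propensity distributions barely overlap, the groups are not comparable, and treatment-effect estimates drawn from the observational data should be interpreted cautiously.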

The final point leads to the issues of privacy and scalability; that is, the size of the observational trial matters, as does its longevity. A meaningful analogy might be as follows: imagine that we know the number of small planes that took off and landed (without crashing) in Florida in the spring of 2007. What can this tell us about the spring of 2008 in Florida and in the United States? The obvious answer is “not much.” We need more data. Acquiring such data on an ongoing basis would be impossible within the budget allowed for many studies. Hence, the concept of registries has evolved from processes concerned mostly with larger public health issues to registries that “are now vital to quality-improvement programs that assess the safety of new drugs and procedures, identify best clinical practice and compare healthcare systems.”9

The problem discussed by Williamson and colleagues9 concerns the clear need for a tracking mechanism that, while respectful of privacy, also addresses the need to “see” the patient specifically across systems and across time. Such a system must also use an open and scalable means of data display, acquisition, and storage, and it must be deployed via the Internet to be affordable and universally available.

Methods

The Distal Radius Study Group (DRSG) was formed in 2003. Original group membership included four academic centers, one single-specialty group practice with academic affiliation, and one single-specialty group with no academic affiliation. The principal investigator (PI) acted in an oversight capacity and did not enroll patients.

The group was formed with the intent to examine the possibility of an Internet-based data set as a means of detecting differences in distal radius care. The data elements to be collected were agreed on by group members at the outset and were presented from this fixed data set via the Internet to document enrollment, baseline data, treatment data, and follow-up data at specified intervals. The data set was kept constant throughout the study interval (December 2003 to February 2007). Two of the six sites never entered patients (the site housing the oversight PI and the single-specialty group with no academic affiliation). In 2006, a seventh site (the fifth active site) joined the DRSG and has successfully enrolled patients using the complete functionality of the collection system (real-time use with direct data transfer into an electronic medical record).

The study was designed in the manner of a PCT. As such, the inclusion and exclusion criteria were kept as broad as possible. The primary exclusion criteria were (1) open epiphysis, (2) open fracture, (3) history of inflammatory arthritic process, (4) fractures with associated nerve or tendon injuries requiring surgical repair, and (5) inability to read or write English. For the DRSG, institutional review board approval has been obtained and maintained at all sites.

The purpose of our choosing an observational model for the study was threefold:

Previously reported variables were used as a guideline to create structured data sets for demographics, baseline, operative (pathology, equipment, complications), outcomes, and adverse events. The Disabilities of the Arm, Shoulder, and Hand (DASH) score was selected as the patient-reported instrument and designated as the primary outcome for comparison. Fracture type as designated by the Orthopaedic Trauma Association (OTA) classification system for distal radius fractures was selected as the primary means of fracture/disease classification.10 A conceptual model for treatment of distal radius fractures was designed (Fig. 4-1). The model depicts the basic treatment/measurement process and identifies the primary outcome and independent variables.
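One way to picture the structured data set described above is as a typed record per enrolled patient. The sketch below, in Python, uses hypothetical field names; the DRSG’s actual schema is not reproduced here.

```python
# Hypothetical record structure mirroring the data-set categories named above
# (demographics/baseline, operative, outcomes, adverse events). All field
# names are illustrative only, not the DRSG's schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DistalRadiusRecord:
    patient_id: str
    age: int
    ota_class: str                          # e.g., "23-A2" (OTA classification)
    treatment: str                          # e.g., "volar fixed-angle plate"
    baseline_dash: Optional[float] = None   # DASH, 0-100; the primary outcome
    follow_up_dash: dict = field(default_factory=dict)   # weeks -> DASH score
    adverse_events: list = field(default_factory=list)   # structured event codes
```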

A study booklet was created for each site that summarized the data entry process and familiarized users with the structure of the screen seen during data entry. This included a view of all enrolled patients and their follow-up schedule, divided into five postoperative intervals: 2 weeks, 6 weeks, 13 weeks, 26 weeks, and 52 weeks (±20%) (Fig. 4-2). The type of each data element to be collected was visible to the user and enforced by the system: a range with specific end points entered via slider bars, a single choice from a list, one or more choices from a list, or open text entry (Fig. 4-3).
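The ±20% follow-up windows amount to simple date arithmetic. Below is a minimal sketch, assuming Python, with the function name and output format invented for illustration:

```python
# Hypothetical sketch of the follow-up schedule described above: five
# postoperative visits at 2, 6, 13, 26, and 52 weeks, each with a +/-20%
# acceptance window around the target date.
from datetime import date, timedelta

VISIT_WEEKS = [2, 6, 13, 26, 52]

def follow_up_windows(surgery_date: date):
    """Return (target, earliest, latest) dates for each follow-up interval."""
    windows = []
    for weeks in VISIT_WEEKS:
        target_days = weeks * 7
        slack = round(target_days * 0.20)        # +/-20% of the interval
        target = surgery_date + timedelta(days=target_days)
        windows.append((target,
                        target - timedelta(days=slack),
                        target + timedelta(days=slack)))
    return windows

# Example: windows for a hypothetical surgery performed January 15, 2007.
for target, earliest, latest in follow_up_windows(date(2007, 1, 15)):
    print(f"{target}: acceptable {earliest} to {latest}")
```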

To enable data entry at the individual treatment sites, an Internet-based data entry method was designed. BoundaryMedical Incorporated (www.boundarymedical.com) developed, deployed, and hosted the system. No software applications or databases were installed on any remote-site computer; a monitor, an Internet connection, and a browser (any) were the only requirements at the remote site. The methods met Health Insurance Portability and Accountability Act (HIPAA) compliance and data integrity and security standards.
