Assessment of hepatic function: Implications for the surgical patient

Published on 09/04/2015 by admin


Chapter 2 Assessment of hepatic function

Implications for the surgical patient

Preoperative Considerations

The proper preoperative evaluation of liver function is arguably one of the most important and challenging assessments to make prior to hepatic resection. It requires analysis and interpretation of a wide variety of factors, including the determination of baseline parenchymal disease (fibrosis, steatosis, cirrhosis), the appropriate extent of resection required, and the predicted volume of the remnant liver, culminating in the prediction of risk of postoperative hepatic failure. The multiple and diverse functions of the liver preclude easy evaluation; therefore it is not difficult to understand that no individual test, or even a panel of tests, completely and accurately assesses preoperative liver function prior to resection.

In recent years, surgeons have increasingly pushed the limits of hepatic resection for a number of diseases, given the improvements in long-term outcome that can be achieved; a particularly striking example is resection for metastatic colorectal cancer (see Chapters 81A and 90A). Additionally, laparoscopic and nonanatomic resections are being performed with increasing frequency, as excellent oncologic outcome is being reported for both primary and secondary hepatic neoplasms. These liver parenchymal–sparing procedures are often performed in the setting of cirrhosis or treatment with potentially hepatotoxic chemotherapy and therefore require a very precise assessment of both preoperative and predicted postoperative liver function.

An ideal assessment of liver reserve would allow surgeons to make real-time decisions, when unexpected lesions or other findings are encountered intraoperatively that require a greater extent of parenchymal sacrifice than anticipated. Preoperative assessment of the functional liver remnant following resection is clearly critical, as it may help select high-risk patients for alternative procedures, such as staged resection, nonanatomic resection, or the incorporation of ablative procedures. Furthermore, such assessment will help select patients for preoperative portal vein embolization (PVE) to induce hypertrophy of the remnant liver prior to resection.

Despite a long list of available functional studies, imaging techniques, and markers to assess preoperative liver function (Table 2.1), no single measure or combination of measures has been demonstrated to be more predictive of outcome following resection than clinical assessment by an experienced hepatic surgeon. With improvements in surgical technique, anesthesia support, and ICU care, surgical mortality following hepatic resection has fallen to less than 3% at many institutions. However, death or complications related to liver failure following resection remains a significant cause of overall morbidity and mortality, with 16% to 50% of all mortality the result of liver failure, even in the setting of metastatic colorectal cancer (Nagao et al, 1987; Nagasue et al, 1993; Doci et al, 1995; Nordlinger et al, 1996). The continued search for an ideal measure of liver function before and after resection promises to allow for better selection of patients as well as better selection of operative procedures. This chapter reviews the measures of liver reserve that are currently available, which can broadly be divided into four categories: 1) clinical scoring systems using standard assessment and laboratory values, 2) measurements of hepatic uptake and excretion, 3) measurements of hepatic metabolism and excretion, and 4) measurements of predicted postoperative liver volume.

Table 2.1 Overview of Assessment of Hepatic Function



Clinical Scoring Systems

The clinical assessment of hepatic function has evolved from the Child’s system (Child et al, 1964) and its subsequent modification, the Child-Pugh score (Pugh et al, 1973) as shown in Table 2.2. This scoring system was originally developed to predict mortality in patients with portal hypertension after portocaval shunt operations, and it has evolved to become a useful predictor of liver-related mortality in patients with cirrhosis.

Using a combination of clinical assessments and standard laboratory values, the Child-Pugh score is a quick assessment that can be easily performed on a preoperative patient (see Chapter 70B). The laboratory values included in the scoring system are total bilirubin, serum albumin, and prothrombin time (PT)/international normalized ratio (INR). Bilirubin alone is neither sensitive nor specific for intrinsic liver disease but serves as an indirect measure of the ability of the liver to take up, conjugate, and eventually secrete bilirubin. Other causes of bilirubin elevation include biliary obstruction, primary biliary cirrhosis, sclerosing cholangitis, and hemolytic conditions. Serum albumin level is a measure of the liver’s synthetic ability, as albumin is produced exclusively in the liver. Its half-life is approximately 20 days; therefore it is not sensitive in detecting acute hepatic decompensation. Additionally, protein-losing enteropathies, malnutrition, renal disease, and burns can lead to hypoalbuminemia. PT/INR is also an important measure of liver synthetic function, because the liver is the major site for synthesis of blood clotting factors. Clotting factors V and VII and prothrombin are critical components of the extrinsic and common clotting pathways, which the PT directly measures. The half-life of factor VII is around 2 hours; therefore the INR is a much more dynamic measure of liver synthetic function than the serum albumin level. Problems with vitamin K absorption and intravascular consumption of clotting factors can also lead to elevations in PT/INR.

Two additional components of the Child-Pugh classification include a subjective assessment of ascites and encephalopathy. Patients are divided into three classes based on points given for each category. In patients with cirrhosis, the 1-year mortality related to liver failure is minimal (<5%) for Child-Pugh class A patients, compared with 20% and 55% for class B and C patients, respectively (Conn, 1981; Table 2.3).
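Because Table 2.2 is not reproduced in this excerpt, the sketch below assumes the standard published Child-Pugh cutoffs (Pugh et al, 1973); the function name and the 0/1/2 encoding of the subjective ascites and encephalopathy grades are illustrative only.

```python
def child_pugh(bilirubin_mg_dl, albumin_g_dl, inr, ascites, encephalopathy):
    """Return (points, class) using standard Child-Pugh cutoffs.

    ascites / encephalopathy: 0 = none, 1 = mild / grade I-II,
    2 = moderate-severe / grade III-IV (subjective clinical grading).
    """
    def band(value, low, high, reverse=False):
        # Score 1, 2, or 3 points depending on where the value falls.
        if reverse:  # higher is better (albumin)
            return 1 if value > high else (2 if value >= low else 3)
        return 1 if value < low else (2 if value <= high else 3)

    points = (band(bilirubin_mg_dl, 2.0, 3.0)        # <2 / 2-3 / >3 mg/dL
              + band(albumin_g_dl, 2.8, 3.5, reverse=True)  # >3.5 / 2.8-3.5 / <2.8 g/dL
              + band(inr, 1.7, 2.3)                  # <1.7 / 1.7-2.3 / >2.3
              + (ascites + 1)                        # subjective grade -> 1-3 points
              + (encephalopathy + 1))                # subjective grade -> 1-3 points
    cls = "A" if points <= 6 else ("B" if points <= 9 else "C")
    return points, cls
```

A patient with entirely normal values scores the minimum of 5 points (class A); the maximum is 15 points (class C).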

When used for risk stratification prior to hepatic resection, class B and C patients clearly have higher mortality and lower survival rates compared with class A patients (Franco et al, 1990; Nonami et al, 1997). Indeed, very few patients with Child-Pugh class B cirrhosis would be considered candidates for even limited hepatic resection, and none would tolerate a major hepatectomy; class C cirrhosis is an absolute contraindication to even the most minor intervention. The major limitation of the Child-Pugh system is that it is not considered applicable to noncirrhotic patients or to those with other conditions, such as steatosis (see Chapters 65 and 87). Furthermore, class A patients vary in their ability to tolerate a major hepatectomy, whereas a few good-risk class B patients may be candidates. Surgeons are therefore left to use their best judgment to identify “poor-risk” class A patients, in whom resection should be excluded or plans altered, and to identify “good-risk” class B patients.

Within the class A category, an additional means of determining risk of postoperative hepatic decompensation is to assess for subtle signs of portal hypertension prior to resection. In the absence of splenomegaly or varices on imaging, which are rarely present in such patients, the platelet count may be used as a surrogate marker of subclinical portal hypertension. In a cirrhotic patient with otherwise good hepatic function, thrombocytopenia is highly suggestive of significant hypersplenism and portal hypertension severe enough to contraindicate resection in most cases. A direct way to assess for this is measurement of portal venous wedge pressure (Bruix et al, 1996), which can effectively stratify class A patients into high- and low-risk groups; however, this test requires expertise that is not widely available and is not routinely used.

The model for end-stage liver disease (MELD) is a clinical scoring system currently used to predict survival of patients with end-stage liver disease awaiting liver transplantation (Kamath et al, 2001; Wiesner et al, 2001). The MELD score considers total bilirubin, INR, and serum creatinine, the last an assessment of kidney function. The score is derived from the following calculation:

MELD = 9.57 × ln(creatinine, mg/dL) + 3.78 × ln(bilirubin, mg/dL) + 11.2 × ln(INR) + 6.43
In cirrhotic patients, the MELD score predicts the development of postoperative liver failure following hepatectomy for hepatocellular carcinoma (HCC), with a score greater than 11 predictive of worse outcome (Freeman et al, 2002; Teh et al, 2005; Cucchetti et al, 2006; see Chapters 70B, 90A, 90F, and 97A).
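As a rough illustration, the standard MELD calculation (Kamath et al, 2001, as adopted by UNOS) can be sketched as follows. The lower bound of 1.0 applied to each laboratory value and the creatinine cap of 4.0 mg/dL follow the UNOS convention; the function name is hypothetical.

```python
import math

def meld_score(creatinine_mg_dl, bilirubin_mg_dl, inr):
    """Standard MELD score: laboratory values below 1.0 are floored to 1.0
    and creatinine is capped at 4.0 mg/dL before taking natural logs."""
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    score = (9.57 * math.log(cr)
             + 3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 6.43)
    return round(score)
```

With entirely normal laboratory values the floors reduce every logarithm to zero, giving the minimum score of 6; a cirrhotic patient with creatinine 2.0, bilirubin 3.0, and INR 1.5 scores 22, well above the cutoff of 11 discussed above.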

Both the MELD and Child-Pugh scores are meant to assess cirrhotic patients in particular and are considered unreliable in noncirrhotic patients (see Chapters 65 and 87). Furthermore, the subjective assessment of ascites and encephalopathy requires considerable input from the hepatic surgeon with regard to suitability for resection. It is likely this input that allows the experienced surgeon to apply clinical judgment in selecting patients for resection, and it is why the Child-Pugh score alone has been demonstrated to be equivalent to most of the published functional dynamic liver reserve studies (Albers et al, 1989).

Dynamic Liver Tests

Measurement of Hepatic Uptake and Elimination

Indocyanine green (ICG) is an anionic organic dye that is protein bound, selectively taken up by hepatocytes, and excreted unchanged into the bile. Plasma extraction of ICG by the liver is an active process that can be saturated with high doses of ICG (Faybik et al, 2006). Therefore the maximal removal rate of ICG can be measured, and it reflects the uptake and excretory capacity of the liver, which can be extrapolated to reflect hepatic blood flow and functional hepatocyte mass (Caesar et al, 1961).

Assessment of ICG clearance typically involves measuring serum levels at several time points following intravenous administration (see Table 2.3). Retention at 15 minutes is the most commonly used measurement; ICG retention values above 10% to 15% at 15 minutes are considered abnormal and are used as a cutoff to identify patients at high risk for liver failure following liver resection (Lam et al, 1999; Fig. 2.1). Poon and colleagues have recently recommended a safe limit as high as 20% in Child-Pugh class A patients (Poon et al, 2004). Pulse infrared spectroscopy can also be used to detect real-time, continuous blood levels, similar to pulse oximetry, without the need to obtain serum measurements (Okochi et al, 2002).
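The chapter does not specify a computational protocol, but under the common assumption of mono-exponential (first-order) plasma elimination, the 15-minute retention value can be estimated from serial serum samples as sketched below. The function name and the sampling scheme are illustrative only.

```python
import math

def icg_r15(times_min, concentrations):
    """Estimate the ICG plasma disappearance rate k (per minute) and the
    15-minute retention (ICG-R15, %) from serial serum samples, assuming
    mono-exponential elimination: C(t) = C0 * exp(-k * t).

    k is fit by least squares on log-concentration versus time.
    """
    logs = [math.log(c) for c in concentrations]
    n = len(times_min)
    t_mean = sum(times_min) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times_min, logs))
             / sum((t - t_mean) ** 2 for t in times_min))
    k = -slope                        # plasma disappearance rate
    r15 = math.exp(-k * 15) * 100.0   # percent of dose remaining at 15 min
    return k, r15
```

For example, samples drawn at 3, 6, 9, and 12 minutes that decay with k = 0.2/min yield an ICG-R15 of about 5%, comfortably below the 10% to 15% cutoff described above.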

ICG elimination is by far the most widely used and published functional assessment of liver reserve worldwide (Yumoto et al, 1994; Kwon et al, 1997; Fujioka et al, 1999; Tanaka et al, 1999; Wakabayashi et al, 2002; Kokudo et al, 2003; Satoh et al, 2003; Bennink et al, 2004). Imamura and colleagues (2003) reported a 0% mortality rate in a series of 1056 hepatic resections for HCC and cirrhosis using an ICG-15 cutoff of less than 10% for a safe extended resection, 10% to 20% for hemihepatectomy, 20% to 30% for segmentectomy, and only enucleation for an ICG-15 greater than 40%. Noguchi and colleagues (1990) used the ICG maximal removal rate of 0.8 µg/kg cm3 as a cutoff for safe resection, and Ohwada and colleagues (2006) found real-time monitoring of ICG beneficial in evaluating hepatic reserve before, during, and after hepatic resection. ICG clearance has also been useful in predicting short-term prognosis in liver transplant patients (Clements et al, 1988; Lamesch et al, 1990; Oellerich et al, 1991; Yamanaka et al, 1992; Jalan et al, 1994; Tsubono et al, 1996; Igea et al, 1999; Plevris et al, 1999; Jochum et al, 2006). Tsubono and colleagues (1996) noted that a day 1 ICG elimination constant was a better predictor of graft outcome than any other conventional liver function test, and Clements and colleagues (1988) found ICG clearance to be impaired with graft rejection, but it improved as rejection resolved.
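The Imamura decision criteria quoted above can be expressed as a simple lookup. Note that the 30% to 40% band is not specified in this excerpt, so the sketch returns None there rather than guessing; the function name is hypothetical.

```python
def imamura_resection_limit(icg_r15_percent):
    """Map an ICG-R15 value (%) to the maximal safe resection extent
    reported by Imamura et al (2003), as summarized in the text."""
    if icg_r15_percent < 10:
        return "extended resection"
    if icg_r15_percent < 20:
        return "hemihepatectomy"
    if icg_r15_percent < 30:
        return "segmentectomy"
    if icg_r15_percent >= 40:
        return "enucleation only"
    return None  # 30% to 40%: not stated in this excerpt
```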

Unfortunately, the findings for ICG have not been consistent, with many authors finding a lack of correlation between ICG levels and successful liver resection (Wakabayashi et al, 1997; Lam et al, 1999). This has been particularly true in the setting of PVE, and this variability is thought to be the result of the dependence of ICG clearance on total hepatic blood flow, with regional variations markedly altering retention values (Cherrick et al, 1960). This has led a number of surgeons to suggest that no quantitative liver function tests provide a clear advantage beyond the Child-Pugh score for predicting outcome following resection (Albers et al, 1989; Bennett et al, 2005). In a recent study of 111 patients, Child-Pugh score better predicted postoperative morbidity secondary to transient liver failure compared with ICG clearance tests, and this lack of superiority of ICG clearance over the Child-Pugh score has been confirmed by others (Albers et al, 1989; Merkel et al, 1989; Kokudo et al, 2002). Bennett and Blumgart noted significant overlap in ICG parameters between patients with cirrhosis and healthy controls, and in their institutional experience of 1800 liver resections, there were only six deaths, without reliance on any dynamic functional tests (Jarnagin et al, 2002; Bennett et al, 2005).

Hepatobiliary Scintigraphy and Single-Photon Emission Computed Tomography

Radionuclide scintigraphy of the liver can provide information far beyond basic liver anatomy, allowing for assessment of functioning hepatocyte mass as well as hepatic hemodynamics. Uptake of liver-specific radionuclides can be used to quantify overall hepatic function, and calculations of the relative contributions to perfusion by the portal vein and hepatic artery can be used to estimate the presence of liver metastases and cirrhosis. Beyond the classic sulfur and gold colloid scans and liver uptake and excretion imaging of iminodiacetic acid derivatives (HIDA, DISIDA), more recent synthetic, radiolabeled asialoglycoproteins have been developed that are taken up by hepatocyte-specific receptors through an active transport process (Kudo et al, 1990). Receptor numbers are reduced in patients with chronic liver disease, and the receptors are absent from the surface of hepatocellular carcinoma cells; scintigraphy can therefore provide a functional assessment of the liver through its ability to take up and clear these compounds. One such asialoglycoprotein is technetium-99m-galactosyl human serum albumin (99mTc-GSA). Patients undergoing a Tc-GSA scan receive a bolus injection of 185 MBq of 99mTc-GSA, and dynamic scintigraphy is performed with gamma cameras positioned over the heart and liver. Typical curves are shown in Figure 2.2 (Ha-Kawa et al, 1997).

Kwon and colleagues (1995)
