As we wait for Senate action on food safety legislation and on the cusp of Fresh Summit, here is the link to the PMA submission to the FDA on produce safety earlier this year.



An excerpt:

10. Microbial testing

The produce industry has historically employed microbial testing to measure irrigation water quality, verify wash water sanitation practices and measure the effectiveness of equipment and facility sanitation practices. While this type of testing is not yet universal across the whole spectrum of the industry, more and more operators have been implementing these verification tests as the science has evolved, operators have sought to develop comprehensive food safety programs and the buying community has requested verification measures. Microbial testing in these areas follows logically from risk assessment and management. The identification of a specific risk and the selection of a group of management practices should lead to the next question: how do we measure whether the management practice is effective? Therefore, if a grower or packer identifies irrigation or wash water as a potential risk factor, the logical step would be to employ management practices to ensure the water used is free of pathogens, and the way to verify those management practices are effective is to test for microbial quality. The same logic extends to verifying whether sanitation practices are effective or whether a composting process has been sufficient to kill potentially present pathogens. It is reasonable for FDA to expect industry to employ microbial testing as a tool to verify specific risk management practices. The old saying that “you cannot manage what you cannot measure” is likely applicable here.

The other old saying that also has applicability is “the devil is always in the details”. If microbial testing is used as a tool to verify a management practice, it is important that the test has the proper selectivity and sensitivity to detect the target organism in the biological, chemical or physical environment of the sample. It is equally important that the sampling method and frequency are appropriate to provide confidence that a “negative” result is really a negative result. In some instances, it is more economically feasible and faster to check for indicators instead of the actual pathogen, e.g. generic E. coli in water samples versus E. coli O157:H7.

While it is reasonable for FDA to expect the produce industry to use microbial testing to verify risk management practices, PMA believes that the selection of the type of test and the sampling method needs to be at the discretion of the individual operator. The selection of the type of test and the sampling protocol should be part of the original risk assessment and be focused on providing the verification answer desired. For example, if an operator wants to monitor the effectiveness of his/her sanitation program for a piece of harvest equipment, he/she simply needs to know whether there are any bacteria on the equipment. The sanitation program is certainly not selective, but it should be stringent enough to kill all bacteria on the contact surfaces. Therefore, while the operator is worried about pathogenic bacteria, a simple test that detects all bacteria is sufficient. So instead of a $70 test for E. coli O157:H7, the operator can select a $2 adenosine triphosphate (ATP)-based bioluminescence test and get the verification and speed to results he/she needs.

Similarly, the operator should determine testing frequency and method based on the risk, the management practice being verified and historical data the operator or the industry has available. For example, testing irrigation water has been a priority for some sectors of the produce industry in the last 4-5 years. Of course, irrigation water can come from a number of sources, including wells, municipalities, public reservoirs, canals, rivers, ponds, on-farm reservoirs, etc. Based on the producer’s risk assessment for a commodity and the producer’s knowledge of the quality of the irrigation water source, testing frequency may vary. For instance, a leafy greens grower using sprinkler irrigation from a deep well that has been tested monthly for 3-4 years without a significant generic E. coli result might choose to manage the risk of irrigation water contamination on their farm by using a generic E. coli test at the beginning of a season and sporadically throughout the season, along with a weekly physical inspection of the well head. Alternatively, that same leafy greens grower growing on a different ranch that uses an on-farm reservoir to irrigate the crop and has 3-4 years of data showing fluctuating populations of generic E. coli throughout the season might choose to manage the risk of irrigation water contamination by frequent testing of the reservoir during periods when the data indicate conditions might exist that support generic E. coli growth – perhaps employing drip irrigation during periods of risk and performing frequent physical inspections of the reservoir to monitor potential sources of contamination.

In the end, the type of test used and the sampling protocols employed need to be tailored to the situation, and should not be rigidly defined by rule. From a practical perspective, it would be unrealistic for FDA to try to define methodologies and sampling protocols for each scenario where testing might be used to validate an on-farm, packinghouse, cooling facility or processing risk management practice. A risk x commodity x production process assessment should drive the decision making on microbial testing, and can best be performed by the producer or by regional commodity groups as they develop standards and metrics.

As described earlier under Section 2, FDA can assume leadership of this process by developing a mechanism to recognize food safety standards developed by various commodity groups, associations, etc. In addition, the technology for pathogen testing is evolving rapidly in both the private and public sectors. Codifying rules around testing as it exists today may prevent new innovations in testing from coming to market, or force FDA to revisit these rules to accommodate new technologies. Lastly, FDA may want to consider offering guidance to accompany any requirement to use microbial testing to verify best practices, to help producers develop or enhance testing programs. If this approach is chosen, the produce industry would be a willing partner in assisting FDA to prepare this guidance.

By far, the industry discussion around microbial testing to date is dominated by the value of raw or finished product testing. For some of the commodities that have been historically related to illness outbreaks, producers and buyers have developed raw and/or finished product testing programs for pathogens. Indeed, FDA, USDA and others have also implemented redundant testing programs focused on these commodities and targeted to various points in the supply chain. However, product testing represents a very different set of challenges compared to the process or practice verification testing discussed above. Some of these differences are:

• In contrast to the above examples where the presence of indicators or the general presence or absence of any microorganism meets the need of the verification test, most product testing is directed at pathogen detection. These tests are often done in two parts: rapid screening via unique DNA sequences, followed by confirmation by Bacteriological Analytical Manual (BAM) microbial culture methods. These tests are invariably more time consuming and considerably more expensive.

• Unlike testing water or equipment surfaces, testing plant tissues is much more complex owing to the presence of chemicals and other organic matter that frequently interfere with current molecular methods employed to isolate pathogen DNA or proteins. In a very real sense, test methods need to be optimized based on the crop to account for interference.

• There is generally less time pressure to perform microbial testing when it is being used to verify that a process or practice is performing properly. Sampling can be performed at specific time intervals as part of a routine protocol. Positive test results for indicators do not trigger recalls, and corrective actions can be implemented. On the other hand, product testing introduces a very real time element, as produce is perishable and any “positive” pathogen test would almost certainly trigger destruction of raw product or the recall of finished product already in commerce.

• While not trivial, sampling protocols for water and food contact surfaces have been established and their limitations and significance are understood. By comparison, achieving statistical significance for raw product testing at the field level or finished product testing is functionally impossible: the testing is destructive, and the sheer number of individual plants in a production lot combined with the apparently very low frequency of contamination renders product testing analogous to “finding a needle in a haystack”.

Given the developmental status of testing methodologies for produce and the current absence of validated sampling methods, FDA-mandated product testing would prove difficult to craft and enforce and would create confusion in the industry.

As a food category, produce is unique. Therefore, when considering how or even if pathogen testing in raw or finished products has value as a food safety tool and whether it should become a mandated component of food safety regulation, it is important to account for these unique characteristics. The following are some of the factors that merit consideration.

• Fruits and vegetables are perishable. The perishable nature of many fruits and vegetables dictates that these products must be harvested and shipped within 12-72 hours so that they can be received in distribution centers around the country with approximately 10 days of shelf life remaining. This permits adequate time to distribute produce to retail outlets and foodservice operations to be purchased by consumers. Failure to deliver products within these time constraints and with consistent quality can result in product being rejected at distribution centers, forcing its destruction and waste.

As FDA well knows, to reduce testing time the produce industry uses rapid, DNA-based polymerase chain reaction (PCR) tests based on sequences that are unique to specific pathogens to rapidly screen samples for contamination. Unfortunately, although these tests can be very useful, they have proven to be less than 100 percent conclusive. It turns out that “positives” are not always positive and, in some cases, samples that are positive can be missed. As a result, positive samples must often be subjected to follow-on confirmation testing using proven FDA BAM methods. To offer perspective on the timing of these activities:

• Products are sampled and these samples are shipped to a microbiology testing laboratory, which can take up to one day (depending on where the field or production facility is located relative to the testing lab and if express delivery systems can be used).

• Once received, the testing lab prepares the sample and generally uses an enrichment step to improve detection. This step generally takes 1-2 days, depending on the procedures used and the time needed to review results and transmit them to the produce company.

• If positive results are obtained by PCR test, further confirmatory testing by traditional BAM methodologies follows. This generally requires another 3-4 days as the potential pathogens must be cultured or grown out on plates containing growth media that use color changes and other factors to finally identify if the bacteria are indeed human pathogens.

With many produce commodities, 2-3 days – let alone another 3-4 days for confirmation testing – can mean the difference between product that can be sold into the market at market value and product that has to be disposed of owing to advanced age and/or post-harvest quality defects that develop over time. Operators have had to write off thousands of dollars of high-value product time and again, because product testing took valuable time that either diminished product quality or resulted in product that did not have sufficient time left on its shelf life to permit distribution. Any delay in a producer’s ability to ship finished products or commodities can be devastating whether the testing is generated by buyer requests or regulatory surveillance testing programs.

• Not all pathogens and tests are equal. Test sensitivity and selectivity are important factors when choosing an assay method. There are a variety of tests available commercially today for E. coli O157:H7, Salmonella and other potential pathogens. They can cost anywhere from $8-10 to nearly $100 per test, depending on the technology employed and the testing objective desired. When evaluating the appropriateness of a specific test type or protocol, it is important to consider whether a test meets one’s particular needs in its specificity (the ability to distinguish between closely related bacteria) and/or its sensitivity (the ability to detect various bacterial species at a required level or concentration). If product testing were to become an FDA requirement, it would be incumbent on the agency to specify a standard test methodology for each commodity/pathogen combination. Failure to set very specific test criteria could result in producers using a test that does not have the specificity or sensitivity to achieve the intended objective. For example, if the objective of a mandated product testing program were to test all raw products for Salmonella, without further direction provided, a technically inexperienced producer might opt for any one of the many immunological test kits developed for Salmonella detection, as they are relatively inexpensive and generally simple to use. While these types of test kits are a reasonable choice for testing a sterilized food, they cannot be used reliably on raw produce because of the natural presence of closely related but non-pathogenic relatives of Salmonella that can cross-react with many of these tests.

• Produce is a complex food matrix. Further complicating the preceding discussion is the fact that produce represents a very complex chemical, physical and biological food matrix. Most obvious is the fact that commodities vary substantially in terms of chemical composition; for example, a tomato is chemically very different from iceberg lettuce, which in turn is quite different from a green onion. The produce industry has witnessed several instances in recent years where pathogen testing procedures had to be modified to account for these compositional differences. It has been shown that specific plant metabolites (most often pigments associated with product color) can interfere with PCR reactions and cause “false negatives,” meaning that pathogens are not detected when they are indeed present. In effect, this means that testing procedures would need to be optimized for each pathogen/commodity combination. It is important to note that very few commercial pathogen tests have been validated on a commodity-specific basis.

• Produce has a diverse microbial ecology. Another important aspect of the complexity of testing for pathogens in raw produce is the fact that the exterior surfaces of fruits and vegetables have a vibrant microbial ecology; a number of microbial species are natural inhabitants of fruits and vegetables. Many of these are beneficial bacterial species that can actually protect their host from infection by plant pathogens, and perhaps even human pathogens. As already noted, the sensitivity and selectivity of a test are very important considerations.

Since buyer-driven product testing was introduced in the produce industry (especially in leafy greens), we have seen numerous instances where rapid DNA-based screening methods like PCR have yielded “molecular positive” results. When these samples were further tested to confirm these putative results using standard microbial plating techniques, the initial results were not verified. Indeed, what triggered a positive result in a rapid test for a human pathogen like Salmonella often turned out to be a common nonpathogenic bacterium, like Klebsiella or Citrobacter, that is phylogenetically related to Salmonella but not harmful to humans.

In other words, commonly-employed rapid test methods intended to minimize disruption to the supply chain and preserve product quality can actually result in false positive results if they are not selective enough to unequivocally target the desired pathogen’s unique DNA sequences. This lack of selectivity can have significant financial consequences and logistical impact if decisions regarding use of raw or finished product are based solely on these tests. For example, a 20-acre planting lot may be deemed unusable and is plowed under at a cost of several thousand dollars. Similarly, finished product may be destroyed based on an initial positive result, only to have confirmatory testing find that the original test was erroneous 3-4 days later. Clearly, these rapid DNA-based tests hold much promise for the future, but significant research is still required to ensure that they can be reliably employed with the proper specificity and selectivity.

• Not all detected pathogens may be able to cause human illness. Research is showing that not all pathogenic bacteria found via PCR on fruits or vegetables may actually be capable of growing or subsequently causing illness. Many of the pathogens most often associated with foodborne illnesses have in fact adapted to the warm, high-moisture and nutrient-rich environment of the human digestive tract, where they can exist without causing illness to their human benefactor. In contrast, the surface of a fruit or vegetable is a comparatively harsh environment. The temperature and humidity of the growing or storage environment can fluctuate dramatically, creating an inhospitable environment for the pathogen. Meanwhile, nutrients that might support the bacteria are much less accessible on produce than in the human gut. Therefore, while a human pathogen might survive for some period of time on the surface of a fruit or vegetable, it is not in ideal conditions, which compromises its ability to grow or cause illness.

In studies where pathogens have been purposely placed on the surface of a produce item and permitted to remain there for a period of time, researchers can often detect that pathogen’s presence using a DNA-based test but cannot actually recover or culture any living cells of the pathogen. Rather than thriving on the plant surface, the pathogen either goes into a dormant state or begins to die, yet its DNA retains enough structural integrity to permit detection by PCR testing although it cannot be physically isolated by traditional culture methods. So while one might get a “positive” test result using rapid DNA-based testing, in fact the test is detecting a dead or dying bacterium that may not represent a human health risk.

• The role of enrichment. Another consideration in product testing is the practice of enrichment. Most rapid DNA-based testing methods employ an “enrichment” step in the test process. Product samples are placed in a nutrient-rich culture medium, allowing pathogen cells to grow in ideal conditions so that enough cells can be recovered and sufficient DNA extracted to perform PCR or PFGE tests. Pathogens sampled from the surface of a fruit or vegetable that are in a slowed metabolic condition or are dying may in fact recover in such enrichment conditions and be induced to grow if sufficient time is provided. Studies of various enrichment periods indicate that optimum enrichment times can vary based on the physiological condition of the pathogen, and can run anywhere from 8-20 hours. Many commercial test protocols specify the lower end of this time range, so that products being held pending test results can be released sooner rather than later to meet supply chain and quality demands. Clearly, if fruit or vegetable product testing were to be required by FDA, further research would be needed to define this practice so that consistent enrichment periods could be established.

When weighing the question of enrichment, FDA must also consider its implications. If a pathogen has been physiologically injured by the inhospitable environment on a plant or food surface, but can essentially be “rescued” by using laboratory culture methods, would that pathogen have actually been able to cause illness if the product had been consumed? This is another area of research that needs to be initiated to understand whether injured pathogen cells are capable, under any conditions, of causing disease in humans.

• The zero tolerance standard may not reflect today’s best science. The Federal Food, Drug and Cosmetic Act of 1938 states that the presence of human pathogens in food is considered an adulteration, and prohibits these foods from being placed in commerce. Given the scientific knowledge of that time, this seems reasonable and logical. However, today we know much more about how some human pathogens cause illness, and the dose rates required to elicit symptoms in humans. For instance, we know that new strains of E. coli have emerged in the last 30 years, most notably E. coli O157:H7; unlike its harmless brethren, this bacterium can cause significant human health issues or death at levels as low as 10 cells, especially in the young, the old or immuno-compromised populations. Conversely, we recognize more than 2,000 strains of Salmonella, and current thinking points to the likelihood that the dose rates to cause illness are much higher than with E. coli O157:H7, perhaps requiring a thousand cells or more. Further, Salmonella infection is generally not lethal (although immune-compromised individuals face increased risk).

The implication is that while a zero-tolerance approach for a pathogen such as E. coli O157:H7 is appropriate, today’s science may or may not justify such a strict standard for other, perhaps less devastating pathogens. Clearly and unequivocally, the goal of food producers, government and the public health community should always be to reduce the risk of any pathogen contamination that could conceivably occur. Within the produce industry, there is no acceptable argument against that concept. However, absent an effective kill step that can guarantee elimination of all pathogens without compromising product quality and nutrition, perhaps FDA should consider re-evaluating its zero-tolerance policy in favor of one that is risk-based and better reflects today’s science and epidemiological knowledge. This is an important concept when considering the issue of product testing and its production, cost and public health ramifications.

• Which pathogens should the produce industry test for? If product testing became an FDA requirement, FDA would need to determine which pathogens must be tested for on a commodity-specific, and perhaps even location-specific, basis. A number of bacterial, protozoan and viral pathogens have been associated with foodborne illness outbreaks linked to produce over the last 20 years. In some instances, patterns seem to emerge. For example, Salmonella is more consistently associated with tomatoes and melons, E. coli O157:H7 with leafy greens, Hepatitis A with green onions or berries, and Shigella with leafy herbs. However, there are also a number of examples where these relationships do not hold up. To manage the time element of produce logistics as described earlier in this document and to best utilize resources, it is important to avoid a “one size fits all” approach. Instead, we should employ a science- and risk-based approach to determine a commodity-specific and/or pathogen-specific strategy.

• Sampling may be the most problematic aspect of product testing. The specificity and selectivity of tests employed to identify a pathogen are only half of the equation in a product testing scheme. The other half is the sampling program. It is impractical to test every tomato in a field or every leaf in a head of lettuce as all the marketable product would be destroyed in the process. Instead, the number of samples collected, their distribution, the frequency of collection, the amount collected and other factors need to be carefully calculated if a “sample” is to be created that represents the entire production lot. This is important because the object of product testing is to create confidence that a specific production lot is not contaminated with a potentially harmful human pathogen. While there are many issues associated with actual pathogen tests, in many ways developing a sampling methodology that can achieve statistically significant confidence levels is more troublesome.

Based on the millions of pounds of produce that are harvested, packed or processed, shipped and consumed each day by millions of people throughout the country without illness, we can assume that the frequency of pathogen contamination is quite low. To add weight to this assumption, data from buyer-mandated product testing of some commodities and from FDA/USDA surveillance product testing also reveal that contamination is indeed a low-frequency event. Therefore, it is imperative that our sampling methods be constructed so that we can detect even these low-frequency events. Further, from some of these recent product testing programs we know that contamination, when it does occur, is not uniform. If contamination is found in a field, it tends to be random and isolated. For example, there have been occasions over the last few years where field-level raw product testing of leafy greens has resulted in a “molecular positive” test result, indicating a pathogen may be present in a specific production block or lot. Typically, a 10-20 acre lot of a leafy green is sampled by taking sixty 25-150-gram samples from across the field in a “Z pattern”. The idea is that the samples thus taken represent some of the block’s edges and traverse the interior of the acreage. These samples are generally combined into a composite sample and tested for pathogens.

When a “molecular positive” for a specific pathogen is found in a composite sample, often the grower or processor will go back to that lot, perform an observational risk assessment and establish a formal sampling grid in an attempt to determine how widespread the contamination is and perhaps point out where it might have originated. In the overwhelming majority of these instances, despite intensive individual plant sampling and testing, the initial positive test results are not repeated; thus the “needle in a haystack” analogy often associated with product testing.

For example, a spinach field has more than 4 million plants per acre, with anywhere from 4-6 harvestable leaves per plant. That’s 20 million individual leaves per acre; the lot might be 10-20 acres in size, meaning at least 200 million leaves are contained in that block. The current 60-sample practice might utilize 2,000-3,000 leaves, meaning that only a tiny fraction of the material in any production block is actually tested. So, if the limited sample does not happen to include the specific contaminated plant or leaves, i.e. find that needle in the haystack, you could conclude that the field was not contaminated even though the sampling program was not really sufficient to draw that conclusion.
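To make the arithmetic concrete, the short calculation below (not part of the original submission) estimates the chance that a composite sample misses a small, isolated contamination event. All figures are illustrative assumptions drawn from the ranges cited above, plus an assumed “hot spot” of 100 contaminated leaves.

```python
# Back-of-the-envelope sketch (illustrative figures, not data from the
# submission): probability that a composite sample misses an isolated
# contamination "hot spot" in a 10-acre spinach block.
leaves_per_block = 200_000_000   # ~20 million leaves/acre x 10 acres
hot_spot_leaves = 100            # assumed size of an isolated contamination event
leaves_sampled = 3_000           # upper end of the 2,000-3,000 leaves tested

# Chance that any single randomly sampled leaf is contaminated
p_contaminated = hot_spot_leaves / leaves_per_block

# Chance that every sampled leaf misses the hot spot (binomial approximation;
# the sampling fraction is so small that sampling without replacement
# barely differs)
p_miss = (1 - p_contaminated) ** leaves_sampled
print(f"Probability the sample misses the contamination: {p_miss:.2%}")
```

Under these assumptions the sample misses the contamination well over 99 percent of the time, which is the statistical core of the “needle in a haystack” problem described above.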

The question then becomes, why not just test more material? The problem there is determining how much more material to test, and in what location in the field. Remember, these contamination events are random, of low frequency and isolated. One could take a thousand samples from that same production block and only minimally increase the relative amount of product tested, and could just as easily fail to sample the exact location(s) in the field where the potential contamination resides. It must also be remembered that product testing is destructive, i.e. the product is “used up” by the test so if the test comes back negative, that product nonetheless is gone and not available for harvest.

Finished-product testing is analogous to the example given here on field-level testing. Today’s automated packing machines run at speeds anywhere from 50-100 bags per minute, and sampling 10, 20 or even 100 bags per line per hour only represents a fraction of the total material being processed. From these examples one can understand the inherent problem with developing statistically significant sampling programs that permit the producer to assign confidence levels that support a conclusion that the product in question is free of contamination.
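The same arithmetic applies at the packing line. As a sketch, with assumed figures consistent with the line speeds and sampling rates mentioned above:

```python
# Illustrative arithmetic (assumed figures): what fraction of finished
# product is actually tested when sampling bags from an automated line.
bags_per_minute = 100                   # upper end of the cited line speeds
bags_per_hour = bags_per_minute * 60    # 6,000 bags per line per hour
bags_sampled_per_hour = 100             # an aggressive sampling program

fraction_tested = bags_sampled_per_hour / bags_per_hour
print(f"Fraction of hourly output tested: {fraction_tested:.2%}")
```

Even this aggressive program tests well under 2 percent of the line’s hourly output.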

It can be argued that today’s regulatory and buyer-driven product testing programs have really only enabled the industry to identify the limits of current sampling methods. Indeed, the only instance where pathogen contamination might be consistently and reliably detected in raw or finished products with today’s technologies is if the contamination were uniformly distributed across a substantial portion of a production field. The only example where that has been observed and validated is a single instance when water used to mix pesticides was contaminated with a pathogen and then sprayed over an entire field, i.e. there was a uniform and widespread contamination event.

• When should products be sampled? As FDA considers the proper role of product testing, it is important to understand the implications of where in the process the product is sampled, i.e. raw or pre-harvest versus finished products. Simply put, in-field raw product testing can be less disruptive than finished product testing. As the industry has sought to diminish the business impacts of finished product testing, some have implemented raw product or preharvest testing programs as a strategy to meet selected customer requirements without starting the “biological clock” ticking on product quality and shelf life. Basically, preharvest or raw product testing programs direct that products are sampled and tested prior to their scheduled harvest date, i.e. at the “raw” product stage. As an example, many growers and processors in the primary leafy greens production areas in California and Arizona have implemented such preharvest testing programs to satisfy customer requirements. Typically, these pathogen testing programs rely on field sampling 3-7 days prior to harvest. This permits enough time to sample and test the product and get the results back to the harvester so that a “negative” result can “clear” the field for the scheduled harvest date. In the event of an initial positive result requiring further confirmatory pathogen testing, harvest can be “held” until results from this second phase of testing are complete. While delaying harvest can have negative impacts on quality for some fruits and vegetables, for many commodities this is a better logistical and cost alternative than trying to hold harvested or even finished processed product.
Additionally, if confirmation testing does reveal a “confirmed positive” for a pathogen, the affected product remains in the field, permitting follow-up studies on the cause for contamination, avoiding harvest and packaging costs, and minimizing disposal costs as well as the possibility that product is inadvertently shipped to the consuming public.

While generally less disruptive than finished product testing, field-level raw product testing can still be highly disruptive to the supply chain. Harvest windows for products can often be very narrow due to rapidly changing market opportunities. Delaying harvest to permit product testing can have significant impacts on profitability. This strategy also leaves a potential window of vulnerability, i.e. if the raw product is tested in the field 3-7 days prior to harvest, any contamination that might occur after sampling but before harvest could go undetected.

Clearly, a number of critical issues remain for FDA to consider regarding microbial testing, specifically pathogen testing on raw or finished products. Simply put, it is not possible to test one’s way to safety; it is much more prudent to use resources that might otherwise be devoted to product testing to instead develop and manage practices that mitigate contamination in the first place. Given the developmental status of product testing methodologies for produce and the current absence of validated sampling methods, FDA rules requiring product testing would prove difficult to create, present enforcement challenges and likely create confusion in the industry.

That said, FDA should be encouraged to work with the industry, testing laboratories and method certification authorities to identify research needs to improve and validate risk-based pathogen testing methods. FDA is currently conducting exciting research on new methods to more precisely and efficiently detect pathogens in foods and private companies and academic institutions are pursuing similar objectives. The produce industry and FDA should work together to bring these innovations to bear as they are validated in production environments.