Management, Economics and Policy

Variability and Reliability of Test Center and Field Data: Definition of Proven Technology From a Regulatory Viewpoint

Publication Date: September 2005
Cooperating Institution: New England Interstate Water Pollution Control Commission
Principal Investigator: Tom Groves
Project Budget: $96,000
Project Identifier: WU-HT-03-35


On-site regulators and regulatory technical review panels across the country are evaluating a growing number of manufacturers' requests for technology approvals. Technical support documentation in product approval submittals ranges from peer-reviewed journal articles with attached third-party research reports to simple claims that "our system works just like Product X's system that you already approved," with little or no supporting third-party research. Test centers and demonstration projects have been and continue to be initiated throughout the country without a comprehensive assessment or national consensus on how much data, and of what quality, is needed to decide what constitutes a "proven technology."

At the same time, states and provinces are rewriting their rules to take more performance-based approaches. The growing environmental focus in on-site wastewater is shifting the emphasis of rule revisions from traditional disposal toward treatment. The onsite wastewater program arena is rich with existing data sources, including test centers, testing organizations, university test facilities, vendor sampling, and state, county, and local monitoring. What the program lacks is the assembly of valid, quality data into the unified sets needed to confirm statistical trends and relationships. Understanding these relationships will optimize field-testing protocols, reduce unnecessary and costly testing, help predict field performance levels, and allow for more uniform acceptance of new technology by state, county, and local onsite oversight and implementing agencies.
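To make the idea of confirming a statistical relationship concrete, the sketch below pairs hypothetical effluent measurements from test-center runs with matched field installations of the same technology and computes their Pearson correlation. All data values and variable names here are invented for illustration; the project's actual data sources and parameters would differ.

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired effluent BOD5 (mg/L): test-center certification
# runs vs. field monitoring of the same treatment technology.
test_center = [8, 11, 14, 18, 22, 27]
field_sites = [10, 13, 19, 21, 28, 33]

r = pearson_r(test_center, field_sites)
print(f"Pearson r = {r:.3f}")
```

A strong, stable correlation across pooled data sets is the kind of evidence that would let regulators rely on test-center results in place of some repeated field testing; a weak one would argue for keeping field verification requirements.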

This project will develop these statistical relationships, provide a decision support system that integrates test center and field data to predict field performance, and give the regulatory and manufacturing communities common-sense guidance on how much data, and of what quality, is needed to accept a technology as "proven." As the onsite program and industry move toward a performance-based code and approach, this research will provide a baseline understanding of how to assemble, assess, and interpret new and existing data sets to maximize their benefit to the onsite program.
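A minimal sketch of the decision-support idea, under the assumption that test-center and field results are related well enough for a simple least-squares fit: fit a line to hypothetical paired measurements, then use it to predict a field effluent level from a new test-center result. The data, units, and function names below are illustrative, not the project's actual model.

```python
from statistics import mean

def fit_line(xs, ys):
    """Least-squares slope and intercept for y ~ slope * x + intercept."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical paired effluent TSS (mg/L): test-center runs vs.
# field monitoring of the same technology.
center = [5, 9, 12, 16, 20]
field = [7, 12, 15, 22, 26]

slope, intercept = fit_line(center, field)

def predict_field(center_value):
    """Predicted field effluent level for a given test-center result."""
    return slope * center_value + intercept

print(f"Predicted field TSS for a test-center result of 10 mg/L: "
      f"{predict_field(10):.1f} mg/L")
```

In practice such a predictor would be built from unified, quality-assured data sets and reported with uncertainty bounds, which is precisely why the project emphasizes assembling valid data before drawing regulatory conclusions.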