Invented by Ron Vianu, Richard Herzog, Daniel Elgort, Robert Epstein, Irwin Keller, Murray Becker, John Peloquin, Scott Schwartz, Greg Dubbin, Grant Langseth, Elizabeth Sweeney, Mattia Ciollaro, Andre Perunicic, Covera Health Inc
The Covera Health Inc. invention works as follows.
The computer-implemented method includes: accessing unstructured, digitally stored medical diagnostic information; digitally displaying, on a computer screen, at least one set of diagnostic reports; receiving digital input specifying errors in that set; and digitally saving the digital input in association with that set.

Background for "Computer-implemented detection and statistical analysis of errors by healthcare providers"
The approaches described here are not necessarily approaches that were previously conceived or pursued. Unless specifically stated, it should not be assumed that the approaches described here qualify as prior art simply because they are included in this section, nor that they are routine, conventional, or well-understood.
In current healthcare practice, digital images and written reports, usually dictations, are often used as the basis for diagnostic assessment. Radiology is an example of a profession where images of the patient's anatomy and dictated records of radiologists' assessments serve as the core records for a diagnosis. Interpreting digital imaging is complex: it requires a high level of medical and anatomical expertise, as well as the ability to recognize subtle or complicated patterns in the right context. As a result, radiology has a nonzero error rate, which can affect the patient's comfort, care, treatment outcome, and cost. A wrong diagnosis, for example, could lead to the preparation or performance of an unnecessary surgical procedure.
Some diagnostic errors are caused by a lack of skill on the part of a radiologist in interpreting images. Other diagnostic errors can be attributed to differences in how diagnostic information is communicated in diagnostic reports, whether written or dictated. Different radiology practitioners often express diagnoses in different ways, using arcane or otherwise imprecise terminology. Some of these variations accurately express the patient's diagnosis, while others may convey a false or misleading diagnosis.
Many types of diagnostic errors occur in patient examinations, at varying rates. Diagnostic errors can be classified as: (1) false positive reporting of a diagnostic finding, (2) false negative reporting of a diagnostic finding, (3) errors where a finding is "overcalled," or graded as excessively severe, and (4) errors where a finding is "undercalled," or graded as too minor. Other quality issues related to communication include: (1) findings reported in an overly equivocal way, (2) findings reported in an overly vague way, (3) findings reported with inappropriate emphasis, (4) inappropriate or absent comparisons with previous diagnostic studies, and (5) failure to use a standard scoring system, such as the Breast Imaging Reporting and Data System (BI-RADS) in mammogram reports. Technical errors and quality issues can also affect diagnostic radiology exams, including: (1) poor image quality, e.g., low signal-to-noise ratio, (2) images that are degraded or obscured by patient motion or artifacts, and (3) poorly configured exam protocols, e.g., an MRI examination conducted without collecting images with the necessary contrast settings, or images collected at too low a resolution.
Patients and other stakeholders, such as other doctors involved in a patient's care and healthcare payers, are typically unable to assess the accuracy of diagnoses or the presence of specific errors. Most efforts to determine the accuracy of a diagnosis rely on obtaining a second opinion, usually from another medical professional or radiologist, which is then compared with the original opinion. The healthcare system is not well served if only a small group of experts can make correct diagnoses. It is also important to remember that authoritative experts can make mistakes and that pathological assessment involves some subjectivity. It can therefore be difficult for the healthcare system to know whether variation between two diagnoses is due to a diagnostic error in one of them or to multiple ways the same diagnosis could be stated. Seeking a third or further opinion does not resolve this problem, and for most patients it would be prohibitive because of logistics and cost.
There is a need for a robust, standardized, and quantitative method to assess the diagnostic accuracy of radiology practitioners. This requires a system that can scale to standardize multiple aspects of the diagnostic-quality assessment process, including: (1) the interpretation of images, (2) the documentation of diagnostic findings in written or dictated reports, and (3) the categorization and classification of different diagnostic errors and issues.
While comprehensive medical records are typically developed in electronic digital form for each patient, most of the data are unstructured. Examples are digital medical images and dictated diagnostic reports; both are non-standardized and cannot be easily interpreted by computers or machines. Although more structured dictation is possible, it is not a widely adopted approach. Additional tools or systems are required to convert the unstructured data in medical images and diagnostic reports into standardized information that can then be used for assessing diagnostic accuracy, error rates, and quality.
Because a variety of diagnostic errors and related quality issues can occur in diagnostic imaging exams, a quality and accuracy assessment system that targets specific diagnostic findings or errors may be useful. Prioritizing diagnostic findings for which independent radiologists achieve high levels of agreement is one way to do this. Perfect agreement is unlikely in any category of diagnosis or diagnostic error, but levels of agreement vary across categories.
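Agreement levels across categories can be quantified with standard inter-rater statistics such as Cohen's kappa, which corrects raw agreement for agreement expected by chance. The patent does not name a specific statistic; the sketch below uses hypothetical binary labels ("finding present" per exam) from two independent reviewers:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' label sequences."""
    n = len(rater_a)
    # Observed proportion of exams on which the raters agree
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical labels: 1 = finding reported present, 0 = absent, for 6 exams
reviewer_1 = [1, 1, 0, 0, 1, 0]
reviewer_2 = [1, 0, 0, 0, 1, 1]
kappa = cohens_kappa(reviewer_1, reviewer_2)
```

Kappa near 1 indicates a diagnostic category where independent reviews are reliable enough to anchor an error-rate estimate; kappa near 0 flags a category where apparent "errors" may just be inter-reviewer variability.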
The key outputs of diagnostic accuracy and quality assessments include estimates of the accuracy rates and error rates achieved by a radiology provider under evaluation. If these estimates are based directly on data generated by independent radiologists using a standardized process to identify and characterize selected diagnostic findings, the estimates themselves will be subject to inter-radiologist variability.
Stakeholders within the healthcare ecosystem are increasingly interested in reliable, quantitative healthcare quality metrics that are strongly correlated with patient outcomes and patient comfort. Since not all diagnostic issues and errors have the same impact, simple estimates of error or accuracy rates are not always useful quality metrics.
When using a diagnostic accuracy and quality system to evaluate different providers, it is important to account for the fact that providers care for different patient populations. Unadjusted estimates of diagnostic accuracy or error rates may therefore not be appropriate as standardized, generalizable measures of radiology care. A quality assessment system applied across a wide range of healthcare providers must usually adjust for differences in patient populations.
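One common form of such case-mix adjustment is direct standardization: weight each patient stratum's error rate by a common reference population mix rather than by the provider's own mix. The patent does not specify this method; the sketch below uses hypothetical age strata and counts:

```python
# Hypothetical per-stratum (errors, exams) counts for one provider
provider = {"18-39": (2, 100), "40-64": (9, 150), "65+": (12, 80)}

# Hypothetical reference population mix (proportions sum to 1)
reference_mix = {"18-39": 0.5, "40-64": 0.3, "65+": 0.2}

# Crude rate: total errors over total exams, reflecting this provider's case mix
crude = sum(e for e, _ in provider.values()) / sum(n for _, n in provider.values())

# Directly standardized rate: per-stratum rates weighted by the reference mix
adjusted = sum(reference_mix[s] * e / n for s, (e, n) in provider.items())
```

A provider seeing an older, sicker population than the reference will typically have an adjusted rate below its crude rate, making comparisons across providers fairer.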
Furthermore, there is a need for computer-implemented methods that can generate data representing the accuracy or quality of medical diagnoses on a robust, scalable basis. Some institutions have tried to replace radiologists in the clinical workflow of interpreting image data and producing diagnostic reports by using image recognition and interpretation software. These systems are designed to inspect images and flag abnormalities. However, existing systems are known to identify many false positives, or to work only with images that have obvious abnormalities, and they do not provide significant value in this capacity.
Computer-implemented image interpretation and medical report interpretation technologies have not previously been developed, expanded, or adapted for use as part of a diagnostic accuracy and quality assessment system, and the two application domains impose different performance and design requirements. A computer-implemented system that interprets image data in order to assist (or replace) a radiologist in creating a patient's diagnostic report requires high sensitivity and specificity, and it must target many different types of diagnostic findings. The performance requirements are less stringent for a diagnostic accuracy and quality assessment system that is supplemented, or executed solely, by a computer-implemented image interpretation system integrated with a computer-implemented report interpretation system. This relaxation is possible because, as long as the sensitivity and specificity of the computer-implemented systems are quantified, robust and reliable estimates can still be made of the overall diagnostic accuracy, error rates, and confidence intervals that radiology providers achieve while caring for patients.
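One standard way to recover a true rate from a rate measured by an imperfect detector with known sensitivity and specificity is the Rogan-Gladen estimator. The patent does not name this estimator, and the numbers below are hypothetical; the sketch shows how a quantified detector can still yield an unbiased error-rate estimate:

```python
def rogan_gladen(apparent_rate, sensitivity, specificity):
    """Correct an apparent rate for known detector sensitivity/specificity."""
    return (apparent_rate + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Hypothetical values: an automated system with 85% sensitivity and 95%
# specificity flags 12% of a provider's exams as containing an error.
estimated_true_rate = rogan_gladen(apparent_rate=0.12,
                                   sensitivity=0.85,
                                   specificity=0.95)
```

Because the detector's false positives inflate the apparent rate, the corrected estimate is lower than the raw 12%; the correction is only valid when sensitivity + specificity > 1.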
The appended claims can serve as an overview of the invention.
The following description provides a detailed explanation of the invention. It will become apparent, however, that the present invention can be practiced without these specific details. In other instances, well-known devices and structures are shown in block diagram form to avoid obscuring the invention.
1. General Overview
In one embodiment, a method for quantifying radiology diagnostic errors relies on structured, standardized exam reviews performed by independent radiologists to create a database of clinically relevant attributes of radiology images and radiology reports. Digital analysis of these attributes provides an objective source of truth for any diagnosis associated with digital images or physical features of a subject, as well as for any diagnostic errors or quality issues associated with the way diagnoses are described, or omitted, in the radiology report.
A modified embodiment can supplement attributes, or categories of attributes, with reliable measures of the probability or confidence of correctness. These measures can be generated through statistical analysis of variance in the reports generated by radiologists who performed structured, standardized radiology exam reviews. In some cases, multiple such radiologists will independently review the same underlying radiology exam and generate reports that contribute to the variance analysis.
The techniques described herein are best suited to assessing the diagnostic accuracy, errors, and/or quality for a pathology or disease where there is general agreement among experts regarding its physical features, such as location and size.
In some embodiments, a system for quantifying radiology diagnostic errors is optimized to produce accurate quantitative measures of error rates and quality for selected radiology providers and their performance in relation to specific pathologies or diseases. These quantitative measures may be aggregated at different levels of anatomical detail, such as: (1) a combined measure representing the rate of diagnostic errors a radiology provider makes when performing diagnostic knee MRI examinations, or (2) a narrower-scope measure representing the rate of errors a radiology provider makes in relation to the accurate diagnosis of meniscal tears within knee MRI examinations. These quantitative measures can also be aggregated by diagnostic error type, for example: (1) a measure representing the rate of false positives a radiology provider makes in diagnostic imaging exams, or (2) a measure representing the rate at which a finding is "undercalled," or graded incorrectly as too minor. These quantitative measures of diagnostic error rates can further be aggregated at different levels within radiology providers, such as: (1) a measure representing the rate of any diagnostic error made by an individual radiologist in the context of selected diagnostic imaging exams, or (2) a combined measure representing any error made by a group of radiologists working at a single radiology facility in the context of selected diagnostic imaging exams.
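The roll-ups described above amount to grouping structured review records by different keys, such as radiologist, facility, or finding, and computing an error rate per group. A minimal sketch, with hypothetical record fields and values:

```python
from collections import defaultdict

# Hypothetical structured review records produced by independent reviewers
reviews = [
    {"facility": "F1", "radiologist": "R1", "finding": "meniscal_tear", "error": True},
    {"facility": "F1", "radiologist": "R1", "finding": "meniscal_tear", "error": False},
    {"facility": "F1", "radiologist": "R2", "finding": "acl_tear", "error": False},
    {"facility": "F2", "radiologist": "R3", "finding": "meniscal_tear", "error": True},
]

def error_rate_by(records, key):
    """Group records by the given field and compute each group's error rate."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        errors[r[key]] += r["error"]  # bool counts as 0/1
    return {k: errors[k] / totals[k] for k in totals}

by_radiologist = error_rate_by(reviews, "radiologist")  # individual level
by_facility = error_rate_by(reviews, "facility")        # facility level
```

The same helper supports any aggregation axis (finding type, error type, anatomy) simply by changing the grouping key.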
In some embodiments, the measures of diagnostic error rates are based entirely on the empirical diagnostic data and attributes produced by independent radiologists performing standardized reviews of exams performed by the radiology providers under review. In other embodiments, the measures are based in whole or in part on statistical modeling of that empirical data, including hierarchical Bayesian modeling.