Glaucoma Data Standards

From EyeWiki



Background

A common challenge facing glaucoma clinical care and research is the collection and organization of relevant data related to an individual's or a group of individuals' disease.[1][2][3][4] The uptake of electronic health records (EHR) has improved the accessibility of clinical history and examination data to the glaucoma practitioner; however, sharing important data between computer systems remains a challenge. In the clinical setting, this is exemplified by the segregation of clinical exam data stored in the EHR from diagnostic testing data stored in a picture archiving and communication system (PACS). To facilitate communication between the multiple systems, they must agree on a set of data standards used to describe diagnostic tests, findings, diagnoses, and image data.

The challenge is multiplied in glaucoma research, where data from hundreds to thousands of individual patients need to be extracted from multiple EHRs (and PACS) and transformed into a common format.[5] Only then can the data be used to answer medically interesting questions. For high-quality research utilizing machine learning, the inherent risk of bias (or non-generalizability) in training data sets necessitates the use of clinical and diagnostic testing data from a diverse and geographically widespread patient population. Data standards facilitate the collection and integration of data from multiple EHRs, which is often needed to obtain adequately diverse source data.[6][7]

Patient safety and healthcare spending are also impacted by the (lack of) standardized data representations. The AAO Preferred Practice Pattern (PPP) guidelines for primary open-angle glaucoma[8] and suspected glaucoma[9] recommend the use of automated perimetry and RNFL evaluation to assess the progression of disease. The POAG PPP states that "repeat and confirmatory visual field examinations are recommended for test results that are unreliable or show a new glaucomatous defect" before changing treatment. To reduce waste in healthcare, diagnostic testing information needs to be effectively communicated between referring providers. The widespread use of data standards for reporting VF and OCT measures (including reliability) is essential to reducing wasteful duplication and delays in care.

Additional work is necessary to create and promote data standards for use in glaucoma and ophthalmology in general. Clinical exam measures as fundamental as visual acuity present challenges when extracted from EHRs for use in research.[10] In other areas, data standards exist but are poorly implemented by diagnostic device vendors.[11]

Past and current work on data standards, as well as a strategy for using these standards in glaucoma care and research, is described at: Standards Development Strategy for Glaucoma

Glaucoma Imaging Data

Visual Field (VF)

Visual fields are one of the primary diagnostic tests used to assess glaucoma and other optic neuropathies. The DICOM Ophthalmic Visual Field (OPV) file format is the gold standard for saving and communicating visual field results.

Table 2. Examples of visual field devices, testing strategies, patterns, and reported metrics.

Zeiss Humphrey Field Analyzer 3
  Testing patterns:
  • 10-2
  • 24-2
  • 24-2C
  • 30-2
  • 60-4
  • Nasal Step
  Testing metrics:
  • Foveal threshold
  • Visual field index (VFI)[16]
  • Mean deviation (MD)
  • Pattern standard deviation (PSD)
  • Glaucoma hemifield test (GHT)[17]
  Analysis features:
  • Guided progression analysis (GPA)

Haag-Streit Octopus 900
  Testing strategies:
  • Tendency oriented perimetry (TOP)
  • Dynamic
  • Normal (4-2-1 bracketing)
  • Goldmann kinetic perimetry
  Testing patterns:
  • Glaucoma G1-Program
  • Glaucoma G2-Program
  • 24-2
  • 30-2
  • Macula M-Program
  • 10-2
  • Esterman (monocular/binocular)
  Testing metrics:
  • Mean sensitivity (MS)
  • Mean defect (MD)
  • Square root of loss variance (sLV)
  Analysis features:
  • Cluster trend
  • Polar trend

Optical Coherence Tomography (OCT)

Zeiss Stratus
  Technology:
  • Time domain OCT
  • Circular scan of 3.4 mm diameter around optic disc
  Testing metrics:
  • RNFL thickness
    • Average
    • Quadrant
    • Clock-hours
  • Macular GCL/IPL thickness
    • Average
  • Cup-to-disc ratio
  Analysis features:
  • RNFL thickness TSNIT graph over time

Zeiss Cirrus 5000 & 6000
  Technology:
  • Spectral domain OCT
  • 100k A-scans/second (Cirrus 6000)
  • 27-68k A-scans/second (Cirrus 5000)
  • 5 µm axial resolution
  • 15 µm lateral resolution
  Testing metrics:
  • RNFL thickness
    • Average
    • Quadrant
    • Clock-hours
  • Macular GCL/IPL thickness
    • Average
    • 6-sector
  • Cup-to-disc ratio
  Analysis features:
  • RNFL thickness deviation maps
  • Ganglion cell analysis
  • Combined GCL/IPL and RNFL thickness deviation maps
  • Guided progression analysis (GPA)

Heidelberg Engineering Spectralis OCT
  Technology:
  • Spectral domain OCT
  • 40 kHz scan rate
  • 7 µm axial resolution
  • 14 µm lateral resolution
  • 25-48 B-scans/second
  Testing metrics:
  • RNFL thickness
    • Average
    • Quadrant
    • 6-sector
  Analysis features:
  • Asymmetry analysis
  • Progression analysis

Topcon Maestro2
  Technology:
  • Spectral domain OCT
  • 50k A-scans/second
  • <6 µm axial resolution
  • <20 µm lateral resolution
  • Optic nerve, macula, and anterior segment imaging
  Testing metrics:
  • cpRNFL thickness
    • Average
    • Quadrant
    • Clock-hours
  • Macular GCL+IPL thickness
    • Average
    • 6-sector
  • Cup-to-disc ratio
  Analysis features:
  • RNFL thickness trend analysis
  • GCL+ and GCL++ thickness maps
  • GCL thickness trend analysis

Data Extraction

Standard Automated Perimetry

Glaucoma Data Extraction for Research

Relevant Data Standards

SNOMED — Systematized Nomenclature of Medicine

SNOMED is a comprehensive clinical terminology developed to describe all concepts related to medicine. Each concept in SNOMED is assigned a unique code, provides unambiguous meaning, is associated with related concepts, and is maintained on an ongoing basis.[18] In 2004 the National Library of Medicine recognized its importance and made SNOMED CT freely available in the US. A 2005 study found that SNOMED CT had the broadest coverage in ophthalmology among the 5 controlled terminologies available at the time, and it has since become the standard for describing ophthalmic terms.[19]

The SNOMED CT browser is available at https://browser.ihtsdotools.org/ which provides easy online access to the hierarchy of disorders, findings, observations, procedures, etc. For example, glaucoma (as a disorder) is represented as SCTID: 23986001 and found at https://browser.ihtsdotools.org/?perspective=full&conceptId1=23986001&edition=MAIN/SNOMEDCT-US/2023-03-01&release=&languages=en. The hierarchy of glaucoma related disorders can be navigated in the online browser to identify more- or less-specific disorders.

Table 3. Examples of SNOMED CT terms
Semantic Meaning | Example | SCTID
Disorder | Secondary glaucoma due to aphakia | 15374009
Finding | Raised intraocular pressure | 112222000
Observable entity | Snellen visual acuity | 422673001
Procedure | Injection of filtering bleb following glaucoma surgery | 428494000
Physical object | Nonvalved ophthalmic drainage device | 410444004
Substance | Latanoprost | 386926002
Medicinal product | Product containing only latanoprost | 776481004
Qualifier value | Intracameral route | 418821007

DICOM — Digital Imaging and Communications in Medicine

The Digital Imaging and Communications in Medicine (DICOM) standard governs the communication of medical imaging tests and results.[20] All EHR and PACS software needs to be DICOM compliant.[21] In addition to storing raw imaging data alongside computed metrics from the diagnostic test, DICOM can produce structured reports that encode what has been found in ophthalmic images using hierarchical lists of findings, coded or numeric content in addition to plain text, and relationships between features present in the image.[22]

A list of ophthalmology-related DICOM supplements and vendors' conformance statements for these standards can be found at the AAO: https://www.aao.org/education/medical-information-technology-guidelines. A number of DICOM supplements establish standards for communicating ophthalmic results of interest to glaucoma specialists and researchers (Table 1).

Table 1. A few of the many DICOM standards relevant to storing and communicating glaucoma-relevant information.
Supplement | Date Published | Description
Ophthalmic Tomography Image Storage SOP Class (Supplement 110) | 2007 | Describes the storage of the retinal nerve fiber layer (RNFL) in addition to the characterization of the anterior chamber angle exam from tomography scans through the anterior segment.
Ophthalmic Visual Field (OPV) Static Perimetry Measurements Storage SOP Class (Supplement 146) | 2010 | Describes the storage and representation of visual field data including quality measures (fixation losses, false positive rate, etc.), foveal sensitivity, mean sensitivity, and mean deviation.
Ophthalmic Thickness Map Storage SOP Class (Supplement 152) | 2011 | Describes the storage and representation of thickness measurements such as retinal nerve fiber layer (RNFL) maps.

Currently, there is not a DICOM standard for structured optic nerve OCT measurements. One exists, however, for macular grid thickness and volume (Supplement 143, published in 2008).

LOINC — Logical Observation Identifiers Names and Codes

LOINC provides a terminology for communicating clinically-useful measurements between clinical devices (such as laboratory equipment, measurement devices, and EHRs). Codes and definitions exist for sharing visual acuity, IOP, pachymetry, visual field, and RNFL thickness measurements. Each LOINC code provides 6 dimensions (parts) for each observation: component (substance or entity being measured), property (characteristic or attribute), time (interval over which the measurement occurs), system/specimen (thing whose component is being measured), scale (how the observed value is quantified), and method (optional classification of how an observation was made). A FAQ describes these parts in more detail.

Table 4. A few of the many LOINC codes relevant to glaucoma.
LOINC | Common Name | Type | Component | Property | Scale
98497-1 | Visual acuity panel | Clinical order panel | Visual acuity panel | N/A | N/A
98499-7 | Visual acuity, uncorrected, right eye | Clinical observation | Visual acuity^uncorrected | Length Ratio (LenRto) | Quantitative
6616-7 | Visual acuity log MAR Eye - right | Clinical observation | Visual acuity log MAR.right | Complex | Quantitative
79896-7 | Tonometry panel | Clinical order panel | Tonometry panel | N/A | N/A
79764-7 | Type of tonometer | Clinical observation | Tonometer | Type | Nominal
79892-6 | Right eye intraocular pressure | Clinical observation | IOP | Pressure | Quantitative
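The six-part structure described above maps naturally onto a small data structure. A minimal Python sketch follows; the component, property, and scale values are taken from the table, while the `time` and `system` values are illustrative assumptions, and none of this is an official LOINC API.

```python
from typing import NamedTuple, Optional

class LoincCode(NamedTuple):
    """The six LOINC parts (dimensions) for one observation code."""
    code: str
    component: str                 # substance or entity being measured
    prop: str                      # characteristic or attribute ("property")
    time: str                      # interval over which the measurement occurs
    system: str                    # thing whose component is being measured
    scale: str                     # how the observed value is quantified
    method: Optional[str] = None   # optional: how the observation was made

# Component/property/scale come from the table above; time and system
# values here are assumptions for illustration only.
IOP_RIGHT = LoincCode("79892-6", "IOP", "Pressure", "Pt", "Eye.right", "Quantitative")
TONOMETER = LoincCode("79764-7", "Tonometer", "Type", "Pt", "Eye", "Nominal")
```

Modeling codes this way makes the optional `method` part explicit, matching the definition above.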

OMOP CDM — Observational Medical Outcomes Partnership Common Data Model

The Observational Health Data Sciences and Informatics (OHDSI) program is a collaborative leveraging an international network of researchers and health databases for large-scale analytics. OHDSI maintains the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM), designed to standardize the structure and content of observational data to enable efficient research analyses. To facilitate the comparison and analysis of multi-institutional data, the All of Us Research Program transforms ingested EHR data into the OMOP CDM.[23]

The OHDSI Eye Care & Vision Research workgroup (https://www.ohdsi.org/workgroups/) exists to advance the development and implementation of data standards in ophthalmology, optometry, and the vision sciences.[24] Its aim is to support studies using observational ophthalmic data to generate insights that improve health and vision outcomes. Among its accomplishments are gap analyses of two large, well-known EHR systems for eye care, examining where OMOP standards are lacking for commonly used data elements. The workgroup has had a presence at AAO and ARVO annual meetings and is collaborating with Verana Health on transforming the AAO Intelligent Research In Sight (IRIS) Registry to use the OMOP CDM.

Tools & Code

Hypothetical Use Cases for Using Data Standards

Clinical Example

Dozens of patients are seen each day in a busy glaucoma clinic. Most patients will have had multiple glaucoma imaging tests over months to years, requiring comparison to assess for glaucoma progression. An inefficient workflow would require opening the PACS for each patient and manually viewing past tests to assess the rate of progression over time. A more efficient workflow would extract meaningful metrics from the visual field and OCT and import those into the EHR to colocate glaucoma metrics alongside the patient's vision and IOP history. An imaging silo or PACS alone is suboptimal for this diagnostic data because it won't have the patient's clinical exam, medication, or surgical history.

When visual field devices support existing DICOM standards, a compliant PACS can extract meaningful glaucoma metrics such as mean deviation and pattern standard deviation from the patient's clinical testing and share that data with the EHR through standard FHIR interfaces. The EHR can then show the change in MD/PSD over time alongside the patient's IOP and VA.
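As a concrete, hypothetical illustration of that hand-off, a PACS could emit something like the following FHIR R4 Observation for a mean deviation value. The resource shape follows the standard FHIR Observation structure, but the coding shown is a placeholder, not a real LOINC or DICOM code for mean deviation, and all identifiers and values are invented.

```python
# Hypothetical FHIR R4 Observation carrying a visual field mean deviation.
# The coding below is a placeholder -- a real interface would use the
# appropriate standardized code for mean deviation, not "example-md".
md_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",       # assumed code system
            "code": "example-md",               # placeholder, not a real code
            "display": "Visual field mean deviation",
        }]
    },
    "subject": {"reference": "Patient/123"},    # invented patient reference
    "effectiveDateTime": "2024-02-01",
    "bodySite": {"text": "Right eye"},
    "valueQuantity": {"value": -4.25, "unit": "dB"},
}
```

An EHR receiving a series of such resources could chart `valueQuantity.value` over `effectiveDateTime` next to IOP and VA trends.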

Visual Field Data Extraction: Real-World Techniques and Tips

Analyzing testing and imaging data at scale is crucial for advancing glaucoma research, but the lack of standards and interoperability has historically impeded their inclusion in big data sources. A group of glaucoma researchers and informatics experts from 10 US academic institutions, along with industry representatives, convened an online workshop in February 2024 to share current practices and future approaches for extracting VF and OCT data at scale. Motivated by challenges faced in extracting glaucoma imaging data, the meeting aimed to review current practices, share knowledge, engage with vendors, and discuss future strategies.

HFA data extraction at USC

Dr. Benjamin Xu, a glaucoma specialist and clinician-scientist at the University of Southern California (USC) and representative for the Los Angeles County (LAC) Department of Health Services (DHS), discussed the challenges of extracting Humphrey Field Analyzer (HFA) data from two separate systems at USC and LAC DHS. Each system operates with distinct IT and data infrastructures, complicating data access. At USC, the PACS system, named Axis, requires HFA devices to upload a PDF of each visual field, which physicians access individually from computer workstations. A significant challenge at USC has been the inability to access visual field PDFs in bulk, as data extraction requires manually downloading each PDF one by one, a time-consuming and suboptimal method carried out by medical students due to IT restrictions on direct server access.

Dr. Xu highlighted an innovation by Dr. Murtaza Saifee, who developed an optical character recognition (OCR) algorithm to extract visual field metrics from these PDFs. The Python script, which Dr. Saifee made publicly available on GitHub, has proved invaluable for extracting visual field metrics from the PDFs, although it only works with specific formats. This tool has significantly eased the process of accessing the data, though it remains limited by the need to individually download each file.

Dr. Xu detailed the current efforts at USC involving collaboration with IT to replicate the Med-Axis Server. This initiative aims to automate the extraction process for not only visual field data but also for OCT and photograph data. Automating this process will streamline the cataloging of this extensive dataset, making it much smoother than the current method of downloading files one by one. Successfully replicating this server would enable USC to acquire a complete dataset, which will be invaluable for various AI projects and future clinical research.

Dr. Xu also discussed the setup at LAC DHS, which serves a primarily socially disadvantaged, safety-net population. This municipal healthcare system provides care to almost a million patients, the majority of whom are Latino and at high risk for glaucoma. The data from LAC DHS is crucial for understanding the burden of glaucoma in this high-risk group. Unlike USC, LAC DHS utilizes the Forum system, which allows for easier access and better summary reports. However, extracting data in bulk remains a challenge due to limitations in the current IT infrastructure and the absence of access to the Zeiss Advanced Data Export (ADE) tool.

Over the past year, efforts have been made to acquire an ADE tool from Zeiss, and they are now close to obtaining the necessary license, with IT's approval. This tool will be accessible not to all providers but specifically to designated researchers, which should help overcome current patient privacy barriers. The acquisition of the ADE license and the installation of the tool will allow comprehensive data extraction from LAC DHS, enabling studies similar to those planned at USC.

In summary, Dr. Xu presented complementary initiatives for extracting visual field data at both USC and LAC DHS.

HFA data extraction at Stanford

Dr. Wang, an assistant professor of Ophthalmology at Stanford, who specializes in glaucoma, shared insights on challenges and solutions related to visual field data extraction.

She pointed out that Stanford also uses FORUM. Initially, without the ADE tool, which allows images to be exported in bulk, they faced significant challenges. Like USC, Stanford needed to access the server to request visual field file archives without the ADE, relying on a vendor-neutral archive which contains DICOMs of all imaging studies of interest. This archive is not usually accessible to faculty, and it took significant effort to get access to these files.

Upon receiving the DICOM files, almost 100,000 studies, compressed and stored on secure, PHI-compliant cloud storage, they discovered the files were essentially the same PDFs a physician would see on the front end. Consequently, they faced the same necessity as Dr. Xu to run Dr. Saifee's OCR software to extract data from these PDFs. She noted issues with getting the OCR software fully operational, especially since about 40% of their site's studies are 30-2 visual fields, which presented additional challenges.

With FORUM's ADE tool eventually updated and functional, they began to explore its capabilities. It allowed them to create new export jobs, choose export formats (XML or DICOM), and input patient MRNs directly. The exported data could then be unzipped to reveal XML files containing neatly tagged visual field points.

She explained the decisions made on how to best save and manage the data. They opted to save all data as if it were from the right eye, flipping the orientation of the data for left eyes, and introduced a separate indicator for whether it actually was a right or left eye. This approach ensured that the data columns for both eyes matched perfectly. They preserved all global measures, such as reliability indices, alongside point-wise measures to maintain comprehensive data integrity.

In collaboration with the Sight Outcomes Research Collaborative (SOURCE), a large multicenter data repository, she aimed to align their data formatting with the repository's standards. She adopted consecutive numbering for the visual field points, organizing them from left to right and top to bottom, ensuring consistency even when flipping the orientation between left and right eyes such that, for example, the point labeled "1" was always the superior nasal point in the visual field. This standardized naming convention was crucial for data integration and sharing. Given the significant proportion of 30-2 visual field tests at their site, which include an additional ring of points compared to the standard 24-2 format, she acknowledged the complexities this added. For simplicity in analysis, sometimes only the central 24-2 equivalent points were considered. When extracting these, she ensured their numbering aligned with the 24-2 format, while distinctly numbering the additional points of the 30-2 for clarity.
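The flipping and numbering convention described above can be sketched in a few lines of Python. This is an illustrative reconstruction under the assumption that point-wise values are stored row-major (top row first), not the actual SOURCE or Stanford code.

```python
def to_right_eye_orientation(rows, eye):
    """Mirror left-eye visual field rows horizontally so that, after
    consecutive numbering, a given point number is always the same
    anatomical location for both eyes. `rows` is a list of lists of
    sensitivities (top row first); `eye` is "OD" or "OS"."""
    if eye == "OS":
        return [list(reversed(r)) for r in rows]
    return [list(r) for r in rows]

def number_points(rows):
    """Flatten to {point_number: value}, numbering 1..N left to right,
    top to bottom -- the consecutive-numbering scheme described above."""
    numbered = {}
    n = 1
    for row in rows:
        for value in row:
            numbered[n] = value
            n += 1
    return numbered
```

With this pair of helpers, the columns for left and right eyes line up, and a separate laterality indicator (as described above) records which eye the data actually came from.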

She highlighted the ongoing process of integrating and standardizing data. She discussed the importance of ensuring their data format was compatible with available analytical tools and scripts, some of which also assume a consecutive numbering format. To assist in this, she included links to commonly used R and Python packages, useful for effective data analysis.

HFA data extraction at Bascom Palmer

Dr. Swaminathan shared that they have successfully assembled the Bascom Palmer Glaucoma Repository, containing 73,373 patients with glaucoma. He acknowledged the common challenges associated with data extraction and cleaning mentioned by other speakers, which can be laborious and time-consuming. He attributed their success in these areas to strong IT support, stressing that maintaining good relationships with both IT personnel and data vendors is crucial for facilitating data management, and that strong data engineers are essential in the extraction process. Dr. Swaminathan mentioned the benefits of using the Zeiss system and pointed out the advantages of having FORUM and the ADE, which greatly aid in extracting data efficiently.

This setup allowed them to efficiently batch export tests for patients, specifying MRNs, date ranges, and desired file types (XML or DICOM) to extract the point data typically seen in PDFs in a more tabular format. Dr. Swaminathan shared a screenshot to illustrate how each data point in the XML files is tagged and named, which aids IT teams in identifying and extracting the necessary data, a similar process for both OCT and SAP data. For pointwise data, he explained how each XML file provided X and Y coordinates of each data point, which could then be converted as desired. The XML file would provide detailed information about each point, such as whether the stimulus was detected, the decibel value of sensitivity, and both the total and pattern deviation values along with their probabilities.

Dr. Swaminathan noted the complexity of extracting these data points and that they wanted to avoid creating an overwhelmingly large SQL or CSV file filled with countless coordinates and associated values. Instead, they sought to convert the data into a format that was more manageable and meaningful, ultimately coming up with a structured approach that included the specific XY coordinates and their respective characteristics (e.g., X3Y9_sens and X3Y9_td for the sensitivity and total deviation, respectively, at the X = 3, Y = 9 test point). This process involved significant coding efforts.
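A sketch of this tabularization step, using only the standard library's XML parser, is shown below. The tag names in the sample XML are invented for illustration; the real Zeiss ADE export uses different, vendor-defined tags, and the column-naming scheme simply follows the `X3Y9_sens`/`X3Y9_td` pattern described above.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML resembling a point-wise export; real ADE tag names differ.
sample = """<points>
  <point><x>3</x><y>9</y><sens>27</sens><td>-2</td></point>
  <point><x>-3</x><y>9</y><sens>25</sens><td>-4</td></point>
</points>"""

def points_to_columns(xml_text):
    """Flatten point-wise VF data into wide, coordinate-keyed columns
    (e.g. X3Y9_sens, X3Y9_td) suitable for a single row per test."""
    row = {}
    for p in ET.fromstring(xml_text).iter("point"):
        x, y = p.findtext("x"), p.findtext("y")
        row[f"X{x}Y{y}_sens"] = float(p.findtext("sens"))
        row[f"X{x}Y{y}_td"] = float(p.findtext("td"))
    return row
```

Storing one wide row per test this way avoids the "overwhelmingly large" long-format table of raw coordinates mentioned above.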

Addressing the challenges of point-wise data, particularly concerning laterality, Dr. Swaminathan highlighted the importance of understanding how data points differ when comparing the left eye to the right eye, especially in different types of visual field tests like the 24-2 and 30-2. They adopted a naming convention that uses topographical or spatial orientation, such as "N3S3" to indicate a superonasal point. This makes the data agnostic to laterality, ensuring accuracy regardless of which eye the data came from, in line with NIH data harmonization efforts.
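A small helper can illustrate this laterality-agnostic naming. The sign conventions assumed here (positive x to the right of the chart, positive y superior, temporal field at positive x for a right eye) are illustrative assumptions for the sketch, not a published specification of the Bascom Palmer convention.

```python
def spatial_label(x, y, eye):
    """Convert chart coordinates (degrees) plus laterality into a
    laterality-agnostic label such as "N3S3" (3 deg nasal, 3 deg superior).
    Assumes positive x is to the right on the chart and positive y is
    superior; for OD the temporal field lies at positive x, for OS at
    negative x."""
    nasal = (x < 0) if eye == "OD" else (x > 0)
    horizontal = f"{'N' if nasal else 'T'}{abs(x)}"
    vertical = f"{'S' if y >= 0 else 'I'}{abs(y)}"
    return horizontal + vertical
```

With such a mapping, the mirrored point in a left eye and a right eye receives the same name, so analyses need no eye-specific handling.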

In the end, Dr. Swaminathan emphasized the significance of having specialized IT support dedicated to ophthalmology, which facilitates more effective management of the intricate data characteristic of their field. He also pointed out the value of the ADE tool for its capability to handle various types of visual fields, including both 24-2 and 30-2, and to extract the available data effectively.

Using DICOM OPV

Dr. Michael Boland from Mass Eye and Ear discussed extracting discrete data from the HFA using OPV. The OPV is the DICOM standard for static perimetry, designed in collaboration with Zeiss (HFA) and Haag-Streit (Octopus) to be vendor-agnostic. While Octopus has not implemented it, other vendors can use it for their static visual fields. The OPV includes all discrete data on a single field analysis report. There is no DICOM standard for progression analysis, as that process requires all data to be in the same place at the same time.

Dr. Boland mentioned challenges for those new to DICOM, such as generically named concepts that differ from vendor-specific terms in order to stay vendor-agnostic. For instance, "age-corrected sensitivity deviation value" equates to "total deviation" on the HFA.

Use of DICOM for visual fields requires FORUM, as the old HFA2 devices do not support OPV. The process involves configuring FORUM Glaucoma Workplace to generate OPV files using the "Create DICOM OPV format for Visual Fields" option. Without this option turned on, only encapsulated PDFs are generated, not the OPV DICOM files. To generate OPV files for existing tests, a box must be checked to regenerate OPV files for historical data (configuration option "Create single exam reports for existing exams"). Once selected, new tests will generate OPV files automatically. The final step, copying these OPV files off the FORUM server, was straightforward.

Dr. Boland converted the OPV files to XML using the free DCMTK toolkit. XML, being highly flexible, allowed conversion into various formats using XSL stylesheets. This process involved copying files, converting DICOM to XML, and applying the XSL stylesheet processing tool "xsltproc" to transform the data into SQL for a relational database or into delimited text for statistical software.
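The two-stage pipeline just described can be sketched as the pair of commands it chains together. `dcm2xml` (from DCMTK) and `xsltproc` are the real tool names; the file-naming scheme and the stylesheet path are assumptions, and this sketch only constructs the command lists rather than running them.

```python
def build_pipeline(opv_path, xsl_path):
    """Return the two commands of the DICOM->XML->SQL pipeline described
    above: DCMTK's dcm2xml, then xsltproc with an XSL stylesheet. The
    derived output filenames are illustrative assumptions."""
    xml_path = opv_path + ".xml"
    sql_path = opv_path + ".sql"
    return [
        ["dcm2xml", opv_path, xml_path],            # DICOM OPV -> XML
        ["xsltproc", "-o", sql_path, xsl_path, xml_path],  # XML -> SQL
    ]
```

Each returned list could be handed to `subprocess.run(cmd, check=True)` on a machine where both tools are installed.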

For the SQL option, the database had three levels of abstraction: patient level, study level, and point level. Patient data included basic information like date of birth and medical record number. Study-level data included global data such as sphere and cylinder corrections, stimulus color, and background color. Point-level data included coordinates and measurements, including thresholds, age-corrected deviations (total deviation), and average-corrected deviations (pattern deviation). Structured this way, it is possible to query tests with particular characteristics.
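A minimal sqlite3 sketch of such a three-level layout is shown below. The table and column names are illustrative assumptions drawn from the description above, not Dr. Boland's actual schema.

```python
import sqlite3

# In-memory database with the three levels of abstraction described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    mrn TEXT,
    date_of_birth TEXT
);
CREATE TABLE study (
    study_id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(patient_id),
    study_date TEXT,
    laterality TEXT,
    sphere REAL, cylinder REAL,        -- trial lens corrections
    stimulus_color TEXT,
    background_color TEXT
);
CREATE TABLE point (
    study_id INTEGER REFERENCES study(study_id),
    x INTEGER, y INTEGER,              -- degrees from fixation
    threshold REAL,                    -- sensitivity (dB)
    total_deviation REAL,              -- age-corrected deviation
    pattern_deviation REAL             -- "average-corrected" deviation
);
""")
```

With this structure, a query can join the three levels to find, for example, all studies of a given laterality containing points below a chosen threshold.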

Dr. Boland emphasized that FORUM was necessary for this process, but the HFA 3 will soon generate OPV files natively, potentially eliminating the need for FORUM. While Dr. Boland used XML, R, and a relational database, he suggested Python as another option for finding and transforming OPV files.

Extracting data from Octopus Visual Fields

Dr. Shahin Hallaj presented data extraction from the Haag-Streit Octopus device on behalf of the glaucoma service of Wills Eye Hospital. Data can be extracted from the device in several ways, one of which uses the statistical export tool and export via the reporter. Data can be exported in different formats, including a single patient report, an .ESX file, or a .CSV file. The process involves selecting a predefined template, based on the device version and desired data elements, from the "Template Reporter" tab and indicating the destination path. Customer support from Haag-Streit can help with the templates and is usually very responsive and helpful. The most useful format for research purposes is the .CSV option, which returns a single file containing bulk data from various patients and different encounters. This file has an extensive breadth of information about all aspects of the performed tests and results, including the normative values. Of note, the exported .CSV file does not contain the probabilities, which can be calculated after the export given that the normative values are provided as part of the report. The data can then be parsed, analyzed, and plotted using the existing R package "visualFields: Statistical Methods for Visual Fields". Although the glaucoma service does not have access to an OPV DICOM license, access to OPV DICOM files is possible upon subscription to the DICOM data export tool.
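As an illustration of the post-export calculation mentioned above, point-wise defects can be computed from such a CSV with the standard library alone. The column names in the sample are hypothetical (real Octopus exports use different headers), and mapping defects onto probability levels would additionally require the device's percentile tables.

```python
import csv
import io

# Hypothetical CSV resembling an Octopus statistical export; real
# Haag-Streit column names differ. Defect = normative minus measured.
sample = io.StringIO(
    "patient_id,x,y,sensitivity_db,normal_db\n"
    "P1,3,3,24.0,27.5\n"
    "P1,-3,3,20.0,27.0\n"
)

def pointwise_defects(csvfile):
    """Compute the per-point defect (normative value minus measured
    sensitivity, in dB) from an export that carries normative values
    but no probabilities."""
    out = []
    for row in csv.DictReader(csvfile):
        defect = float(row["normal_db"]) - float(row["sensitivity_db"])
        out.append(((int(row["x"]), int(row["y"])), defect))
    return out
```

Positive defects indicate sensitivity loss relative to the normative value, matching the Octopus convention in which mean defect increases with worsening fields.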

Extracting raw data from HFA reports using Python

Dr. Saifee discussed the development of a software platform to expedite the review of visual fields, which is traditionally a labor-intensive process. The goal was to enhance the efficiency of viewing and analyzing large data sets. Dr. Saifee contributed to the creation of the hvf-extraction-script, a Python script that extracts visual field data from reports using image processing, OCR, and regular expressions. This script can also convert OPV data files into a JSON format, making the data more accessible and easier to analyze.

He explained the nomenclature used in the 24-2 HVF single field analysis report, dividing the data into three sections: metadata (including heading name, ID, reliability indices, and global metrics), value plots (individual decibel-based or deviation-based numerical data), and percentile plots. The script processes these HVF report images through three modules: an OCR module for metadata, a regular-expression text standardization module organized into 17 different metadata fields, and template matching for each digit of the value and percentile plots.

The hvf-extraction-script numbers the data within a 10x10 grid, numbered from left to right, for different study sizes, ensuring consistency across different visual field types like 24-2, 30-2, and 10-2; although left and right eyes will be numbered differently in this method. The script is versatile, allowing the input of all three types of HVF report images (30-2, 24-2, and 10-2) or DICOM files and exporting data in JSON or tab-separated value (TSV) formats for further analysis.

Dr. Saifee provided a demonstration of the script's command-line interface, showing how to import modules, feed in files, and extract data. He mentioned additional packages that calculate metrics such as the CIGTS score and demonstrated a human-readable JSON text file containing the extracted data. The package has a README file on PyPI and GitHub with instructions on how to use it for extraction and processing.

The script was validated by comparing its performance with manual transcriptions by four ophthalmologists across 90 different visual fields. The validation trial measured four factors: extraction time, metadata extraction accuracy, value plot extraction accuracy, and percentile plot extraction accuracy. The software takes about 5 to 10 seconds per image and has low error rates, between 1.2% and 3.5% overall and under 1% for the value and percentile plots.

Dr. Saifee emphasized that while the script is not 100% accurate, it significantly reduces the time and effort compared to manual transcription. He mentioned that the validation study was published in Frontiers in Medicine for those interested (Development and Validation of Automated Visual Field Report Extraction Platform Using Computer Vision Tools). At UCSF, during his glaucoma fellowship, the primary workflow involved using OPV files for maximum accuracy, with occasional manual validation when image extraction was necessary. Dr. Saifee also echoed what others had mentioned earlier: OPV data access requires Zeiss FORUM, which is difficult to obtain and not universally accessible.

Related EyeWiki Pages

References

  1. Goetz KE, Reed AA, Chiang MF, Keane T, Tripathi M, Ng E, et al. Accelerating care: A roadmap to interoperable ophthalmic imaging standards in the United States. Ophthalmology. 2023;131: 12–15. doi:10.1016/j.ophtha.2023.10.001
  2. Shweikh Y, Sekimitsu S, Boland MV, Zebardast N. The Growing Need for Ophthalmic Data Standardization. Ophthalmol Sci. 2023;3: 100262. doi:10.1016/j.xops.2022.100262
  3. Rothman AL, Chang R, Kolomeyer NN, Turalba A, Stein JD, Boland MV. American Glaucoma Society Position Paper: Information Sharing Using Established Standards Is Essential to the Future of Glaucoma Care. Ophthalmol Glaucoma. 2022;5: 375–378. doi:10.1016/j.ogla.2021.12.002
  4. Lum F, Hildebrand L. Why is a terminology important? Ophthalmology. 2005. pp. 173–174. doi:10.1016/j.ophtha.2004.11.024
  5. Baxter SL, Lee AY. Gaps in standards for integrating artificial intelligence technologies into ophthalmic practice. Curr Opin Ophthalmol. 2021;32: 431–438. doi:10.1097/ICU.0000000000000781
  6. Khan SM, Liu X, Nath S, Korot E, Faes L, Wagner SK, et al. A global review of publicly available datasets for ophthalmological imaging: barriers to access, usability, and generalisability. Lancet Digit Health. 2021;3: e51–e66. doi:10.1016/S2589-7500(20)30240-5
  7. Ahuja AS, Bommakanti S, Wagner I, Dorairaj S, Ten Hulzen RD, Checo L. Current and Future Implications of Using Artificial Intelligence in Glaucoma Care. J Curr Ophthalmol. 2022;34: 129–132. doi:10.4103/joco.joco_39_22
  8. Prum BE Jr, Rosenberg LF, Gedde SJ, Mansberger SL, Stein JD, Moroi SE, et al. Primary Open-Angle Glaucoma Preferred Practice Pattern(®) Guidelines. Ophthalmology. 2016;123: P41–P111. doi:10.1016/j.ophtha.2015.10.053
  9. Prum BE Jr, Lim MC, Mansberger SL, Stein JD, Moroi SE, Gedde SJ, et al. Primary Open-Angle Glaucoma Suspect Preferred Practice Pattern(®) Guidelines. Ophthalmology. 2016;123: P112-51. doi:10.1016/j.ophtha.2015.10.055
  10. Goldstein JE, Guo X, Boland MV, Smith KE. Visual Acuity: Assessment of Data Quality and Usability in an Electronic Health Record System. Ophthalmol Sci. 2023;3: 100215. doi:10.1016/j.xops.2022.100215
  11. Supplement 146: Ophthalmic Visual Field (OPV) Static Perimetry Measurements Storage SOP Class. DICOM Working Group 9; 2010 Sep. Available: https://www.dicomstandard.org/News-dir/ftsup/docs/sups/sup146.pdf
  12. Bengtsson B, Olsson J, Heijl A, Rootzén H. A new generation of algorithms for computerized threshold perimetry, SITA. Acta Ophthalmol Scand. 1997;75: 368–375. Available: https://www.ncbi.nlm.nih.gov/pubmed/9374242
  13. Bengtsson B, Heijl A. SITA Fast, a new rapid perimetric threshold test. Description of methods and evaluation in patients with manifest and suspect glaucoma. Acta Ophthalmol Scand. 1998;76: 431–437. doi:10.1034/j.1600-0420.1998.760408.x
  14. Heijl A, Patella VM, Chong LX, Iwase A, Leung CK, Tuulonen A, et al. A new SITA perimetric threshold testing algorithm: Construction and a multicenter clinical study. Am J Ophthalmol. 2019;198: 154–165. doi:10.1016/j.ajo.2018.10.010
  15. Flanagan JG, Moss ID, Wild JM, Hudson C, Prokopich L, Whitaker D, et al. Evaluation of FASTPAC: a new strategy for threshold estimation with the Humphrey Field Analyser. Arbeitsphysiologie. 1993;231: 465–469. doi:10.1007/bf02044233
  16. Bengtsson B, Heijl A. A visual field index for calculation of glaucoma rate of progression. Am J Ophthalmol. 2008;145: 343–353. doi:10.1016/j.ajo.2007.09.038
  17. Åsman P. Glaucoma hemifield test: Automated visual field evaluation. Arch Ophthalmol. 1992;110: 812. doi:10.1001/archopht.1992.01080180084033
  18. Lum F, Hildebrand L. Why is a terminology important? Ophthalmology. 2005. pp. 173–174. doi:10.1016/j.ophtha.2004.11.024
  19. Hwang JC, Yu AC, Casper DS, Starren J, Cimino JJ, Chiang MF. Representation of ophthalmology concepts by electronic systems: intercoder agreement among physicians using controlled terminologies. Ophthalmology. 2006;113: 511–519. doi:10.1016/j.ophtha.2006.01.017
  20. Bidgood WD Jr, Horii SC, Prior FW, Van Syckle DE. Understanding and using DICOM, the data interchange standard for biomedical imaging. J Am Med Inform Assoc. 1997;4: 199–212. doi:10.1136/jamia.1997.0040199
  21. DICOM? What’s That? Why You Should Care. 1 Apr 2013 [cited 4 May 2021]. Available: https://www.aao.org/eyenet/article/opinion-20
  22. Clunie DA. DICOM Structured Reporting. PixelMed Publishing; 2000. Available: https://play.google.com/store/books/details?id=EVjOolUJNGUC
  23. Baxter SL, Saseendrakumar BR, Paul P, Kim J, Bonomi L, Kuo T-T, et al. Predictive Analytics for Glaucoma Using Data From the All of Us Research Program. Am J Ophthalmol. 2021;227: 74–86. doi:10.1016/j.ajo.2021.01.008
  24. OHDSI Eye Care and Vision Research Workgroup 2023 OKR. https://www.ohdsi.org/wp-content/uploads/2023/02/Eye-Care-and-Vision-Research-2023.pdf