Standard Practice for Rating-Scale Measures Relevant to the Electronic Health Record

SCOPE
1.1 This standard addresses the identification of data elements from the EHR definitions in Guide E 1384 that have ordinal scale value sets and which can be further defined to have scale-free measurement properties. It is applicable to data recorded for the Electronic Health Record and its paper counterparts. It is also applicable to abstracted data from the patient record that originates from these same data elements. It is applicable to identifying the location within the EHR where the observed measurements shall be stored and what is the meaning of the stored data. It does not address either the uses or the interpretations of the stored measurements.

General Information

Status: Historical
Publication Date: 09-Dec-2002

Standard
ASTM E2171-02 - Standard Practice for Rating-Scale Measures Relevant to the Electronic Health Record
English language, 23 pages

Standards Content (Sample)


NOTICE: This standard has either been superseded and replaced by a new version or withdrawn.
Contact ASTM International (www.astm.org) for the latest information
An American National Standard
Designation: E 2171 – 02
Standard Practice for
Rating-Scale Measures Relevant to the Electronic Health
Record
This standard is issued under the fixed designation E2171; the number immediately following the designation indicates the year of original adoption or, in the case of revision, the year of last revision. A number in parentheses indicates the year of last reapproval. A superscript epsilon (e) indicates an editorial change since the last revision or reapproval.
1. Scope

1.1 This standard addresses the identification of data elements from the EHR definitions in Guide E1384 that have ordinal scale value sets and which can be further defined to have scale-free measurement properties. It is applicable to data recorded for the Electronic Health Record and its paper counterparts. It is also applicable to abstracted data from the patient record that originates from these same data elements. It is applicable to identifying the location within the EHR where the observed measurements shall be stored and what is the meaning of the stored data. It does not address either the uses or the interpretations of the stored measurements.

2. Referenced Documents

2.1 ASTM Standards:
E177 Practice for Use of the Terms Precision and Bias in ASTM Test Methods
E456 Terminology Related to Quality and Statistics
E691 Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method
E1169 Guide for Conducting Ruggedness Tests
E1384 Guide for Content and Structure of the Computer-Based Patient Record

3. Terminology

3.1 Definitions—Full definitions and discussion of Scale-Free Measurement Terms are given in Annex A1.

3.2 Definitions of Terms Specific to This Standard:
3.2.1 adaptive measurement—advantage of measurement to account for missing data.
3.2.2 additivity—rating scale adherence to associativity and commutability.
3.2.3 bias analysis—investigation of considerations relative to subject or area of performance.
3.2.4 calibration—process of establishing additivity and reproducibility of a data set.
3.2.5 concatenation—process of measurement that uses enumerated physical unit quantities equal to the magnitude of the measured item.
3.2.6 construct—name of the conceptual domain measured.
3.2.7 convergence—closing of the differences in sequential measure estimates.
3.2.8 counting—basic activity upon which measurement is based; utilizes enumeration.
3.2.9 data—observations made in such a way that they lead to generalization.
3.2.10 data quality/statistical consistency/model fit—establishment of whether the measuring instrument is affected by the object of measurement.
3.2.11 determinism—measurement model that requires counts to be sufficient for reproducing the pattern of the responses over the length of the instrument.
3.2.12 dimensionality—property of having multiple components of a measured value.
3.2.13 equality/cocalibration—process of ensuring that different instruments measure the same property.
3.2.14 error—uncertainty of measured properties.
3.2.15 estimation algorithms—mathematical specification of an observational framework.
3.2.16 incommensurable/commensurable—measure value of the same quantity does/does not depend upon rating/responses of the rating construct and does not/does remain constant.
3.2.17 instrument—sensing device having a defined scale.
3.2.18 intra- and inter-laboratory testing—variability testing using the same setting/measure/operator as opposed to different settings/measures/operators.
3.2.19 item response/latent trait theory—analytic models that forego prescriptive parameter separation, sufficiency, and scale- and sample-free data standards for additional descriptive parameters.
3.2.20 items/item-bank—part of survey statements/test questions for adaptive administration.
3.2.21 levels of measurement—nature of the scale of measurement.

This practice is under the jurisdiction of ASTM Committee E31 on Healthcare Informatics and is the direct responsibility of Subcommittee E31.25 on Healthcare Data Management, Security, Confidentiality, and Privacy. Current edition approved Dec. 10, 2002. Published February 2003.
Annual Book of ASTM Standards, Vol 14.02.
Annual Book of ASTM Standards, Vol 14.01.
Copyright © ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959, United States.
3.2.22 logit—scale unit using logarithms of odds ratios (P/(1−P)).
3.2.23 mathematical entities—concepts that can be taught or learned through what is already known.
3.2.24 measurement—determining in units the value of a property in a scale having magnitude (that is, ratio or difference).
3.2.25 metaphor in measurement—suspension of disbelief of some areas or properties in the name of estimating magnitude.
3.2.26 metric—measure of a property in defined units.
3.2.27 missing data—use of uncalibrated data in instruments with varying numbers of items.
3.2.28 multi-faceted measurement—use of measurement models that have more than two basic parameters.
3.2.29 ordinal data—one scale for measurement.
3.2.30 population—universe of elements relevant to measurement of a particular construct.
3.2.31 probabilistic conjoint measurement—framework for demonstrating data quality, statistical consistency, and model fit of non-deterministic measures with a stable order of facets.
3.2.32 quantification—cocalibration of different constructs with respect to the same property (variable) in a common metric.
3.2.33 Rasch analysis measurement and models—analytic model specifying the observational framework and data quality measures for quantification.
3.2.34 raw score—sum of ratings or count of direct responses in a given measurement event.
3.2.35 reliability—ratio of variation to error, or signal to noise.
3.2.36 repeatability—variability of measurements in a single setting by a single operator using the same measuring instrument.
3.2.37 reproducibility—variability of measurements in different settings.
3.2.38 root mean square error—mathematical algorithm for determining the variation due to error of the estimates.
3.2.39 sample—subset of the measured population.
3.2.40 sample size—magnitude of the measured population.
3.2.41 scale-free/scale-dependent—measures not affected by the instrument employed, as opposed to measures that are so affected.
3.2.42 separability theorem/parameter separation—ability of measures to be independent of the instrument selected, and ability of the instrument's item calibrations to be independent of the sample measured.
3.2.43 software—packages of machine code used for data analysis.
3.2.44 specific objectivity—data satisfying the separability theorem.
3.2.45 standardized—common conventions for instruments, reference measurement material, scales, and units of measure for a measurement process.
3.2.46 sufficiency—statistics that extract all available information from the data.
3.2.47 targeting—lack of floor and/or ceiling effects in measurement.
3.2.48 transparency—ability to "look through" raw scores to the composite ratings producing that score (see also sufficiency).
3.2.49 unit of measurement—common conventions for the appropriate smallest basic measures for a given construct.
3.2.50 validity/construct/content—both content and construct must make sound theoretical sense to be considered valid.
3.2.51 variable—attribute of the property being measured.

4. Significance and Use

4.1 The simplicity and practicality of Rasch's probabilistic scale-free measurement models have brought within reach universal metrics for educational and psychological tests, and for rating scale-based instruments in general. There are at least three implications of the application of Rasch's models to the health-related calibration of universal metrics for each of the variables relevant to the Electronic Health Record (EHR) that are typically measured using rating scale instruments.

4.1.1 First, establishing a single metric standard with a defined range and unit will arrest the burgeoning proliferation of new scale-dependent metrics.

4.1.2 Second, the communication of the information pertaining to patient status represented by these measures (physical, cognitive, and psychosocial health status, quality of life, satisfaction with services, etc.) will be simplified.

4.1.3 Third, common standards of data quality will be used to evaluate and improve instrument performance. The vast majority of test and survey data quality is currently almost completely unknown, and when quality is evaluated, it is via many different methods that are often insufficient to the task, misapplied, misinterpreted, or even contradictory in their aims.

4.1.4 Fourth, currently unavailable economic benefits will accrue from the implementation of measurement methods based on quality-assessed data and widely accepted reference standard metrics. The potential magnitude of these benefits can be seen in an assessment of 12 different metrological improvement studies conducted by the National Science and Technology Council (Subcommittee on Research, 1996). The average return on investment associated with these twelve studies was 147%. Is there any reason to suppose that similar instrument improvement efforts in the psychosocial sciences will result in markedly lower returns?

4.2 Until now, it has been assumed that Guide E1384 would necessarily have to stipulate fields for the EHR that would contain summary scores from commonly used functional assessment, health status, quality of life, and satisfaction instruments. This is because standards for rating scale instruments to date have been entirely content-based. Those who have sought "gold" or criterion standards that would command universal respect and relevance have been stymied by the impossibility of identifying content (survey questions and rating categories) capable of satisfying all users' needs. Communication of patient statistics between managers and clinicians, or payors and providers, may require one kind of information; between providers and referral sources, other kinds; between providers and accreditors, yet another; among clinicians themselves, still another; and even more kinds of information may be required for research applications.
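As an illustrative aside (not part of the standard's text), the logit unit defined in 3.2.22 and the dichotomous form of Rasch's probabilistic model discussed in 4.1 can be sketched in a few lines of Python. The function names and example values are hypothetical, chosen only to show how a probability maps to the logit scale:

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Dichotomous Rasch model: P(success) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty, both in logits."""
    logit = ability - difficulty  # difference on the logit scale
    return math.exp(logit) / (1.0 + math.exp(logit))

def to_logit(p: float) -> float:
    """Logit (3.2.22): natural logarithm of the odds ratio P/(1 - P)."""
    return math.log(p / (1.0 - p))

# When ability equals item difficulty, success probability is 0.5 (0 logits).
p = rasch_probability(ability=1.0, difficulty=1.0)
print(round(p, 3))            # 0.5
print(round(to_logit(p), 3))  # 0.0
```

Because only the difference between ability and difficulty enters the model, measures expressed in logits are independent of which items were administered; this is the "scale-free" property (3.2.41) the practice builds on.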
4.2.1 For instance, payors may want to know outcome information that tells them what percentage of patients discharged can function independently at home. A hospital manager, referral source, or accreditor might want to know more detail, such as percentages of patients discharged who can dress, bathe, walk, and eat independently. Clinicians will want to know still more detail about amounts of independence, such as whether there are safety issues, needs for assistive devices, or specific areas in which functionality could be improved. Researchers may seek even more detail yet, as they evaluate differences in outcomes across treatment programs, diagnostic groups, facilities, levels of care, etc.

4.2.1.1 Members of each of these groups have, at some time, felt that their particular information needs have not been met by the tools designed and developed by members of another group. Despite the fact that the information provided by these different tools appears in many different forms and at different levels of detail, to the extent that they can be shown to measure the same thing, they can do so in the same metric. This is the primary result of the introduction of Rasch's probabilistic scale-free measurement models. The different purposes guiding the design of the instruments will still continue to impact the two fundamental statistics associated with every measure: the error and model fit. More general, and also less well-designed, instruments will measure with more error than those that make more detailed and consistent distinctions. Data consistency is the key to scale-free measurement.

4.3 The remainder of this document (1) identifies, in Section 5, the fields in the current Guide E1384 targeted for change from a scale-dependent to a scale-free measurement orientation; (2) lists referenced ASTM documents; (3) defines scale-free measurement terms, often contrasting them with their scale-dependent counterparts; (4) addresses the significance and use of scale-free measures in the context of the EHR; (5) lists, in Annex A2, scientific publications documenting relevant instrument calibrations; (6) briefly presents some basic operational considerations; (7) lists minimum and comprehensive arrays of EHR database fields; and (8) lists, in Annex A3, the references made in presentation of the measurement theory, estimation methods, etc.

4.4 Publications of calibration studies referencing this practice and the associated standard practice should require:
...
4.4.5 Statement of the full text of at least a significant sample of the questions included on the instrument;
4.4.6 Specification of the mathematical model employed, with a justification for its use;
4.4.7 Specification of the error estimation and model fit estimation algorithms employed, with mathematical details and justification provided when they differ from those routinely used;
4.4.8 Evaluation of overall model fit, elaborated in a report on the details of one or more of the least and most consistent response patterns observed;
4.4.9 Graphical comparison of at least two calibrations of new instruments from different samples of the same population, to establish the invariance of the item calibration order across samples;
4.4.10 Graphical comparison of measures produced by at least two subsets of items on new instruments, to establish the invariance of the person measure order across scales (collections of items);
4.4.11 Graphical comparison of new instrument calibrations with the calibrations produced by other instruments intended to measure the same variable in the same population, to establish the potential for sample-free equating of the instruments and establishment of reference standards;
4.4.12 At least a useable prototype of the instrument employed, with the worksheet laid out to produce informative quantitative measures (not summed scores) as soon as it is filled out; and
4.4.13 Graphical presentation of the treatment and control groups' measurement distributions, for the purpose of facilitating substantive interpretation of differences' significance.

5. Applicable Data Elements

5.1 The data elements in Guide E1384 which are affected by the suggestions for measurement standardization made here include the following:

PHYSICAL EXAM SEGMENT
09001.16 Patient Health Status Measure Name
09001.17. Patient Health Status Measure Total Value
09001.19. Patient Health Status Measure Element Name (M)
09001.19.01 Patient Health Status Measure Element Value

ENCOUNTER RECEIPT SUBSEGMENT
14001.A154. Patient Receipt Health Status Measure Name
14001.A156. Patient Receipt Health Status Measure Total Value
14001.A160. P
...
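The invariance check called for in 4.4.9 — comparing item calibrations from two samples of the same population — can be sketched as follows. This is only a rough illustration, not the estimation method the practice prescribes: item difficulties are approximated here by the negative log-odds of each item's proportion correct (a PROX-style first step rather than a full Rasch calibration), and the response matrices are invented for the example:

```python
import math

def item_logit_difficulties(responses):
    """Approximate item calibrations: difficulty as the negative log-odds of
    each item's proportion correct, expressed in logits. A first-order sketch,
    not a full Rasch estimation."""
    n_items = len(responses[0])
    n_persons = len(responses)
    difficulties = []
    for i in range(n_items):
        p = sum(person[i] for person in responses) / n_persons
        p = min(max(p, 0.01), 0.99)  # clamp to avoid infinite logits
        difficulties.append(-math.log(p / (1 - p)))
    return difficulties

# Hypothetical dichotomous responses (rows = persons, columns = items).
sample_a = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 1, 0]]
sample_b = [[1, 0, 0], [1, 1, 0], [1, 1, 1], [1, 0, 0]]

cal_a = item_logit_difficulties(sample_a)
cal_b = item_logit_difficulties(sample_b)

# Invariance check: the rank order of item difficulties should agree
# across samples; in practice this comparison is presented graphically.
order_a = sorted(range(3), key=lambda i: cal_a[i])
order_b = sorted(range(3), key=lambda i: cal_b[i])
print(order_a == order_b)  # True for these invented samples
```

In a real calibration study the two sets of difficulties would be plotted against each other, with agreement within error bounds taken as evidence of sample-free item calibration (3.2.42).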
