Information technology — Measurement and rating of performance of computer-based software systems

This International Standard defines how user-oriented performance of computer-based software systems (CBSS) may be measured and rated. A CBSS is a data processing system as it is seen by its users, e.g. by users at various terminals, or as it is seen by operational users and business users at the data processing center. A CBSS includes the hardware and all of the software (system software and application software) that is needed to realize the data processing functions required by the users or that may influence the CBSS's time behaviour.

This International Standard is applicable to tests of all time-constrained systems or system parts. A network may be part of a system or may itself be the main subject of a test. The method defined in this International Standard is not limited to special cases such as classic batch or terminal-host systems; client-server systems are also included, as are real-time systems if the definition of 'task' is understood broadly. The practicability of tests may, however, be limited by the expenditure required to test large environments.

This International Standard specifies the key figures of user-oriented performance and a method of measuring and rating these performance values. The specified performance values are those which describe the execution speed of user orders (tasks), namely the triple of (illustrated in the sketch after this paragraph):

- execution time,
- throughput,
- timeliness.

The user orders, subsequently called tasks, may be of simple or complex internal structure. A task may be a job, a transaction, a process or a more complex structure, but it has a defined start and end, chosen according to the needs of the evaluator. When evaluating performance, this International Standard may be used to measure time behaviour with reference to business transaction completion times in addition to individual response times. The rating is done with respect to user requirements or by comparing two or more measured systems (types or versions).

Intentionally, no proposals are given for measuring internal values such as:

- utilisation values,
- mean instruction rates,
- path lengths,
- cache hit rates,
- queuing times,
- service times,

because the definition of internal values depends on the architecture of the hardware and the software of the system under test. In contrast, the user-oriented performance values defined in this International Standard are independent of the architecture. Internal performance values may be defined independently of the user-oriented performance values and may be measured in addition to them. Terms for the efficiency with which the user-oriented values are produced may likewise be defined freely.

In addition, this International Standard gives guidance on how to establish a stable and reproducible state of operation of a data processing system. This reproducible state may be used to measure other performance values, such as the internal values mentioned above.

This International Standard focuses on:

- application software;
- system software;
- turn-key systems (i.e. systems consisting of an application software, the system software and the hardware for which it was designed);
- general data processing systems.
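As a minimal sketch of this triple, the following code shows how the three user-oriented values could be derived from recorded task start and end times. All names in the code are illustrative assumptions, not taken from the standard; the authoritative definitions, per-task-type calculations and validity conditions are those of clauses 7, 13 and 14.

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    task_type: str      # e.g. a job, transaction or process
    start: float        # time the task was submitted (seconds)
    end: float          # time the task was completed (seconds)
    time_limit: float   # timeliness requirement for this task type

def performance_triple(records: list[TaskRecord], rating_interval: float):
    """Illustrative computation of the user-oriented performance triple.

    `records` holds the tasks completed within a rating interval of
    length `rating_interval` seconds.  The real computation in the
    standard is done per task type and is subject to statistical
    validation; this sketch only shows the shape of the three values.
    """
    if not records:
        raise ValueError("no completed tasks in the rating interval")
    exec_times = [r.end - r.start for r in records]
    mean_execution_time = sum(exec_times) / len(exec_times)
    throughput = len(records) / rating_interval           # tasks per second
    timely = [t for t, r in zip(exec_times, records) if t <= r.time_limit]
    timely_throughput = len(timely) / rating_interval     # tasks meeting their limits
    return mean_execution_time, throughput, timely_throughput
```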
This International Standard also specifies the requirements for the emulation, by a technical system (the so-called remote terminal emulator, RTE), of user interactions with a data processing system. It is the guideline for precisely measuring and rating the user-oriented performance values, and for estimating these values with the required accuracy and repeatability for CBSSs with deterministic as well as random user behaviour. It is also a guide for implementing an RTE and for proving whether an RTE works according to this International Standard.
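An RTE essentially replays user behaviour against the system under test (SUT), separating each task submission by a preparation ("think") time. The sketch below is hypothetical throughout (`session.submit`, `session.wait_for_completion` and the other names are assumptions, not part of the standard) and illustrates the idea for one emulated user; a conforming RTE must additionally log every event and satisfy the accuracy criteria defined in the standard.

```python
import random
import time

def emulate_user(session, chain, prep_mean, prep_sd, log):
    """Sketch of one emulated terminal user.

    `chain` is the sequence of task types this user submits in order;
    preparation (think) times are drawn from a normal distribution with
    the mean and standard deviation taken from the workload parameter
    set.  `session` stands for any connection to the SUT; its methods
    are illustrative assumptions.
    """
    for task_type in chain:
        # Preparation time before the user submits the next task.
        time.sleep(max(0.0, random.gauss(prep_mean, prep_sd)))
        start = time.monotonic()
        session.submit(task_type)        # send the task's input to the SUT
        session.wait_for_completion()    # block until the SUT has responded
        end = time.monotonic()
        log.append((task_type, start, end))  # raw data for the measurement logfile
```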

Technologies de l'information — Mesurage et gradation de la performance des systèmes de logiciels d'ordinateurs

General Information

Status: Published
Publication Date: 15-Dec-1999
Current Stage: 9060 - Close of review
Start Date: 02-Sep-2029

Standards Content (Sample)


INTERNATIONAL STANDARD
ISO/IEC 14756
First edition
1999-11-15
Information technology — Measurement
and rating of performance of
computer-based software systems
Technologies de l'information — Mesurage et gradation de la performance
des systèmes de logiciels d'ordinateurs
Reference number
ISO/IEC 14756:1999(E)
Contents
Foreword
Introduction
Section 1: General
1 Scope
2 Conformance
3 Normative reference
4 Definitions
5 Abbreviations and symbols
5.1 Abbreviations
5.2 Symbols
Section 2: Principles of measurement and rating
6 The measurement
6.1 Configuration requirements
6.2 User emulation
6.2.1 Random user behaviour
6.2.2 Remote terminal emulator
6.2.3 Workload parameter set
6.2.4 Parameter set for proving the accuracy of the user emulation
6.3 The measurement procedure
6.3.1 The time phases of the measurement procedure
6.3.2 Writing a measurement logfile
6.3.3 Writing a computation result file
6.4 Proof of validity of the measurement
6.4.1 Proof of the CBSS's computational correctness
6.4.2 Proof of the remote terminal emulator's accuracy
6.4.3 Proof of the measurement result's statistical significance
7 Calculation of the performance values of the SUT
7.1 Mean execution time
7.2 Throughput
7.3 Timely throughput
8 Basic data for rating
8.1 User requirements
8.2 The reference environment for rating software efficiency
8.2.1 Reference environment for assessing application software efficiency
8.2.2 Reference environment for assessing system software efficiency
9 Rating the performance values
9.1 Computing the performance reference values
9.1.1 Mean execution time reference values
9.1.2 Throughput reference values
9.2 Computing the performance rating values
9.2.1 The mean execution time rating values
9.2.2 Throughput rating values
9.2.3 The timeliness rating values
9.3 Rating the overall performance of the SUT
9.4 Assessment of performance
9.4.1 The steps of assessment process
9.4.2 Weak reference environment
Section 3: Detailed procedure for measurement and rating
10 Input requirements
10.1 The SUT description
10.1.1 Specification of the hardware architecture and configuration
10.1.2 Specification of the system software configuration
10.1.3 The application programs
10.1.4 Additional software required for the measurement run
10.1.5 The stored data
10.1.6 Additional information for proof
10.2 The workload parameter set
10.2.1 The activity types
10.2.2 Activity input variation
10.2.3 The task types with timeliness function and task mode
10.2.4 The chain types and their frequencies
10.2.5 Preparation times' mean values and their standard deviations
10.3 Input for measurement validation
10.3.1 Correct computation results
10.3.2 Variation of input data and its resulting output
10.3.3 Criteria for precision of working of the RTE
10.3.4 Criteria for statistical validity of results
11 The measurement
11.1 The measurement procedure
11.2 Individual rating interval
12 Output from measurement procedure
12.1 Measurement logfile
12.2 Computation result file
13 Validation of measurements
13.1 Validation of the computational correctness of the SUT
13.2 Validation of the accuracy of the RTE
13.2.1 Validity test by checking the relative chain frequencies
13.2.2 Validity test by checking the preparation times
13.3 Validation of the statistical significance of the measured mean execution time
14 Calculation of the performance values of the SUT
14.1 Mean execution time
14.2 Throughput
14.3 Timely throughput
15 Rating the measured performance values of the SUT
15.1 Specification of rating level
15.2 Computing performance reference values
15.2.1 Mean execution time reference values
15.2.2 Throughput reference values
15.3 Computing rating values
15.3.1 Computing mean execution time rating values
15.3.2 Computing throughput rating values
15.3.3 Computing timeliness rating values
15.4 Rating
...
