Information technology — Multimedia content description interface — Part 17: Compression of neural networks for multimedia content description and analysis

This document specifies Neural Network Coding (NNC) as a compressed representation of the parameters/weights of a trained neural network and a decoding process for the compressed representation, complementing the description of the network topology in existing (exchange) formats for neural networks. It establishes a toolbox of compression methods, specifying (where applicable) the resulting elements of the compressed bitstream. Most of these tools can be applied to the compression of entire neural networks, and some of them can also be applied to the compression of differential updates of neural networks with respect to a base network. Such differential updates are for example useful when models are redistributed after fine-tuning or transfer learning, or when providing versions of a neural network with different compression ratios.

This document does not specify a complete protocol for the transmission of neural networks, but focuses on compression of network parameters. Only the syntax format, semantics, associated decoding process requirements, parameter sparsification, parameter transformation methods, parameter quantization, entropy coding method and integration/signalling within existing exchange formats are specified, while other matters such as pre-processing, system signalling and multiplexing, data loss recovery and post-processing are considered to be outside the scope of this document. Additionally, the internal processing steps performed within a decoder are also considered to be outside the scope of this document; only the externally observable output behaviour is required to conform to the specifications of this document.

Technologies de l'information — Interface de description du contenu multimédia — Partie 17: Compression des réseaux neuronaux pour la description et l'analyse du contenu multimédia

General Information

Status: Published
Publication Date: 09-Jan-2024
Current Stage: 6060 - International Standard published
Start Date: 10-Jan-2024
Due Date: 26-Feb-2024
Completion Date: 10-Jan-2024
Ref Project:

Relations

Standard
ISO/IEC 15938-17:2024 - Information technology — Multimedia content description interface — Part 17: Compression of neural networks for multimedia content description and analysis. Released: 10.01.2024
English language
95 pages

Standards Content (Sample)


International Standard ISO/IEC 15938-17
Second edition, 2024-01
Information technology — Multimedia content description interface —
Part 17:
Compression of neural networks for multimedia content description and analysis
Technologies de l'information — Interface de description du
contenu multimédia —
Partie 17: Compression des réseaux neuronaux pour la
description et l'analyse du contenu multimédia
Reference number: ISO/IEC 15938-17:2024(en)
© ISO/IEC 2024
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
© ISO/IEC 2024 – All rights reserved
Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Abbreviated terms, conventions and symbols
4.1 General
4.2 Abbreviated terms
4.3 List of symbols
4.4 Number formats and computation conventions
4.5 Arithmetic operators
4.6 Logical operators
4.7 Relational operators
4.8 Bit-wise operators
4.9 Assignment operators
4.10 Range notation
4.11 Mathematical functions
4.12 Array functions
4.13 Order of operation precedence
4.14 Variables, syntax elements and tables
5 Overview
5.1 General
5.2 Compression tools
5.3 Creating encoding pipelines
6 Syntax and semantics
6.1 Specification of syntax and semantics
6.1.1 Method of specifying syntax in tabular form
6.1.2 Bit ordering
6.1.3 Specification of syntax functions and data types
6.1.4 Semantics
6.2 General bitstream syntax elements
6.2.1 NNR unit
6.2.2 Aggregate NNR unit
6.2.3 Composition of NNR bitstream
6.3 NNR bitstream syntax
6.3.1 NNR unit syntax
6.3.2 NNR unit size syntax
6.3.3 NNR unit header syntax
6.3.4 NNR unit payload syntax
6.3.5 Byte alignment syntax
6.4 Semantics
6.4.1 General
6.4.2 NNR unit size semantics
6.4.3 NNR unit header semantics
6.4.4 NNR unit payload semantics
7 Decoding process
7.1 General
7.2 NNR decompressed data formats
7.3 Decoding methods
7.3.1 General
7.3.2 Decoding method for NNR compressed payloads of type NNR_PT_INT
7.3.3 Decoding method for NNR compressed payloads of type NNR_PT_FLOAT
7.3.4 Decoding method for NNR compressed payloads of type NNR_PT_RAW_FLOAT
7.3.5 Decoding method for NNR compressed payloads of type NNR_PT_BLOCK
7.3.6 Decoding process for an integer weight tensor
8 Parameter reduction
8.1 General
8.2 Methods
8.2.1 Batchnorm folding
8.3 Syntax and semantics
8.3.1 Sparsification using compressibility loss
8.3.2 Sparsification using micro-structured pruning
8.3.3 Combined pruning and sparsification
8.3.4 Unstructured statistics-adaptive sparsification
8.3.5 Structured sparsification (global and local approach)
8.3.6 Weight unification
8.3.7 Low rank/low displacement rank for convolutional and fully connected layers
8.3.8 Batchnorm folding
8.3.9 Local scaling adaptation (LSA)
9 Parameter quantization
9.1 General
9.2 Methods
9.2.1 Uniform quantization method
9.2.2 Codebook-based method
9.2.3 Dependent scalar quantization method
9.2.4 Predictive residual encoding (PRE)
9.3 Syntax and semantics
9.3.1 Uniform quantization method
9.3.2 Codebook-based method
9.3.3 Dependent scalar quantization method
10 Entropy coding
10.1 Methods
10.1.1 DeepCABAC
10.2 Syntax and semantics
10.2.1 DeepCABAC syntax
10.3 Entropy decoding process
10.3.1 General
10.3.2 Initialization process
10.3.3 Binarization process
10.3.4 Decoding process flow
Annex A (normative) Implementation for NNEF
Annex B (informative) Implementation for ONNX®
Annex C (informative) Implementation for PyTorch®
Annex D (informative) Implementation for TensorFlow®
Annex E (informative) Recommendation for carriage of NNR bitstreams in other containers
Annex F (informative) Recommendation for naming method regarding performance metric type
Annex G (informative) Encoding side information for selected compression tools
Bibliography

Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are
members of ISO or IEC participate in the development of International Standards through technical
committees established by the respective organization to deal with particular fields of technical activity.
ISO and IEC technical committees collaborate in fields of mutual interest. Other international organizations,
governmental and non-governmental, in liaison with ISO and IEC, also take part in the work.
The procedures used to develop this document and those intended for its further maintenance are described
in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types
of document should be noted. This document was drafted in accordance with the editorial rules of the ISO/
IEC Directives, Part 2 (see www.iso.org/directives or www.iec.ch/members_experts/refdocs).
ISO and IEC draw attention to the possibility that the implementation of this document may involve the
use of (a) patent(s). ISO and IEC take no position concerning the evidence, validity or applicability of any
claimed patent rights in respect thereof. As of the date of publication of this document, ISO and IEC had
received notice of (a) patent(s) which may be required to implement this document. However, implementers
are cautioned that this may not represent the latest information, which may be obtained from the patent
database available at www.iso.org/patents and https://patents.iec.ch. ISO and IEC shall not be held
responsible for identifying any or all such patent rights.
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions
related to conformity assessment, as well as information about ISO's adherence to the World Trade
Organization (WTO) principles in the Technical Barriers to Trade (TBT) see www.iso.org/iso/foreword.html.
In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 29, Coding of audio, picture, multimedia and hypermedia information.
This second edition cancels and replaces the first edition (ISO/IEC 15938-17:2022), which has been
technically revised.
The main changes are as follows:
— Support for incremental compression of updates of neural networks relative to a base model,
— Additional sparsification tools,
— Additional entropy coding tools, leveraging dependencies in incremental updates,
— Additional quantization tools, including representation as residuals of updates, and
— Additional high-level syntax, covering the new coding tools as well as more metadata (e.g. performance
metrics).
A list of all parts in the ISO/IEC 15938 series can be found on the ISO and IEC websites.
Any feedback or questions on this document should be directed to the user’s national standards
body. A complete listing of these bodies can be found at www.iso.org/members.html and
www.iec.ch/national-committees.

Introduction
Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and
processing, media coding, data analytics and many other fields. Their recent success is based on the
feasibility of processing much larger and more complex neural networks (deep neural networks, DNNs) than in
the past, and the availability of large-scale training data sets. As a consequence, trained neural networks
contain a large number of parameters and weights, resulting in a quite large size (e.g. several hundred
MBs). Many applications require the deployment of a particular trained network instance, potentially to a
larger number of devices, which may have limitations in terms of processing power and memory (e.g. mobile
devices or smart cameras), and also in terms of communication bandwidth. Any use case, in which a trained
neural network (or its updates) needs to be deployed to a number of devices thus benefits from a standard
for the compressed representation of neural networks.
Considering that compression of neural networks is likely to have a hardware-dependent and a hardware-independent component, this document is designed as a toolbox of compression technologies. Some
of these technologies require specific representations in an exchange format (i.e. sparse representations,
adaptive quantization), and thus a normative specification for representing outputs of these technologies is
defined. Others (e.g. pruning) do not materialize in a serialized representation at all; for these, however, the required metadata is also specified. This document is independent of a particular neural network
exchange format, and interoperability with common formats is described in the annexes.
This document thus defines a high-level syntax that specifies required metadata elements and related
semantics. In cases where the structure of binary data is to be specified (e.g. decomposed matrices) this
document also specifies the actual bitstream syntax of the respective block. Annexes to the document specify the requirements and constraints of compressed neural network representations, as defined in this document, and how they are applied.
— Annex A specifies the implementation of this document with the Neural Network Exchange Format (NNEF)1), defining the use of NNEF to represent network topologies in a compressed neural network bitstream.
— Annex B provides recommendations for the implementation of this document with the Open Neural Network Exchange Format (ONNX®)2), defining the use of ONNX to represent network topologies in a compressed neural network bitstream.
— Annex C provides recommendations for the implementation of this document with the PyTorch®3) format, defining the reference to PyTorch elements in the network topology description of a compressed neural network bitstream.
— Annex D provides recommendations for the implementation of this document with the Tensorflow®4) format, defining the reference to Tensorflow elements in the network topology description of a compressed neural network bitstream.
— Annex E provides recommendations for the carriage of tensors compressed according to this document
in third party container formats.
— Annex F provides recommendations for the naming of common performance metrics to specify the
metric that was used for validation.
1) NNEF is the trademark of a product owned by The Khronos® Group. This information is given for the convenience of
users of this document and does not constitute an endorsement by ISO/IEC of the product named.
2) ONNX is the trademark of a product owned by LF PROJECTS, LLC. This information is given for the convenience of
users of this document and does not constitute an endorsement by ISO/IEC of the product named.
3) PyTorch is the trademark of a product supplied by Facebook, Inc. This information is given for the convenience of
users of this document and does not constitute an endorsement by ISO/IEC of the product named.
4) TensorFlow is the trademark of a product supplied by Google LLC. This information is given for the convenience of
users of this document and does not constitute an endorsement by ISO/IEC of the product named.

— Annex G provides recommendations for implementing the encoding side of some of the compression
tools.
The compression tools described in this document have been selected and evaluated for neural networks
used in applications for multimedia description, analysis and processing. However, they may be useful for
the compression of neural networks used in other applications and applied to other types of data.

International Standard ISO/IEC 15938-17:2024(en)
Information technology — Multimedia content description
interface —
Part 17:
Compression of neural networks for multimedia content
description and analysis
1 Scope
This document specifies Neural Network Coding (NNC) as a compressed representation of the parameters/
weights of a trained neural network and a decoding process for the compressed representation,
complementing the description of the network topology in existing (exchange) formats for neural networks.
It establishes a toolbox of compression methods, specifying (where applicable) the resulting elements of the
compressed bitstream. Most of these tools can be applied to the compression of entire neural networks, and
some of them can also be applied to the compression of differential updates of neural networks with respect
to a base network. Such differential updates are for example useful when models are redistributed after
fine-tuning or transfer learning, or when providing versions of a neural network with different compression
ratios.
This document does not specify a complete protocol for the transmission of neural networks, but focuses
on compression of network parameters. Only the syntax format, semantics, associated decoding process
requirements, parameter sparsification, parameter transformation methods, parameter quantization,
entropy coding method and integration/signalling within existing exchange formats are specified, while
other matters such as pre-processing, system signalling and multiplexing, data loss recovery and post-
processing are considered to be outside the scope of this document. Additionally, the internal processing
steps performed within a decoder are also considered to be outside the scope of this document; only the
externally observable output behaviour is required to conform to the specifications of this document.
2 Normative references
The following documents are referred to in the text in such a way that some or all of their content constitutes
requirements of this document. For dated references, only the edition cited applies. For undated references,
the latest edition of the referenced document (including any amendments) applies.
ISO/IEC 10646, Information technology — Universal coded character set (UCS)
ISO/IEC 60559, Information technology — Microprocessor Systems — Floating-Point arithmetic
IETF RFC 1950, ZLIB Compressed Data Format Specification version 3.3
NNEF-v1.0.35), Neural Network Exchange Format, The Khronos NNEF Working Group, Version 1.0.3, 2020
FIPS PUB 180-4:2015, Secure Hash Standard (SHS)
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.
5) Available from: https://www.khronos.org/registry/NNEF/specs/1.0/nnef-1.0.3.pdf

ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
3.1
aggregate NNR unit
NNR unit which carries multiple NNR units in its payload
3.2
base neural network
neural network serving as reference for a differential update
3.3
compressed neural network representation
NNR
representation of a neural network with model parameters encoded using compression tools
3.4
decomposition
transformation to express a tensor as product of two tensors
3.5
hyperparameter
parameter whose value is used to control the learning process
3.6
layer
collection of nodes operating together at a specific depth within a neural network
3.7
model parameter
coefficients of the neural network model such as weights and biases
3.8
NNR unit
data structure for carrying (compressed or uncompressed) neural network data and related metadata
3.9
parameter identifier
value that uniquely identifies a parameter throughout different incremental updates
Note 1 to entry: Parameters having the same parameter identifier are at the same position in the same tensor in
different incremental updates. This means they are co-located.
3.10
pruning
reduction of parameters in (a part of) the neural network
3.11
sparsification
increase of the number of zero-valued entries of a tensor
3.12
tensor
multidimensional structure grouping related model parameters

3.13
updated neural network
neural network resulting from modifying the base neural network
Note 1 to entry: The updated neural network is reconstructed by applying the differential update to the base neural
network.
4 Abbreviated terms, conventions and symbols
4.1 General
This subclause contains the definition of operators, notations, functions, textual conventions and processes
used throughout this document.
The mathematical operators used in this document are similar to those used in the C programming language.
However, the results of integer division and arithmetic shift operations are specified more precisely,
and additional operations are specified, such as exponentiation and real-valued division. Numbering
and counting conventions generally begin from 0, e.g. "the first" is equivalent to the 0-th, "the second" is
equivalent to the 1-th, etc.
4.2 Abbreviated terms
DeepCABAC Context-adaptive binary arithmetic coding for deep neural networks
LDR Low displacement rank
LPS Layer parameter set
LR Low-rank
LSA Local scaling adaptation
LSB Least significant bit
MPS Model parameter set
MSB Most significant bit
MSE Mean square error
NN Neural network
NNC Neural network coding
NDU NNR compressed data unit
NNEF Neural network exchange format
QP Quantization parameter
PRE Predictive residual encoding
SBT Stochastic binary-ternary quantization
SVD Singular value decomposition
4.3 List of symbols
This document defines the following symbols:

A  Input tensor
B  Output tensor
B^k_{j,l}  Block in superblock j of layer k
b  Bias parameter
C_i  Number of input channels of a convolutional layer
C_o  Number of output channels of a convolutional layer
c^k_j  Number of channels in dimension j of tensor in layer k
c′^k_j  Derived number of channels in dimension j of tensor in layer k
d^k_j  Depth dimension of tensor at layer k
e  Parameter of f-circulant matrix Z_e
F  Parameter tensor of a convolutional layer
f  Parameter of f-circulant matrix Z_f
G_k  Left-hand side matrix of Low Rank decomposed representation of matrix W_k
H_k  Right-hand side matrix of Low Rank decomposed representation of matrix W_k
h^k_j  Height dimension of tensor for layer k
K  Dimension of a convolutional kernel
L  Loss function
L_c  Compressibility loss
L_d  Diversity loss
L_s  Task loss
L_t  Training loss
M  Feature matrix
M_k  Pruning mask for layer k
m  Sparsification hyperparameter
m_i  i-th row of feature matrix M
n^k_j  Kernel size of tensor at layer k
n^k  Dimension resulting from a product over n^k_j
P  Stochastic transition matrix
p  Pruning ratio hyperparameter
p_ij  Elements of transition matrix P
q  Sparsification ratio hyperparameter
q_b  Binary quantization
q_t  Ternary quantization
S  Importance of parameters for pruning
S^k_j  Superblock j in layer k
s  Local scaling factors
s^k_j  Size of superblock j in layer k
T  Topology element
T^q  Quantizable topology element
u  Unification ratio hyperparameter
W  Parameter tensor
ΔW  Difference of parameter tensor
W_l  Weight tensor of the l-th layer
W_k  Parameter tensor of layer k
Ŵ_k  Low Rank approximation of W_k
w  Parameter vector
w_{l,i}  Vector of weights for the i-th filter in the l-th layer
w̄_{l,i}  Vector of normalized weights for the i-th filter in the l-th layer
y, y_ref  Coding performance, reference coding performance
y_d  Coding performance difference
X  Input to a batch-normalization layer
Z_e  f-circulant matrix
Z_f  f-circulant matrix
α  Folded batch normalization parameter
α′  Combined value for folded batch normalization parameter and local scaling factors
β  Batch normalization parameter
β_u  Updated batch normalization parameter
γ_c  Compressibility loss multiplier
γ  Batch normalization parameter
γ_u  Updated batch normalization parameter
δ  Folded batch normalization parameter
δ_f  Sparsification threshold (mean of filter means)
δ_s  Scaling factor for sparsification
ε  Scalar close to zero to avoid division by zero in batch normalization
λ  Eigenvector
λ_c  Compressibility loss weight
λ_d  Diversity loss weight
μ  Batch normalization parameter
v^k_j  Width dimension of tensor for layer k
π  Equilibrium probability of P
π_t  Probability of applying ternary quantization
ρ  Parameter
σ  Batch normalization parameter
τ  Threshold (sparsification, ternary-binary quantization)
θ_ρ  Weight magnitude threshold
ϕ  Smoothing factor
4.4 Number formats and computation conventions
This document defines the following number formats:
integer Integer number which may be arbitrarily small or large. Integers are also
referred to as signed integers.
unsigned integer Unsigned integer that may be zero or arbitrarily large.
float Floating point number according to ISO/IEC 60559.
If not specified otherwise, outcomes of all operators and mathematical functions are mathematically exact.
Whenever an outcome shall be a float, it is explicitly specified.
4.5 Arithmetic operators
The following arithmetic operators are defined:
+ Addition
− Subtraction (as a two-argument operator) or negation (as a unary prefix operator)
* Multiplication, including matrix multiplication
∘ Element-wise multiplication of two transposed vectors or element-wise multiplication of a transposed vector with rows of a matrix or Hadamard product of two matrices with identical dimensions
x^y Exponentiation. Specifies x to the power of y. In other contexts, such notation is used for superscripting not intended for interpretation as exponentiation.

/ Integer division with truncation of the result toward zero. For example, 7 / 4 and −7 /
−4 are truncated to 1 and −7 / 4 and 7 / −4 are truncated to −1.
÷ Used to denote division in mathematical equations where no truncation or rounding is intended.
x/y (fraction bar) Used to denote division in mathematical equations where no truncation or rounding is intended, including element-wise division of two transposed vectors or element-wise division of a transposed vector with rows of a matrix.
∑_{i=x}^{y} f( i ) The summation of f( i ) with i taking all integer values from x up to and including y.
∏_{i=x}^{y} f( i ) The product of f( i ) with i taking all integer values from x up to and including y.
x % y Modulus. Remainder of x divided by y, defined only for integers x and y with x ≥ 0 and y > 0.
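The truncating division "/" and the modulus "%" above differ from Python's floor-based operators for negative operands. A minimal sketch (not part of this document; function names are illustrative) of how an implementation might reproduce the specified behaviour:

```python
def spec_div(x: int, y: int) -> int:
    """Integer division with truncation toward zero, as defined for '/'.

    Python's // floors toward negative infinity, so when the signs differ
    and the division is inexact, the result must be moved back toward zero.
    """
    q = x // y
    if q < 0 and q * y != x:
        q += 1
    return q

def spec_mod(x: int, y: int) -> int:
    """x % y, defined only for integers with x >= 0 and y > 0."""
    assert x >= 0 and y > 0
    return x % y

# Examples from the text: 7 / 4 and -7 / -4 truncate to 1;
# -7 / 4 and 7 / -4 truncate to -1.
print(spec_div(7, 4), spec_div(-7, -4), spec_div(-7, 4), spec_div(7, -4))
```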
4.6 Logical operators
The following logical operators are defined:
x && y Boolean logical "and" of x and y
x || y Boolean logical "or" of x and y
! Boolean logical "not"
x ? y : z If x is TRUE or not equal to 0, evaluates to the value of y; otherwise, evaluates to the
value of z.
4.7 Relational operators
The following relational operators are defined as follows:
> Greater than
≥ Greater than or equal to
< Less than
≤ Less than or equal to
== Equal to
!= Not equal to
When a relational operator is applied to a syntax element or variable that has been assigned the value "na"
(not applicable), the value "na" is treated as a distinct value for the syntax element or variable. The value
"na" is considered not to be equal to any other value.
4.8 Bit-wise operators
The following bit-wise operators are defined as follows:
& Bit-wise "and". When operating on integer arguments, operates on a two's complement
representation of the integer value. When operating on a binary argument that contains
fewer bits than another argument, the shorter argument is extended by adding more
significant bits equal to 0.
| Bit-wise "or". When operating on integer arguments, operates on a two's complement
representation of the integer value. When operating on a binary argument that contains
fewer bits than another argument, the shorter argument is extended by adding more
significant bits equal to 0.
^ Bit-wise "exclusive or". When operating on integer arguments, operates on a two's com-
plement representation of the integer value. When operating on a binary argument that
contains fewer bits than another argument, the shorter argument is extended by adding
more significant bits equal to 0.
x >> y Arithmetic right shift of a two's complement integer representation of x by y binary
digits. This function is defined only for non-negative integer values of y. Bits shifted
into the MSBs as a result of the right shift have a value equal to the MSB of x prior to
the shift operation.
x << y Arithmetic left shift of a two's complement integer representation of x by y binary digits.
This function is defined only for non-negative integer values of y. Bits shifted into the
LSBs as a result of the left shift have a value equal to 0.
! Bit-wise not operator returning 1 if applied to 0 and 0 if applied to 1.
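Since Python integers behave as unbounded two's-complement values, the bit-wise conventions above map directly onto Python's operators; a small illustrative sketch (helper names are not from this document):

```python
def shr(x: int, y: int) -> int:
    """Arithmetic right shift: vacated MSBs take the sign bit of x."""
    assert y >= 0
    return x >> y  # Python's >> on ints is already arithmetic

def shl(x: int, y: int) -> int:
    """Arithmetic left shift: vacated LSBs are filled with 0."""
    assert y >= 0
    return x << y

print(shr(-8, 1))           # -4: the sign bit is replicated into the MSBs
print(shl(-8, 1))           # -16
print(5 & 3, 5 | 3, 5 ^ 3)  # two's-complement bit-wise and/or/xor: 1 7 6
```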
4.9 Assignment operators
The following arithmetic operators are defined as follows:
= Assignment operator
++ Increment, i.e. x++ is equivalent to x = x + 1; when used in an array index, evaluates to
the value of the variable prior to the increment operation.
−− Decrement, i.e. x−− is equivalent to x = x − 1; when used in an array index, evaluates to
the value of the variable prior to the decrement operation.
+= Increment by amount specified, i.e. x += 3 is equivalent to x = x + 3, and x += (−3) is
equivalent to x = x + (−3).
−= Decrement by amount specified, i.e. x −= 3 is equivalent to x = x − 3, and x −= (−3) is
equivalent to x = x − (−3).
4.10 Range notation
The following notation is used to specify a range of values:
x = y..z x takes on integer values starting from y to z, inclusive, with x, y, and z being integer numbers and z being greater than y.
array[x, y] a sub-array containing the elements of array between positions x and y, inclusive. If x is greater than y, the resulting sub-array is empty.
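The sub-array notation includes both endpoints, unlike Python's half-open slices; a minimal sketch (function name is illustrative, not from this document):

```python
def sub_array(array, x: int, y: int):
    """array[x, y] as defined above: elements from position x to y, inclusive.

    Empty when x > y. Note the difference from Python's half-open
    slice array[x:y], which excludes position y.
    """
    if x > y:
        return []
    return array[x : y + 1]

a = [10, 20, 30, 40]
print(sub_array(a, 1, 3))  # [20, 30, 40]: position y is included
print(sub_array(a, 2, 1))  # []: empty when x > y
```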
4.11 Mathematical functions
The following mathematical functions are defined:
Ceil( x ) the smallest integer greater than or equal to x
Floor( x ) the largest integer less than or equal to x
Log2( x ) the base-2 logarithm of x

Min( x, y ) the smaller of x and y: x if x ≤ y, otherwise y
Max( x, y ) the larger of x and y: x if x ≥ y, otherwise y
4.12 Array functions
Size( arrayName[] ) returns the number of elements contained in the array or tensor named arrayName. If
arrayName[] is a tensor this corresponds to the product of all dimensions of the tensor.
Prod( arrayName[] ) returns the product of all elements of array arrayName[].
TensorReshape( arrayName[], tensorDimension[] ) returns the reshaped tensor arrayName[] with the specified tensorDimension[], without changing its data.
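As a rough illustration of Size, Prod and TensorReshape, here is a pure-Python sketch over a flat buffer plus a dimension tuple (this representation and the function names are illustrative assumptions, not part of this document):

```python
from math import prod

def tensor_size(dims):
    """Size() for a tensor: the product of all its dimensions."""
    return prod(dims)

def array_prod(values):
    """Prod(): the product of all elements of an array."""
    return prod(values)

def tensor_reshape(flat, new_dims):
    """TensorReshape(): reinterpret the flat data under new dimensions
    without changing it; the element count must stay the same."""
    assert len(flat) == prod(new_dims)
    return flat, tuple(new_dims)

data = list(range(12))                  # a 3x4 tensor, stored flat
print(tensor_size((3, 4)))              # 12
print(array_prod([2, 3, 4]))            # 24
print(tensor_reshape(data, (2, 6))[1])  # (2, 6)
```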
IndexToXY(w, h, i, bs) returns an array with two elements. The first element is an x coordinate and the second
element is a y coordinate pointing into a 2D array of width w and height h. x and y point to the position that
corresponds to scan index i when the block is scanned in blocks of size bs times bs. x and y are derived as
follows:
A variable fullRowOfBlocks is set to w * bs
A variable blockY is set to i / fullRowOfBlocks
A variable iOff is set to i % fullRowOfBlocks
A variable currBlockH is set to Min( bs, h − blockY * bs)
...
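The derivation of IndexToXY is truncated above. The following sketch reproduces the steps that are shown and then continues under an explicit assumption, namely that elements are scanned row by row inside each block; the continuation is hypothetical and not quoted from this document:

```python
def index_to_xy(w, h, i, bs):
    """IndexToXY(w, h, i, bs): map scan index i to (x, y) in a w-by-h
    array scanned in bs-by-bs blocks. Steps up to curr_block_h follow
    the text; the rest is an assumed row-major-within-block continuation."""
    full_row_of_blocks = w * bs               # elements in one full row of blocks
    block_y = i // full_row_of_blocks         # which row of blocks contains i
    i_off = i % full_row_of_blocks            # offset inside that row of blocks
    curr_block_h = min(bs, h - block_y * bs)  # last block row may be shorter
    # Assumed continuation: every block in the row before the last holds
    # curr_block_h * bs elements; the last block may be narrower.
    block_x = i_off // (curr_block_h * bs)
    in_block = i_off % (curr_block_h * bs)
    curr_block_w = min(bs, w - block_x * bs)
    x = block_x * bs + in_block % curr_block_w
    y = block_y * bs + in_block // curr_block_w
    return [x, y]

# Scanning a 4x4 array in 2x2 blocks: index 4 is the first element of the
# second block in the top row, i.e. position (2, 0).
print(index_to_xy(4, 4, 4, 2))  # [2, 0]
```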
