Information technology — Coding of audio-visual objects — Part 3: Audio

This document integrates many different types of audio coding: natural sound with synthetic sound, low-bitrate delivery with high-quality delivery, speech with music, complex soundtracks with simple ones, and traditional content with interactive and virtual-reality content. It standardizes individually sophisticated coding tools and provides a novel, flexible framework for audio synchronization, mixing, and downloaded post-production. This document does not target a single application such as real-time telephony or high-quality audio compression; rather, it applies to every application requiring advanced sound compression, synthesis, manipulation, or playback. It specifies state-of-the-art coding tools in several domains, and as these tools are integrated with the rest of the ISO/IEC 14496 series, exciting new possibilities for object-based audio coding, interactive presentation, dynamic soundtracks, and other sorts of new media are enabled.

Technologies de l'information — Codage des objets audiovisuels — Partie 3: Codage audio

General Information

Status: Published
Publication Date: 11-Dec-2019
Current Stage: 6060 - International Standard published
Start Date: 12-Dec-2019
Due Date: 22-Jun-2021
Completion Date: 12-Dec-2019
Standard
ISO/IEC 14496-3:2019 — Information technology — Coding of audio-visual objects — Part 3: Audio
English language, 1443 pages

Standards Content (Sample)

INTERNATIONAL ISO/IEC
STANDARD 14496-3
Fifth edition
2019-12
Information technology — Coding of
audio-visual objects —
Part 3:
Audio
Technologies de l'information — Codage des objets audiovisuels —
Partie 3: Codage audio
Reference number
ISO/IEC 14496-3:2019(E)
© ISO/IEC 2019


COPYRIGHT PROTECTED DOCUMENT
© ISO/IEC 2019
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting
on the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address
below or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Fax: +41 22 749 09 47
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
Contents
Foreword  iv
0 Introduction  v
1 Scope  1
2 Normative references  1
3 Terms and definitions  2
4 Abbreviated terms  21
5 Structure of this document  23
Subpart 1: Main  24
Subpart 2: Speech coding — HVXC  169
Subpart 3: Speech coding — CELP  320
Subpart 4: General Audio coding (GA) — AAC, TwinVQ, BSAC  483
Subpart 5: Structured Audio (SA)  895
Subpart 6: Text To Speech Interface (TTSI)  1043
Subpart 7: Parametric Audio Coding — HILN  1053
Subpart 8: Parametric coding for high quality audio — SSC  1112
Subpart 9: MPEG-1/2 Audio in MPEG-4  1231
Subpart 10: Lossless coding of oversampled audio — DST  1244
Subpart 11: Audio lossless coding — ALS  1281
Subpart 12: Scalable lossless coding — SLS  1355
Subpart 13: Audio Synchronization  1429
Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are members of
ISO or IEC participate in the development of International Standards through technical committees
established by the respective organization to deal with particular fields of technical activity. ISO and IEC
technical committees collaborate in fields of mutual interest. Other international organizations, governmental
and non-governmental, in liaison with ISO and IEC, also take part in the work.
The procedures used to develop this document and those intended for its further maintenance are described
in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types of
document should be noted. This document was drafted in accordance with the editorial rules of the
ISO/IEC Directives, Part 2 (see www.iso.org/directives).
Attention is drawn to the possibility that some of the elements of this document may be the subject of patent
rights. ISO and IEC shall not be held responsible for identifying any or all such patent rights. Details of any
patent rights identified during the development of the document will be in the Introduction and/or on the ISO
list of patent declarations received (see www.iso.org/patents) or the IEC list of patent declarations received
(see http://patents.iec.ch).
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions
related to conformity assessment, as well as information about ISO's adherence to the World Trade
Organization (WTO) principles in the Technical Barriers to Trade (TBT), see www.iso.org/iso/foreword.html.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 29, Coding of audio, picture, multimedia and hypermedia information.
This fifth edition cancels and replaces the fourth edition (ISO/IEC 14496-3:2009), which has been technically
revised. It incorporates the Amendments ISO/IEC 14496-3:2009/Amd.1:2009, ISO/IEC 14496-3:2009/Amd.2:2010,
ISO/IEC 14496-3:2009/Amd.3:2012, ISO/IEC 14496-3:2009/Amd.4:2013, ISO/IEC 14496-3:2009/Amd.4:2013/Cor.1:2015,
ISO/IEC 14496-3:2009/Amd.5:2015, ISO/IEC 14496-3:2009/Amd.6:2017 and ISO/IEC 14496-3:2009/Amd.7:2018,
as well as the Technical Corrigenda ISO/IEC 14496-3:2009/Cor.1:2009, ISO/IEC 14496-3:2009/Cor.2:2011,
ISO/IEC 14496-3:2009/Cor.3:2012, ISO/IEC 14496-3:2009/Cor.4:2012, ISO/IEC 14496-3:2009/Cor.5:2015,
ISO/IEC 14496-3:2009/Cor.6:2015 and ISO/IEC 14496-3:2009/Cor.7:2015.
A list of all parts in the ISO/IEC 14496 series can be found on the ISO website.
Any feedback or questions on this document should be directed to the user’s national standards body. A
complete listing of these bodies can be found at www.iso.org/members.html.
0 Introduction
0.1 Overview
ISO/IEC 14496-3 (MPEG-4 Audio) is a new kind of audio standard that integrates many different types of
audio coding: natural sound with synthetic sound, low bitrate delivery with high-quality delivery, speech with
music, complex soundtracks with simple ones, and traditional content with interactive and virtual-reality
content. By standardizing individually sophisticated coding tools as well as a novel, flexible framework for
audio synchronization, mixing, and downloaded post-production, the developers of the MPEG-4 Audio
standard have created new technology for a new, interactive world of digital audio.
MPEG-4, unlike previous audio standards created by ISO/IEC and other groups, does not target a single
application such as real-time telephony or high-quality audio compression. Rather, MPEG-4 Audio is a
standard that applies to every application requiring the use of advanced sound compression, synthesis,
manipulation, or playback. The subparts that follow specify the state-of-the-art coding tools in several
domains; however, MPEG-4 Audio is more than just the sum of its parts. As the tools described here are
integrated with the rest of the MPEG-4 standard, exciting new possibilities for object-based audio coding,
interactive presentation, dynamic soundtracks, and other sorts of new media are enabled.
Since a single set of tools is used to cover the needs of a broad range of applications, interoperability is a
natural feature of systems that depend on the MPEG-4 Audio standard. A system that uses a particular coder
— for example a real-time voice communication system making use of the MPEG-4 speech coding toolset —
can easily share data and development tools with other systems, even in different domains, that use the same
tool — for example a voicemail indexing and retrieval system making use of MPEG-4 speech coding.
The remainder of this clause gives a more detailed overview of the capabilities and functioning of MPEG-4
Audio. First, concepts that have changed since the MPEG-2 Audio standards are discussed. Then the
MPEG-4 Audio toolset is outlined.
0.2 Concepts of MPEG-4 Audio
0.2.1 General
As with previous MPEG standards, MPEG-4 does not standardize methods for encoding sound. Content
authors are thus left to decide for themselves on the best method of creating bitstream payloads. At present,
methods to automatically convert natural sound into synthetic or multi-object descriptions are not mature;
therefore, most immediate solutions will involve interactively authoring the content stream in some way. This
process is similar to current schemes for MIDI-based and multi-channel mixdown authoring of soundtracks.
Many concepts in MPEG-4 Audio differ from those in previous MPEG Audio standards. For the benefit
of readers who are familiar with MPEG-1 and MPEG-2, a brief overview is provided here.
0.2.2 Audio storage and transport facilities
In all of the MPEG-4 tools for audio coding, the coding standard ends at the point of constructing access units
that contain the compressed data. The MPEG-4 Systems (ISO/IEC 14496-1) specification describes how to
convert these individually coded access units into elementary streams.
There is no standardized transport mechanism for these elementary streams over a channel, because the
broad range of applications that can make use of MPEG-4 technology has delivery requirements too wide
to characterize with a single solution. Rather, what is standardized is an interface (the Delivery
Multimedia Interface Format, or DMIF, specified in ISO/IEC 14496-6) that describes the capabilities of a
transport layer and the communication between transport, multiplex, and demultiplex functions in encoders
and decoders. The use of DMIF and the MPEG-4 Systems specification allows transmission functions that are
much more sophisticated than are possible with previous MPEG standards.
However, LATM and LOAS were defined to provide a low overhead audio multiplex and transport mechanism
for natural audio applications, which do not require sophisticated object-based coding or other functions
provided by MPEG-4 Systems.
Table 0.1 gives an overview of the multiplex, storage and transmission formats currently available for
MPEG-4 Audio within the MPEG-4 framework:
Table 0.1 – MPEG-4 Audio multiplex, storage and transmission formats

Format  Functionality defined in MPEG-4   Functionality originally defined in   Description
M4Mux   ISO/IEC 14496-1 (normative)       -                                     MPEG-4 Multiplex scheme
LATM    ISO/IEC 14496-3 (normative)       -                                     Low Overhead Audio Transport Multiplex
ADIF    ISO/IEC 14496-3 (informative)     ISO/IEC 13818-7 (normative)           Audio Data Interchange Format (AAC only)
MP4FF   ISO/IEC 14496-12 (normative)      -                                     MPEG-4 File Format
ADTS    ISO/IEC 14496-3 (informative)     ISO/IEC 13818-7 (normative,           Audio Data Transport Stream (AAC only)
                                          exemplarily)
LOAS    ISO/IEC 14496-3 (normative,       -                                     Low Overhead Audio Stream, based on LATM;
        exemplarily)                                                            three versions are available:
                                                                                AudioSyncStream(), EPAudioSyncStream(),
                                                                                AudioPointerStream()
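As an informative illustration of how compact this framing is, the following sketch parses the fields of the 7-byte ADTS header (field layout as defined in ISO/IEC 13818-7; the function and key names are illustrative, not part of the standard):

```python
def parse_adts_header(b: bytes) -> dict:
    """Parse the first 7 bytes of an ADTS frame header.

    Field layout follows ISO/IEC 13818-7; the names used here are
    illustrative only.
    """
    if len(b) < 7 or b[0] != 0xFF or (b[1] & 0xF0) != 0xF0:
        raise ValueError("not an ADTS syncword (0xFFF)")
    sampling_freq = [96000, 88200, 64000, 48000, 44100, 32000,
                     24000, 22050, 16000, 12000, 11025, 8000]
    sf_index = (b[2] >> 2) & 0x0F
    return {
        "mpeg4": (b[1] >> 3) & 1 == 0,           # ID bit: 0 = MPEG-4
        "protection_absent": b[1] & 1 == 1,      # 1 = no CRC follows
        "profile": b[2] >> 6,                    # audio object type minus 1
        "sampling_rate": sampling_freq[sf_index],
        "channel_config": ((b[2] & 1) << 2) | (b[3] >> 6),
        "frame_length": ((b[3] & 0x03) << 11) | (b[4] << 3) | (b[5] >> 5),
    }

# Example: a hand-built header for 48 kHz, 2 channels, 371-byte frame
hdr = bytes([0xFF, 0xF1, 0x4C, 0x80, 0x2E, 0x7F, 0xFC])
info = parse_adts_header(hdr)
```

Seven bytes of overhead per frame is what makes ADTS (and, similarly, LATM/LOAS) attractive for audio-only applications that do not need the full MPEG-4 Systems layer.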

To allow a user on the remote side of a channel to dynamically control a server streaming MPEG-4
content, MPEG-4 defines backchannel streams that can carry user interaction information.
0.2.3 MPEG-4 Audio supports low-bitrate coding
Previous MPEG Audio standards have focused primarily on transparent (undetectable) or nearly transparent
coding of high-quality audio at whatever bitrate was required to provide it. MPEG-4 provides new and
improved tools for this purpose, but also standardizes (and has tested) tools that can be used for transmitting
audio at the low bitrates suitable for Internet, digital radio, or other bandwidth-limited delivery. The new tools
specified in MPEG-4 are the state-of-the-art tools that support low-bitrate coding of speech and other audio.
0.2.4 MPEG-4 Audio is an object-based coding standard with multiple tools
Previous MPEG Audio standards provided a single toolset, with different configurations of that toolset
specified for use in various applications. MPEG-4 provides several toolsets that have no particular relationship
to each other, each with a different target function. The profiles of MPEG-4 Audio specify which of these tools
are used together for various applications.
Further, in previous MPEG standards, a single (perhaps multi-channel or multi-language) piece of content was
transmitted. In contrast, MPEG-4 supports a much more flexible concept of a soundtrack. Multiple tools may
be used to transmit several audio objects, and when using multiple tools together an audio composition
system is provided to create a single soundtrack from the several audio substreams. User interaction, terminal
capability, and speaker configuration may be used when determining how to produce a single soundtrack from
the component objects. This capability gives MPEG-4 significant advantages in quality and flexibility when
compared to previous audio standards.
0.2.5 MPEG-4 Audio provides capabilities for synthetic sound
In natural sound coding, an existing sound is compressed by a server, transmitted and decompressed at the
receiver. This type of coding is the subject of many existing standards for sound compression. In contrast,
MPEG-4 standardizes a novel paradigm in which synthetic sound descriptions, including synthetic speech and
synthetic music, are transmitted and then synthesized into sound at the receiver. Such capabilities open up
new areas of very-low-bitrate but still very-high-quality coding.
0.2.6 MPEG-4 Audio provides capabilities for error robustness
Improved error robustness capabilities for all coding tools are provided through the error resilient bitstream
payload syntax. This tool supports advanced channel coding techniques, which can be adapted to the special
needs of given coding tools and a given communications channel. This error resilient bitstream payload syntax
is mandatory for all error resilient object types.
The error protection tool (EP tool) provides unequal error protection (UEP) for MPEG-4 Audio in conjunction
with the error resilient bitstream payload. UEP is an efficient method to improve the error robustness of source
coding schemes. It is used by various speech and audio coding systems operating over error-prone channels
such as mobile telephone networks or Digital Audio Broadcasting (DAB). The bits of the coded signal
representation are first grouped into different classes according to their error sensitivity. Then error protection
is individually applied to the different classes, giving better protection to more sensitive bits.
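The class-wise protection described above can be shown with a deliberately simplified example; a repetition code stands in for the actual channel codes of the EP tool, which are far more sophisticated, and all names are illustrative:

```python
def uep_encode(class_a, class_b):
    """Toy unequal error protection: repeat each sensitive class-A bit
    three times (majority-vote decodable); send class-B bits unprotected."""
    protected = [bit for bit in class_a for _ in range(3)]
    return protected + class_b

def uep_decode(rx, n_a):
    """Majority-vote the repeated class-A bits; pass class-B through."""
    a = [int(sum(rx[3 * i:3 * i + 3]) >= 2) for i in range(n_a)]
    return a, rx[3 * n_a:]

# A single bit error in the protected region is corrected:
tx = uep_encode([1, 0], [1, 1, 0])
tx[1] ^= 1                      # channel flips one repeated class-A bit
a_bits, b_bits = uep_decode(tx, 2)
```

The point of the grouping is exactly this asymmetry: redundancy is spent where a bit error would be most audible, while less sensitive bits travel at lower cost.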
Improved error robustness for AAC is provided by a set of error resilience tools. These tools reduce the
perceived degradation of the decoded audio signal that is caused by corrupted bits in the bitstream payload.
0.2.7 MPEG-4 Audio provides capabilities for scalability
Previous MPEG Audio standards provided a single bitrate, single bandwidth toolset, with different
configurations of that toolset specified for use in various applications. MPEG-4 provides several bitrate and
bandwidth options within a single stream, providing a scalability functionality that permits a given stream to
scale to the requirement of different channels and applications or to be responsive to a given channel that has
dynamic throughput characteristics. The tools specified in MPEG-4 are the state-of-the-art tools providing
scalable compression of speech and audio signals.
0.3 The MPEG-4 Audio tool set
0.3.1 Speech coding tools
0.3.1.1 Overview
Speech coding tools are designed for the transmission and decoding of synthetic and natural speech.
Two types of speech coding tools are provided in MPEG-4. The natural speech tools allow the compression,
transmission, and decoding of human speech, for use in telephony, personal communication, and surveillance
applications. The synthetic speech tool provides an interface to text-to-speech synthesis systems; using
synthetic speech provides very-low-bitrate operation and built-in connection with facial animation for use in
low-bitrate video teleconferencing applications.
0.3.1.2 Natural speech coding
The MPEG-4 speech coding toolset covers the compression and decoding of natural speech sound at bitrates
ranging between 2 and 24 kbit/s. When variable bitrate coding is allowed, coding at even less than 2 kbit/s, for
example an average bitrate of 1.2 kbit/s, is also supported. Two basic speech coding techniques are used:
One is a parametric speech coding algorithm, HVXC (Harmonic Vector eXcitation Coding), for very low bit
rates; the other is a CELP (Code Excited Linear Prediction) coding technique. The target applications of the
MPEG-4 speech coders range from mobile and satellite communications to Internet telephony, packaged
media and speech databases, and the coders meet a wide range of requirements encompassing bitrate,
functionality and sound quality.
MPEG-4 HVXC operates at fixed bitrates between 2.0 kbit/s and 4.0 kbit/s using a bitrate scalability technique.
It also operates at lower bitrates, typically 1.2 - 1.7 kbit/s, using a variable bitrate technique. HVXC provides
communications-quality to near-toll-quality speech in the 100 Hz – 3800 Hz band at 8 kHz sampling rate.
HVXC also allows independent change of speed and pitch during decoding, which is a powerful functionality
for fast access to speech databases. HVXC functionalities include 2.0 - 4.0 kbit/s fixed bitrate modes and a
variable bitrate mode with a maximum bitrate of 2.0 kbit/s.
Error Resilient (ER) HVXC extends operation of the variable bitrate mode to 4.0 kbit/s to allow higher quality
variable rate coding. The ER HVXC therefore provides fixed bitrate modes of 2.0 - 4.0 kbit/s and a variable
bitrate of either less than 2.0 kbit/s or less than 4.0 kbit/s, both in scalable and non-scalable modes. In the
variable bitrate modes, non-speech parts are detected in unvoiced signals, and a smaller number of bits are
used for these non-speech parts to reduce the average bitrate. ER HVXC provides communications-quality to
near-toll-quality speech in the 100 Hz - 3800 Hz band at 8 kHz sampling rate. When the variable bitrate mode
is allowed, operation at a lower average bitrate is possible. Coded speech using the variable bitrate mode at
typical average bitrates of 1.5 kbit/s and 3.0 kbit/s has essentially the same quality as the 2.0 kbit/s and
4.0 kbit/s fixed rates, respectively. The functionality of pitch and speed change during
decoding is supported for all modes. ER HVXC has a bitstream payload syntax with the error sensitivity
classes to be used with the EP-Tool, and some error concealment functionality is supported for use in error-
prone channels such as mobile communication channels. The target applications of the ER HVXC speech
coder range from mobile and satellite communications to Internet telephony, packaged media and speech
databases.
MPEG-4 CELP is a well-known coding algorithm with new functionality. Conventional CELP coders offer
compression at a single bit rate and are optimized for specific applications. Compression is one of the
functionalities provided by MPEG-4 CELP, but MPEG-4 also enables the use of one basic coder in multiple
applications. It provides scalability in bitrate and bandwidth, as well as the ability to generate bitstream
payloads at arbitrary bitrates. The MPEG-4 CELP coder supports two sampling rates, namely, 8 kHz and
16 kHz. The associated bandwidths are 100 Hz – 3800 Hz for 8 kHz sampling and 50 Hz – 7000 Hz for 16
kHz sampling. The silence compression tool comprises a voice activity detector (VAD), a discontinuous
transmission (DTX) unit and a comfort noise generator (CNG) module. The tool encodes/decodes the input
signal at a lower bitrate during the non-active-voice (silent) frames. During the active-voice (speech) frames,
MPEG-4 CELP encoding and decoding are used.
The silence compression tool reduces the average bitrate by coding silence at a lower bitrate.
In the encoder, a voice activity detector is used to distinguish between regions with normal speech activity and
those with silence or background noise. During normal speech activity, the CELP coding is used. Otherwise a
silence insertion descriptor (SID) is transmitted at a lower bitrate. This SID enables a comfort noise generator
(CNG) in the decoder. The amplitude and the spectral shape of this comfort noise are specified by energy and
LPC parameters in methods similar to those used in a normal CELP frame. These parameters are optionally
re-transmitted in the SID and thus can be updated as required.
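The VAD/DTX/CNG interaction can be sketched as follows; the energy threshold, the SID update rule, and all names are illustrative stand-ins, not the normative MPEG-4 algorithms:

```python
def dtx_classify(frames, threshold=0.01):
    """Toy DTX scheduler: an energy-based VAD marks each frame as speech
    ('CELP') or silence; silent frames send a low-rate SID update only
    when the noise estimate drifts by more than 50 % since the last SID,
    otherwise nothing is transmitted and the decoder keeps its CNG."""
    out, last_sid_energy = [], None
    for frame in frames:
        energy = sum(x * x for x in frame) / len(frame)
        if energy >= threshold:                      # active voice
            out.append("CELP")
            last_sid_energy = None
        elif (last_sid_energy is None
              or abs(energy - last_sid_energy) > 0.5 * last_sid_energy):
            out.append("SID")                        # update comfort noise
            last_sid_energy = energy
        else:
            out.append("NO_DATA")                    # decoder keeps CNG
    return out

speech = [0.5, -0.4, 0.3, -0.5]
noise = [0.01, -0.02, 0.01, 0.0]
plan = dtx_classify([speech, noise, noise, speech])
```

The average-bitrate saving comes from the "NO_DATA" frames: during stable background noise nothing needs to be sent at all, and only occasional SID updates keep the decoder's comfort noise aligned with reality.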
MPEG has conducted extensive verification testing in realistic listening conditions in order to prove the
efficacy of the speech coding toolset.
0.3.1.3 Text-to-speech interface
Text-to-speech (TTS) capability is becoming a rather common media type and plays an important role in
various multi-media application areas. For instance, by using TTS functionality, multimedia content with
narration can be easily created without recording natural speech. Before MPEG-4, however, there was no way
for a multimedia content provider to easily give instructions to an unknown TTS system. With MPEG-4 TTS
Interface, a single common interface for TTS systems is standardized. This interface allows speech
information to be transmitted in the international phonetic alphabet (IPA), or in a textual (written) form of any
language.
The MPEG-4 Hybrid/Multi-Level Scalable TTS Interface is a superset of the conventional TTS framework.
This extended TTS Interface can utilize prosodic information taken from natural speech in addition to input text
and can thus generate much higher-quality synthetic speech. The interface and its bitstream payload format is
scalable in terms of this added information; for example, if some parameters of prosodic information are not
available, a decoder can generate the missing parameters by rule. Normative algorithms for speech synthesis
and text-to-phoneme translation are not specified in MPEG-4, but to meet the goal that underlies the MPEG-4
TTS Interface, a decoder should fully utilize all the provided information according to the user’s requirements
level.
As well as an interface to text-to-speech synthesis systems, MPEG-4 specifies a joint coding method for
phonemic information and facial animation (FA) parameters and other animation parameters (AP). Using this
technique, a single bitstream payload may be used to control both the text-to-speech interface and the facial
animation visual object decoder (see ISO/IEC 14496-2, Annex C). The functionality of this extended TTS thus
ranges from conventional TTS to natural speech coding and its application areas, from simple TTS to audio
presentation with TTS and motion picture dubbing with TTS.
0.3.2 Audio coding tools
0.3.2.1 Overview
Audio coding tools are designed for the transmission and decoding of recorded music and other audio
soundtracks.
0.3.2.2 General audio coding tools
MPEG-4 standardizes the coding of natural audio at bitrates ranging from 6 kbit/s up to several hundred kbit/s
per audio channel for mono, two-channel stereo, and multi-channel signals. General high-quality
compression is provided by incorporating the MPEG-2 AAC standard (ISO/IEC 13818-7), with certain
improvements, as MPEG-4 AAC. At 64 kbit/s/channel and higher ranges, this coder has been found in
verification testing under rigorous conditions to meet the criterion of “indistinguishable quality” as defined by
the European Broadcasting Union.
General audio (GA) coding tools comprise the AAC tool set expanded by alternative quantization and coding
schemes (Twin-VQ and BSAC). The general audio coder uses a perceptual filterbank, a sophisticated
masking model, noise-shaping techniques, channel coupling, and noiseless coding and bit-allocation to
provide the maximum compression within the constraints of providing the highest possible quality.
Psychoacoustic coding standards developed by MPEG have represented the state-of-the-art in this
technology since MPEG-1 Audio; MPEG-4 General Audio coding continues this tradition.
For bitrates ranging from 6 kbit/s to 64 kbit/s per channel, the MPEG-4 standard provides extensions to the
GA coding tools that allow the content author to achieve the highest quality coding at the desired bitrate.
Furthermore, various bit rate scalability options are available within the GA coder. The low-bitrate techniques
and scalability modes provided within this tool set have also been verified in formal tests by MPEG.
The MPEG-4 low delay coding functionality provides the ability to extend the usage of generic low bitrate
audio coding to applications requiring a very low delay in the encoding / decoding chain (e.g. full-duplex real-
time communications). In contrast to traditional low delay coders based on speech coding technology, the
concept of this low delay coder is based on general perceptual audio coding and is thus suitable for a wide
range of audio signals. Specifically, it is derived from the proven architecture of MPEG-2/4 Advanced Audio
Coding (AAC) and all capabilities for coding of 2 (stereo) or more sound channels (multi-channel) are
available within the low delay coder. To enable coding of general audio signals with an algorithmic delay not
exceeding 20 ms at 48 kHz, it uses a frame length of 512 or 480 samples (compared to the 1024 or 960
samples used in standard MPEG-2/4 AAC). Also the size of the window used in the analysis and synthesis
filterbank is reduced by a factor of 2. No block switching is used, in order to avoid the “look-ahead” delay due to the
block switching decision. To reduce pre-echo artefacts in the case of transient signals, window shape
switching is provided instead. For non-transient portions of the signal a sine window is used, while a so-called
low overlap window is used for transient portions. Use of the bit reservoir is minimized in the encoder in order
to reach the desired target delay. As one extreme case, no bit reservoir is used at all.
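The delay figures quoted above follow from the frame lengths. As a rough check, one can count one frame of input buffering plus one frame of 50 % overlap in the synthesis filterbank; this is a common back-of-the-envelope model (ignoring any bit reservoir), not the normative delay accounting:

```python
def mdct_delay_ms(frame_len, sample_rate):
    """Rough algorithmic delay of an MDCT coder without look-ahead:
    one frame of buffering plus one frame of 50%-overlap synthesis."""
    return 2 * frame_len * 1000 / sample_rate

ld_480 = mdct_delay_ms(480, 48000)    # low delay frame: 20.0 ms
ld_512 = mdct_delay_ms(512, 48000)    # low delay frame: about 21.3 ms
aac_1024 = mdct_delay_ms(1024, 48000) # standard AAC, before look-ahead
```

Under this simple model the 480-sample frame lands exactly on the 20 ms target at 48 kHz, while standard AAC's 1024-sample frame alone already exceeds 40 ms.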
The MPEG-4 BSAC is used in combination with the AAC coding tools and replaces the noiseless coding of
the quantized spectral data and the scalefactors. The MPEG-4 BSAC provides fine grain scalability in steps of
1 kbit/s per audio channel, i.e. 2 kbit/s steps for a stereo signal. One base layer stream and many small
enhancement layer streams are used. To obtain fine step scalability, a bit-slicing scheme is applied to the
quantized spectral data. First the quantized spectral values are grouped into frequency bands. Each of these
groups contains the quantized spectral values in their binary representation. Then the bits of a group are
processed in slices according to their significance. Thus all most significant bits (MSB) of the quantized values
in a group are processed first. These bit-slices are then encoded using an arithmetic coding scheme to obtain
entropy coding with minimal redundancy. In order to implement fine grain scalability efficiently using MPEG-4
Systems tools, the fine grain audio data can be grouped into large-step layers and these large-step layers can
be further grouped by concatenating large-step layers from several sub-frames. Furthermore, the
configuration of the payload transmitted over an Elementary Stream (ES) can be changed dynamically (by
means of the MPEG-4 backchannel capability) depending on the environment, such as network traffic or user
interaction. This means that BSAC can allow for real-time adjustments to the quality of service. In addition to
fine grain scalability, it can improve the quality of an audio signal that is decoded from a stream transmitted
over an error-prone channel, such as a mobile communication network or Digital Audio Broadcasting (DAB)
channel.
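The MSB-first bit-slicing idea can be demonstrated with a toy example; the real BSAC additionally groups values into frequency bands and arithmetic-codes the slices, none of which is modelled here:

```python
def bit_slices(values, planes=4):
    """Slice non-negative magnitudes into MSB-first bit-planes
    (toy BSAC-style slicing, no band grouping, no entropy coding)."""
    return [[(v >> p) & 1 for v in values] for p in range(planes - 1, -1, -1)]

def reconstruct(slices, planes=4):
    """Rebuild values from however many top planes were received;
    missing low planes simply read as zero bits."""
    vals = [0] * len(slices[0])
    for i, plane in enumerate(slices):
        shift = planes - 1 - i
        vals = [v | (bit << shift) for v, bit in zip(vals, plane)]
    return vals

q = [13, 5, 9, 2]                 # toy quantized spectral magnitudes
s = bit_slices(q)
full = reconstruct(s)             # all 4 planes received: exact values
coarse = reconstruct(s[:2])       # only the top 2 planes: coarser values
```

Truncating the stream after any slice still yields a decodable, merely coarser signal, which is exactly the fine grain scalability property described above.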
MPEG-4 SBR (Spectral Band Replication) is a bandwidth extension tool used in combination with the AAC
general audio codec. When integrated into the MPEG AAC codec, it yields a significant performance
improvement, which can be used to lower the bitrate or improve the audio quality. This is achieved
by replicating the highband, i.e. the high frequency part of the spectrum. A small amount of data representing
a parametric description of the highband is encoded and used in the decoding process. This data rate is far
below the rate required when coding the highband with conventional AAC.
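The copy-then-shape idea behind bandwidth extension can be sketched in a highly simplified form; real SBR operates on QMF subband samples with a much richer parameter set, and everything below is an illustrative toy:

```python
def toy_sbr_decode(low_band, envelope):
    """Toy bandwidth extension: patch the decoded low-band magnitudes
    into the missing high band, then shape each patched bin with a
    transmitted envelope gain. Only illustrates the copy-then-shape
    idea behind SBR, not the normative algorithm."""
    patched = [m * g for m, g in zip(low_band, envelope)]
    return low_band + patched

low = [1.0, 0.8, 0.5, 0.2]        # decoded low-band magnitudes
gains = [0.5, 0.4, 0.2, 0.1]      # parametric envelope for the high band
spectrum = toy_sbr_decode(low, gains)
```

Because only the small envelope parameter set is transmitted for the high band, the core codec can spend nearly all of its bits on the perceptually dominant low band.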
0.3.2.3 Parametric audio coding tools
The parametric audio coding tool MPEG-4 HILN (Harmonic and Individual Lines plus Noise) codes non-
speech signals like music at bitrates of 4 kbit/s and higher using a parametric representation of the audio
signal. The basic idea of this technique is to decompose the input signal into audio objects which are
described by appropriate source models and represented by model parameters. Object models for sinusoids,
harmonic tones, and noise are utilized in the HILN coder. HILN allows independent change of speed and pitch
during decoding.
The Parametric Audio Coding tools combine very low bitrate coding of general audio signals with the
possibility of modifying the playback speed or pitch during decoding without the need for an effects processing
unit. In combination with the speech and audio coding tools in MPEG-4, improved overall coding efficiency is
expected for applications of object based coding allowing selection and/or switching between different coding
techniques.
This approach allows the introduction of a more advanced source model than just assuming a stationary signal
for the duration of a fr
...
