INTERNATIONAL STANDARD ISO/IEC 14496-16
First edition
2004-02-01

AMENDMENT 1
2006-01-15
Information technology — Coding of
audio-visual objects —
Part 16:
Animation Framework eXtension (AFX)
AMENDMENT 1: Morphing and textures
Technologies de l'information — Codage des objets audiovisuels —
Partie 16: Extension du cadre d'animation (AFX)
AMENDEMENT 1: Morphage et textures
Reference number
ISO/IEC 14496-16:2004/Amd.1:2006(E)
© ISO/IEC 2006
PDF disclaimer
This PDF file may contain embedded typefaces. In accordance with Adobe's licensing policy, this file may be printed or viewed but
shall not be edited unless the typefaces which are embedded are licensed to and installed on the computer performing the editing. In
downloading this file, parties accept therein the responsibility of not infringing Adobe's licensing policy. The ISO Central Secretariat
accepts no liability in this area.
Adobe is a trademark of Adobe Systems Incorporated.
Details of the software products used to create this PDF file can be found in the General Info relative to the file; the PDF-creation
parameters were optimized for printing. Every care has been taken to ensure that the file is suitable for use by ISO member bodies. In
the unlikely event that a problem relating to it is found, please inform the Central Secretariat at the address given below.
© ISO/IEC 2006
All rights reserved. Unless otherwise specified, no part of this publication may be reproduced or utilized in any form or by any means,
electronic or mechanical, including photocopying and microfilm, without permission in writing from either ISO at the address below or
ISO's member body in the country of the requester.
ISO copyright office
Case postale 56 • CH-1211 Geneva 20
Tel. + 41 22 749 01 11
Fax + 41 22 749 09 47
E-mail copyright@iso.org
Web www.iso.org
Published in Switzerland
Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are members of
ISO or IEC participate in the development of International Standards through technical committees
established by the respective organization to deal with particular fields of technical activity. ISO and IEC
technical committees collaborate in fields of mutual interest. Other international organizations, governmental
and non-governmental, in liaison with ISO and IEC, also take part in the work. In the field of information
technology, ISO and IEC have established a joint technical committee, ISO/IEC JTC 1.
International Standards are drafted in accordance with the rules given in the ISO/IEC Directives, Part 2.
The main task of the joint technical committee is to prepare International Standards. Draft International
Standards adopted by the joint technical committee are circulated to national bodies for voting. Publication as
an International Standard requires approval by at least 75 % of the national bodies casting a vote.
Attention is drawn to the possibility that some of the elements of this document may be the subject of patent
rights. ISO and IEC shall not be held responsible for identifying any or all such patent rights.
Amendment 1 to ISO/IEC 14496-16:2004 was prepared by Joint Technical Committee ISO/IEC JTC 1,
Information technology, Subcommittee SC 29, Coding of audio, picture, multimedia and hypermedia
information.
Information technology — Coding of audio-visual objects —
Part 16:
Animation Framework eXtension (AFX)
AMENDMENT 1: Morphing and textures
Add subclause 4.3.6 MorphSpace:
4.3.6 MorphSpace
4.3.6.1 Introduction
Morphing is essentially an interpolation technique used to create, from two objects, a series of intermediate objects that change continuously, in order to make a smooth transition from the source to the target. A straightforward extension of morphing between two elements – the source and the target – consists in considering a collection of possible targets and composing a virtual object configuration by weighting those targets. This collection represents a basis of the animation space, and animation is performed by simply updating the weight vector. The following node allows the representation of a mesh as a combination of a base shape and a collection of target geometries.
4.3.6.2 Node interface
MorphShape{ #%NDT=SF3DNode,SF2DNode
exposedField SFInt32 morphID
exposedField SFShapeNode baseShape
exposedField MFShapeNode targetShapes [ ]
exposedField MFFloat weights [ ]
}
4.3.6.3 Semantics
morphID – a unique identifier between 0 and 1023 that allows the morph to be addressed at animation run-time.
baseShape – a Shape node that represents the base mesh. The geometry field of the baseShape can be any geometry supported by ISO/IEC 14496 (e.g. IndexedFaceSet, IndexedLineSet, SolidRep).
targetShapes – a vector of Shape nodes representing the shapes of the target meshes. The tool used for defining the appearance and geometry of a target shape must be the same as the tool used for defining the appearance and geometry of the base shape (e.g. if the baseShape is defined by using IndexedFaceSet, all the target shapes must be defined by using IndexedFaceSet).
weights – a vector of floats of the same size as targetShapes. The morphed shape is obtained according to the following formula:
M = B + \sum_{i=1}^{n} (T_i - B) \cdot w_i      (AMD1-1)

with
M – the morphed shape,
B – the base shape,
T_i – target shape i,
w_i – the weight of T_i.
The morphing is performed for all the components of the Shape (Appearance and Geometry) that have different values in the base shape and the target shapes (e.g. if the base shape and the target shapes are defined by using IndexedFaceSet and the coord field contains different values in the base shape and in the target geometries, the coord component of the morphed shape is obtained by applying Equation (AMD1-1) to the coord field). Note that the size of the coord field must be the same for the base shape and the target shapes.
If the shapes (base and targets) are defined by using IndexedFaceSet, a typical decoder should support morphing of the following geometry components: coord, normal, color, texCoord.
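The following scene fragment is an illustrative sketch, not part of the normative text: it shows how a MorphShape node might combine a base triangle with two targets of identical topology; all coordinates and weights are hypothetical.

MorphShape {
  morphID 0
  baseShape Shape {
    geometry IndexedFaceSet {
      coord Coordinate { point [ 0 0 0, 1 0 0, 0 1 0 ] }
      coordIndex [ 0 1 2 -1 ]
    }
  }
  targetShapes [
    Shape {    # target 1: same topology, vertices displaced along Z
      geometry IndexedFaceSet {
        coord Coordinate { point [ 0 0 1, 1 0 1, 0 1 1 ] }
        coordIndex [ 0 1 2 -1 ]
      }
    }
    Shape {    # target 2: same topology, scaled in the XY plane
      geometry IndexedFaceSet {
        coord Coordinate { point [ 0 0 0, 2 0 0, 0 2 0 ] }
        coordIndex [ 0 1 2 -1 ]
      }
    }
  ]
  weights [ 0.3 0.5 ]    # per Equation (AMD1-1): M = B + 0.3(T_1 - B) + 0.5(T_2 - B)
}

Animation is then obtained by routing new values to the weights field at run-time.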
Add subclause 4.5.4 Depth Image-based Representation Version 2:
4.5.4 Depth Image-based Representation Version 2
4.5.4.1 Introduction
Version 1 of DIBR introduced depth image-based representations (DIBR) of still and animated 3D objects.
Instead of a complex polygonal mesh, which is hard to construct and handle for realistic models, image- or
point-based methods represent a 3D object (scene) as a set of reference images completely covering its
visible surface. This data is usually accompanied by some kind of information about the object geometry. To
that end, each reference image comes with a corresponding depth map, an array of distances from the pixels
in the image plane to the object surface. Rendering is achieved by either forward warping or splat rendering.
With Version 1 of the DIBR node specification, however, no high-quality rendering can be achieved.
Version 2 nodes allow for high-quality rendering of depth image-based representations. High-quality rendering is based on the notion of point-sampled surfaces as non-uniformly sampled signals. Point-sampled surfaces can be easily constructed from the DIBR nodes by projecting the pixels with depth into 3D space. The discrete signals are rendered by reconstructing and band-limiting a continuous signal in image space using so-called resampling filters.
A point-based surface consists of a set of non-uniformly distributed samples of a surface; hence we interpret it as a non-uniformly sampled signal. To continuously reconstruct this signal, we have to associate a 2D reconstruction kernel r_k(u) with each sample point p_k. The kernels are defined in a local tangent frame with coordinates u = (u, v) at the point p_k, as illustrated on the left in Figure AMD1-1. The tangent frame is defined by the splat and normal extensions of the DIBR structures Version 2 [1].
Figure AMD1-1 — Local tangent planes and reconstruction kernels
4.5.4.2 DepthImageV2 Node
4.5.4.2.1 Node interface
DepthImageV2 { #%NDT=SF3DNode
exposedField SFVec3f position 0 0 10
exposedField SFRotation orientation 0 0 1 0
exposedField SFVec2f fieldOfView π/4 π/4
exposedField SFFloat nearPlane 10
exposedField SFFloat farPlane 100
field SFVec2f splatMinMax 0.1115 0.9875
exposedField SFBool orthographic TRUE
field SFDepthTextureNode diTexture NULL
}
4.5.4.2.2 Functionality and semantics
The DepthImageV2 node defines a single IBR texture. When multiple DepthImage nodes are related to each
other, they are processed as a group, and thus, should be placed under the same Transform node.
The diTexture field specifies the texture with depth, which shall be mapped into the region defined in the
DepthImageV2 node. It shall be one of the various types of depth image texture (SimpleTextureV2 or
PointTextureV2).
The position and orientation fields specify the relative location of the viewpoint of the IBR texture in the local
coordinate system. position is relative to the coordinate system’s origin (0, 0, 0), while orientation specifies a
rotation relative to the default orientation. In the default position and orientation, the viewer is on the Z-axis
looking down the –Z-axis toward the origin with +X to the right and +Y straight up. However, the
transformation hierarchy affects the final position and orientation of the viewpoint.
The fieldOfView field specifies a viewing angle from the camera viewpoint defined by position and
orientation fields. The first value denotes the angle to the horizontal side and the second value denotes the
angle to the vertical side. The default values are 45 degrees in radians. However, when orthographic field is
set to TRUE, the fieldOfView field denotes the width and height of the near plane and far plane.
The nearPlane and farPlane fields specify the distances from the viewpoint to the near plane and far plane of the visibility area. The texture and depth data show the area enclosed by the near plane, the far plane and the fieldOfView. The depth data are scaled to the distance from nearPlane to farPlane.
The splatMinMax field specifies the minimum and maximum splat vector lengths. The splatU and splatV data
of SimpleTextureV2 is scaled to the interval defined by the splatMinMax field.
The orthographic field specifies the view type of the IBR texture. When set to TRUE, the IBR texture is based
on orthographic view. Otherwise, the IBR texture is based on perspective view.
The position, orientation, fieldOfView, nearPlane, farPlane, and orthographic fields are of exposedField type and carry the extrinsic camera parameters. The DepthImageV2 node thus supports camera movement and a changeable view frustum corresponding to movement or deformation of a DIBR object.
Reference images that are suited to the characteristics of a DIBR model are obtained in the modeling stage. Therefore, the fields that reflect the camera movement and the changeable view frustum, together with the reference images from the modeling stage, are used to create a view frustum and a DIBR object in the rendering stage.
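For illustration only (not normative; the file names are hypothetical), a perspective DepthImageV2 could be instantiated as follows, using a SimpleTextureV2 (see 4.5.4.3) as its diTexture:

DepthImageV2 {
  position 0 0 10
  orientation 0 0 1 0
  fieldOfView 0.7854 0.7854      # 45 degrees, expressed in radians
  nearPlane 10
  farPlane 100
  splatMinMax 0.1115 0.9875
  orthographic FALSE             # perspective view
  diTexture SimpleTextureV2 {
    texture ImageTexture { url "color.png" }    # per-pixel color
    depth ImageTexture { url "depth.png" }      # gray scale depth map
  }
}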
4.5.4.3 SimpleTextureV2 node
4.5.4.3.1 Node interface
SimpleTextureV2 { #%NDT=SFDepthTextureNode
field SFTextureNode texture NULL
field SFTextureNode depth NULL
field SFTextureNode normal NULL
field SFTextureNode splatU NULL
field SFTextureNode splatV NULL
}
4.5.4.3.2 Functionality and semantics
The SimpleTextureV2 node defines a single layer of IBR texture.
The texture field specifies the flat image that contains color for each pixel. It shall be one of the various types
of texture nodes (ImageTexture, MovieTexture or PixelTexture).
The depth field specifies the depth for each pixel in the texture field. The size of the depth map shall be the
same size as the image or movie in the texture field. Depth field shall be one of the various types of texture
nodes (ImageTexture, MovieTexture or PixelTexture), where only the nodes representing gray scale images
are allowed. If the depth field is unspecified, the alpha channel in the texture field shall be used as the depth
map. If the depth map is not specified through depth field or alpha channel, the result is undefined.
The depth field allows the actual distance of the 3D points of the model to the plane which passes through the viewpoint parallel to the near plane and far plane to be computed:
dist = nearPlane + \left( 1 - \frac{d - 1}{d_{max} - 1} \right) \cdot (farPlane - nearPlane)      (AMD1-2)

where d is the depth value and d_max is the maximum allowed depth value. It is assumed that for the points of the model d > 0, where d = 1 corresponds to the far plane and d = d_max corresponds to the near plane.
This formula is valid for both the perspective and the orthographic case, since d is the distance between the point and the plane. d_max is the largest d value that can be represented by the bits used for each pixel:
(1) If the depth is specified through the depth field, then the depth value d equals the gray scale value.
(2) If the depth is specified through the alpha channel in the image defined via the texture field, then the depth value d is equal to the alpha channel value.
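As a worked example (values chosen here for illustration, not taken from the standard): with 8-bit depth data, d_max = 255; a pixel with d = 128, nearPlane = 10 and farPlane = 100 gives

dist = 10 + \left( 1 - \frac{128 - 1}{255 - 1} \right) \cdot (100 - 10) = 10 + 0.5 \cdot 90 = 55,

i.e. a point halfway between the near and far planes.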
The depth value is also used to indicate which points belong to the model: only the points for which d is nonzero belong to the model.
For an animated DepthImage-based model, only DepthImage nodes with SimpleTextures as diTextures are used. Each of the SimpleTextures can be animated in one of the following ways (a sketch of the first way follows the list):
(1) the depth field is a still image satisfying the above condition, and the texture field is an arbitrary MovieTexture;
(2) the depth field is an arbitrary MovieTexture satisfying the above condition on the depth field, and the texture field is a still image;
(3) both depth and texture are MovieTextures, and the depth field satisfies the above condition;
(4) the depth field is not used, and the depth information is retrieved from the alpha channel of the MovieTexture that animates the texture field.
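An illustrative sketch of way (1), with hypothetical file names: the geometry stays fixed in a still depth map while the color stream animates.

SimpleTextureV2 {
  texture MovieTexture { url "color_anim.mpg" loop TRUE }    # animated color stream
  depth ImageTexture { url "depth_still.png" }               # still gray scale depth map
}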
The normal field specifies the normal vector for each pixel in the texture field. The normal vector should be
assigned to the object-space point sample derived from extruding the pixel with depth to 3-space. The normal
map shall be the same size as the image or movie in the texture field. Normal field shall be one of the
various types of texture nodes (ImageTexture, MovieTexture, or PixelTexture), where only the nodes
representing color images are allowed. If the normal map is not specified through the normal field, the decoder
can calculate a normal field by evaluating the cross-product of the splatU and splatV fields. If neither the
normal map nor the splatU and splatV fields are specified, only basic rendering is possible.
The splatU and splatV fields specify the tangent plane and reconstruction kernel needed for high-quality
point-based rendering. Both splatU and splatV fields have to be scaled to the interval defined by the
splatMinMax field.
The splatU field specifies the splatU vector for each pixel in the texture field. The splatU vector should be assigned to the object-space point sample derived from extruding the pixel with depth to 3-space. The splatU map shall be the same size as the image or movie in the texture field. The splatU field shall be one of the various types of texture nodes (ImageTexture, MovieTexture, or PixelTexture), where nodes representing either color or gray scale images are allowed. If the splatU map is specified as a gray scale image, the decoder can calculate a circular splat by using the normal map to produce a tangent plane and the splatU map as the radius. In this case, if the normal map is not specified, the result is undefined. If the splatU map is specified as a color image, it can be used in conjunction with the splatV map to calculate a tangent frame and reconstruction kernel for high-quality point-based rendering. If neither the normal map nor the splatV map is specified, the result is undefined.
The splatV field specifies the splatV vector for each pixel in the texture field. The splatV vector should be
assigned to the object-space point sample derived from extruding the pixel with depth to 3-space. The splatV
map shall be the same size as the image or movie in the texture field. splatV field shall be one of the various
types of texture nodes (ImageTexture, MovieTexture, or PixelTexture), where only the nodes representing
color images are allowed. If the splatU map is not specified as well, the result is undefined.
4.5.4.4 PointTextureV2 node
4.5.4.4.1 Node interface
PointTextureV2 { #%NDT=SFDepthTextureNode
field SFInt32 width 256
field SFInt32 height 256
field SFInt32 depthNbBits 7
field MFInt32 depth []
field MFColor color []
field SFNormalNode normal
field MFVec3f splatU []
field MFVec3f splatV []
}
4.5.4.4.2 Functionality and semantics
The PointTextureV2 node defines multiple layers of IBR points.
The width and height fields specify the width and height of the texture.
The geometrical meaning of the depth values, and all the conventions on their interpretation adopted for SimpleTexture, apply here as well.
The depth field specifies multiple depths for each point in the projection plane, which is assumed to be the farPlane (see above), in the order of traversal, which starts from the point in the lower left corner and traverses to the right to finish the horizontal line before moving to the upper line. For each point, the number of depths (pixels) is first stored, and that number of depth values shall follow.
The color field specifies the color of the current pixel. The order shall be the same as for the depth field, except that the number of depths (pixels) for each point is not included.
The depthNbBits field specifies the number of bits used for the original depth data. The value of depthNbBits ranges from 0 to 31, and the actual number of bits used in the original data is depthNbBits+1. The d_max used in the distance equation is derived as follows:

d_{max} = 2^{(depthNbBits + 1)} - 1      (AMD1-3)
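For example (arithmetic illustration only): the default depthNbBits of 7 corresponds to 8-bit depth data, so

d_{max} = 2^{7+1} - 1 = 255,

which is the value used in the worked example after Equation (AMD1-2).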
The normal field specifies normals for each specified depth of each point in the projection plane in the same
order. The normal vector should be assigned to the object-space point sample derived from extruding the pixel
with depth to 3-space. If the normals are not specified through the normal field, the decoder can calculate a
normal field by evaluating the cross-product of the splatU and splatV fields. If neither the normals nor the
splatU and splatV fields are specified, only basic point rendering is possible. Normals can be quantized by
using the SFNormalNode functionality.
The splatU field specifies splatU vectors for each specified depth of each point in the projection plane in the same order. The splatU vector should be assigned to the object-space point sample derived from extruding the pixel with depth to 3-space. If the splatV vectors are not specified, the decoder can calculate a circular splat by using the normals to produce a tangent plane and the length of the splatU vectors as the radius. In this case, if the normals are not specified, the result is undefined. If the splatU vectors are specified, they can be used in conjunction with the splatV vectors to calculate a tangent frame and reconstruction kernel for high-quality point-based rendering. If neither the normals nor the splatV vectors are specified, the result is undefined.
The splatV field specifies splatV vectors for each specified depth of each point in the projection plane in the
same order. The splatV vector should be assigned to the object-space point sample derived from extruding
the pixel with depth to 3-space. If the splatU vectors are not specified as well, the result is undefined.
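An illustrative 2×1 PointTextureV2 (all values invented for this sketch) showing the layout of the depth and color fields: the first point carries one depth sample and the second carries two, so each point's sample count precedes its depth values, while color lists only the samples themselves.

PointTextureV2 {
  width 2
  height 1
  depthNbBits 7             # 8-bit depth data, d_max = 255
  depth [ 1 200,            # point (0,0): 1 sample, at depth 200
          2 90 180 ]        # point (1,0): 2 samples, at depths 90 and 180
  color [ 1 0 0,            # color of the single sample of point (0,0)
          0 1 0, 0 0 1 ]    # colors of the two samples of point (1,0)
}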
Add subclause 4.5.5 Multitexturing:
4.5.5 Multitexturing
4.5.5.1 MultiTexture Node
4.5.5.1.1 Node interface
MultiTexture { #%NDT=SFTextureNode
exposedField SFFloat alpha 1 #[0,1]
exposedField SFColor color 1 1 1 #[0,1]
exposedField MFInt32 function []
exposedField MFInt32 mode []
exposedField MFInt32 source []
exposedField MFTextureNode texture []
exposedField MFVec3f cameraVector []
exposedField SFBool transparent FALSE
}
4.5.5.1.2 Functionality and semantics
MultiTexture enables the application of several individual textures to a 3D object to achieve a more complex
visual effect. MultiTexture can be used as a value for the texture field in an Appearance node.
The texture field contains a list of texture nodes (e.g., ImageTexture, PixelTexture, MovieTexture). The
texture field may not contain another MultiTexture node.
The cameraVector field contains a list of camera vectors in 3D, one for each texture in the texture field. These vectors point from each associated camera to the 3D scene center. For each render cycle, the view vectors are used to calculate texture weights according to the unstructured lumigraph approach from [1], so that all textures are weighted according to the actual scene viewpoint.
The color and alpha fields define base RGB and alpha values for SELECT mode operations.
The mode field controls the type of blending operation. The available modes include MODULATE for a lit Appearance, REPLACE for an unlit Appearance, and several variations of the two. However, for view-dependent multitexturing the default mode MODULATE shall be used in conjunction with the source field value "FACTOR".
Table AMD1-1 lists possible multitexture modes.
Table AMD1-1 — Multitexture modes

VALUE  MODE                           Description
00000  MODULATE                       Multiply texture color with current color: Arg1 × Arg2
00001  REPLACE                        Replace current color: Arg2
00010  MODULATE2X                     Multiply the components of the arguments, and shift the products to the left 1 bit (effectively multiplying them by 2) for brightening.
00011  MODULATE4X                     Multiply the components of the arguments, and shift the products to the left 2 bits (effectively multiplying them by 4) for brightening.
00100  ADD                            Add the components of the arguments: Arg1 + Arg2
00101  ADDSIGNED                      Add the components of the arguments with a −0.5 bias, making the effective range of values from −0.5 through 0.5.
00110  ADDSIGNED2X                    Add the components of the arguments with a −0.5 bias, and shift the products to the left 1 bit.
00111  SUBTRACT                       Subtract the components of the second argument from those of the first argument: Arg1 − Arg2
01000  ADDSMOOTH                      Add the first and second arguments, then subtract their product from the sum: Arg1 + Arg2 − Arg1 × Arg2 = Arg1 + (1 − Arg1) × Arg2
01001  BLENDDIFFUSEALPHA              Linearly blend this texture stage, using the interpolated alpha from each vertex: Arg1 × (Alpha) + Arg2 × (1 − Alpha)
01010  BLENDTEXTUREALPHA              Linearly blend this texture stage, using the alpha from this stage's texture: Arg1 × (Alpha) + Arg2 × (1 − Alpha)
01011  BLENDFACTORALPHA               Linearly blend this texture stage, using the alpha factor from the MultiTexture node: Arg1 × (Alpha) + Arg2 × (1 − Alpha)
01100  BLENDCURRENTALPHA              Linearly blend this texture stage, using the alpha taken from the previous texture stage: Arg1 × (Alpha) + Arg2 × (1 − Alpha)
01101  MODULATEALPHA_ADDCOLOR         Modulate the color of the second argument, using the alpha of the first argument; then add the result to argument one: Arg1.RGB + Arg1.A × Arg2.RGB
01110  MODULATEINVALPHA_ADDCOLOR      Similar to MODULATEALPHA_ADDCOLOR, but use the inverse of the alpha of the first argument: (1 − Arg1.A) × Arg2.RGB + Arg1.RGB
01111  MODULATEINVCOLOR_ADDALPHA      Similar to MODULATECOLOR_ADDALPHA, but use the inverse of the color of the first argument: (1 − Arg1.RGB) × Arg2.RGB + Arg1.A
10000  OFF                            Turn off the texture unit.
10001  SELECTARG1                     Use color argument 1: Arg1
10010  SELECTARG2                     Use color argument 2: Arg2
10011  DOTPRODUCT3                    Modulate the components of each argument (as signed components), add their products, then replicate the sum to all color channels, including alpha. This can do either diffuse or specular bump mapping with correct input. Performs the function (Arg1.R × Arg2.R + Arg1.G × Arg2.G + Arg1.B × Arg2.B), where each component has been scaled and offset to make it signed. The result is replicated into all four (including alpha) channels.
10100 – 11111                         Reserved for future use
The source field determines the color source for the second argument. Table AMD1-2 lists valid values for the source field. For view-dependent multitexturing, "FACTOR" shall be used in conjunction with the mode field value MODULATE.
Table AMD1-2 — Values for the source field

VALUE  MODE          Description
000    "" (default)  The second argument color (ARG2) is the color from the previous rendering stage (DIFFUSE for the first stage).
001    "DIFFUSE"     The texture argument is the diffuse color interpolated from vertex components during Gouraud shading.
010    "SPECULAR"    The texture argument is the specular color interpolated from vertex components during Gouraud shading.
011    "FACTOR"      The texture argument is the factor (color, alpha) from the MultiTexture node.
100 – 111            Reserved for future use
The function field defines an optional function to be applied to the argument after the mode has been
evaluated. Table AMD1-3 lists valid values for the function field.
Table AMD1-3 — Values for the function field

VALUE  MODE               Description
000    "" (default)       No function is applied.
001    "COMPLEMENT"       Invert the argument so that, if the result of the argument were referred to by the variable x, the value would be 1.0 minus x.
010    "ALPHAREPLICATE"   Replicate the alpha information to all color channels before the operation completes.
011 – 111                 Reserved for future use
Mode may contain an additional Blending mode for the alpha channel; e.g., "MODULATE,REPLACE"
specifies Color = (Arg1.color × Arg2.color, Arg1.alpha).
The number of used texture stages is determined by the length of the texture field. If there are fewer mode
values, the default mode is "MODULATE".
Note: Due to the texture stage architecture and its processing order of textures in common graphics cards, the result of general texture weighting depends on the order of the textures if more than two textures are used. If order-independent texture mapping is required, the settings proposed above can be used, i.e. MODULATE for the mode field and "FACTOR" for the source field; a sketch of such a configuration follows.
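The following fragment is an illustrative sketch, not normative text (the URLs and camera vectors are hypothetical): a view-dependent, order-independent configuration using MODULATE (mode value 0) with source "FACTOR" (value 3) for both stages, and one camera vector per texture.

MultiTexture {
  mode [ 0 0 ]                        # MODULATE for both stages
  source [ 3 3 ]                      # "FACTOR": weight each stage by the view-dependent factor
  texture [
    ImageTexture { url "view_left.jpg" }
    ImageTexture { url "view_right.jpg" }
  ]
  cameraVector [ -1 0 -1, 1 0 -1 ]    # from each capture camera to the scene center
}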
Besides the MultiTexture node, which assigns the actual 2D images to the scene and contains the blending modes and transform parameters, the second component of multitexturing is the MultiTextureCoordinate node. This node addresses the relative 2D coordinates of each texture, which are combined with the 3D points of the scene geometry. In multitexturing, one 3D point is associated with n 2D texture points, with n being the number of views. The node syntax for MultiTextureCoordinate in X3D is as follows and can be used as is.
4.5.5.2 MultiTextureCoordinate Node
MultiTextureCoordinate node supplies multiple texture coordinates per vertex. This node can be used to set
the texture coordinates for the different texture channels.
4.5.5.2.1 Node interface
MultiTextureCoordinate { #%NDT=SFTextureCoordinateNode
exposedField MFTextureCoordinateNode texCoord []
}
4.5.5.2.2 Functionality and semantics
Each entry in the texCoord field may contain a TextureCoordinate or TextureCoordinateGenerator node.
By default, if using MultiTexture with an IndexedFaceSet without a MultiTextureCoordinate texCoord node,
texture coordinates for channel 0 are replicated along the other channels. Likewise, if there are too few entries
in the texCoord field, the last entry is replicated.
Example:

Shape {
  appearance Appearance {
    texture MultiTexture {
      mode [ 0 0 0 0 ]
      source [ 3 3 3 3 ]
      texture [
        ImageTexture { url "np00.jpg" }
        ImageTexture { url "np01.jpg" }
        ImageTexture { url "np02.jpg" }
        ImageTexture { url "np03.jpg" }
      ]
    }
  }
  geometry IndexedFaceSet {
    .
    texCoord MultiTextureCoordinate {
      texCoord [
        TextureCoordinate { . }
        TextureCoordinate { . }
        TextureCoordinate { . }
        ...