From 36d22d82aa202bb199967e9512281e9a53db42c9 Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Although the extension contains ANGLE in its name, it may be exposed by any implementation, whether or not the implementation uses the ANGLE library.

How does ANGLE_instanced_arrays interact with OES_vertex_array_object? RESOLVED: When the ANGLE_instanced_arrays and OES_vertex_array_object extensions are both enabled, attribute divisors are tracked by vertex array objects like any other vertex array state.
Notes: Indices referenced by drawArraysInstancedANGLE and drawElementsInstancedANGLE are validated similarly to how indices referenced by drawArrays and drawElements are validated, according to the section Enabled Vertex Attributes and Range Checking of the WebGL specification.
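The divisor and instanced-draw behavior described above can be sketched as follows. This is a minimal sketch, not from the spec: the function name, the attribute locations, and the counts are illustrative, and `gl` is assumed to be a WebGL 1 context whose program has a per-vertex `position` attribute and a per-instance `offset` attribute.

```javascript
// Sketch: draw many instances of one piece of geometry, advancing the
// hypothetical `offset` attribute once per instance instead of per vertex.
function drawInstanced(gl, positionLoc, offsetLoc, vertexCount, instanceCount) {
  var ext = gl.getExtension('ANGLE_instanced_arrays');
  if (!ext) return false; // extension unsupported

  // Divisor 1: the attribute advances once per instance.
  ext.vertexAttribDivisorANGLE(offsetLoc, 1);
  // Divisor 0 (the default): the attribute advances once per vertex.
  ext.vertexAttribDivisorANGLE(positionLoc, 0);

  ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, vertexCount, instanceCount);
  return true;
}
```

With OES_vertex_array_object also enabled, the divisors set here would be captured by the currently bound vertex array object.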
The blendEquation and blendEquationSeparate entry points are extended to accept MIN_EXT and MAX_EXT:

    var ext = gl.getExtension('EXT_blend_minmax');
    gl.blendEquation(ext.MAX_EXT);
    gl.getParameter(gl.BLEND_EQUATION) == ext.MAX_EXT;  // returns ext.MAX_EXT
The following floating-point internal formats become color-renderable: R16F, RG16F, RGBA16F, R32F, RG32F, RGBA32F and R11F_G11F_B10F. Renderbuffers with these internal formats can be created. A renderbuffer or a texture with a color-renderable internal format can be used as a rendering target by attaching it to a framebuffer object as a color attachment.

RGB16F is not color-renderable in this extension. This is a difference in functionality compared to the EXT_color_buffer_half_float extension.

The format and type combination RGBA and FLOAT becomes valid for reading from a floating-point color buffer.
clearColor and blendColor are not clamped when applied to buffers with these internal formats.

RGBA and UNSIGNED_BYTE cannot be used for reading from a floating-point color buffer.

RGB16F is not color-renderable in this extension.
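The render-target behavior above can be sketched as a WebGL 2 helper. This is a minimal sketch under stated assumptions: the function name is illustrative, `gl` is assumed to be a WebGL 2 context, and completeness is still checked because support for individual formats can vary.

```javascript
// Sketch: create a framebuffer backed by a floating-point renderbuffer,
// which is only color-renderable once EXT_color_buffer_float is enabled.
function createFloatFramebuffer(gl, width, height) {
  if (!gl.getExtension('EXT_color_buffer_float')) return null;

  var rb = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.RGBA32F, width, height);

  var fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                             gl.RENDERBUFFER, rb);

  // Applications must still check completeness before rendering.
  if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
    return null;
  }
  return fb;
}
```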
All references to R16F and RG16F types are ignored. WebGL implementations supporting this extension are required to support rendering to the RGBA16F format.

The 16-bit floating-point types RGB16F and RGBA16F become available as color-renderable formats. Renderbuffers can be created in these formats. These, and textures created with type = HALF_FLOAT_OES (which will have one of these internal formats), can be attached to framebuffer object color attachments for rendering. Implementations supporting this extension are required to support rendering to the RGBA16F format. Applications must check framebuffer completeness to determine if RGB16F is supported.
NOTE: The fragment shader outputs gl_FragColor and gl_FragData[0] will only be clamped and converted when the color buffer is fixed-point; blendColor() and clearColor() will no longer clamp their parameter values on input. Clamping will be applied as necessary at draw time according to the type of color buffer in use.

The format and type combination RGBA and FLOAT becomes valid for reading from a floating-point rendering buffer. Note: RGBA and UNSIGNED_BYTE cannot be used for reading from a floating-point rendering buffer.
The component types of framebuffer object attachments can be queried.

In section 5.13.12 Reading back pixels, change the allowed format and type table to:

frame buffer type | format | type
---|---|---
normalized fixed-point | RGBA | UNSIGNED_BYTE
floating-point | RGBA | FLOAT

Change the paragraph beginning "If pixels is null ..." to:

If frame buffer type is not that indicated in the table for the format and type combination, an INVALID_OPERATION error is generated. If pixels is null ...
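The table above can be sketched as a readback helper. This is a minimal sketch, not from the spec: the function name is illustrative, and it assumes the currently bound framebuffer has a floating-point color attachment, so RGBA/FLOAT is the valid combination (RGBA/UNSIGNED_BYTE would generate an error per the table).

```javascript
// Sketch: read back an RGBA/FLOAT rectangle from a floating-point
// color buffer into a Float32Array (4 components per pixel).
function readFloatPixels(gl, width, height) {
  var pixels = new Float32Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.FLOAT, pixels);
  return pixels;
}
```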
beginQueryEXT: target accepts TIME_ELAPSED_EXT.

endQueryEXT: target accepts TIME_ELAPSED_EXT.

queryCounterEXT: target accepts TIMESTAMP_EXT.
getQueryEXT: target and pname accept the following combinations of parameters. The return type of this method depends on the parameter queried.

target | pname | returned type
---|---|---
TIME_ELAPSED_EXT | CURRENT_QUERY | WebGLQuery?
TIMESTAMP_EXT | CURRENT_QUERY | null
TIME_ELAPSED_EXT | QUERY_COUNTER_BITS_EXT | GLint
TIMESTAMP_EXT | QUERY_COUNTER_BITS_EXT | GLint
getQueryObjectEXT: pname accepts QUERY_RESULT_EXT or QUERY_RESULT_AVAILABLE_EXT.

pname | returned type
---|---
QUERY_RESULT_EXT | GLuint64EXT
QUERY_RESULT_AVAILABLE_EXT | boolean
getParameter: pname accepts TIMESTAMP_EXT or GPU_DISJOINT_EXT.

pname | returned type
---|---
TIMESTAMP_EXT | GLuint64EXT
GPU_DISJOINT_EXT | boolean
Can getQueryObjectEXT be exposed in its current form according to ECMAScript semantics? ECMAScript's de-facto concurrency model is "shared nothing" communicating event loops. Is it acceptable for sequential calls to getQueryObjectEXT to return different answers? Note that Date.now() advances during script execution, so this may be fine; but if concerns are raised, then the API may need to be redesigned to use callbacks.
// Example (1) -- uses beginQueryEXT/endQueryEXT.
var ext = gl.getExtension('EXT_disjoint_timer_query');
var query = ext.createQueryEXT();
ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);

// Draw object
gl.drawElements(...);

ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

// ...at some point in the future, after returning control to the browser and being called again:
var available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);

if (available && !disjoint) {
  // See how much time the rendering of the object took in nanoseconds.
  var timeElapsed = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);

  // Do something useful with the time. Note that care should be
  // taken to use all significant bits of the result, not just the
  // least significant 32 bits.
  adjustObjectLODBasedOnDrawTime(timeElapsed);
}

//----------------------------------------------------------------------

// Example (2) -- same as the example above, but uses queryCounterEXT instead.
var ext = gl.getExtension('EXT_disjoint_timer_query');
var startQuery = ext.createQueryEXT();
var endQuery = ext.createQueryEXT();
ext.queryCounterEXT(startQuery, ext.TIMESTAMP_EXT);

// Draw object
gl.drawElements(...);

ext.queryCounterEXT(endQuery, ext.TIMESTAMP_EXT);

// ...at some point in the future, after returning control to the browser and being called again:
var available = ext.getQueryObjectEXT(endQuery, ext.QUERY_RESULT_AVAILABLE_EXT);
var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);

if (available && !disjoint) {
  // See how much time the rendering of the object took in nanoseconds.
  var timeStart = ext.getQueryObjectEXT(startQuery, ext.QUERY_RESULT_EXT);
  var timeEnd = ext.getQueryObjectEXT(endQuery, ext.QUERY_RESULT_EXT);

  // Do something useful with the time. Note that care should be
  // taken to use all significant bits of the result, not just the
  // least significant 32 bits.
  adjustObjectLODBasedOnDrawTime(timeEnd - timeStart);
}

//----------------------------------------------------------------------

// Example (3) -- check the number of timestamp bits to determine how to best
// measure elapsed time.
var ext = gl.getExtension('EXT_disjoint_timer_query');
var timeElapsedQuery;
var startQuery;
var endQuery;

var useTimestamps = false;

if (ext.getQueryEXT(ext.TIMESTAMP_EXT, ext.QUERY_COUNTER_BITS_EXT) > 0) {
  useTimestamps = true;
}

// Clear the disjoint state before starting to work with queries to increase
// the chances that the results will be valid.
gl.getParameter(ext.GPU_DISJOINT_EXT);

if (useTimestamps) {
  startQuery = ext.createQueryEXT();
  endQuery = ext.createQueryEXT();
  ext.queryCounterEXT(startQuery, ext.TIMESTAMP_EXT);
} else {
  timeElapsedQuery = ext.createQueryEXT();
  ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, timeElapsedQuery);
}

// Draw object
gl.drawElements(...);

if (useTimestamps) {
  ext.queryCounterEXT(endQuery, ext.TIMESTAMP_EXT);
} else {
  ext.endQueryEXT(ext.TIME_ELAPSED_EXT);
}

// ...at some point in the future, after returning control to the browser and being called again:
var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
if (disjoint) {
  // Have to redo all of the measurements.
} else {
  var available;
  if (useTimestamps) {
    available = ext.getQueryObjectEXT(endQuery, ext.QUERY_RESULT_AVAILABLE_EXT);
  } else {
    available = ext.getQueryObjectEXT(timeElapsedQuery, ext.QUERY_RESULT_AVAILABLE_EXT);
  }

  if (available) {
    var timeElapsed;
    if (useTimestamps) {
      // See how much time the rendering of the object took in nanoseconds.
      var timeStart = ext.getQueryObjectEXT(startQuery, ext.QUERY_RESULT_EXT);
      var timeEnd = ext.getQueryObjectEXT(endQuery, ext.QUERY_RESULT_EXT);
      timeElapsed = timeEnd - timeStart;
    } else {
      timeElapsed = ext.getQueryObjectEXT(timeElapsedQuery, ext.QUERY_RESULT_EXT);
    }

    // Do something useful with the time. Note that care should be
    // taken to use all significant bits of the result, not just the
    // least significant 32 bits.
    adjustObjectLODBasedOnDrawTime(timeElapsed);
  }
}
beginQuery: target accepts TIME_ELAPSED_EXT.

endQuery: target accepts TIME_ELAPSED_EXT.

queryCounterEXT: target accepts TIMESTAMP_EXT.
getQuery: target and pname accept the following combinations of parameters. The return type of this method now depends on the parameter queried.

target | pname | returned type
---|---|---
TIME_ELAPSED_EXT | CURRENT_QUERY | WebGLQuery?
TIMESTAMP_EXT | CURRENT_QUERY | null
TIME_ELAPSED_EXT | QUERY_COUNTER_BITS_EXT | GLint
TIMESTAMP_EXT | QUERY_COUNTER_BITS_EXT | GLint
getParameter: pname accepts TIMESTAMP_EXT or GPU_DISJOINT_EXT.

pname | returned type
---|---
TIMESTAMP_EXT | GLuint64EXT
GPU_DISJOINT_EXT | boolean
// Example (1) -- uses beginQuery/endQuery.
var ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
var query = gl.createQuery();
gl.beginQuery(ext.TIME_ELAPSED_EXT, query);

// Draw object
gl.drawElements(...);

gl.endQuery(ext.TIME_ELAPSED_EXT);

// ...at some point in the future, after returning control to the browser and being called again:
var available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);

if (available && !disjoint) {
  // See how much time the rendering of the object took in nanoseconds.
  var timeElapsed = gl.getQueryParameter(query, gl.QUERY_RESULT);

  // Do something useful with the time. Note that care should be
  // taken to use all significant bits of the result, not just the
  // least significant 32 bits.
  adjustObjectLODBasedOnDrawTime(timeElapsed);
}

//----------------------------------------------------------------------

// Example (2) -- same as the example above, but uses queryCounterEXT instead.
var ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
var startQuery = gl.createQuery();
var endQuery = gl.createQuery();
ext.queryCounterEXT(startQuery, ext.TIMESTAMP_EXT);

// Draw object
gl.drawElements(...);

ext.queryCounterEXT(endQuery, ext.TIMESTAMP_EXT);

// ...at some point in the future, after returning control to the browser and being called again:
var available = gl.getQueryParameter(endQuery, gl.QUERY_RESULT_AVAILABLE);
var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);

if (available && !disjoint) {
  // See how much time the rendering of the object took in nanoseconds.
  var timeStart = gl.getQueryParameter(startQuery, gl.QUERY_RESULT);
  var timeEnd = gl.getQueryParameter(endQuery, gl.QUERY_RESULT);

  // Do something useful with the time. Note that care should be
  // taken to use all significant bits of the result, not just the
  // least significant 32 bits.
  adjustObjectLODBasedOnDrawTime(timeEnd - timeStart);
}

//----------------------------------------------------------------------

// Example (3) -- check the number of timestamp bits to determine how to best
// measure elapsed time.
var ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
var timeElapsedQuery;
var startQuery;
var endQuery;

var useTimestamps = false;

if (gl.getQuery(ext.TIMESTAMP_EXT, ext.QUERY_COUNTER_BITS_EXT) > 0) {
  useTimestamps = true;
}

// Clear the disjoint state before starting to work with queries to increase
// the chances that the results will be valid.
gl.getParameter(ext.GPU_DISJOINT_EXT);

if (useTimestamps) {
  startQuery = gl.createQuery();
  endQuery = gl.createQuery();
  ext.queryCounterEXT(startQuery, ext.TIMESTAMP_EXT);
} else {
  timeElapsedQuery = gl.createQuery();
  gl.beginQuery(ext.TIME_ELAPSED_EXT, timeElapsedQuery);
}

// Draw object
gl.drawElements(...);

if (useTimestamps) {
  ext.queryCounterEXT(endQuery, ext.TIMESTAMP_EXT);
} else {
  gl.endQuery(ext.TIME_ELAPSED_EXT);
}

// ...at some point in the future, after returning control to the browser and being called again:
var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
if (disjoint) {
  // Have to redo all of the measurements.
} else {
  var available;
  if (useTimestamps) {
    available = gl.getQueryParameter(endQuery, gl.QUERY_RESULT_AVAILABLE);
  } else {
    available = gl.getQueryParameter(timeElapsedQuery, gl.QUERY_RESULT_AVAILABLE);
  }

  if (available) {
    var timeElapsed;
    if (useTimestamps) {
      // See how much time the rendering of the object took in nanoseconds.
      var timeStart = gl.getQueryParameter(startQuery, gl.QUERY_RESULT);
      var timeEnd = gl.getQueryParameter(endQuery, gl.QUERY_RESULT);
      timeElapsed = timeEnd - timeStart;
    } else {
      timeElapsed = gl.getQueryParameter(timeElapsedQuery, gl.QUERY_RESULT);
    }

    // Do something useful with the time. Note that care should be
    // taken to use all significant bits of the result, not just the
    // least significant 32 bits.
    adjustObjectLODBasedOnDrawTime(timeElapsed);
  }
}
An INVALID_OPERATION error will no longer be raised by drawArrays or drawElements when blending is enabled and the draw buffer has 32-bit floating-point components. Note that in order to create such a draw buffer, the EXT_color_buffer_float extension must be enabled.

    #extension GL_EXT_frag_depth : enable
    void main() {
      gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
      gl_FragDepthEXT = 0.5;
    }
format and internalformat parameters: SRGB_EXT and SRGB_ALPHA_EXT

format parameter: SRGB_EXT and SRGB_ALPHA_EXT

internalformat parameter: SRGB8_ALPHA8_EXT

pname parameter: FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING_EXT

    var ext = gl.getExtension('EXT_sRGB');
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, ext.SRGB_EXT, 256, 256, 0, ext.SRGB_EXT, gl.UNSIGNED_BYTE, data);
    #extension GL_EXT_shader_texture_lod : enable
    #extension GL_OES_standard_derivatives : enable

    precision mediump float;
    uniform sampler2D myTexture;
    varying vec2 texcoord;

    void main() {
      // avoids artifacts when wrapping texture coordinates
      gl_FragColor = texture2DGradEXT(myTexture, mod(texcoord, vec2(0.1, 0.5)),
                                      dFdx(texcoord), dFdy(texcoord));
    }
This extension exposes the compressed texture formats defined in the EXT_texture_compression_bptc OpenGL ES extension to WebGL. Consult that extension specification for behavioral definitions, including error behaviors.

Sampling from textures in the COMPRESSED_SRGB_ALPHA_BPTC_UNORM_EXT format performs a color space conversion as specified for SRGB textures in the EXT_sRGB OpenGL ES extension.
COMPRESSED_RGBA_BPTC_UNORM_EXT, COMPRESSED_SRGB_ALPHA_BPTC_UNORM_EXT, COMPRESSED_RGB_BPTC_SIGNED_FLOAT_EXT, and COMPRESSED_RGB_BPTC_UNSIGNED_FLOAT_EXT may be passed to the compressedTexImage2D and compressedTexSubImage2D entry points. getParameter with the argument COMPRESSED_TEXTURE_FORMATS will include the formats from this specification.

If the internalformat is one of the BPTC internal formats from this specification, the byteLength of the ArrayBufferView, pixels, passed to compressedTexImage2D or compressedTexSubImage2D must be equal to the following number of bytes:

ceil(width / 4) * ceil(height / 4) * 16

If it is not, an INVALID_VALUE error is generated.
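The byteLength rule above can be sketched as a validation helper. This is a minimal sketch, not from the spec: the function names are illustrative, and the upload function assumes `gl` is a WebGL context and `ext` the WEBGL_compressed_texture extension object exposing the BPTC tokens.

```javascript
// Sketch: BPTC uses 4x4-texel blocks of 16 bytes each.
function bptcByteLength(width, height) {
  return Math.ceil(width / 4) * Math.ceil(height / 4) * 16;
}

// Validate data size before uploading, mirroring the INVALID_VALUE rule.
function uploadBptc(gl, ext, width, height, data) {
  if (data.byteLength !== bptcByteLength(width, height)) {
    throw new Error('wrong byteLength for BPTC upload');
  }
  gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGBA_BPTC_UNORM_EXT,
                          width, height, 0, data);
}
```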
internalformat parameter: COMPRESSED_RGBA_BPTC_UNORM_EXT, COMPRESSED_SRGB_ALPHA_BPTC_UNORM_EXT, COMPRESSED_RGB_BPTC_SIGNED_FLOAT_EXT, COMPRESSED_RGB_BPTC_UNSIGNED_FLOAT_EXT

internalformat parameter: COMPRESSED_RGBA_BPTC_UNORM_EXT, COMPRESSED_SRGB_ALPHA_BPTC_UNORM_EXT, COMPRESSED_RGB_BPTC_SIGNED_FLOAT_EXT, COMPRESSED_RGB_BPTC_UNSIGNED_FLOAT_EXT

INVALID_VALUE is generated by compressedTexImage2D and compressedTexSubImage2D if the internalformat parameter is one of the BPTC internal formats from this extension and the byteLength of the ArrayBufferView is not:

ceil(width / 4) * ceil(height / 4) * 16
This extension exposes the compressed texture formats defined in the EXT_texture_compression_rgtc OpenGL extension to WebGL. Consult that extension specification for behavioral definitions, including error behaviors.

Updates of partial tiles detailed in the "Implementation Note" section of the EXT_texture_compression_rgtc specification must be supported in an implementation of this WebGL extension.
COMPRESSED_RED_RGTC1_EXT, COMPRESSED_SIGNED_RED_RGTC1_EXT, COMPRESSED_RED_GREEN_RGTC2_EXT, and COMPRESSED_SIGNED_RED_GREEN_RGTC2_EXT may be passed to the compressedTexImage2D and compressedTexSubImage2D entry points. getParameter with the argument COMPRESSED_TEXTURE_FORMATS will include the formats from this specification.

The following format-specific restrictions must be enforced:

For COMPRESSED_RED_RGTC1_EXT and COMPRESSED_SIGNED_RED_RGTC1_EXT, the byteLength of the ArrayBufferView, pixels, passed to compressedTexImage2D or compressedTexSubImage2D must be equal to the following number of bytes:

ceil(width / 4) * ceil(height / 4) * 8

If it is not, an INVALID_VALUE error is generated.

For COMPRESSED_RED_GREEN_RGTC2_EXT and COMPRESSED_SIGNED_RED_GREEN_RGTC2_EXT, the byteLength of the ArrayBufferView, pixels, passed to compressedTexImage2D or compressedTexSubImage2D must be equal to the following number of bytes:

ceil(width / 4) * ceil(height / 4) * 16

If it is not, an INVALID_VALUE error is generated.
internalformat parameter: COMPRESSED_RED_RGTC1_EXT, COMPRESSED_SIGNED_RED_RGTC1_EXT, COMPRESSED_RED_GREEN_RGTC2_EXT, COMPRESSED_SIGNED_RED_GREEN_RGTC2_EXT

internalformat parameter: COMPRESSED_RED_RGTC1_EXT, COMPRESSED_SIGNED_RED_RGTC1_EXT, COMPRESSED_RED_GREEN_RGTC2_EXT, COMPRESSED_SIGNED_RED_GREEN_RGTC2_EXT

INVALID_VALUE is generated by compressedTexImage2D and compressedTexSubImage2D if the internalformat parameter is COMPRESSED_RED_RGTC1_EXT or COMPRESSED_SIGNED_RED_RGTC1_EXT and the byteLength of the ArrayBufferView is not:

ceil(width / 4) * ceil(height / 4) * 8

INVALID_VALUE is generated by compressedTexImage2D and compressedTexSubImage2D if the internalformat parameter is COMPRESSED_RED_GREEN_RGTC2_EXT or COMPRESSED_SIGNED_RED_GREEN_RGTC2_EXT and the byteLength of the ArrayBufferView is not:

ceil(width / 4) * ceil(height / 4) * 16
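The two RGTC size rules differ only in the bytes per 4x4 block (8 for the one-channel RGTC1 formats, 16 for the two-channel RGTC2 formats), which can be sketched as one helper. The function name is illustrative, not from the spec.

```javascript
// Sketch: expected byteLength for an RGTC upload.
// RGTC1 blocks are 8 bytes; RGTC2 blocks are 16 bytes; both cover 4x4 texels.
function rgtcByteLength(width, height, isRgtc2) {
  var blockBytes = isRgtc2 ? 16 : 8;
  return Math.ceil(width / 4) * Math.ceil(height / 4) * blockBytes;
}
```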
The getTexParameter, texParameterf and texParameteri entry points' pname parameter accepts the value TEXTURE_MAX_ANISOTROPY_EXT. The getParameter entry point's pname parameter accepts the value MAX_TEXTURE_MAX_ANISOTROPY_EXT, returning a value of type float.
    var canvas = document.createElement("canvas");
    var gl = canvas.getContext("webgl");
    var ext = gl.getExtension('KHR_parallel_shader_compile');
    if (ext) {
      // Just for demo of API usage. Generally it's not needed unless you really
      // want to override the implementation-specific maximum.
      var threads = gl.getParameter(ext.MAX_SHADER_COMPILER_THREADS_KHR);
      ext.maxShaderCompilerThreadsKHR(Math.max(2, threads));
    }

    var vSource = "attribute vec2 position; void main() { gl_Position = vec4(position, 0, 1); }";
    var fSource = "precision mediump float; void main() { gl_FragColor = vec4(1,0,0,1); }";

    var vShader = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(vShader, vSource);
    gl.compileShader(vShader);

    var fShader = gl.createShader(gl.FRAGMENT_SHADER);
    gl.shaderSource(fShader, fSource);
    gl.compileShader(fShader);

    var program = gl.createProgram();
    gl.attachShader(program, vShader);
    gl.attachShader(program, fShader);
    gl.linkProgram(program);

    function checkToUseProgram() {
      if (gl.getProgramParameter(program, gl.LINK_STATUS)) {
        gl.useProgram(program);
      } else {
        // error check.
      }
    }

    if (ext) {
      function checkCompletion() {
        if (gl.getProgramParameter(program, ext.COMPLETION_STATUS_KHR)) {
          checkToUseProgram();
        } else {
          requestAnimationFrame(checkCompletion);
        }
      }
      requestAnimationFrame(checkCompletion);
    } else {
      checkToUseProgram();
    }
The drawElements entry point's type parameter accepts the value UNSIGNED_INT.
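With OES_element_index_uint enabled, index buffers may be supplied as Uint32Array and drawn with UNSIGNED_INT, lifting the 65535 limit of 16-bit indices. This is a minimal sketch, not from the spec; the function name is illustrative and `gl` is assumed to be a WebGL 1 context with a program and attributes already set up.

```javascript
// Sketch: draw using 32-bit indices, which may exceed 65535.
function drawWithUintIndices(gl, indices) {
  if (!gl.getExtension('OES_element_index_uint')) return false;

  var buf = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, buf);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint32Array(indices), gl.STATIC_DRAW);
  gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_INT, 0);
  return true;
}
```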
    var extension = gl.getExtension('OES_fbo_render_mipmap');
    if (extension !== null) {
      var texture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texture);
      var fbos = [];

      for (var level = 0; level < 7; level++) {
        var size = 128 / Math.pow(2, level);
        gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        var fbo = gl.createFramebuffer();
        gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
        gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, level);
        fbos.push(fbo);

        var fboStatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
        console.assert(fboStatus == gl.FRAMEBUFFER_COMPLETE, 'Framebuffer is not complete');
      }

      gl.bindTexture(gl.TEXTURE_2D, null);
      gl.bindFramebuffer(gl.FRAMEBUFFER, null);

      console.assert(gl.getError() == gl.NO_ERROR, 'A GL error occurred');
    }
The hint entry point accepts FRAGMENT_SHADER_DERIVATIVE_HINT_OES as a target, and the getParameter entry point accepts it as a pname.
The derivative built-in functions use a genType argument and return type in their function declarations.

FLOAT textures can be used as FBO attachments. The texImage2D and texSubImage2D entry points taking ArrayBufferView are extended to accept Float32Array with the pixel type FLOAT. The texImage2D and texSubImage2D entry points taking ImageData, HTMLImageElement, HTMLCanvasElement and HTMLVideoElement are extended to accept the pixel type FLOAT.

HALF_FLOAT textures can be used as FBO attachments. The texImage2D and texSubImage2D entry points taking ArrayBufferView are extended to accept Uint16Array with the pixel type HALF_FLOAT_OES. The texImage2D and texSubImage2D entry points taking ImageData, HTMLImageElement, HTMLCanvasElement and HTMLVideoElement are extended to accept the pixel type HALF_FLOAT_OES. texImage2D and texSubImage2D behave as in the OES_texture_float spec.

The OES_vertex_array_object spec does not make it clear what happens to buffers that are deleted when they are referenced by vertex array objects. It is inferred that all buffers are reference counted.
Before OES_vertex_array_object there was no way to use a deleted buffer in a single context, as the spec states it would be unbound from all bind points. After OES_vertex_array_object it is now possible to use deleted buffers.

Furthermore, the OpenGL ES 2.0 spec specifies that using a deleted buffer has undefined results, including possibly corrupt rendering and generating GL errors. Undefined behavior is unacceptable for WebGL.

RESOLVED: Buffers should be reference counted when attached to a vertex array object. This is consistent with the OpenGL ES 3.0 spec's implementation of Vertex Array Objects and matches the behavior of other WebGL objects, such as textures that are attached to framebuffers.

This will require that most implementations do not call glDeleteBuffer when the user calls deleteBuffer on the WebGL context. Instead the implementation must wait for all references to be released before calling glDeleteBuffer, to prevent undefined behavior.

If a buffer object is deleted while it is attached to the currently bound vertex array object, then it is as if BindBuffer had been called, with a buffer of 0, for each target to which this buffer was attached in the currently bound vertex array object. In other words, this buffer is first detached from all attachment points in the currently bound vertex array object. Note that the buffer is specifically not detached from any other vertex array object. Detaching the buffer from any other vertex array objects is the responsibility of the application.
Adds support for rendering to 32-bit floating-point color buffers.

The 32-bit floating-point type RGBA32F becomes available as a color-renderable format. Renderbuffers can be created in this format. These, and textures created with format = RGBA and type = FLOAT as specified in OES_texture_float, can be attached to framebuffer object color attachments for rendering.

The 32-bit floating-point type RGB32F may also optionally become available as a color-renderable format. These, and textures created with format = RGB and type = FLOAT as specified in OES_texture_float, can be attached to framebuffer object color attachments for rendering. Applications must check framebuffer completeness to determine if an implementation actually supports this format.
NOTE: The fragment shader outputs gl_FragColor and gl_FragData[0] will only be clamped and converted when the color buffer is fixed-point; blendColor() and clearColor() will no longer clamp their parameter values on input. Clamping will be applied as necessary at draw time according to the type of color buffer in use.

The format and type combination RGBA and FLOAT becomes valid for reading from a floating-point rendering buffer. Note: RGBA and UNSIGNED_BYTE cannot be used for reading from a floating-point rendering buffer.

The component types of framebuffer object attachments can be queried.
RGBA32F_EXT is accepted as the internalformat parameter of renderbufferStorage().

The new tokens and the behavioral changes for floating-point color buffers specified in EXT_color_buffer_half_float are incorporated into WebGL, except for the RGB16F and RGBA16F types. References to RGB16F are ignored, and references to RGBA16F are replaced by references to RGBA32F.
This extension exposes the compressed texture formats defined in the KHR_texture_compression_astc_hdr OpenGL ES extension to WebGL. Consult that extension specification for behavioral definitions, including error behaviors.

ASTC textures may be encoded using either high or low dynamic range, corresponding to an "HDR profile" and an "LDR profile". The compression format is designed to be extended, and for new profiles to be added in the future. For this reason, enabling the WebGL extension enables all of the profiles supported by the implementation. The supported profiles may be queried by calling getSupportedProfiles on the extension object.
COMPRESSED_RGBA_ASTC_4x4_KHR, COMPRESSED_RGBA_ASTC_5x4_KHR, COMPRESSED_RGBA_ASTC_5x5_KHR, COMPRESSED_RGBA_ASTC_6x5_KHR, COMPRESSED_RGBA_ASTC_6x6_KHR, COMPRESSED_RGBA_ASTC_8x5_KHR, COMPRESSED_RGBA_ASTC_8x6_KHR, COMPRESSED_RGBA_ASTC_8x8_KHR, COMPRESSED_RGBA_ASTC_10x5_KHR, COMPRESSED_RGBA_ASTC_10x6_KHR, COMPRESSED_RGBA_ASTC_10x8_KHR, COMPRESSED_RGBA_ASTC_10x10_KHR, COMPRESSED_RGBA_ASTC_12x10_KHR, COMPRESSED_RGBA_ASTC_12x12_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_5x4_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_5x5_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_6x5_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_6x6_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_8x5_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_8x6_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_8x8_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_10x5_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_10x6_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_10x8_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_10x10_KHR, COMPRESSED_SRGB8_ALPHA8_ASTC_12x10_KHR, and COMPRESSED_SRGB8_ALPHA8_ASTC_12x12_KHR may be passed to the compressedTexImage2D and compressedTexSubImage2D entry points. getParameter with the argument COMPRESSED_TEXTURE_FORMATS will include the formats from this specification.
+ The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 4) / 5) * floor((height + 3) / 4) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 4) / 5) * floor((height + 4) / 5) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 5) / 6) * floor((height + 4) / 5) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 5) / 6) * floor((height + 5) / 6) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 7) / 8) * floor((height + 4) / 5) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 7) / 8) * floor((height + 5) / 6) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 7) / 8) * floor((height + 7) / 8) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 9) / 10) * floor((height + 4) / 5) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 9) / 10) * floor((height + 5) / 6) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 9) / 10) * floor((height + 7) / 8) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 9) / 10) * floor((height + 9) / 10) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 11) / 12) * floor((height + 9) / 10) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+
+ floor((width + 11) / 12) * floor((height + 11) / 12) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
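Every restriction in this list is an instance of one rule: ceil-divide each dimension by the block footprint encoded in the format name, then multiply by 16 bytes per ASTC block. A minimal sketch of that computation (the helper name astcByteLength is illustrative, not part of the extension):

```javascript
// Expected byteLength of one ASTC-compressed mip level.
// blockW x blockH is the block footprint from the format name
// (e.g. COMPRESSED_RGBA_ASTC_8x6_KHR uses 8x6 blocks); every ASTC block is 16 bytes.
function astcByteLength(width, height, blockW, blockH) {
  return Math.floor((width + blockW - 1) / blockW) *
         Math.floor((height + blockH - 1) / blockH) * 16;
}
```

For an 8-pixel-wide block, Math.floor((width + blockW - 1) / blockW) is exactly the floor((width + 7) / 8) term used in the restrictions above.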
getSupportedProfiles
function is to allow easy reconstruction
+ of the underlying OpenGL or OpenGL ES extension strings for environments like Emscripten, by
+ prepending the string GL_KHR_texture_compression_astc_
to the returned profile
+ names.
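Reconstructing those extension strings is then a single map over the returned names (a sketch; it assumes the "ldr" and "hdr" profile names that getSupportedProfiles returns):

```javascript
// Rebuild the underlying GL extension strings from getSupportedProfiles()
// output, e.g. ["ldr", "hdr"] -> ["GL_KHR_texture_compression_astc_ldr", ...].
function astcExtensionStrings(profiles) {
  return profiles.map(function (profile) {
    return 'GL_KHR_texture_compression_astc_' + profile;
  });
}
```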
+ internalformat
parameter:
+ COMPRESSED_RGBA_ASTC_4x4_KHR
,
+ COMPRESSED_RGBA_ASTC_5x4_KHR
,
+ COMPRESSED_RGBA_ASTC_5x5_KHR
,
+ COMPRESSED_RGBA_ASTC_6x5_KHR
,
+ COMPRESSED_RGBA_ASTC_6x6_KHR
,
+ COMPRESSED_RGBA_ASTC_8x5_KHR
,
+ COMPRESSED_RGBA_ASTC_8x6_KHR
,
+ COMPRESSED_RGBA_ASTC_8x8_KHR
,
+ COMPRESSED_RGBA_ASTC_10x5_KHR
,
+ COMPRESSED_RGBA_ASTC_10x6_KHR
,
+ COMPRESSED_RGBA_ASTC_10x8_KHR
,
+ COMPRESSED_RGBA_ASTC_10x10_KHR
,
+ COMPRESSED_RGBA_ASTC_12x10_KHR
,
+ COMPRESSED_RGBA_ASTC_12x12_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_5x4_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_5x5_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_6x5_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_6x6_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_8x5_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_8x6_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_8x8_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_10x5_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_10x6_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_10x8_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_10x10_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_12x10_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_12x12_KHR
+ internalformat
parameter:
+ COMPRESSED_RGBA_ASTC_4x4_KHR
,
+ COMPRESSED_RGBA_ASTC_5x4_KHR
,
+ COMPRESSED_RGBA_ASTC_5x5_KHR
,
+ COMPRESSED_RGBA_ASTC_6x5_KHR
,
+ COMPRESSED_RGBA_ASTC_6x6_KHR
,
+ COMPRESSED_RGBA_ASTC_8x5_KHR
,
+ COMPRESSED_RGBA_ASTC_8x6_KHR
,
+ COMPRESSED_RGBA_ASTC_8x8_KHR
,
+ COMPRESSED_RGBA_ASTC_10x5_KHR
,
+ COMPRESSED_RGBA_ASTC_10x6_KHR
,
+ COMPRESSED_RGBA_ASTC_10x8_KHR
,
+ COMPRESSED_RGBA_ASTC_10x10_KHR
,
+ COMPRESSED_RGBA_ASTC_12x10_KHR
,
+ COMPRESSED_RGBA_ASTC_12x12_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_5x4_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_5x5_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_6x5_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_6x6_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_8x5_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_8x6_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_8x8_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_10x5_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_10x6_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_10x8_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_10x10_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_12x10_KHR
,
+ COMPRESSED_SRGB8_ALPHA8_ASTC_12x12_KHR
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_4x4_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_5x4_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_5x4_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 4) / 5) * floor((height + 3) / 4) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_5x5_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_5x5_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 4) / 5) * floor((height + 4) / 5) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_6x5_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_6x5_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 5) / 6) * floor((height + 4) / 5) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_6x6_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_6x6_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 5) / 6) * floor((height + 5) / 6) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_8x5_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_8x5_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 7) / 8) * floor((height + 4) / 5) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_8x6_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_8x6_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 7) / 8) * floor((height + 5) / 6) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_8x8_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_8x8_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 7) / 8) * floor((height + 7) / 8) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_10x5_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_10x5_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 9) / 10) * floor((height + 4) / 5) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_10x6_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_10x6_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 9) / 10) * floor((height + 5) / 6) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_10x8_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_10x8_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 9) / 10) * floor((height + 7) / 8) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_10x10_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_10x10_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 9) / 10) * floor((height + 9) / 10) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_12x10_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_12x10_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 11) / 12) * floor((height + 9) / 10) * 16
+
+ INVALID_VALUE
is generated by compressedTexImage2D
and compressedTexSubImage2D
+ if the internalformat
parameter is
+ COMPRESSED_RGBA_ASTC_12x12_KHR
or COMPRESSED_SRGB8_ALPHA8_ASTC_12x12_KHR
+ and the byteLength of the ArrayBufferView is not:
+
+ floor((width + 11) / 12) * floor((height + 11) / 12) * 16
+
+ + This extension exposes the compressed texture formats defined as core in the + + OpenGL ES 3.0 spec to WebGL. These include the ETC2 and EAC formats, where + ETC2 is a superset of ETC1. ETC1 textures can be loaded using the ETC2 token + value. All of these formats are in the ETC family. +
++ Browsers should not advertise this extension when the WebGL implementation, or + graphics driver, supports these formats by decompressing them. +
+COMPRESSED_R11_EAC
,
+ COMPRESSED_SIGNED_R11_EAC
,
+ COMPRESSED_RG11_EAC
,
+ COMPRESSED_SIGNED_RG11_EAC
,
+ COMPRESSED_RGB8_ETC2
,
+ COMPRESSED_SRGB8_ETC2
,
+ COMPRESSED_RGB8_PUNCHTHROUGH_ALPHA1_ETC2
,
+ COMPRESSED_SRGB8_PUNCHTHROUGH_ALPHA1_ETC2
,
+ COMPRESSED_RGBA8_ETC2_EAC
,
+ and COMPRESSED_SRGB8_ALPHA8_ETC2_EAC
may be passed to the
+ compressedTexImage2D
and compressedTexSubImage2D
entry points. In
+ WebGL 2.0, they may also be passed to the compressedTexImage3D
and
+ compressedTexSubImage3D
entry points with the TEXTURE_2D_ARRAY
+ target.
+ getParameter
with the argument COMPRESSED_TEXTURE_FORMATS
+ will include the formats from this specification.
+ validatedSize
(defined for each specific format
+ below) is validated in the following ways:
+ compressedTexImage*D
or
+ compressedTexSubImage*D
taking ArrayBufferView pixels
is
+ called, then the byteLength
of the view must be equal to
+ validatedSize
, or an INVALID_VALUE error is generated.
+ compressedTexImage*D
or
+ compressedTexSubImage*D
taking GLintptr offset
is called,
+ and offset + validatedSize
is greater than the size of the bound
+ PIXEL_UNPACK_BUFFER
, an INVALID_OPERATION
error is
+ generated.
+ validatedSize
is computed in the following way:
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 8
+
+ validatedSize
is computed in the following way:
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 16
+
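Both formulas divide the image into 4x4 blocks; the only difference is the bytes per block. A sketch of the computation (helper name illustrative):

```javascript
// validatedSize for one mip level of an ETC2/EAC texture.
// bytesPerBlock is 8 for R11_EAC, SIGNED_R11_EAC, RGB8_ETC2, SRGB8_ETC2 and
// the punchthrough-alpha formats; 16 for RG11_EAC, SIGNED_RG11_EAC,
// RGBA8_ETC2_EAC and SRGB8_ALPHA8_ETC2_EAC.
function etcValidatedSize(width, height, bytesPerBlock) {
  return Math.floor((width + 3) / 4) * Math.floor((height + 3) / 4) * bytesPerBlock;
}
```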
+ internalformat
parameter:
+ COMPRESSED_R11_EAC
,
+ COMPRESSED_SIGNED_R11_EAC
,
+ COMPRESSED_RG11_EAC
,
+ COMPRESSED_SIGNED_RG11_EAC
,
+ COMPRESSED_RGB8_ETC2
,
+ COMPRESSED_SRGB8_ETC2
,
+ COMPRESSED_RGB8_PUNCHTHROUGH_ALPHA1_ETC2
,
+ COMPRESSED_SRGB8_PUNCHTHROUGH_ALPHA1_ETC2
,
+ COMPRESSED_RGBA8_ETC2_EAC
or
+ COMPRESSED_SRGB8_ALPHA8_ETC2_EAC
+ internalformat
parameter:
+ COMPRESSED_R11_EAC
,
+ COMPRESSED_SIGNED_R11_EAC
,
+ COMPRESSED_RG11_EAC
,
+ COMPRESSED_SIGNED_RG11_EAC
,
+ COMPRESSED_RGB8_ETC2
,
+ COMPRESSED_SRGB8_ETC2
,
+ COMPRESSED_RGB8_PUNCHTHROUGH_ALPHA1_ETC2
,
+ COMPRESSED_SRGB8_PUNCHTHROUGH_ALPHA1_ETC2
,
+ COMPRESSED_RGBA8_ETC2_EAC
or
+ COMPRESSED_SRGB8_ALPHA8_ETC2_EAC
+ INVALID_VALUE
is generated by compressedTexImage2D
,
+ compressedTexSubImage2D
, compressedTexImage3D
, and
+ compressedTexSubImage3D
if the variant taking ArrayBufferView pixels
+ is called and the size restrictions above are not met.
+ INVALID_OPERATION
is generated by compressedTexImage2D
,
+ compressedTexSubImage2D
, compressedTexImage3D
, and
+ compressedTexSubImage3D
if the variant taking GLintptr offset
is
+ called and the size restrictions above are not met.
+ + This extension exposes the compressed texture format defined in the + + OES_compressed_ETC1_RGB8_texture OpenGL ES extension to WebGL. +
+COMPRESSED_RGB_ETC1_WEBGL
may be passed to
+ the compressedTexImage2D
entry point.
+
+ This format corresponds to the format defined in the OES_compressed_ETC1_RGB8_texture OpenGL ES
+ extension. Although the enum name is changed, the numeric value is the same. The correspondence
+ is given by this table:
WebGL format enum | OpenGL format enum | Numeric value
---|---|---
COMPRESSED_RGB_ETC1_WEBGL | ETC1_RGB8_OES | 0x8D64
getParameter
with the argument COMPRESSED_TEXTURE_FORMATS
+ will include the format from this specification.
+ The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
must be equal to the following number of bytes:
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 8
+
+ If it is not, an INVALID_VALUE
error is generated.
+ This extension exposes the compressed texture formats defined in the + + IMG_texture_compression_pvrtc OpenGL extension to WebGL. +
+COMPRESSED_RGB_PVRTC_4BPPV1_IMG
,
+ COMPRESSED_RGB_PVRTC_2BPPV1_IMG
, COMPRESSED_RGBA_PVRTC_4BPPV1_IMG
, and
+ COMPRESSED_RGBA_PVRTC_2BPPV1_IMG
may be passed to
+ the compressedTexImage2D
and compressedTexSubImage2D
entry points.
+ getParameter
with the argument COMPRESSED_TEXTURE_FORMATS
+ will include the 4 formats from this specification.
+ The following format-specific restrictions apply to all of the formats described + by this extension: +
+ +In compressedTexImage2D
, the width
and height
+ parameters must be powers of two. Otherwise, an INVALID_VALUE error is generated.
+
+ In compressedTexSubImage2D
, the width
and height
+ parameters must be equal to the current values of the existing texture image, and the
+ xoffset
and yoffset
parameters must be zero.
+ Otherwise, an INVALID_VALUE error is generated.
+
The following format-specific restrictions must also be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ either compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+ max(width, 8) * max(height, 8) / 2
+
+ If it is not, an INVALID_VALUE
error is generated.
The byteLength
of the ArrayBufferView, pixels
, passed to
+ either compressedTexImage2D
or compressedTexSubImage2D
must be
+ equal to the following number of bytes:
+ max(width, 16) * max(height, 8) / 4
+
+ If it is not, an INVALID_VALUE
error is generated.
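Unlike the block-count formulas used elsewhere in this document, PVRTC sizes are computed from padded pixel dimensions, which is why the power-of-two restriction above matters. A sketch of both variants (helper name illustrative):

```javascript
// Expected byteLength of a PVRTC image. 4bpp variants pad to at least 8x8
// pixels at half a byte per pixel; 2bpp variants pad to at least 16x8 pixels
// at a quarter byte per pixel.
function pvrtcByteLength(width, height, is2bpp) {
  return is2bpp
    ? Math.max(width, 16) * Math.max(height, 8) / 4
    : Math.max(width, 8) * Math.max(height, 8) / 2;
}
```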
+ This extension exposes the compressed texture formats defined in the + + EXT_texture_compression_s3tc OpenGL extension to WebGL. +
+COMPRESSED_RGB_S3TC_DXT1_EXT
,
+ COMPRESSED_RGBA_S3TC_DXT1_EXT
, COMPRESSED_RGBA_S3TC_DXT3_EXT
, and
+ COMPRESSED_RGBA_S3TC_DXT5_EXT
may be passed to
+ the compressedTexImage2D
and compressedTexSubImage2D
entry points.
+ getParameter
with the argument COMPRESSED_TEXTURE_FORMATS
+ will include the 4 formats from this specification.
+ The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ either compressedTexImage2D
or compressedTexSubImage2D
+ must match the following equation:
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 8
+
+
+ If it is not, an INVALID_VALUE
error is generated.
+
When level
equals zero, width
and height
+ must be a multiple of 4. When level
is greater than 0, width
+ and height
must be 0, 1, 2 or a multiple of 4.
+ If they are not, an INVALID_OPERATION
error is generated.
+
+ For compressedTexSubImage2D
xoffset
and
+ yoffset
must be a multiple of 4 and
+ width
must be a multiple of 4 or equal to the original
+ width of the level
. height
must be a multiple of 4 or
+ equal to the original height of the level
.
+ If they are not, an INVALID_OPERATION
error is generated.
+
The byteLength
of the ArrayBufferView, pixels
, passed to
+ either compressedTexImage2D
or compressedTexSubImage2D
must
+ match the following equation:
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 16
+
+
+ If it is not, an INVALID_VALUE
error is generated.
+
When level
equals zero, width
and height
+ must be a multiple of 4. When level
is greater than 0, width
+ and height
must be 0, 1, 2 or a multiple of 4.
+ If they are not, an INVALID_OPERATION
error is generated.
+
+ For compressedTexSubImage2D
xoffset
and
+ yoffset
must be a multiple of 4 and
+ width
must be a multiple of 4 or equal to the original
+ width of the level
. height
must be a multiple of 4 or
+ equal to the original height of the level
.
+ If they are not, an INVALID_OPERATION
error is generated.
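The two kinds of checks above (byte size and 4-pixel alignment) can be sketched as plain functions (names illustrative; DXT1 uses 8 bytes per 4x4 block, DXT3 and DXT5 use 16):

```javascript
// Expected byteLength of one mip level of an S3TC (DXT) texture.
function dxtByteLength(width, height, bytesPerBlock) {
  return Math.floor((width + 3) / 4) * Math.floor((height + 3) / 4) * bytesPerBlock;
}

// Dimension rule for compressedTexImage2D: at level 0 each dimension must be
// a multiple of 4; at higher levels 0, 1, 2 or a multiple of 4 are allowed.
function dxtDimensionsValid(level, width, height) {
  function ok(d) {
    return d % 4 === 0 || (level > 0 && (d === 0 || d === 1 || d === 2));
  }
  return ok(width) && ok(height);
}
```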
+
+ This extension exposes the sRGB compressed texture formats defined in the + + EXT_texture_sRGB OpenGL extension to WebGL. +
+COMPRESSED_SRGB_S3TC_DXT1_EXT
,
+ COMPRESSED_SRGB_ALPHA_S3TC_DXT1_EXT
, COMPRESSED_SRGB_ALPHA_S3TC_DXT3_EXT
, and
+ COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT
may be passed to
+ the compressedTexImage2D
and compressedTexSubImage2D
entry points.
+ getParameter
with the argument COMPRESSED_TEXTURE_FORMATS
+ will include the 4 formats from this specification.
+ The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ either compressedTexImage2D
or compressedTexSubImage2D
+ must match the following equation:
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 8
+
+
+ If it is not, an INVALID_VALUE
error is generated.
+
When level
equals zero, width
and height
+ must be a multiple of 4. When level
is greater than 0, width
+ and height
must be 0, 1, 2 or a multiple of 4.
+ If they are not, an INVALID_OPERATION
error is generated.
+
+ For compressedTexSubImage2D
xoffset
and
+ yoffset
must be a multiple of 4 and
+ width
must be a multiple of 4 or equal to the original
+ width of the level
. height
must be a multiple of 4 or
+ equal to the original height of the level
.
+ If they are not, an INVALID_OPERATION
error is generated.
+
The byteLength
of the ArrayBufferView, pixels
, passed to
+ either compressedTexImage2D
or compressedTexSubImage2D
must
+ match the following equation:
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 16
+
+
+ If it is not, an INVALID_VALUE
error is generated.
+
When level
equals zero, width
and height
+ must be a multiple of 4. When level
is greater than 0, width
+ and height
must be 0, 1, 2 or a multiple of 4.
+ If they are not, an INVALID_OPERATION
error is generated.
+
+ For compressedTexSubImage2D
xoffset
and
+ yoffset
must be a multiple of 4 and
+ width
must be a multiple of 4 or equal to the original
+ width of the level
. height
must be a multiple of 4 or
+ equal to the original height of the level
.
+ If they are not, an INVALID_OPERATION
error is generated.
+
WebGL implementations might mask the RENDERER
and VENDOR
strings of the underlying graphics driver for privacy reasons. This extension exposes new tokens to query this information in a guaranteed manner for debugging purposes.
UNMASKED_VENDOR_WEBGL
and UNMASKED_RENDERER_WEBGL
are accepted by pname
parameter in getParameter()
.
pname | returned type
---|---
UNMASKED_VENDOR_WEBGL | DOMString
UNMASKED_RENDERER_WEBGL | DOMString

pname | description
---|---
UNMASKED_VENDOR_WEBGL | Return the VENDOR string of the underlying graphics driver.
UNMASKED_RENDERER_WEBGL | Return the RENDERER string of the underlying graphics driver.
+ 1) What enum values should be used for UNMASKED_VENDOR_WEBGL and UNMASKED_RENDERER_WEBGL?
++ 2) Should this extension be made available on ordinary web pages?
+
+ WebGL uses the GLSL ES 2.0 spec on all platforms, and translates these shaders to the host platform's native language (HLSL, GLSL, and even GLSL ES). For debugging purpose, it is useful to be able to examine the shader after translation. This extension exposes a new function getTranslatedShaderSource
for such purposes.
+
compileShader()
has not been called, or the translation has failed for shader
, an empty string is returned; otherwise, return the translated source.
+ + 1) Should this extension be made available on ordinary web pages?
+This extension exposes the + ANGLE_depth_texture + functionality to WebGL. ANGLE_depth_texture provides a subset of the + functionality from the OpenGL ES 2.0 extensions + OES_depth_texture + and + OES_packed_depth_stencil, with certain restrictions added for portability reasons. Specifically:
+ +DEPTH24_STENCIL8_OES
renderbuffer internal format from the OES_packed_depth_stencil extension. The core WebGL specification already supports allocation of depth/stencil renderbuffers. Consult the Errors section below for specific restrictions. +
+ +texImage2D
entry point is extended to accept the
+ format
parameter DEPTH_COMPONENT
and
+ DEPTH_STENCIL
+ texImage2D
entry point is extended to accept the
+ internalFormat
parameter DEPTH_COMPONENT
+ and DEPTH_STENCIL
+ texImage2D
entry point is extended to accept
+ the type
parameter UNSIGNED_SHORT
,
+ UNSIGNED_INT
, and
+ UNSIGNED_INT_24_8_WEBGL
+ framebufferTexture2D
entry point is extended to
+ accept the target
parameter
+ DEPTH_ATTACHMENT
and
+ DEPTH_STENCIL_ATTACHMENT
+ texImage2D
entry point is extended to accept
+ ArrayBufferView
of type Uint16Array
and
+ Uint32Array
+ + The WebGL-specific constraints about Framebuffer Object Attachments are extended:
+ +DEPTH_ATTACHMENT
attachment point must be allocated with the DEPTH_COMPONENT
internal format. DEPTH_STENCIL_ATTACHMENT
attachment point must be allocated with the DEPTH_STENCIL
internal format. + In the WebGL API, it is an error to concurrently attach either + renderbuffers or textures to the following combinations of + attachment points:
+ +DEPTH_ATTACHMENT
+ DEPTH_STENCIL_ATTACHMENT
STENCIL_ATTACHMENT
+ DEPTH_STENCIL_ATTACHMENT
DEPTH_ATTACHMENT
+ STENCIL_ATTACHMENT
+ See the section + Framebuffer Object Attachments + in the WebGL specification for the behavior if these + constraints are violated. +
++ As per the OpenGL ES spec, there is no guarantee that the OpenGL ES implementation + will use the texture type to determine how to store the depth texture internally. + It may choose to downsample the 32-bit depth values to 16-bit or even 24-bit. + When a depth or depth/stencil texture is attached to a framebuffer object, calls to getParameter + with the DEPTH_BITS and STENCIL_BITS enums return the following: +
Texture Type | +DEPTH_BITS (GLint) | +STENCIL_BITS (GLint) | +
---|---|---|
UNSIGNED_SHORT | +>= 16 | +0 | +
UNSIGNED_INT | +>= 16 | +0 | +
UNSIGNED_INT_24_8_WEBGL | +>= 24 | +>= 8 | +
INVALID_OPERATION
is generated by
+ texImage2D
if the format
parameter is
+ DEPTH_COMPONENT
or DEPTH_STENCIL
and the
+ target
is
+ TEXTURE_CUBE_MAP_{POSITIVE,NEGATIVE}_{X,Y,Z}
.
+ INVALID_OPERATION
is generated by
+ texImage2D
if format
and
+ internalformat
are DEPTH_COMPONENT
and
+ type
is not UNSIGNED_SHORT
or
+ UNSIGNED_INT
.
+ INVALID_OPERATION
is generated by
+ texImage2D
if format
and
+ internalformat
are not DEPTH_COMPONENT
+ and type
is UNSIGNED_SHORT
or
+ UNSIGNED_INT
.
+ INVALID_OPERATION
is generated by
+ texImage2D
if format
and
+ internalformat
are DEPTH_STENCIL
and
+ type
is not UNSIGNED_INT_24_8_WEBGL
.
+ INVALID_OPERATION
is generated by
+ texImage2D
if format
and
+ internalformat
are not DEPTH_STENCIL
and
+ type
is UNSIGNED_INT_24_8_WEBGL
.
+ INVALID_OPERATION
is generated in the following situations:
+ texImage2D
is called with format
and
+ internalformat
of DEPTH_COMPONENT
or
+ DEPTH_STENCIL
and
+ target
is not TEXTURE_2D, data
is not NULL, or level
is not zero. texSubImage2D
is called with format
of
+ DEPTH_COMPONENT
or DEPTH_STENCIL
.
+ copyTexImage2D
is called with an
+ internalformat
that has a base internal format of
+ DEPTH_COMPONENT
or DEPTH_STENCIL
.
+ copyTexSubImage2D
is called with a target texture
+ that has a base internal format of DEPTH_COMPONENT
+ or DEPTH_STENCIL
.
+ generateMipmap
is called on a texture that has a
+ base internal format of DEPTH_COMPONENT
or
+ DEPTH_STENCIL
.
+
+ As per the ANGLE_depth_texture specification, when a depth
+ texture is sampled, the value is stored into the RED channel.
+ The contents of the GREEN, BLUE and ALPHA channels are
+ implementation dependent. It is therefore recommended to use
+ only the r
component of variables in GLSL shaders
+ that are used to reference depth textures.
+
MAX_COLOR_ATTACHMENTS_WEBGL
parameter must be greater than or
+ equal to that of the MAX_DRAW_BUFFERS_WEBGL
parameter.
+ RGBA
+ and type UNSIGNED_BYTE
, and DEPTH
or DEPTH_STENCIL
attachment checkFramebufferStatus
against this framebuffer must not return
+ FRAMEBUFFER_UNSUPPORTED
. (In other words, the implementation must support the
+ use of RGBA/UNSIGNED_BYTE
textures as color attachments, plus either a
+ DEPTH
or DEPTH_STENCIL
attachment.)
+ n
consecutive color attachments starting at COLOR_ATTACHMENT0_WEBGL,
+ where n
is between 1 and MAX_DRAW_BUFFERS_WEBGL
, must not return
+ FRAMEBUFFER_UNSUPPORTED
from a call to checkFramebufferStatus
. In
+ other words, if MAX_DRAW_BUFFERS_WEBGL
is 4, then the implementation is
+ required to support the following combinations of color attachments:
+
+ COLOR_ATTACHMENT0_WEBGL = RGBA/UNSIGNED_BYTE
COLOR_ATTACHMENT0_WEBGL = RGBA/UNSIGNED_BYTE
COLOR_ATTACHMENT1_WEBGL = RGBA/UNSIGNED_BYTE
COLOR_ATTACHMENT0_WEBGL = RGBA/UNSIGNED_BYTE
COLOR_ATTACHMENT1_WEBGL = RGBA/UNSIGNED_BYTE
COLOR_ATTACHMENT2_WEBGL = RGBA/UNSIGNED_BYTE
COLOR_ATTACHMENT0_WEBGL = RGBA/UNSIGNED_BYTE
COLOR_ATTACHMENT1_WEBGL = RGBA/UNSIGNED_BYTE
COLOR_ATTACHMENT2_WEBGL = RGBA/UNSIGNED_BYTE
COLOR_ATTACHMENT3_WEBGL = RGBA/UNSIGNED_BYTE
#extension GL_EXT_draw_buffers
directive, as shown in the sample code, to use
+ the extension in a shader.
+
+ Likewise the shading language preprocessor #define GL_EXT_draw_buffers
, will be defined to 1 if the extension is supported.
+ gl_MaxDrawBuffers
must match MAX_DRAW_BUFFERS_WEBGL
from the API if the extension is enabled in a WebGL context; otherwise, the value must be 1. Whether or not the extension is enabled with the #extension GL_EXT_draw_buffers
directive in a shader does not affect the value of gl_MaxDrawBuffers
. The value of gl_MaxDrawBuffers
is a constant in the shader, and is guaranteed to be frozen at program link time. It is implementation-dependent whether it is frozen at shader compile time. (A consequence is that if a program is linked, and later the WEBGL_draw_buffers extension is enabled, the value of gl_MaxDrawBuffers
seen by that program will still be 1.)
+ #extension GL_EXT_draw_buffers
directive to enable it, then writes to gl_FragColor
are only written to COLOR_ATTACHMENT0_WEBGL
, and not broadcast to all color attachments. In this scenario, other color attachments are guaranteed to remain untouched.
+ gl_FragColor
nor gl_FragData
, the values of
+ the fragment colors following shader execution are untouched.
+
+ If a fragment shader contains the #extension GL_EXT_draw_buffers
directive, all
+ gl_FragData
variables (from gl_FragData[0]
to gl_FragData[MAX_DRAW_BUFFERS_WEBGL - 1]
)
+ default to zero if no values are written to them during a shader execution.
+ checkFramebufferStatus
+ returns FRAMEBUFFER_UNSUPPORTED
. An image can be an individual mip level, or a face of cube map.
#extension GL_EXT_draw_buffers : require
precision mediump float;
void main() {
    gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);
    gl_FragData[1] = vec4(0.0, 1.0, 0.0, 1.0);
    gl_FragData[2] = vec4(0.0, 0.0, 1.0, 1.0);
    gl_FragData[3] = vec4(1.0, 1.0, 1.0, 1.0);
}
This extension exposes new functions which simulate losing and restoring the WebGL context, even on platforms where the context can never be lost. Consult the WebGL specification for documentation about the webglcontextlost
and webglcontextrestored
events.
When this extension is enabled: +
loseContext
and restoreContext
are allowed to generate INVALID_OPERATION errors even when the context is lost.+ Note that this extension is not disconnected from the WebGLRenderingContext if that + object loses its context as described in "The Context Lost Event" of the WebGL specification, either + through use of this API or via actual circumstances such as a system failure. +
+When this function is called and the context is not lost, simulate
+ losing the context so as to trigger the steps described in the WebGL
+ spec for handling context lost. The context will remain in the lost
+ state according to the WebGL specification until
+ restoreContext()
is called. If the context is already
+ lost when this function is called, generate an
+ INVALID_OPERATION
error.
Implementations should destroy the underlying graphics context and + all graphics resources when this method is called. This is the + recommended mechanism for applications to programmatically halt their + use of the WebGL API.
+ +loseContext()
,
+ generate an INVALID_OPERATION
error.
+ framebufferTextureMultiviewWEBGL
with a non-null texture
parameter that does not identify a 2D array texture generates an INVALID_OPERATION
error.
+ baseViewIndex
and numViews
can result in an error only if the texture
parameter is non-null.
+ baseViewIndex
is not the same for all framebuffer attachment points where the value of FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE
is not NONE
the framebuffer is considered incomplete. Calling getFramebufferStatus
for a framebuffer in this state returns FRAMEBUFFER_INCOMPLETE_VIEW_TARGETS_OVR
. Other rules for framebuffer completeness from the OVR_multiview specification also apply.
+ WebGLFramebuffer
objects that act as if they have multi-view attachments, but their attachments are not exposed as textures or renderbuffers and can not be changed. Opaque multiview framebuffers may have any combination of color, depth and stencil attachments.
+ framebufferRenderbuffer
, framebufferTexture2D
, framebufferTextureLayer
, framebufferTextureMultiviewWEBGL
, or any other call that could change framebuffer attachments with an opaque multiview framebuffer bound to target
generates an INVALID_OPERATION
error.
+ target
when calling getFramebufferAttachmentParameter
, then attachment
must be BACK
, DEPTH
, or STENCIL
.
+ target
when calling getFramebufferAttachmentParameter
, then pname
must not be FRAMEBUFFER_ATTACHMENT_OBJECT_NAME
.
+ FRAMEBUFFER_ATTACHMENT_TEXTURE_NUM_VIEWS_OVR
on an opaque multiview framebuffer attachment point that has attachments must return the number of views in the opaque multiview framebuffer.
+ FRAMEBUFFER_ATTACHMENT_TEXTURE_BASE_VIEW_INDEX_OVR
on an opaque multiview framebuffer must return 0.
+ FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE
on an opaque multiview framebuffer must return FRAMEBUFFER_DEFAULT
.
+ MAX_VIEWS_OVR
).
+ deleteFramebuffer
generates an INVALID_OPERATION
error.
+ #extension GL_OVR_multiview
directive, as shown in the sample code, to use
+ the extension in a shader.
+
+ Likewise the shading language preprocessor #define GL_OVR_multiview
, will be defined to 1 if the extension is supported.
+ gl_Position
can depend on ViewID in the vertex shader. With this change, view-dependent outputs like reflection vectors and similar are allowed.
+ gl_ViewID_OVR
will always evaluate to zero.
+ GL_OVR_multiview
with an extension directive, layout
is treated as a keyword rather than an identifier, and using a layout qualifier to specify num_views
is allowed. Other uses of layout qualifiers are not allowed in OpenGL ES shading language 1.00.
+ gl_ViewID_OVR
has the type int
as opposed to uint
.
+ clear
generates an INVALID_OPERATION
error.
+ INVALID_OPERATION
error.
+ GL_OVR_multiview
with an extension directive:
+ gl_ViewID_OVR
is a built-in input of the type uint.GL_OVR_multiview
is defined as 1.
+ pname
set to MAX_VIEWS_OVR
returns the maximum number of views. The implementation must support at least 2 views.
+ pname | returned type |
---|---|
MAX_VIEWS_OVR | GLint |
pname
parameter set to FRAMEBUFFER_ATTACHMENT_TEXTURE_NUM_VIEWS_OVR
returns the number of views of the framebuffer object attachment.
+ Calling with the pname
parameter set to FRAMEBUFFER_ATTACHMENT_TEXTURE_BASE_VIEW_INDEX_OVR
returns the base view index of the framebuffer object attachment.
+ pname | returned type |
---|---|
FRAMEBUFFER_ATTACHMENT_TEXTURE_NUM_VIEWS_OVR | GLsizei |
FRAMEBUFFER_ATTACHMENT_TEXTURE_BASE_VIEW_INDEX_OVR | GLint |
INVALID_OPERATION
is generated by calling framebufferTextureMultiviewWEBGL
with a texture
parameter that does not identify a 2D array texture.
+ INVALID_OPERATION
is generated by calling framebufferRenderbuffer
, framebufferTexture2D
, framebufferTextureLayer
, or framebufferTextureMultiviewWEBGL
with a target
parameter that identifies an opaque multiview framebuffer.
+ INVALID_OPERATION
is generated by calling deleteFramebuffer
with a buffer
parameter that identifies an opaque multiview framebuffer.
+ INVALID_ENUM
is generated by calling getFramebufferAttachmentParameter
with an attachment
parameter other than BACK
, DEPTH
or STENCIL
when the target
parameter identifies an opaque multiview framebuffer.
+ INVALID_ENUM
is generated by calling getFramebufferAttachmentParameter
with the pname
parameter set to FRAMEBUFFER_ATTACHMENT_OBJECT_NAME
when the target
parameter identifies an opaque multiview framebuffer.
+ INVALID_VALUE
is generated by calling framebufferTextureMultiviewWEBGL
with a non-null texture
in the following cases:
+ numViews
is less than onenumViews
is more than MAX_VIEWS_OVR
baseViewIndex
+ numViews
is larger than the value of MAX_ARRAY_TEXTURE_LAYERS
baseViewIndex
is negativeINVALID_FRAMEBUFFER_OPERATION
is generated by commands that read from the framebuffer such as BlitFramebuffer
, ReadPixels
, CopyTexImage*
, and CopyTexSubImage*
, if the number of views in the current read framebuffer is greater than one.
+ INVALID_OPERATION
is generated by attempting to draw if the active program declares a number of views and the number of views in the draw framebuffer does not match the number of views declared in the active program.
+ INVALID_OPERATION
is generated by attempting to draw if the number of views in the current draw framebuffer is greater than one and the active program does not declare a number of views.
+ INVALID_OPERATION
is generated by attempting to draw if the number of views in the current draw framebuffer is greater than one and transform feedback is active.
+ INVALID_OPERATION
is generated by attempting to draw or calling clear
if the number of views in the current draw framebuffer is greater than one and a timer query is active.
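The INVALID_VALUE conditions above can be sketched as a plain validation helper (the function name, argument order, and limit parameters are illustrative, not part of the extension; the real limits would come from getParameter queries):

```javascript
// Hypothetical helper mirroring the INVALID_VALUE rules listed above.
// maxViews corresponds to MAX_VIEWS_OVR and maxLayers to
// MAX_ARRAY_TEXTURE_LAYERS.
function checkMultiviewParams(numViews, baseViewIndex, maxViews, maxLayers) {
  if (numViews < 1) return 'INVALID_VALUE';                         // fewer than one view
  if (numViews > maxViews) return 'INVALID_VALUE';                  // more than MAX_VIEWS_OVR
  if (baseViewIndex < 0) return 'INVALID_VALUE';                    // negative base index
  if (baseViewIndex + numViews > maxLayers) return 'INVALID_VALUE'; // exceeds layer count
  return null; // parameters are acceptable
}
```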
var gl = document.createElement('canvas').getContext('webgl2');
var ext = gl.getExtension('WEBGL_multiview');
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fb);
var colorTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, 512, 512, 2);
ext.framebufferTextureMultiviewWEBGL(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, colorTex, 0, 0, 2);
var depthStencilTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, depthStencilTex);
gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.DEPTH32F_STENCIL8, 512, 512, 2);
ext.framebufferTextureMultiviewWEBGL(gl.DRAW_FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT, depthStencilTex, 0, 0, 2);
gl.drawElements(...); // the draw will be broadcast to the layers of colorTex and depthStencilTex.
var gl = document.createElement('canvas').getContext('webgl');
var ext = gl.getExtension('WEBGL_multiview');
// ... obtain opaque multiview framebuffer "fb" from another web API here ...
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.drawElements(...); // the draw will be broadcast to the views of the opaque multiview framebuffer.
// You cannot call framebufferTextureMultiviewWEBGL to change the attachments of "fb", only draw to it.
#version 300 es
#extension GL_OVR_multiview : require
precision mediump float;
layout (num_views = 2) in;
in vec4 inPos;
uniform mat4 u_viewMatrix0;
uniform mat4 u_viewMatrix1;
void main() {
  if (gl_ViewID_OVR == 0u) {
    gl_Position = u_viewMatrix0 * inPos;
  } else {
    gl_Position = u_viewMatrix1 * inPos;
  }
}
In the WebGL API 1.0 specification, section 4.2 Origin Restrictions restricts the following sources for texture upload:
Document
that contains the WebGLRenderingContext
's canvas element.This extension allows these sources for texture uploads, with some restrictions regarding their uploading and use.
+ +Motivation:
+This extension enables the processing of cross-origin resources in WebGL. Additionally, it defines a foundation of concepts that can be used in future extensions to process other types of security sensitive content, including arbitrary HTML content.
+For an example of security sensitive content, consider the rendering of an HTML link. The color of the link can indicate its visited or unvisited state. Third parties must not be able to access or infer this information.
+Specifically, third parties must not be able read the pixel data of security sensitive content through WebGL or other APIs. Additionally, third parties must not be able to divulge or approximate the pixel data of security sensitive content by timing WebGL operations.
+Prior to this extension, WebGL restricted the upload of security sensitive content as a texture for graphical processing. This extension enables the uploading and processing of security sensitive content, with some restrictions. Note that this extension imposes no restrictions on the processing of regular, non-security sensitive content.
+To secure a user’s privacy, a WebGL implementation must not leak information about the contents of security sensitive textures through the execution time of its commands. To achieve this, no part of the underlying graphics pipeline may vary in execution time based on the contents of a security sensitive texture. For example, primitive assembly and depth testing must not vary based on the contents of a security sensitive texture.
+The vertex shading and fragment shading stages of the graphics pipeline require particular restrictions to keep their execution time independent of the contents of security sensitive textures. Specifically, the contents of a security sensitive texture must only appear in constant-time GLSL operations. A constant-time GLSL operation is an operation whose execution time does not vary based on the values of its operands. This extension will describe how a WebGL implementation can enforce this requirement.
+Additionally, this extension attempts to identify non-constant-time GLSL operations. All other GLSL operations are assumed to be constant time in both the WebGL implementation and the underlying GPU implementation. If this assumption is false on a particular implementation, then this extension must be disabled for that implementation. In the future, GPU vendors may be able to provide a mechanism to guarantee that the assumed GLSL operations are in fact constant-time.
+This extension relies on the definition of several constructs in GLSL. These constructs are determined statically, after preprocessing.
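The definitions that follow amount to a static taint analysis: dependence on a sampler propagates through assignments until a fixed point is reached. A toy sketch under that reading (the data layout and the function name are invented for illustration and are not part of the extension):

```javascript
// Fixed-point propagation of sampler dependence. Each assignment is
// { target: 'a', uses: ['b', ...] }, where "uses" covers both the
// right-hand side and any index variables on the left-hand side.
function propagateSamplerDeps(assignments) {
  var tainted = new Set(['S']); // the sampler S itself is the taint source
  var changed = true;
  while (changed) {
    changed = false;
    assignments.forEach(function (a) {
      // An assignment makes its target dependent if anything it uses
      // is already dependent on the sampler.
      var dependent = a.uses.some(function (v) { return tainted.has(v); });
      if (dependent && !tainted.has(a.target)) {
        tainted.add(a.target);
        changed = true;
      }
    });
  }
  return tainted;
}
```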
+ +S
is a regular sampler if an expression dependent on S
appears in one or more of the following constructs:
an if statement condition
a conditional (?:) condition
a logical and (&&) operand
a logical or (||) operand
the coord, bias, or lod argument in a texture lookup function call
a value assigned to gl_Position
a value assigned to gl_FragDepth
Otherwise, S
is a secure sampler.
An expression is dependent on the sampler S
if:
+
S
as the sampler
argument.S
.S
.A variable is dependent on the sampler S
if:
+
S
. (e.g. If a = b
and b
is dependent on sampler S
, then a
is dependent on sampler S
.)a[b] = c
and b
is dependent on the sampler S
, then a
is dependent on sampler S
.)S
.WebGLRenderingContext
's canvas's origin-clean flag is set to false if the context is created with a WebGLContextAttributes
dictionary with securitySensitiveDrawingBuffer
set to true.createFramebuffer
, except framebuffers created with this function are referred to as security sensitive framebuffers. Framebuffers created with createFramebuffer
are referred to as regular framebuffers.createTexture
, except textures created with this function are known as security sensitive textures. Textures created with createTexture
are known as regular textures.In summary, an author cannot:
+readPixels
,The error INVALID_OPERATION
is generated in the following situations:
drawArrays
or drawElements
is called and a security sensitive texture is bound to a regular sampler.drawArrays
or drawElements
is called and:
gl_FragColor
or gl_FragData[i]
is dependent on the secure sampler, the output variable writes to either:
+securitySensitiveDrawingBuffer
set to false in the WebGLContextAttributes
dictionary used to create the context.copyTexImage2D
or copyTexSubImage2D
is called and:
securitySensitiveDrawingBuffer
was set to true in the WebGLContextAttributes
dictionary used to create the context.copyTexImage2D
or copyTexSubImage2D
is called and:
readPixels
is called and:
securitySensitiveDrawingBuffer
was set to true in the WebGLContextAttributes
dictionary used to create the context.readPixels
is called and a security sensitive renderbuffer is selected as the source.framebufferRenderbuffer
is called and:
attachment
is DEPTH_ATTACHMENT
, STENCIL_ATTACHMENT
, or DEPTH_STENCIL_ATTACHMENT
,renderbuffer
is a security sensitive renderbuffer.acquireSharedResource
is called and:
resource
is a security sensitive texture or a security sensitive renderbuffer,createSecuritySensitive*
to the API consumer?+ #extension GL_EXT_clip_cull_distance : enable + + // Vertex shader + out highp float gl_ClipDistance[2]; + out highp float gl_CullDistance[2]; + + void main(){ + // Compute the clip and cull distances for this vertex + gl_ClipDistance[0] = ...; + gl_CullDistance[0] = ...; + gl_ClipDistance[1] = ...; + gl_CullDistance[1] = ...; + } ++
This extension exposes the EXT_multi_draw_arrays functionality to WebGL.
+ +CAD vendors rendering large models composed of many individual parts face scalability issues when issuing large numbers of draw calls from WebGL. This extension reduces draw call overhead by allowing better batching.
+multiDrawArraysEXT
and multiDrawElementsEXT
entry points are added. These provide a counterpoint to instanced rendering and are more flexible for certain scenarios.offset
arguments to multiDrawArraysEXT
and multiDrawElementsEXT
choose the starting offset into their respective typed arrays or sequences. This primarily avoids allocation of temporary typed array views.

var ext = gl.getExtension("EXT_multi_draw_arrays");
{
  // multiDrawArrays variant.
  let firsts = new Int32Array(...);
  let counts = new Int32Array(...);
  ext.multiDrawArraysEXT(gl.TRIANGLES, firsts, 0, counts, 0, firsts.length);
}

{
  // multiDrawElements variant.
  // Assumes that the indices which have been previously uploaded to the
  // ELEMENT_ARRAY_BUFFER are to be treated as UNSIGNED_SHORT.
  let counts = new Int32Array(...);
  let offsets = new Int32Array(...);
  ext.multiDrawElementsEXT(gl.TRIANGLES, counts, 0, gl.UNSIGNED_SHORT, offsets, 0,
                           counts.length);
}
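For clarity, the multiDrawArraysEXT entry point behaves like the following loop of individual draw calls (a sketch of the observable semantics, not of how an implementation dispatches the work; the helper name is illustrative):

```javascript
// Semantic equivalent of multiDrawArraysEXT(mode, firsts, 0, counts, 0, drawcount):
// one drawArrays per sub-draw. The extension batches these into a single
// call, which is where the draw-call overhead saving comes from.
function multiDrawArraysFallback(gl, mode, firsts, counts, drawcount) {
  for (var i = 0; i < drawcount; ++i) {
    gl.drawArrays(mode, firsts[i], counts[i]);
  }
}
```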
This extension exposes the KHR_blend_equation_advanced_coherent functionality to WebGL.
+ +CanvasRenderingContext2D provides a series of common blend functions via globalCompositeOperation, such as "multiply" and "screen". KHR_blend_equation_advanced_coherent provides, with the exception of "xor", exactly the same list of blend functions for WebGL, as detailed below:
+ +These effects are useful for high-quality artistic blends. They can be implemented using shaders and rendering via an intermediate texture, but this carries a high performance overhead in both draw calls and GPU bandwidth. Advanced blend modes allow a much simpler, high-performance way of implementing these blends. Rendering to an intermediate texture with shaders can still serve as a fallback when this extension is not supported.
+ +Note that only the coherent variant of this extension is exposed, in order to eliminate the possibility of undefined behavior present in KHR_blend_equation_advanced. This also simplifies the extension and removes the need to insert blend barriers during rendering.
+blendEquation
entry point is extended to accept the enums in the IDL below.

var ext = gl.getExtension("WEBGL_blend_equation_advanced_coherent");
gl.blendEquation(ext.MULTIPLY);
gl.getParameter(gl.BLEND_EQUATION) == ext.MULTIPLY;
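As a point of reference for the blend math, when both source and destination are fully opaque the "multiply" mode reduces to a per-channel product (the general KHR blend equations also weight each term by source and destination alpha; this simplified helper is illustrative only):

```javascript
// Per-channel multiply blend, valid when both alphas are 1.0.
// src and dst are [r, g, b] arrays with components in [0, 1].
function multiplyBlendOpaque(src, dst) {
  return src.map(function (s, i) { return s * dst[i]; });
}
```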
+ This extension allows the GL to notify applications when various events + occur that may be useful during application development, debugging and + profiling. +
+ +ObjectPtrLabel
and GetObjectPtrLabel
+ functions are replaced with ObjectLabel
and
+ GetObjectLabel
.
+ count
and ids
arguments of
+ DebugMessageControl
are replaced with a
+ sequence<GLuint> ids
argument.
+ length
and buf
arguments of
+ DebugMessageInsert
and PushDebugGroup
are
+ replaced with a DOMString message
argument.
+ identifier
and name
arguments of
+ ObjectLabel
and GetObjectLabel
are replaced
+ with a WebGLObject object
argument.
+ length
and label
arguments of
+ ObjectLabel
are replaced with a DOMString
+ label
argument.
+ bufSize
, length
and label
+ arguments of GetObjectLabel
are replaced with a
+ DOMString
return type.
+ WEBGL_debug
extension object is a DOM
+ EventTarget
, obeying the rules of the DOM Level 3 Events,
+ with a new WebGLDebugMessage
event that gets fired
+ whenever the driver, browser or application emits a debug message.
+ debugMessageInsertKHR
is exposed to allow the application
+ to insert debug messages into the WebGL stream.
+ objectLabelKHR
and getObjectLabelKHR
are
+ exposed, to assign a label to a WebGLObject
and retrieve
+ it.
+ pushDebugGroupKHR
and popDebugGroupKHR
make
+ it possible to group a list of WebGL calls together.
+ debugMessageControlKHR
allows the application to enable
+ and disable the debug messages which emit a
+ WebGLDebugMessage
event. This state is part of the debug
+ group they are part of, and gets popped on
+ popDebugGroupKHR
.
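The way message-control state is scoped to debug groups can be pictured as a stack of snapshots: pushing a group inherits the current state, and popping restores the previous one. A toy model of that behavior (the helper names are invented for illustration; this is not the extension API):

```javascript
// Stack-of-snapshots model of per-debug-group message-control state.
function makeDebugState() {
  var stack = [{}]; // each entry maps message id -> enabled flag
  function top() { return stack[stack.length - 1]; }
  return {
    messageControl: function (id, enabled) { top()[id] = enabled; },
    isEnabled: function (id) { return top()[id] !== false; },        // messages default to enabled
    pushDebugGroup: function () { stack.push(Object.assign({}, top())); }, // inherit current state
    popDebugGroup: function () { if (stack.length > 1) stack.pop(); }      // restore previous state
  };
}
```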
+ On WEBGL_debug
:
WebGLDebugMessage
+ events for the specified messages.
+ pushDebugGroupKHR
.
+ WebGLObject
.
+ WebGLObject
.
+ A dynamic texture is a texture whose image changes frequently. The
+ source of the stream of images may be a producer outside the control of
+ the WebGL application. The classic example is using a playing video to
+ texture geometry. Texturing with video is currently achieved by using the
+ TEXTURE_2D
target and passing an HTMLVideoElement
+ to texImage2D
. It is difficult, if not impossible, to
+ implement video texturing with zero-copy efficiency via this API and much
+ of the behavior is underspecified.
This extension provides a mechanism for streaming image frames from an
+ HTMLVideoElement
, HTMLCanvasElement
or
+ HTMLImageElement
(having multiple frames, such as those created
+ from animated GIF, APNG and MNG files) into a WebGL texture. This is done
+ via a new texture target, TEXTURE_EXTERNAL_OES
which can only
+ be specified as being the consumer of an image stream from a new
+ WDTStream
object which provides commands for connecting to a
+ producer element.
There is no support for most of the functions that manipulate other
+ texture targets (e.g. you cannot use *[Tt]ex*Image*()
+ functions with TEXTURE_EXTERNAL_OES
). Also,
+ TEXTURE_EXTERNAL_OES
targets never have more than a single
+ level of detail. These restrictions enable dynamic texturing with maximum
+ efficiency. They remove the need for a copy of the image data manipulable
+ via the WebGL API and allow sources which have internal formats not
+ otherwise supported by WebGL, such as planar or interleaved YUV data, to
+ be WebGL texture target siblings.
The extension extends GLSL ES with a new
+ samplerExternalOES
type and matching sampling functions that
+ provide a place for an implementation to inject code for sampling non-RGB
+ data when necessary without degrading performance for other texture
+ targets. Sampling a TEXTURE_EXTERNAL_OES
via a sampler of
+ type samplerExternalOES
always returns RGBA data. This allows
+ the implementation to decide the most efficient format to use whether it
+ be RGB or YUV data. If the underlying format was exposed, the application
+ would have to query the format in use and provide shaders to handle both
+ cases.
WDTStream
provides a command for latching an
+ image frame into the consuming texture as its contents. This is equivalent
+ to copying the image into the texture but, due to the restrictions
+ outlined above a copy is not necessary. Most implementations will be able
+ to avoid one so this can be much faster than using
+ texImage2D
. Latching can and should be implemented in a way
+ that allows the producer to run independently of 3D rendering.
Terminology note: throughout this specification + opaque black refers to the RGBA value (0,0,0,1).
+ +An HTMLVideoElement
, HTMLCanvasElement
or
+ HTMLImageElement
is the producer of the stream of images
+ being consumed by the dynamic texture rather than the unspecified
+ external producer referred to in the extension.
A WDTStream
is the deliverer of the stream of images
+ being consumed by the dynamic texture rather an
+ EGLStream
.
References to EGLImage
and associated state are
+ deleted.
WDTStream.connectSource
is used to connect a texture
+ to the image stream from an HTML element instead of the command
+ eglStreamConsumerGLTextureNV
or its equivalent
+ eglStreamConsumerGLTextureExternalKHR
referenced by the
+ extension.
WDTStream.acquireImage
and
+ WDTStream.releaseImage
are used to latch and unlatch
+ image frames instead of the commands
+ eglStreamConsumerAcquireNV
or its equivalent
+ eglStreamConsumerAcquireKHR
and
+ eglStreamConsumerReleaseNV
or its equivalent
+ eglStreamConsumerReleaseKHR
referenced by the
+ extension.
For ease of reading, this specification briefly describes the new + functions and enumerants of NV_EGL_stream_consumer_external. + Consult that extension for detailed documentation of their meaning and + behavior. Changes to the language of that extension are given later in this specification.
+The createStream
function is available. This command
+ is used for creating WDTStream
objects for streaming
+ external data to texture objects. WDTStream
objects have
+ a number of functions and attributes, the most important of which are
+ listed below.
The functions ustnow
,
+ getLastDrawingBufferPresentTime
and
+ setDrawingBufferPresentTime
are available. These commands
+ are used for accurate timing and specifying when the drawing buffer
+ should next be presented.
The functions WDTStream.connectSource
and
+ WDTStream.disconnect()
are available for binding and
+ unbinding the stream to HTML{Canvas,Image,Video}Elements
+ as is the WDTStream.getSource
function for querying the
+ current stream source.
The functions WDTStream.acquireImage
and
+ WDTStream.releaseImage
are available. These commands are
+ used before 3D rendering to latch an image that will not change during
+ sampling and after to unlatch the image.
On WEBGL_dynamic_texture
:
WDTStream
object whose consumer is the
+ WebGLTexture
bound to the TEXTURE_EXTERNAL_OES
+ target of the active texture unit at the time of the call.playbackRate
of the associated
+ MediaController
is 1.0.On WDTStream
:
StreamSource
specified by
+ source as the producer for the stream. StreamSource
+ can be an HTMLCanvasElement
, HTMLImageElement
or
+ HTMLVideoElement
.HTML{Canvas,Image,Video}Element
that is connected to the
+ WDTStream as the producer of images.WebGLTexture
, that is the
+ WDTStream
's consumer, will return values from the
+ latched image. The image data is guaranteed not to change as long as the
+ image is latched. WDTStream
returns true
when an
+ image is successfully latched, false
otherwise.WebGLTexture
, that was bound
+ to the TEXTURE_EXTERNAL_OES
target of the active texture unit
+ when the WDTStream was created, will return opaque black.The meaning and use of these tokens is exactly as described in NV_EGL_stream_consumer_external.
+ +TEXTURE_EXTERNAL_OES
is accepted as a
+ target by the target
parameter of
+ bindTexture()
SAMPLER_EXTERNAL_OES
can be returned in the
+ type
field of the WebGLActiveInfo
returned by
+ getActiveUniform()
TEXTURE_BINDING_EXTERNAL_OES
is accepted by
+ the pname
parameter of
+ getParameter()
.REQUIRED_TEXTURE_IMAGE_UNITS_OES
is accepted
+ as the pname
parameter of
+ GetTexParameter*()
This type is used for nanosecond time stamps and time periods.
+This interface is used to obtain information about the latched + frame.
+This interface is used to manage the image stream between the + producer and consumer.
+In section 4.3 Supported GLSL Constructs, replace the + paragraph beginning A WebGL implementation must ... with the + following paragraph:
A WebGL implementation must only accept + shaders which conform to The OpenGL ES Shading Language, Version 1.00 [GLES20GLSL], + as extended by NV_EGL_stream_consumer_external, + and which do not exceed the minimum functionality mandated in Sections 4 + and 5 of Appendix A. In particular, a shader referencing state variables + or commands that are available in other versions of GLSL (such as that + found in versions of OpenGL for the desktop), must not be allowed to + load.+ +
In section 5.14 The WebGL Context , add the following to + the WebGLRenderingContext interface. Note that until such time as this + extension enters core WebGL the tokens and commands mentioned below will + be located on the WebGL_dynamic_texture extension interface shown + above.
/* GetPName */
TEXTURE_BINDING_EXTERNAL = 0x8D67;
/* TextureParameterName */
REQUIRED_TEXTURE_IMAGE_UNITS = 0x8D68;
/* TextureTarget */
TEXTURE_EXTERNAL = 0x8D65;
/* Uniform Types */
SAMPLER_EXTERNAL = 0x8D66;
WDTStream? createStream();
WDTNanoTime getLastDrawingBufferPresentTime();
void setDrawingBufferPresentTime(WDTNanoTime pt);
WDTNanoTime ustnow();
In section 5.14.3 Setting and getting state, add the
+ following to the table under getParameter
.
TEXTURE_BINDING_EXTERNAL | int
In section 5.14.8 Texture objects, add the following to the
+ table under getTexParameter
.
REQUIRED_TEXTURE_IMAGE_UNITS | int
Add a new section 5.14.8.1 External textures.
+ +++ +5.14.8.1 External textures
+ +External textures are texture objects which receive image data from + outside of the GL. They enable texturing with rapidly changing image + data, e.g, a video, at low overhead and are used in conjunction with +
+WDTStream
+ objects to create dynamic textures. See Dynamic Textures for more information. An + external texture object is created by binding an unused +WebGLTexture
to the target +TEXTURE_EXTERNAL_OES
. Note that only unused WebGLTextures + or those previously used as external textures can be bound to +TEXTURE_EXTERNAL_OES
. Binding aWebGLTexture
+ previously used with a different target or binding a WebGLTexture + previously used with TEXTURE_EXTERNAL_OES to a different target + generates aGL_INVALID_OPERATION
error as documented in GL_NV_EGL_stream_consumer_external.txt.
In section 5.14.10 Uniforms and attributes, add the
+ following to the table under getUniform
.
samplerExternal | long
Add a new section 5.16 Dynamic Textures
+ +++ +5.16 Dynamic Textures
+ +Dynamic textures are texture objects that display a stream of images + coming from a producer outside the WebGL application, the + classic example ibeing using a playing video to texture geometry from. A +
+ +WDTStream
object mediates between the producer and the + consumer, the texture consuming the images.The command
WDTStream? createStream();creates + a WGTStream object whose consumer is the + texture object currently bound to theTEXTURE_EXTERNAL_OES
+ target in the active texture unit. The initialstate
of the + newly created stream will beSTREAM_CONNECTING
. If the + texture object is already the consumer of a stream, createStream + generates an INVALID_OPERATION error and returns null. When a texture + object that is the consumer of a stream is deleted, the stream is also + deleted. + +In order to maintain synchronization with other tracks of an + HTMLVideoElement's media group, most notably audio, the application must + be able to measure how long it takes to draw the scene containing the + dynamic texture and how long it takes the browser to compose and present + the canvas.
+ +The command
WDTNanoTime ustnow();+ returns the unadjusted system time, a monotonically increasing + clock, in units of nanoseconds. The zero time of this clock is not + important. It could start at system boot, browser start or navigation + start. + +The command
WDTNanoTime getLastDrawingBufferPresentTime();+ returns the UST the last time the composited page containing the drawing + buffer's content was presented to the user. + +To ensure accurate synchronization of the textured image with other + tracks of an HTMLVideoElement's media group, the application must be + able to specify the presentation time of the drawing + buffer.
+ +The command
void setDrawingBufferPresentTime(WDTNanoTime pt);+ tells the browser the UST when the drawing buffer must be presented + after the application returns to the browser. The browser must present + the composited page containing the canvas to the user at the specified + UST. If the specified time has already passed when control returns, the + browser should present the drawing buffer as soon as possible. Should an + explicit drawing buffer present function be added to WebGL, the + presentation time will become one of its parameters. + +5.16.1 WDTStreamFrameInfo
+ +The
+ +WDTStreamFrameInfo
interface represents information + about a frame acquired from a WDTStream.[NoInterfaceObject] interface WDTStreamFrameInfo { + readonly attribute double frameTime; + readonly attribute WDTNanoTime presentTime; +};+ +5.16.1.1 Attributes
+ +The following attributes are available:
+ ++
+ +- + +
frameTime
of type +double
- The time of the frame relative to the start of the producer's + MediaController timeline in seconds. Equivalent to +
+ +currentTime
in an HTMLMediaElement.- + +
presentTime
of type +WDTNanoTime
- The time the frame must be presented in order to sync with other + tracks in the element's mediagroup, particularly audio.
+5.16.2 WDTStream
+ +The
+ +WDTStream
interface represents a stream object used + for controlling an image stream being fed to a dynamic texture + object.[NoInterfaceObject] interface WDTStream { + typedef (HTMLCanvasElement or + HTMLImageElement or + HTMLVideoElement) StreamSource; + + const GLenum STREAM_CONNECTING = 0; + const GLenum STREAM_EMPTY = 1; + const GLenum STREAM_NEW_FRAME_AVAILABLE = 2; + const GLenum STREAM_OLD_FRAME_AVAILABLE = 3; + const GLenum STREAM_DISCONNECTED = 4; + + readonly attribute WebGLTexture consumer; + + readonly attribute WDTStreamFrameInfo consumerFrame; + readonly attribute WDTStreamFrameInfo producerFrame; + + readonly attribute WDTNanoTime minFrameDuration; + + readonly attribute GLenum state; + + attribute WDTNanotime acquireTimeout; + attribute WDTNanoTime consumerLatency; + + void connectSource(StreamSource source); + void disconnect(); + StreamSource? getSource(); + + boolean acquireImage(); + void releaseImage(); +};+ +5.16.2.1 Attributes
+ ++
+ +- + +
consumer
of type +WebGLTexture
- The
+ +WebGLTexture
that was bound to the + TEXTURE_EXTERNAL_OES target of the active texture unit at the time the + stream was created. Sampling this texture in a shader will return + samples from the image latched byacquireImage
.- + +
consumerFrame
of type +WDTStreamFrameInfo
- Information about the last frame latched by the consumer via +
+ +acquireImage.
- + +
producerFrame
of type +WDTStreamFrameInfo
- Information about the frame most recently inserted into the stream + by the producer.
+ +- + +
minFrameDuration
of type +WDTNanoTime
- The minimum duration of a frame in the producer. Ideally this + should be an attribute on HTMLVideoElement. Most video container + formats have metadata that can be used to calculate this. It can only + reflect the actual value once the stream is connected to a producer + and the producer's
+ +READY_STATE
is at least +HAVE_METADATA
. The initial value is +Number.MAX_VALUE
(i.e., infinity). Applications need this + information to determine how complex their drawing can be while + maintaining the video's frame rate.- + +
state
of type +GLenum
- The state of the stream. Possible states are +
+ +STREAM_CONNECTING
,STREAM_EMPTY
, +STREAM_NEW_FRAME_AVAILABLE
, +STREAM_OLD_FRAME_AVAILABLE
and +STREAM_DISCONNECTED
.- + +
consumerLatency
of type +WDTNanoTime
- The time between the application latching an image from the stream + and the drawing buffer being presented. This is the time by which the + producer should delay playback of any synchronized tracks such as + audio. The initial value is an implementation-dependent constant + value, possibly zero. This should only be changed when the video is + paused as producers will not be able to change the playback delay on, + e.g. audio, without glitches. It may only be possible to set this + prior to starting playback. Implementation experience is needed.
+ +- + +
acquireTimeout
of type +WDTNanoTime
- The maximum time to block in
+acquireImage
waiting for + a new frame. The initial value is 0.5.16.2.2 commands
+ +The command
void connectSource(StreamSource source);connects + the stream to the specifiedStreamSource
element. If +StreamSource
is anHTMLMediaElement
, the + element'sautoPlay
attribute is set tofalse
+ to prevent playback starting before the application is ready. If +state
is notSTREAM_CONNECTING
, an +InvalidStateError
exception is thrown. After connecting +state
becomesSTREAM_EMPTY
. + +The command
void disconnect();disconnects + the stream from its source. Subsequent sampling of the associated + texture will return opaque black.state
is set to +STREAM_DISCONNECTED
. + +The command
StreamSource? getSource();returns + the HTML element that is the producer for this stream. + +The command
boolean acquireImage();causes + consumer to latch the most recent image frame from the + currently connected source. The rules for selecting the image to be + latched mirror those for selecting the image drawn by the +drawImage
method of CanvasRenderingContext2D. + +For HTMLVideoElements, it latches the frame of video that will + correspond to the current + playback position of the audio channel, as defined in the HTML Living + Standard, at least latency nanoseconds from the call + returning, where latency is the
+ +consumerLatency
+ attribute of the stream. If the element'sreadyState
+ attribute is eitherHAVE_NOTHING
or +HAVE_METADATA
, the command returns without latching + anything and the texture remains incomplete. The effective size + of the texture will be the element's intrinsic + width and height.For animated HTMLImageElements it will latch the first frame of the + animation. The effective size of the texture will be the element's + intrinsic width and height.
+ +For HTMLCanvasElements it will latch the current content of the + canvas as would be returned by a call to
+ +toDataURL
.+ +
acquireImage
will block until either the timeout + specified byacquireTimeout
expires or state is neither +STREAM_EMPTY
norSTREAM_OLD_FRAME_AVAILABLE
, + whichever comes first.The model is a stream of images between the producer and the + WebGLTexture consumer.
+ +acquireImage
latches the most recent + image. If the producer has not inserted any new images since the last + call toacquireImage
thenacquireImage
will + latch the same image it latched last time it was called. If the producer + has inserted one new image since the last call then +acquireImage
will "latch" the newly inserted image. If the + producer has inserted more than one new image since the last call then + all but the most recently inserted image are discarded and +acquireImage
will "latch" the most recently inserted image. + ForHTMLVideoElements
, the application can use the value of + theframeTime
attribute in theconsumerFrame
+ attribute to identify which image frame was actually latched.
acquireImage
returnstrue
if an image has + been acquired, andfalse
if the timeout fired. It throws + the following exceptions:+
XXX Complete after resolving issue 22. XXX + +- +
InvalidStateError
, if no dynamic source is + connected to the stream.The command
void releaseImage();releases + the latched image.releaseImage
will prevent the producer + from re-using and/or modifying the image until all preceding WebGL + commands that use the image as a texture have completed. If +acquireImage
is called twice without an intervening call to +releaseImage
thenreleaseImage
is implicitly + called at the start ofacquireImage
. + +After successfully calling
+ +releaseImage
the texture + becomes "incomplete".If
+ +releaseImage
is called twice without a successful + intervening call toacquireImage
, or called with no + previous call toacquireImage
, then the call does nothing + and the texture remains in "incomplete" state. This is not an errorIt throws the following exceptions:
+
XXX Complete after resolving issue 22. XXX + +- +
InvalidStateError
, if no dynamic source is + connected to the stream.To sample a dynamic texture, the texture object must be bound to the + target
+TEXTURE_EXTERNAL_OES
and the sampler uniform must be + of typesamplerExternal
. If the texture object bound to +TEXTURE_EXTERNAL_OES
is not bound to a dynamic source then + the texture is "incomplete" and the sampler will return opaque + black.
At the end of section 6 Differences between + WebGL and OpenGL ES, add the following new sections. Note that + differences are considered with respect to the OpenGL ES 2.0 specification + as extended by NV_EGL_stream_consumer_external + in the absence of OES_EGL_image_external.
+ +++6.25 External Texture Support
+ +WebGL supports external textures but provides its own +
+ +WDTStream
interface instead ofEGLStream
. +WDTStream
connects an HTMLCanvasElement, HTMLImageElement + or HTMLVideoElement as the producer for an external texture. Specific + language changes follow.Section 3.7.14.1 External Textures as Stream Consumers + is replaced with the following.
++To use a TEXTURE_EXTERNAL_OES texture as the consumer of images + from a dynamic HTML element, bind the texture to the active texture + unit, and call
+ +createStream
to create a +WDTStream
. Use the stream'sconnectSource
+ command to connect the stream to the desired producer HTML element. + The width, height, format, type, internalformat, border and image + data of the TEXTURE_EXTERNAL_OES texture will all be determined + based on the specified dynamic HTML element. If the element does not + have any source or the source is not yet loaded, the width, height + & border will be zero, the format and internal format will be + undefined. Once the element's source has been loaded and one (or + more) images have been decoded these attributes are determined + (internally by the implementation), but they are not exposed to the + WebGL application and there is no way to query their values.The TEXTURE_EXTERNAL_OES texture remains the consumer of the + dynamic HTML element's image frames until the first of any of these + events occur:
+
+ +- The texture is associated with a different dynamic HTML + element (with a later call to +
+ +WDTStream.connectSource
).- The texture is deleted in a call to +
+deleteTextures
.Sampling an external texture which is not connected to a dynamic + HTML element will return opaque black. Sampling an external texture + which is connected to a dynamic HTML element will return opaque + black unless an image frame has been 'latched' into the texture by a + successful call to WDTStream.acquireImage.
+
XXX IGNORE THIS SAMPLE CODE. IT HAS NOT YET BEEN UPDATED TO MATCH THE + NEW SPEC TEXT. XXX
+ +<script>
tag is not
+ essential; it is merely one way to include shader text in an HTML
+ file.<script id="fshader" type="x-shader/x-fragment"> + #extension OES_EGL_image_external : enable + precision mediump float; + + uniform samplerExternalOES videoSampler; + + varying float v_Dot; + varying vec2 v_texCoord; + + void main() + { + vec2 texCoord = vec2(v_texCoord.s, 1.0 - v_texCoord.t); + vec4 color = texture2D(videoSampler, texCoord); + color += vec4(0.1, 0.1, 0.1, 1); + gl_FragColor = vec4(color.xyz * v_Dot, color.a); + } +</script>
<html> +<script type="text/javascript"> + + /////////////////////////////////////////////////////////////////////// + // Create a video texture and bind a source to it. + /////////////////////////////////////////////////////////////////////// + + // Array of files currently loading + g_loadingFiles = []; + + // Clears all the files currently loading. + // This is used to handle context lost events. + function clearLoadingFiles() { + for (var ii = 0; ii < g_loadingFiles.length; ++ii) { + g_loadingFiles[ii].onload = undefined; + } + g_loadingFiles = []; + } + + // + // createVideoTexture + // + // Load video from the passed HTMLVideoElement id, bind it to a new WebGLTexture object + // and return the WebGLTexture. + // + // Is there a constructor for an HTMLVideoElement so you can do like "new Image()?" + // + function createVideoTexture(ctx, videoId) + { + var texture = ctx.createTexture(); + var video = document.getElementById(videoId); + g_loadingFiles.push(video); + video.onload = function() { doBindVideo(ctx, video, texture) } + return texture; + } + + function doBindVideo(ctx, video, texture) + { + g_loadingFiles.splice(g_loadingFiles.indexOf(image), 1); + ctx.bindTexture(ctx.TEXTURE_EXTERNAL_OES, texture); + ctx.dynamicTextureSetSource(video); + // These are the default values of these properties so the following + // 4 lines are not necessary. + ctx.texParameteri(ctx.TEXTURE_EXTERNAL_OES, ctx.TEXTURE_MAG_FILTER, ctx.LINEAR); + ctx.texParameteri(ctx.TEXTURE_EXTERNAL_OES, ctx.TEXTURE_MIN_FILTER, ctx.LINEAR); + ctx.texParameteri(ctx.TEXTURE_EXTERNAL_OES, ctx.TEXTURE_WRAP_S, ctx.CLAMP_TO_EDGE); + ctx.texParameteri(ctx.TEXTURE_EXTERNAL_OES, ctx.TEXTURE_WRAP_T, ctx.CLAMP_TO_EDGE); + ctx.bindTexture(ctx.TEXTURE_EXTERNAL_OES, null); + } + + /////////////////////////////////////////////////////////////////////// + // Initialize the application. 
+ /////////////////////////////////////////////////////////////////////// + + var g = {}; + var videoTexture; + + function init() + { + // Initialize + var gl = initWebGL( + // The id of the Canvas Element + "example"); + if (!gl) { + return; + } + var program = simpleSetup( + gl, + // The ids of the vertex and fragment shaders + "vshader", "fshader", + // The vertex attribute names used by the shaders. + // The order they appear here corresponds to their index + // used later. + [ "vNormal", "vColor", "vPosition"], + // The clear color and depth values + [ 0, 0, 0.5, 1 ], 10000); + + // Set some uniform variables for the shaders + gl.uniform3f(gl.getUniformLocation(program, "lightDir"), 0, 0, 1); + // Use the default texture unit 0 for the video + gl.uniform1i(gl.getUniformLocation(program, "samplerExternal"), 0); + + // Create a box. On return 'gl' contains a 'box' property with + // the BufferObjects containing the arrays for vertices, + // normals, texture coords, and indices. + g.box = makeBox(gl); + + // Load an image to use. Returns a WebGLTexture object + videoTexture = createVideoTexture(gl, "video"); + // Bind the video texture + gl.bindTexture(gl.TEXTURE_EXTERNAL_OES, videoTexture); + + // Create some matrices to use later and save their locations in the shaders + g.mvMatrix = new J3DIMatrix4(); + g.u_normalMatrixLoc = gl.getUniformLocation(program, "u_normalMatrix"); + g.normalMatrix = new J3DIMatrix4(); + g.u_modelViewProjMatrixLoc = + gl.getUniformLocation(program, "u_modelViewProjMatrix"); + g.mvpMatrix = new J3DIMatrix4(); + + // Enable all of the vertex attribute arrays. 
+ gl.enableVertexAttribArray(0); + gl.enableVertexAttribArray(1); + gl.enableVertexAttribArray(2); + + // Set up all the vertex attributes for vertices, normals and texCoords + gl.bindBuffer(gl.ARRAY_BUFFER, g.box.vertexObject); + gl.vertexAttribPointer(2, 3, gl.FLOAT, false, 0, 0); + + gl.bindBuffer(gl.ARRAY_BUFFER, g.box.normalObject); + gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0); + + gl.bindBuffer(gl.ARRAY_BUFFER, g.box.texCoordObject); + gl.vertexAttribPointer(1, 2, gl.FLOAT, false, 0, 0); + + // Bind the index array + gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, g.box.indexObject); + + return gl; + } + + // ... + + /////////////////////////////////////////////////////////////////////// + // Draw a frame + /////////////////////////////////////////////////////////////////////// + function draw(gl) + { + // Make sure the canvas is sized correctly. + reshape(gl); + + // Clear the canvas + gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); + + // Make a model/view matrix. + g.mvMatrix.makeIdentity(); + g.mvMatrix.rotate(20, 1,0,0); + g.mvMatrix.rotate(currentAngle, 0,1,0); + + // Construct the normal matrix from the model-view matrix and pass it in + g.normalMatrix.load(g.mvMatrix); + g.normalMatrix.invert(); + g.normalMatrix.transpose(); + g.normalMatrix.setUniform(gl, g.u_normalMatrixLoc, false); + + // Construct the model-view * projection matrix and pass it in + g.mvpMatrix.load(g.perspectiveMatrix); + g.mvpMatrix.multiply(g.mvMatrix); + g.mvpMatrix.setUniform(gl, g.u_modelViewProjMatrixLoc, false); + + // Acquire the latest video image + gl.dynamicTextureAcquireImage(); + + // Draw the cube + gl.drawElements(gl.TRIANGLES, g.box.numIndices, gl.UNSIGNED_BYTE, 0); + + // Allow updates to the image again + gl.dynamicTextureReleaseImage(); + + // Show the framerate + framerate.snapshot(); + + currentAngle += incAngle; + if (currentAngle > 360) + currentAngle -= 360; + } +</script> + +<body onload="start()"> +<video id="video" src="resources/video.ogv" 
autoplay="true" style="visibility: hidden"> +</video> +<canvas id="example"> + If you're seeing this your web browser doesn't support the <canvas> element. Ouch! +</canvas> +<div id="framerate"></div> +</body> + +</html>
Statistical fingerprinting is a privacy concern where a malicious web + site may determine whether a user has visited a third-party web site by + measuring the timing of cache hits and misses of resources in the + third-party web site. Though the ustnow method of this extension returns + time data to a greater accuracy than before, it does not make this privacy + concern significantly worse than it was already.
+What do applications need to be able to determine about the + source?
+ +RESOLVED. Two things
Neither the minimum inter-frame interval nor frame rate is exposed + by HTMLMediaElements. How can it be determined?
+ +RESOLVED. Although there have been requests to expose the frame + rate, in connection with non-linear editing and frame + accurate seeks to SMPTE time-code positions, there has been no + resolution. Therefore the stream object interface will have to provide + a query for the minimum inter-frame interval. It can easily be derived + from the frame-rate of fixed-rate videos or from information that is + commonly stored in the container metadata for variable-rate formats. + For example the Matroska and + WebM + containers provide a FrameRate item, albeit listed as "information + only." Note that there is a tracking + bug for this feature at WHATWG/W3C where browser vendors can + express interest in implementing it.
+How can the application determine whether it has missed a + frame?
+ +RESOLVED. If a frame's presentTime
is earlier than
+ ustnow() + consumerLatency then the application will have to drop the
+ frame and acquire the next one.
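The drop test described in this resolution is simple arithmetic. A sketch with hypothetical nanosecond UST values (`shouldDropFrame` is an invented helper, not part of the extension):

```javascript
// Decide whether a frame can still be presented: a frame whose
// presentTime is earlier than now + consumerLatency would appear
// too late, so the application should drop it and acquire the next.
function shouldDropFrame(presentTime, ustNow, consumerLatency) {
  return presentTime < ustNow + consumerLatency;
}

// Hypothetical values: 5 ms latency, frame due 2 ms from now.
console.log(shouldDropFrame(1_002_000_000, 1_000_000_000, 5_000_000)); // true
console.log(shouldDropFrame(1_010_000_000, 1_000_000_000, 5_000_000)); // false
```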
Why not use the TEXTURE_2D
target and
+ texImage2D
?
RESOLVED. Use a new texture target and new commands. A new texture
+ target makes it easy to specify, implement and conformance test the
+ restrictions that enable a zero-copy implementation of dynamic
+ textures as described in the Overview. Given
+ that one of those restrictions is not allowing modification of the
+ texture data, which is normally done via texImage2D
, using
+ a new command will make the usage model clearer.
Why not use sampler2D uniforms?
+ +RESOLVED. Use a new sampler type. Many zero-copy implementations + will need special shader code when sampling YUV format dynamic + textures. Implementations may choose to (a) re-compile at run time or + (b) inject conditional code which branches at run time according to + the format of the texture bound to TEXTURE_EXTERNAL_OES in the texture + unit to which the sampler variable is set. Without a new sampler type, + such conditional code would have to be injected for every sampler + fetch increasing the size of the shader and slowing sampling of other + texture targets. In order to preserve the possibility of using + approach (b), a new sampler type will be used.
+Should the API be implemented as methods on the texture object or + as commands taking a texture object as a parameter?
+ +RESOLVED. Neither. The WebGLTexture
object represents
+ an OpenGL texture name. No object is created until the name is bound
+ to a texture target. Therefore the new commands should operate on
+ the currently bound texture object.
Should dynamic textures be a new texture type or can
+ WebGLTexture
be reused?
RESOLVED. WebGLTexture
can be reused. As noted in the
+ previous issue a WebGLTexture
represents a texture name
+ and is a handle to multiple texture types. The type of texture is set
+ according to the target to which the name is initially bound.
Should this extension use direct texture access commands or should
+ it use texParameter
and getTexParameter
?
RESOLVED. Use the latter. There is no directly accessible texture + object to which such commands can be added. Changing the API to have + such objects is outside the scope of this extension.
+Should we re-use #extension
+ NV_EGL_stream_consumer_external
, create our own GLSL extension
+ name or have both this and a WebGL-specific name?
RESOLVED. Any of WEBGL_dynamic_texture
or the aliases
+ GL_NV_EGL_stream_consumer_external
or
+ GL_OES_EGL_image_external
can be used to enable this
+ extension's features in the shader. This permits the same shader to be
+ used with both WebGL and OpenGL ES 2.0.
What should happen when an object of type
+ HTMLCanvasElement
, HTMLImageElement
or
+ HTMLVideoElement
is passed to the existing
+ tex*Image2D
commands?
UNRESOLVED. This behavior is outside the scope of this extension
+ but handling of these objects is very underspecified in the WebGL
+ specification and needs to be clarified. Suggestion: for single-frame
+ HTMLImageElement set the texture image to the HTMLImageElement; for an
+ animated HTMLImageElement set the texture image to the first frame of
+ the animation; for an HTMLCanvasElement, set the texture image to the
+ current canvas image that would be returned by toDataURL; for an
+ HTMLVideoElement, set the texture image to the current frame. In all
+ cases, the texture image does not change until a subsequent call to a
+ tex*Image2D
command. Is this a change from the way
+ any of these elements are handled today?
Should acquireImage
and releaseImage
+ generate errors if called when the stream is already in the state to
+ be set or ignore those extra calls?
RESOLVED. They should not generate errors.
+ acquireImage
will be defined to implicitly call
+ releaseImage
if there has not been an intervening
+ call.
This API is implementable on any platform at varying levels of + efficiency. Should it therefore move directly to core rather than + being an extension?
+ +RESOLVED. No, unless doing so would result in implementations + appearing sooner.
+Should this extension support HTMLImageElement?
+ +UNRESOLVED. The HTML 5 Living Standard provides virtually no rules
+ for handling of animated HTMLImageElements and specifically no
+ definition of a current frame. In order to texture the animations from
+ such elements, this specification will need to provide rules. If we
+ are tracking the behavior of CanvasRenderingContext2D.drawImage
+ then there is no point supporting HTMLImageElement as the
+ specification says to draw the first frame of animated
+ HTMLImageElements
.
Should this extension extend HTMLMediaElement
with an
+ acquireImage/releaseImage API?
RESOLVED. No. The API would have no purpose and would require + HTML{Video,Canvas,Image}Element becoming aware of WebGLTexture or, + even worse, aware of texture binding within WebGL. No similar API was + exposed to support CanvasRenderingContext2D.drawImage. The HTMLElement + is simply passed to drawImage.
+Should DOMHighResolutionTime
+ and window.performance.now()
from the W3C High-Resolution
+ Time draft be used for the timestamps and as UST?
RESOLVED. No. The specified unit is milliseconds and, although the
+ preferred accuracy is microseconds, the required accuracy is only
+ milliseconds. At millisecond accuracy it is not possible to
+ distinguish between 29.97 fps and 30 fps which means sound for a 29.97
+ fps video will be ~3.5 seconds out of sync after 1 hour. Also
+ fractional double
values must be used to represent times
+ < 1 ms with the attendant issues of variable time steps as the
+ exponent changes. Feedback has been provided. Hopefully the draft
+ specification will be updated.
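The desynchronization figure can be checked with a quick calculation; treating a 29.97 fps stream as 30 fps accumulates roughly 3.6 seconds of drift per hour:

```javascript
// Misinterpreting 29.97 fps as 30 fps: each frame is displayed for
// 1/30 s instead of 1/29.97 s, so over an hour of video the
// audio/video desynchronization accumulates to several seconds.
const trueFps = 29.97;
const assumedFps = 30;
const seconds = 3600;
const frames = seconds * trueFps;                // frames in one hour
const drift = frames * (1 / trueFps - 1 / assumedFps);
console.log(drift.toFixed(2)); // ~3.60 seconds
```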
Should UST 0 be system start-up, browser start-up or navigationStart + as defined in the W3C Navigation + Timing proposed recommendation?
+ +RESOLVED. If DOMHighResolutionTime
is used, then
+ navigationStart makes sense otherwise it can be left to the
+ implementation.
Should UST wrap rather than increment the exponent, so as to
+ maintain precision?
+
+UNRESOLVED. The exponent will need to be incremented after 2**53
+ nanoseconds (~104 days). UST could wrap to 0 after that or just keep
+ counting. If it keeps counting, the precision will be halved so each
+ tick will be 2 nanoseconds. The next precision change will occur after
+ a further ~104 days.
+Should WDTStream.state be a proper idl enum?
+ +UNRESOLVED.
+Does the application need to be able to find out if it has missed a + potential renderAnimationFrame callback, i.e, it has taken longer than + the browser's natural rAF period? If so, how?
+ +UNRESOLVED.
+What are the base and units of a renderbuffer's present time on + iOS?
+ +UNRESOLVED.
+CanvasRenderingContext2D.drawImage
requires an
+ InvalidStateError be thrown if either width or height of the source
+ canvas is 0? Do we need to mirror this?
RESOLVED. Treating this situation as failing to acquire an image + and so returning opaque black when sampled provides more consistent + handling across StreamSource types and is more consistent with OpenGL + ES.
+Should exceptions be used for errors on WDTStreams or should + GL-style error handling be used?
+ +UNRESOLVED.
+texStorage2DMultisample()
and the TEXTURE_2D_MULTISAMPLE
+ target from OpenGL ES 3.1.
+ This extension enables WebGL implementations to bind an HTMLIFrameElement object as the data source
+ to a texture. While bound, the extension provides an API to allow applications to request the latest
+ iframe rendering results to be blitted to the texture. The extension also provides an API to allow
+ applications to transform and forward related user input events from the WebGL canvas to the bound iframe,
+ thus enabling the bound iframe to be interactive inside a WebGL scene.
+Due to security concerns, currently this extension only supports same-origin iframes. This
+ limitation may be lifted in the future.
+
+ This function connects an iframe
to the texture currently bound to target
and returns a promise
+ that will be fulfilled once the iframe is rendered and ready to be blitted to the texture. If the iframe
+ is null
, any existing binding between the texture and an iframe is broken.
+ If there are any errors, the GL error is generated synchronously and the returned promise is
+ rejected
+ with an InvalidStateError
.
+
+ Once the function returns successfully, the texture is defined as follows: its effective
+ internal format becomes RGBA8
; its width and height become the iframe element's
+ width and height.
+
Error cases are listed below: +
target
must be TEXTURE_2D
; otherwise an INVALID_ENUM
+ error is generated.target
, an INVALID_OPERATION
is
+ generated.iframe
is not the same origin, an INVALID_OPERATION
is
+ generated.Note this function returns a promise asynchronously because wiring an iframe rendering results + to a WebGL texture could take multiple frames. The iframe could be invisible, therefore not part of + the rendering pipeline and needs to be inserted into it. The iframe could also be in a seperate + process from the one where WebGL is in, although this is likely not the case right now because we + currently limit iframe to be same origin only.
++ This function instructs implementations to update the texture with the latest iframe rendering + results. The function returns a promise that will be fulfilled when the iframe rendering results from the same animation frame + when this function is called has been blitted to the texture. +
+
+ If an application uses requestAnimationFrame
, implementations must guarantee that once
+ this function is called, the iframe rendering results from the same frame have been blitted to the
+ texture when entering the next animation frame. Therefore, it is not necessary for an
+ application to depend on the state of the returned promise. The promise is for applications that do not
+ use requestAnimationFrame
.
+
+ Once this function is called, it is not recommended to read from the texture until the returned promise is
+ fulfilled. The content of the texture during this period is undefined.
+ This function allows an application to define an event forwarding
+ function that decides whether to forward user input events received on
+ the WebGL canvas to the iframe. If yes, this function needs to transform
+ event locations and displacements as needed. With this, an application
+ can allow users to interact with the iframe rendered inside the WebGL scene.
+ The event forwarding function takes an event as input and outputs a
+ boolean. If it returns true, the event is forwarded to the iframe and its event
+ data may have been modified to transform the event from WebGL canvas
+ coordinates to iframe coordinates.
++ TODO(zmo@chromium.org): need some help how to define the forwarding function + signature. +
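As an illustration only, since the forwarding signature is still an open TODO: a forwarding function might hit-test the event against the iframe's on-canvas rectangle and rescale the coordinates. All names here (`makeForwarder`, `canvasX`, `iframeX`, the `rect`/`iframeSize` parameters) are hypothetical, and a real scene would invert the full model-view-projection transform rather than assume an axis-aligned quad:

```javascript
// Hypothetical event forwarding function: returns true to forward the
// event, after rewriting its coordinates from canvas space into iframe
// space. Assumes the iframe is rendered as an axis-aligned rectangle
// (rect) inside the canvas.
function makeForwarder(rect /* {x, y, width, height} in canvas px */,
                       iframeSize /* {width, height} in iframe px */) {
  return function forward(event) {
    const inX = event.canvasX - rect.x;
    const inY = event.canvasY - rect.y;
    if (inX < 0 || inY < 0 || inX >= rect.width || inY >= rect.height) {
      return false; // outside the iframe quad: do not forward
    }
    // Scale into the iframe's own coordinate system.
    event.iframeX = inX * (iframeSize.width / rect.width);
    event.iframeY = inY * (iframeSize.height / rect.height);
    return true;
  };
}

const forward = makeForwarder({x: 100, y: 50, width: 200, height: 100},
                              {width: 400, height: 200});
const ev = {canvasX: 150, canvasY: 75};
console.log(forward(ev), ev.iframeX, ev.iframeY); // true 100 50
```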
+TEXTURE_VIDEO_IMAGE
.HTMLVideoElement
stream to video texture targets.HTMLVideoElement
's texture binding.WEBGL_video_texture
+ binding of HTMLVideoElement.This a fragment shader that samples a video texture.
++ #extension GL_WEBGL_video_texture : require + precision mediump float; + varying vec2 v_texCoord; + + uniform samplerVideoWEBGL uSampler; + + void main(void) { + gl_FragColor = texture2D(uSampler, v_texCoord); + } ++ +
This shows application that renders video using proposed extension.
++ var videoElement = document.getElementById("video"); + var videoTexture = gl.createTexture(); + + function update() { + var ext = gl.getExtension('WEBGL_video_texture'); + if(ext !=== null){ + gl.bindTexture(ext.TEXTURE_VIDEO_IMAGE, videoTexture); + ext.VideoElementTargetVideoTexture(ext.TEXTURE_VIDEO_IMAGE, videoElement); + gl.bindTexture(ext.TEXTURE_VIDEO_IMAGE, null); + } + } + + function render() { + gl.clearColor(0.0, 0.0, 1.0, 1.0); + gl.clear(gl.COLOR_BUFFER_BIT); + + gl.bindBuffer(gl.ARRAY_BUFFER, squareVerticesBuffer); + gl.vertexAttribPointer(vertexPositionAttribute, 3, gl.FLOAT, false, 0, 0); + + gl.activeTexture(gl.TEXTURE0); + gl.bindTexture(ext.TEXTURE_VIDEO_IMAGE, videoTexture); + gl.uniform1i(gl.getUniformLocation(shaderProgram, "uSampler"), 0); + + gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); + } ++ +
Application renders each video frames into WebGL canvas based on game-loop pattern.
++ + while (true) { + update(); + processInput(); + render(); + } ++ +
texImage2D
,
+ compressedTexImage2D
and copyTexImage2D
+ methods of a WebGL context are changed. After a successful call to texStorage2DEXT
,
+ the value of TEXTURE_IMMUTABLE_FORMAT_EXT
for this texture
+ object is set to TRUE
, and no further changes to the dimensions
+ or format of the texture may be made. Using texImage2D
,
+ compressedTexImage2D
, copyTexImage2D
or
+ texStorage2DEXT
with the same texture will result
+ in the error INVALID_OPERATION
being generated.
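The immutable-format rule can be modeled as a simple flag check. This toy model (invented for illustration, not the real texture object) returns GL error codes from each call:

```javascript
// Toy model of TEXTURE_IMMUTABLE_FORMAT_EXT: after texStorage2DEXT
// succeeds, any respecification attempt yields INVALID_OPERATION.
const INVALID_OPERATION = 0x0502;
const NO_ERROR = 0;

class TextureModel {
  constructor() { this.immutableFormat = false; }
  texStorage2DEXT() {
    if (this.immutableFormat) return INVALID_OPERATION;
    this.immutableFormat = true; // dimensions/format now fixed
    return NO_ERROR;
  }
  texImage2D() {
    // Re-specifying an immutable texture is an error.
    return this.immutableFormat ? INVALID_OPERATION : NO_ERROR;
  }
}

const tex = new TextureModel();
console.log(tex.texImage2D());      // 0 (NO_ERROR)
console.log(tex.texStorage2DEXT()); // 0 (NO_ERROR)
console.log(tex.texImage2D());      // 1282 (INVALID_OPERATION)
console.log(tex.texStorage2DEXT()); // 1282 (INVALID_OPERATION)
```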
+
+    var extension = gl.getExtension('OES_depth24');
+    if (extension !== null) {
+      var depth = gl.createRenderbuffer();
+      gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
+      gl.renderbufferStorage(gl.RENDERBUFFER, extension.DEPTH_COMPONENT24_OES, 128, 128);
+      gl.bindRenderbuffer(gl.RENDERBUFFER, null);
+
+      var fbo = gl.createFramebuffer();
+      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
+      gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depth);
+
+      var fboStatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
+      console.assert(fboStatus == gl.FRAMEBUFFER_COMPLETE, 'Framebuffer is not complete');
+
+      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
+      console.assert(gl.getError() == gl.NO_ERROR, 'A GL error occurred');
+    }
+
+ This extension exposes the compressed texture formats defined in the + + AMD_compressed_ATC_texture OpenGL extension to WebGL. +
+COMPRESSED_RGB_ATC_WEBGL
,
+ COMPRESSED_RGBA_ATC_EXPLICIT_ALPHA_WEBGL
, and
+ COMPRESSED_RGBA_ATC_INTERPOLATED_ALPHA_WEBGL
may be passed to
+ the compressedTexImage2D
and compressedTexSubImage2D
entry points.
+
+ These formats correspond to the 3 formats defined in the AMD_compressed_ATC_texture OpenGL
+ extension. Although the enum names are changed, their numeric values are the same. The correspondence
+ is given by this table:
+ WebGL format enum | +OpenGL format enum | +Numeric value | +
---|---|---|
COMPRESSED_RGB_ATC_WEBGL | +ATC_RGB_AMD | +0x8C92 | +
COMPRESSED_RGBA_ATC_EXPLICIT_ALPHA_WEBGL | +ATC_RGBA_EXPLICIT_ALPHA_AMD | +0x8C93 | +
COMPRESSED_RGBA_ATC_INTERPOLATED_ALPHA_WEBGL | +ATC_RGBA_INTERPOLATED_ALPHA_AMD | +0x87EE | +
getParameter
with the argument COMPRESSED_TEXTURE_FORMATS
+ will include the 3 formats from this specification.
+ The following format-specific restrictions must be enforced:
+The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
, must be equal to the following number of bytes:
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 8
+
+ If it is not, an INVALID_VALUE
error is generated.
The byteLength
of the ArrayBufferView, pixels
, passed to
+ compressedTexImage2D
, must be equal to the following number of bytes:
+ floor((width + 3) / 4) * floor((height + 3) / 4) * 16
+
+ If it is not, an INVALID_VALUE
error is generated.
+ This extension enables software emulation of mediump and lowp floating point arithmetic in GLSL shaders for the purposes of shader testing. The emulation is enabled for shaders that are compiled after the extension has been enabled. Shaders compiled before the extension is enabled are not subject to emulation. The emulation does not model any specific device, but hypothetical shader units that implement IEEE half-precision floating point and the minimum requirements for lowp in The OpenGL ES Shading Language specification version 1.00.17. It is suggested that the software emulation be implemented in shader code, but a large performance cost, on the order of one order of magnitude, is still expected on all operations subject to emulation.
++ The emulation applies to: +
++ Examples of operations not subject to emulation: +
++ Operations listed above resulting in a mediump precision floating point scalar, vector or matrix must perform the following steps on the resulting scalar value or each of the components of the vector or matrix: +
++ Operations listed above resulting in a lowp precision floating point scalar, vector or matrix must perform the following steps on the resulting scalar value or each of the components of the vector or matrix: +
++ sign, floor, and abs in the above steps refer to built-in functions in The OpenGL ES Shading Language. +
++ The handling of positive and negative infinity and NaN by the above steps is undefined. +
+
+ The #pragma webgl_debug_shader_precision
directive is accepted by the shader compiler, and can be used to disable the emulation for specific shaders. By default, emulation is enabled for all shaders, but including #pragma webgl_debug_shader_precision(off)
in a shader must disable emulation for that shader.
+
+ Due to hardware limitations, it is allowed that some shaders which compile and link successfully when the extension is disabled do not compile or link when the extension is enabled. +
+drawElements
robustness is currently ensured by checking
+ indices in the element array buffer against the size of the array buffer
+ they are indexing. These checks are undesirable from a performance
+ perspective, since they introduce CPU overhead to the API and require index
+ buffers to have a copy in CPU-accessible memory.
This extension changes the behavior of drawElements
to use
+ security features built into hardware, bypassing the CPU-side range check
+ and improving performance. The drawback is that if out-of-range indices are
+ referenced by drawElements
, no error is generated and the
+ rendering results of that call will be undefined. However, supplying
+ out-of-range indices to drawElements
will not result in
+ reading vertex data from outside the enabled vertex buffer objects, nor
+ abnormal program termination, as specified in the OpenGL extension
+ ARB_robust_buffer_access_behavior.
It is suggested that this extension is left disabled when debugging. Any
+ INVALID_OPERATION
errors from drawElements
seen
+ while the extension is off mean that the application is supplying incorrect
+ indices to the API, even if rendering results would seem correct when this
+ extension is enabled.
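Conceptually, the CPU-side validation that this extension bypasses is a range check of every index against the vertex count. A simplified sketch (`indicesInRange` is an invented helper, not a WebGL API):

```javascript
// Simplified version of the range check WebGL normally performs for
// drawElements: every index must address a vertex within the smallest
// enabled vertex attribute array.
function indicesInRange(indices, vertexCount) {
  for (const i of indices) {
    if (i >= vertexCount) return false; // would trigger INVALID_OPERATION
  }
  return true;
}

console.log(indicesInRange([0, 1, 2, 2, 3, 0], 4)); // true
console.log(indicesInRange([0, 1, 4], 4));          // false
```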
This extension interacts with + ANGLE_instanced_arrays.
+ +drawElements
will not produce an INVALID_OPERATION
+ error if a referenced index lies outside the storage of the bound buffer. Instead,
+ rendering is performed and attribute indices that are outside the valid range will produce
+ undefined rendering results.
+ drawElementsInstancedANGLE
+ will not produce an INVALID_OPERATION
error if a referenced index
+ lies outside the storage of the bound buffer, but will instead produce undefined
+ rendering results similarly to drawElements
.
+ + This extension allows asynchronous buffer readback in WebGL 2.0. +
+getBufferSubData
but returns a Promise
+ instead of an immediate readback result.
+ dstBuffer
.
+ buf
be the buffer bound to target
at the time
+ getBufferSubDataAsync
is called.
+ If length
is 0, let copyLength
be
+ dstBuffer.length - dstOffset
; otherwise, let
+ copyLength
be length
.
+ copyLength
is greater than zero,
+ copy copyLength
typed elements (each of size dstBuffer.BYTES_PER_ELEMENT
)
+ from buf
into dstBuffer
,
+ reading buf
starting at byte index srcByteOffset
and
+ writing into dstBuffer
starting at element index dstOffset
.
+ If copyLength
is 0, no data is written to dstBuffer
, but
+ this does not cause a GL error to be generated.
+ target
,
+ an INVALID_OPERATION
error is generated.
+ target
is TRANSFORM_FEEDBACK_BUFFER
,
+ and any transform feedback object is currently active,
+ an INVALID_OPERATION
error is generated.
+ dstOffset
is greater than dstBuffer.length
,
+ an INVALID_VALUE
error is generated.
+ dstOffset + copyLength
is greater than dstBuffer.length
,
+ an INVALID_VALUE
error is generated.
+ srcByteOffset
is less than zero,
+ an INVALID_VALUE
error is generated.
+ srcByteOffset + copyLength*dstBuffer.BYTES_PER_ELEMENT
+ is larger than the length of buf
,
+ an INVALID_OPERATION
is generated.
+ getBufferSubDataAsync
must run these steps:
+ promise
be a Promise to be returned.
+ promise
with an InvalidStateError
.
+ buf
into the GL command stream, using the range
+ defined above.
+ promise
, but continue running these steps in parallel.
+ dstBuffer
has been neutered,
+ reject
+ promise
with an InvalidStateError
. In this case, no GL
+ error is generated.
+ dstBuffer
, using the range defined
+ above.
+ promise
with dstBuffer
.
+ dstBuffer
.
+
+ getBufferSubDataAsync
is called multiple times in a row with the same
+ dstBuffer
, then
callbacks added synchronously will never see
+ results of subsequent getBufferSubDataAsync
calls.
+ getBufferSubData
, this version may
+ impose less overhead on applications. Intended use cases include reading pixels into a
+ pixel buffer object and examining that data on the CPU. It does not force the graphics
+ pipeline to be stalled as getBufferSubData
does.
+ + This extension exposes the ability to share WebGL resources with multiple WebGLRenderingContexts. +
++ Background: +
++ The OpenGL ES spec defines that you can share a resource (texture, buffer, shader, program, + renderbuffer) with 2 or more GL contexts but with some caveats. To guarantee you'll see a + change made in one context in other context requires calling glFinish on the context that + made the change and call glBind on the context that wants to see the change. +
++ Not calling glFinish and/or glBind does not guarantee you won't see the results which means + that users may do neither and their app might just happen to work on some platforms and + mysteriously have glitches, rendering corruption, gl errors or program failure on others. +
++ WebGL must present consistent behavior for sharing and so this extension provides + an API so that implementions can enforce and optimize these requirements. +
+Adds a new context creation parameter:
+group
+ attribute from the WEBGL_shared_resources
object from an existing context
+ then resources from the existing context are shared with the newly created context.
+ +var canvas1 = document.createElement("canvas"); +var canvas2 = document.createElement("canvas"); +var ctx1 = canvas1.getContext("webgl"); +var sharedResourcesExtension = ctx1.getExtension("WEBGL_shared_resources"); +var ctx2 = canvas2.getContext("webgl", { + shareGroup: sharedResourcesExtension.group +}); ++
+
+ In order for a context to use a resource it must first acquire it.
+ Contexts request acquisition by calling acquireSharedResource
+ in one of two modes, EXCLUSIVE or READ_ONLY. A resource may be acquired by multiple
+ contexts in READ_ONLY mode, but by only one context in EXCLUSIVE mode.
+ acquireSharedResource returns an id that can be used to cancel the request
+ by calling cancelAcquireSharedResource.
+ When the resource becomes available in the requested mode the callback
+ is invoked. Resources start their life acquired in EXCLUSIVE mode in the context
+ in which they are created.
+
++ To release a resource so it may be acquired by another context call releaseSharedResource and + pass it the resource to be released. +
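The acquire/release rules can be modeled in plain JavaScript. This is a hedged sketch, not the extension's API: the class and method shapes are invented for illustration, and where the real extension queues a request and invokes a callback when the resource becomes available, this model answers synchronously. The mode rules follow the text above.

```javascript
// Pure-JS model of the acquisition rules: one EXCLUSIVE holder, or any
// number of READ_ONLY holders. Names are illustrative only.
var EXCLUSIVE = 'EXCLUSIVE';
var READ_ONLY = 'READ_ONLY';

function SharedResource(creatingContext) {
  // Resources start their life acquired in EXCLUSIVE mode in the
  // context that created them.
  this.holders = new Map([[creatingContext, EXCLUSIVE]]);
}

SharedResource.prototype.canAcquire = function (mode) {
  if (this.holders.size === 0) return true;
  // Multiple READ_ONLY holders may coexist; EXCLUSIVE requires no holders.
  if (mode !== READ_ONLY) return false;
  for (var m of this.holders.values()) {
    if (m !== READ_ONLY) return false;
  }
  return true;
};

SharedResource.prototype.acquire = function (ctx, mode) {
  if (!this.canAcquire(mode)) return false; // would be queued in the real API
  this.holders.set(ctx, mode);
  return true;
};

SharedResource.prototype.release = function (ctx) {
  this.holders.delete(ctx);
};
```

Returning `false` here corresponds to the request remaining pending until the current holders release the resource.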
+
+ After a resource is acquired it must be bound before it is used. Binding means:
+ for buffers, calling bindBuffer; for textures, either bindTexture or
+ framebufferTexture2D; for renderbuffers, either bindRenderbuffer or framebufferRenderbuffer;
+ for shaders, attachShader; for programs, useProgram. Binding once is sufficient to satisfy
+ this requirement. In other words, if a texture is attached to more than one texture
+ unit it only needs to be re-bound to one texture unit. Attempting to use a resource
+ which has not been bound since it was acquired generates INVALID_OPERATION.
+
++ Bind Requirement Algorithm: +
+
+ Each resource has a per-context bound flag. When a resource is acquired in a context, its
+ bound flag for that context is set to false. If one of the functions listed above
+ is called, the bound flag for that context is set to true. Drawing and reading functions
+ (clear, drawArrays, drawElements, readPixels) that would access a resource whose bound flag
+ for that context is false generate INVALID_FRAMEBUFFER_OPERATION. All other functions that
+ use a resource whose bound flag for that context is false generate INVALID_OPERATION.
+
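The bound-flag algorithm above is simple enough to state as code. This is a hedged sketch in plain JavaScript: the tracker and its method names are invented for illustration, and a real implementation would fold this bookkeeping into bindBuffer, bindTexture, and the other binding functions.

```javascript
// Pure-JS sketch of the per-context bound flag: acquisition clears the
// flag, any binding function sets it, and use requires it to be set.
function BoundFlagTracker() {
  this.bound = new Map(); // resource -> Set of contexts with flag === true
}

BoundFlagTracker.prototype.onAcquire = function (ctx, resource) {
  // Acquiring a resource sets its bound flag for that context to false.
  var set = this.bound.get(resource);
  if (set) set.delete(ctx);
};

BoundFlagTracker.prototype.onBind = function (ctx, resource) {
  // Any of the binding functions listed above flips the flag to true.
  if (!this.bound.has(resource)) this.bound.set(resource, new Set());
  this.bound.get(resource).add(ctx);
};

BoundFlagTracker.prototype.mayUse = function (ctx, resource) {
  // Using a resource whose flag is false would generate INVALID_OPERATION
  // (or INVALID_FRAMEBUFFER_OPERATION for drawing and reading functions).
  var set = this.bound.get(resource);
  return !!set && set.has(ctx);
};
```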
+
+ Note: In the specific case of programs, it is not an error to call draw with a program
+ or call useProgram for a program which has shaders that have
+ been acquired but not re-attached. Nor is it an error to draw with or call useProgram
+ for a program which has shaders that have not been acquired. It is an error to call linkProgram
+ for a program that is using shaders that have been acquired but not re-attached.
+
+
+ When an attempt is made to use a resource that is not acquired in the current context
+ the implementation must generate the error INVALID_OPERATION or INVALID_FRAMEBUFFER_OPERATION.
+ This includes all gl calls
+ that would access the given resource directly or indirectly. For example, a
+ draw call must fail if any of the resources it would access is not acquired in the
+ correct mode for the call. In other words, if the draw call would read from a buffer
+ or texture and that buffer or texture is not acquired in READ_ONLY or EXCLUSIVE mode, the draw
+ must fail with INVALID_FRAMEBUFFER_OPERATION. If the draw would render to a texture or renderbuffer
+ that is not acquired in EXCLUSIVE mode, the draw must fail and generate INVALID_FRAMEBUFFER_OPERATION.
+ If a program used in the draw is not acquired in READ_ONLY or EXCLUSIVE mode, the draw or clear
+ must fail and generate INVALID_FRAMEBUFFER_OPERATION.
+
++ For buffers not acquired this includes but is not limited to +
++ bindBuffer + bufferData + bufferSubData + deleteBuffer + drawArrays + drawElements + getParameter with parameter (BUFFER_SIZE or BUFFER_USAGE) + isBuffer + vertexAttribPointer ++
+ For a buffer acquired in READ_ONLY mode this includes but is not limited to +
++ bufferData + bufferSubData ++
+ For programs not acquired this includes but is not limited to +
++ attachShader + bindAttribLocation + drawArrays + drawElements + deleteProgram + getActiveAttrib + getActiveUniform + getAttribLocation + getUniformLocation + getProgramParameter + getProgramInfoLog + isProgram + linkProgram + useProgram + validateProgram ++
+
+ For programs acquired in READ_ONLY mode this includes but is not limited to
+
++ bindAttribLocation + deleteProgram + linkProgram ++
+ For renderbuffers not acquired this includes but is not limited to +
++ bindRenderbuffer + clear + deleteRenderbuffer + drawArrays + drawElements + framebufferRenderbuffer + isRenderbuffer + renderbufferStorage ++
+ For renderbuffers acquired in READ_ONLY mode this includes +
++ clear + deleteRenderbuffer + drawArrays + drawElements + renderbufferStorage ++
+ For shaders not acquired this includes but is not limited to +
++ attachShader + compileShader + deleteShader + getShaderSource + getShaderParameter + isShader + shaderSource ++
+ For shaders acquired in READ_ONLY mode this includes but is not limited to +
++ deleteShader + compileShader + shaderSource ++
+ For textures not acquired this includes but is not limited to +
++ bindTexture + clear + compressedTexImage2D + compressedTexSubImage2D + copyTexImage2D + copyTexSubImage2D + drawArrays + drawElements + deleteTexture + framebufferTexture2D + getTexParameter + isTexture + texImage2D + texParameter + texSubImage2D ++
+ For textures acquired in READ_ONLY mode this includes but is not limited to +
++ clear + compressedTexImage2D + compressedTexSubImage2D + copyTexImage2D + copyTexSubImage2D + drawArrays + drawElements + deleteTexture + texImage2D + texParameter + texSubImage2D ++
+
+ The term "not limited to" is intended to point out that extensions may enable
+ other functions to which these rules should apply. For example drawArraysInstancedANGLE
+ must follow the same rules as drawArrays.
+
+
+ Calling checkFramebufferStatus with the argument FRAMEBUFFER or DRAW_FRAMEBUFFER must
+ return FRAMEBUFFER_INCOMPLETE_ATTACHMENT if any of the resources referenced by the currently
+ bound framebuffer are not acquired for EXCLUSIVE access.
+ Calling checkFramebufferStatus with the argument READ_FRAMEBUFFER must return
+ FRAMEBUFFER_INCOMPLETE_ATTACHMENT if any of the resources referenced by the currently bound
+ framebuffer are not acquired for EXCLUSIVE or READ_ONLY access.
+
++ Note: This extension exposes the constants READ_FRAMEBUFFER and DRAW_FRAMEBUFFER only for + the purpose of calling checkFramebufferStatus. In particular, this extension does not enable + calling bindFramebuffer with either constant. +
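The completeness rule above distinguishes read targets from draw targets. This is a hedged sketch in plain JavaScript: the function and the string encodings of acquisition modes are invented for illustration, with 'NOT_ACQUIRED' standing in for an attachment the context has not acquired at all.

```javascript
// Pure-JS sketch of the checkFramebufferStatus rule: draw targets need
// EXCLUSIVE access to every attachment, read targets accept READ_ONLY too.
function attachmentStatus(target, attachmentModes) {
  var ok;
  if (target === 'READ_FRAMEBUFFER') {
    // Reading requires every attachment acquired EXCLUSIVE or READ_ONLY.
    ok = attachmentModes.every(function (m) {
      return m === 'EXCLUSIVE' || m === 'READ_ONLY';
    });
  } else {
    // FRAMEBUFFER / DRAW_FRAMEBUFFER require EXCLUSIVE access everywhere.
    ok = attachmentModes.every(function (m) { return m === 'EXCLUSIVE'; });
  }
  return ok ? 'FRAMEBUFFER_COMPLETE' : 'FRAMEBUFFER_INCOMPLETE_ATTACHMENT';
}
```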
++ A context that is deleted automatically releases all resources it has acquired. Note that + currently there is no way to explicitly delete a context. Contexts are deleted through + garbage collection. +
++ Note that implementing this extension changes the base class of the sharable resources. + Specifically: WebGLBuffer, WebGLProgram, WebGLRenderbuffer, WebGLShader, and WebGLTexture + change their base class from WebGLObject to WebGLSharedObject. +
+This extension exposes the ability to subscribe to a set of uniform targets + which can be used to populate uniforms within shader programs. This extension + is generic, but currently only supports mouse position as a subscription target. +
Background: +
The depth of the web pipeline makes it difficult to support low-latency
+ interaction, as event information retrieved via JavaScript
+ is outdated by the time it's displayed to clients. By populating event
+ information later in the pipeline one can reduce perceived input latency.
+
This extension creates a new buffer type, 'Valuebuffer', to
+ maintain the active state for predefined subscription targets. Since a
+ mechanism for buffering uniform information (UBOs) isn't available before 2.0,
+ an additional data type was needed. See 'New Types' for additional
+ information.
When this extension is enabled:
+Valuebuffer
object.Valuebuffer
+ object.Valuebuffer
object.Valuebuffer
object.Valuebuffer
object to a subscription target.Valuebuffer
object with the state of the subscriptions to
+ which it is subscribed.Valuebuffer
object.This interface is used to maintain a reference to internal
+ Valuebuffer
subscription states.
A Valuebuffer
abstracts the mapping of subscription targets to internal
+ state and acts as a single storage object for subscription information (e.g. current
+ mouse position). Clients can then use the object's data to populate uniform variables.
Post WebGL API 2.0, this abstraction could exist as a layer on top of UBOs
+ which manages the mapping of subscription targets to internal state and the mapping
+ of subscription targets to offsets within the buffer. The UBO would be used to store the
+ active buffer state as well as the uniform location mapping. Clients would be required to
+ state all their subscription targets at once to allocate the appropriate amount of memory.
+ Aside from this small change the implementation is essentially the same, with UBOs replacing
+ Valuebuffers
and relevant create, delete, bind methods being replaced.
+ Additionally, the inclusion of UBOs would replace the need for
+ uniformValuebuffer(...)
.
SUBSCRIBED_VALUES_BUFFER
+ is accepted as the target parameter to bindValuebuffer
SUBSCRIBED_VALUES_BUFFER
+ is accepted as the target parameter to subscribeValuebuffer
MOUSE_POSITION
+ is accepted as the subscription parameter to subscribeValuebuffer
SUBSCRIBED_VALUES_BUFFER
+ is accepted as the target parameter to populateSubscribedValues
SUBSCRIBED_VALUES_BUFFER
+ is accepted as the target parameter to uniformValuebuffer
MOUSE_POSITION
+ is accepted as the subscription parameter to uniformValuebuffer
+<script id="vshader" type="x-shader/x-vertex">
+  uniform ivec2 uMousePosition;
+
+  void main()
+  {
+    gl_Position = vec4(uMousePosition, 0, 1);
+  }
+</script>
+
+function init(gl) {
+  shader.uMousePosition = gl.getUniformLocation(shader, "uMousePosition");
+  ...
+
+  var ext = gl.getExtension('WEBGL_subscribe_uniform');
+
+  // Create the value buffer and subscribe to the mouse position target.
+  var valuebuffer = ext.createValuebuffer();
+  ext.bindValuebuffer(ext.SUBSCRIBED_VALUES_BUFFER, valuebuffer);
+  ext.subscribeValuebuffer(ext.SUBSCRIBED_VALUES_BUFFER, ext.MOUSE_POSITION);
+  ...
+}
+
+function draw(gl) {
+  // Populate the buffer and then populate the uniform from it.
+  ext.bindValuebuffer(ext.SUBSCRIBED_VALUES_BUFFER, valuebuffer);
+  ext.populateSubscribedValues(ext.SUBSCRIBED_VALUES_BUFFER);
+  ext.uniformValuebuffer(shader.uMousePosition,
+                         ext.SUBSCRIBED_VALUES_BUFFER,
+                         ext.MOUSE_POSITION);
+
+  gl.drawElements(...);
+}
This extension provides support for the Media
+ Capture Depth Stream Extensions. Specifically, it supports
+ uploading to a WebGL texture a video
element whose
+ source is a MediaStream
object containing a
+ depth track.
video
element whose source is a MediaStream
+ object containing a
+ depth track may be uploaded to a WebGL
+ texture of format RGB
and type
+ UNSIGNED_SHORT_5_6_5
.
+ var ext = gl.getExtension("WEBGL_texture_from_depth_video"); +if (ext) { + navigator.getUserMedia({ video: true }, successVideo, failureVideo); +} + +var depthVideo; + +function successVideo(s) { + // wire the stream into a <video> element for playback + depthVideo = document.querySelector('#video'); + depthVideo.src = URL.createObjectURL(s); + depthVideo.play(); +} + +// ... later, in the rendering loop ... + +if (ext) { + gl.texImage2D( + gl.TEXTURE_2D, + 0, + gl.RGB, + gl.RGB, + gl.UNSIGNED_SHORT_5_6_5, + depthVideo + ); +} + +<script id="fragment-shader" type="x-shader/x-fragment"> + varying vec2 v_texCoord; + // u_tex points to the texture unit containing the depth texture. + uniform sampler2D u_tex; + void main() { + vec4 floatColor = texture2D(u_tex, v_texCoord); + vec3 rgb = floatColor.rgb; + ... + float depth = 63488. * rgb.r + 2016. * rgb.g + 31. * rgb.b; + ... + } +</script>
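The decoding expression in the fragment shader above can be verified in plain JavaScript. A 16-bit depth value uploaded as RGB / UNSIGNED_SHORT_5_6_5 splits into 5-6-5 bit fields; sampling normalizes each field to [0, 1], and the weights 63488 (= 31 &middot; 2048), 2016 (= 63 &middot; 32) and 31 undo that normalization. The helper names here are illustrative, not part of the extension.

```javascript
// Simulate what texture upload + sampling does to a 16-bit depth value:
// split into 5-6-5 fields and normalize each field, as the sampler does.
function packDepth565(depth) {
  return {
    r: ((depth >> 11) & 0x1f) / 31, // top 5 bits, normalized
    g: ((depth >> 5) & 0x3f) / 63,  // middle 6 bits, normalized
    b: (depth & 0x1f) / 31          // low 5 bits, normalized
  };
}

function unpackDepth565(rgb) {
  // Same expression as the shader: 63488. * r + 2016. * g + 31. * b
  return Math.round(63488 * rgb.r + 2016 * rgb.g + 31 * rgb.b);
}
```

Because 63488 / 31 = 2048 and 2016 / 63 = 32 exactly, the round trip recovers the original 16-bit depth value.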
FLOAT
textures as FBO
+ attachments.This template for WebGL extensions is derived from the OpenGL extension + template. Refer to the OpenGL extension template for full + documentation of the content that should be contained in the sections + below. Because WebGL is fundamentally a Web API, its extensions are + specified in XML transformed with XSLT into HTML for easier + hyperlinking.
+ +Because most WebGL extensions are expected to simply mirror existing + OpenGL and OpenGL ES extensions, it is desirable to keep the WebGL + extension specifications as small as possible and simply refer to the + other specifications for the behavioral definitions.
+