It uses existing standards wherever possible, even if those standards have some shortcomings when used with VRML. Using existing standards instead of inventing new, incompatible standards makes it much easier for the Web developer, who can use existing tools to help create VRML content.
It also makes it much easier for somebody implementing the VRML standard, since libraries of code for popular standards already exist. VRML files may contain references to files in many other standard formats. Files containing Java or JavaScript code may be referenced and used to implement programmed behavior for the objects in your worlds. Each of these is an independent standard, chosen to be used with VRML because of its widespread use on the Internet. This book and the VRML97 specification describe how they are used with VRML; they do not attempt to define these other standards or describe how to create files in these other file formats.
The definition of how VRML should be used with other standards is generally done by the organizations that define those standards. Combining 2D and 3D information is often much better than either 2D or 3D alone. Therefore, VRML files based on the August version should read successfully into implementations based on the April revision. At present, standard file compression tools, such as gzip, are used to compress VRML files.
This yields a significant compression ratio on average and requires no extra implementation by the VRML browser.
However, many users and developers have requested even more compression. A binary format for VRML could produce much higher compression ratios. This proposal has been adopted as the working document for VRML's binary file format and will continue to be reviewed and revised, and eventually ratified as an addendum to the VRML specification.
Many users have requested that a standard programmer interface to VRML browsers be defined. This feature allows technical users to write external programs that communicate with a VRML browser. This work will continue to be reviewed and revised, and eventually ratified as an addendum to the VRML specification.
Many feel that the most important long-term issue for VRML is adding multiuser capabilities. The first implementations of these proposals should be interesting to see. VRML files describe 3D objects and worlds using a hierarchical scene graph. Entities in the scene graph are called nodes. Nodes store their data in fields, and nodes communicate by sending and receiving events along routes.
The VRML scene graph is a directed acyclic graph. Nodes can contain other nodes (some types of nodes may have "children") and may be contained in more than one node (they may have more than one "parent"), but a node must not contain itself.
This scene graph structure makes it easy to create large worlds or complicated objects from subparts. Each node type defines the names and types of events that instances of that type may generate or receive, and ROUTE statements define event paths between event generators and receivers. Sensors are the basic user interaction and animation primitives of VRML. This generates the vaultUnlocked eventOut, which starts a 'click' sound. The following IndexedFaceSet contained in a Shape node uses all four of the geometric property nodes to specify vertex coordinates, colours per vertex, normals per vertex, and texture coordinates per vertex (note that the material sets the overall transparency):
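The IndexedFaceSet example itself does not survive in this excerpt; a minimal reconstruction, with assumed coordinate, colour, normal, and texture-coordinate values, might look like:

```vrml
#VRML V2.0 utf8
Shape {
  appearance Appearance {
    # The material sets the overall transparency
    material Material { transparency 0.5 }
  }
  geometry IndexedFaceSet {
    coord Coordinate {            # vertex coordinates (assumed values)
      point [ 0 0 0, 1 0 0, 1 1 0, 0 1 0 ]
    }
    color Color {                 # one colour per vertex
      color [ 1 0 0, 0 1 0, 0 0 1, 1 1 0 ]
    }
    normal Normal {               # one normal per vertex
      vector [ 0 0 1, 0 0 1, 0 0 1, 0 0 1 ]
    }
    texCoord TextureCoordinate {  # one texture coordinate per vertex
      point [ 0 0, 1 0, 1 1, 0 1 ]
    }
    coordIndex [ 0, 1, 2, 3, -1 ]
  }
}
```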
VRML 2.0 supports prototyping as a mechanism for extending its set of nodes. The following is an example of a new node, RefractiveMaterial. This node behaves as a Material node with an added field, indexOfRefraction.
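The declaration itself is not reproduced in this excerpt; an EXTERNPROTO along the following lines (the interface fields and URLs are illustrative assumptions) would introduce such a node:

```vrml
#VRML V2.0 utf8
EXTERNPROTO RefractiveMaterial [
  exposedField SFFloat ambientIntensity
  exposedField SFColor diffuseColor
  exposedField SFFloat transparency
  exposedField SFFloat indexOfRefraction   # the added field
] [
  "urn:inet:foo.com:types:RefractiveMaterial",
  "http://www.foo.com/protos.wrl#RefractiveMaterial"
]

Shape {
  appearance Appearance {
    # Used exactly like a Material node
    material RefractiveMaterial { indexOfRefraction 0.3 }
  }
  geometry Sphere { }
}
```

Listing an ordinary URL after the URN gives browsers that do not recognize the URN a fallback location for the prototype.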
If the browser recognizes the URN, urn:inet:foo., it can use its own local implementation of the node instead of retrieving it from the network. The target parameter can be used by the anchor node to send a request to load a URL into another frame:
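For instance (the URL and frame name here are placeholders):

```vrml
#VRML V2.0 utf8
Anchor {
  url "http://somehost/somedoc.html"
  parameter [ "target=name_of_frame" ]   # load the document into this HTML frame
  children Shape {
    geometry Box { }
  }
}
```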
An Anchor may be used to bind the viewer to a particular viewpoint in a virtual world by specifying a URL ending with "#viewpointName", where viewpointName is the DEF name of a viewpoint defined in the world. A directional light source illuminates only the objects in its enclosing grouping node.
The light illuminates everything within this coordinate system, including the objects that precede it in the scene graph. This simple example defines a PointSet composed of 3 points.
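The example is not reproduced here; a sketch with assumed point positions, using the colours described in the text, might read:

```vrml
#VRML V2.0 utf8
Shape {
  geometry PointSet {
    coord DEF MyPoints Coordinate {
      point [ 0 0 0, 2 2 2, 3 0 0 ]   # positions are assumed
    }
    color Color { color [ 1 0 0, 0 1 0, 0 0 1 ] }   # red, green, blue
  }
}
Shape {
  geometry PointSet {
    coord USE MyPoints   # instance the Coordinate node defined above
    color Color { color [ 0.5 0.5 0, 0 0.5 0.5, 0.5 0 0.5 ] }   # different colours
  }
}
```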
The first point is red (1 0 0), the second point is green (0 1 0), and the third point is blue (0 0 1). The second PointSet instances the Coordinate node defined in the first PointSet, but defines different colours. The LOD node is typically used for switching between different versions of geometry at specified distances from the viewer. However, if the range field is left at its default value, the browser selects the most appropriate child from the list given.
It can make this selection based on performance or perceived importance of the object. Background nodes are specified in the local coordinate system and are affected by the accumulated rotation of their ancestors as described below.
Background nodes are bindable nodes as described in " 2. Once active, the Background is then bound to the browser's view. More details on the bind stack may be found in " 2. The backdrop is conceptually a partial sphere (the ground) enclosed inside of a full sphere (the sky) in the local coordinate system, with the viewer placed at the centre of the spheres. Both spheres have infinite radius (one epsilon apart), and each is painted with concentric circles of interpolated colour perpendicular to the local Y-axis of the sphere.
The Background node is subject to the accumulated rotations of its ancestors' transformations. Scaling and translation transformations are ignored. The sky sphere is always slightly farther away from the viewer than the ground sphere causing the ground to appear in front of the sky in cases where they overlap. The skyColor field specifies the colour of the sky at various angles on the sky sphere.
The first value of the skyColor field specifies the colour of the sky at 0.0 radians (the zenith). The skyAngle field specifies the angles from the zenith in which concentric circles of colour appear.
The zenith of the sphere is implicitly defined to be 0.0 radians. There must be one more skyColor value than there are skyAngle values.
The first colour value is the colour at the zenith, which is not specified in the skyAngle field. If the last skyAngle is less than pi, then the colour band between the last skyAngle and the nadir is clamped to the last skyColor. The sky colour is linearly interpolated between the specified skyColor values. The groundColor field specifies the colour of the ground at the various angles on the ground hemisphere. The first value of the groundColor field specifies the colour of the ground at 0.0 radians (the nadir).
The groundAngle field specifies the angles from the nadir at which the concentric circles of colour appear. The nadir of the sphere is implicitly defined at 0.0 radians. There must be one more groundColor value than there are groundAngle values. The first colour value is for the nadir, which is not specified in the groundAngle field.
The ground colour is linearly interpolated between the specified groundColor values. The panorama consists of six images, each of which is mapped onto a face of an infinitely large cube contained within the backdrop spheres and centred in the local coordinate system. The images are applied individually to each face of the cube. On the front, back, right, and left faces of the cube, when viewed from the origin looking down the negative Z-axis with the Y-axis as the view up direction, each image is mapped onto the corresponding face with the same orientation as if the image were displayed normally in 2D (backUrl to back face, frontUrl to front face, leftUrl to left face, and rightUrl to right face).
On the bottom face of the box, when viewed from the origin along the negative Y-axis with the negative Z-axis as the view up direction, the bottomUrl image is mapped onto the face with the same orientation as if the image were displayed normally in 2D. Alpha values in the panorama images (i.e., in two- or four-component images) make the panorama semi-transparent or transparent in regions, allowing the groundColor and skyColor to show through. See " 2. Often, the bottomUrl and topUrl images will not be specified, to allow sky and ground to show. The other four images may depict surrounding mountains or other distant scenery. Formats that can be rendered into a 2D image, such as CGM, may also be used.
Details on the url fields may be found in " 2. Panorama images may be one-component (greyscale), two-component (greyscale plus alpha), three-component (full RGB colour), or four-component (full RGB colour plus alpha).
Ground colours, sky colours, and panoramic images do not translate with respect to the viewer, though they do rotate with respect to the viewer. That is, the viewer can never get any closer to the background, but can turn to examine all sides of the panorama cube, and can look up and down to see the concentric rings of ground and sky if visible.
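A Background node combining graded sky and ground colours with a four-sided panorama might be written as follows (all colour values, angles, and file names are illustrative):

```vrml
#VRML V2.0 utf8
Background {
  skyColor [
    0.0 0.2 0.7,     # colour at the zenith (0.0 radians)
    0.0 0.5 1.0,
    1.0 1.0 1.0
  ]
  skyAngle [ 1.309, 1.571 ]       # one fewer angle than skyColor values
  groundColor [
    0.1 0.1  0.0,    # colour at the nadir (0.0 radians)
    0.4 0.25 0.2
  ]
  groundAngle [ 1.571 ]           # one fewer angle than groundColor values
  frontUrl "mountains_front.png"  # panorama images; top and bottom omitted
  backUrl  "mountains_back.png"   # so the sky and ground colours show
  leftUrl  "mountains_left.png"
  rightUrl "mountains_right.png"
}
```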
Background is not affected by Fog nodes. Therefore, if a Background node is active (i.e., bound) while a Fog node is also active, the Background node is displayed without any fog effects. It is the author's responsibility to set the Background values to match the Fog values (e.g., ground colours that fade into the fog colour). The Billboard node is a grouping node which modifies its coordinate system so that the Billboard node's local Z-axis turns to point at the viewer. The Billboard node's children may be any legal children nodes. The axisOfRotation field specifies which axis to use to perform the rotation.
This axis is defined in the local coordinate system. A special case of billboarding is viewer-alignment. In this case, the object rotates to keep the billboard's local Y-axis parallel with the viewer's up vector.
This special case is distinguished by setting the axisOfRotation to (0, 0, 0). When the axisOfRotation and the billboard-to-viewer line are coincident, the plane cannot be established and the resulting rotation of the billboard is undefined.
For example, if the axisOfRotation is set to (0, 1, 0) (the Y-axis) and the viewer flies over the billboard and peers directly down the Y-axis, the results are undefined.
The bboxCenter and bboxSize fields specify a bounding box that encloses the Billboard node's children. A default bboxSize value, (-1, -1, -1), implies that the bounding box is not specified and, if needed, must be calculated by the browser.
A description of the bboxCenter and bboxSize fields is contained in " 2. The Box node specifies a rectangular parallelepiped (box) centred at (0, 0, 0) in the local coordinate system and aligned with the local coordinate axes. The Box node's size field specifies the extents of the box along the X-, Y-, and Z-axes respectively, and each component value must be greater than 0. Figure illustrates the Box node. Textures are applied individually to each face of the box. TextureTransform affects the texture coordinates of the Box.
The Box node's geometry requires outside faces only. When viewed from the inside, the results are undefined. The Collision node is a grouping node that specifies the collision detection properties for its children (and their descendants), specifies surrogate objects that replace its children during collision detection, and sends events signaling that a collision has occurred between the user's avatar and the Collision node's geometry or surrogate.
By default, all geometric nodes in the scene are collidable with the viewer except IndexedLineSet, PointSet, and Text. Browsers shall detect geometric collisions between the user's avatar see NavigationInfo and the scene's geometry, and prevent the avatar from 'entering' the geometry.
If there are no Collision nodes specified in a scene, browsers shall detect collision with all objects during navigation. The Collision node's collide field enables and disables collision detection.
If collide is set to FALSE, the children and all descendants of the Collision node shall not be checked for collision, even though they are drawn. Collision nodes with the collide field set to TRUE detect the nearest collision with their descendent geometry or proxies.
Not all geometry is collidable. Each geometry node specifies its own collision characteristics. When the nearest collision is detected, the collided Collision node sends the time of the collision through its collideTime eventOut.
This behaviour is recursive. If a Collision node contains a child, descendant, or proxy (see below) that is a Collision node, and both Collision nodes detect that a collision has occurred, both send a collideTime event at the same time. The bboxCenter and bboxSize fields specify a bounding box that encloses the Collision node's children.
A description of the bboxCenter and bboxSize fields may be found in " 2. The collision proxy, defined in the proxy field, is any legal children node as described in " 2. The proxy is used strictly for collision detection; it is not drawn. If the value of the collide field is FALSE, collision detection is not performed with the children or proxy descendent nodes.
If the root node of a scene is a Collision node with the collide field set to FALSE, collision detection is disabled for the entire scene regardless of whether descendent Collision nodes have set collide TRUE. If the value of the collide field is TRUE and the proxy field is non-NULL, the proxy field defines the scene on which collision detection is performed. If the proxy value is NULL, collision detection is performed against the children of the Collision node.
If proxy is specified, any descendent children of the Collision node are ignored during collision detection. If children is empty, collide is TRUE, and proxy is specified, collision detection is performed against the proxy but nothing is displayed. In this manner, invisible collision objects may be supported. The collideTime eventOut generates an event specifying the time when the user's avatar see NavigationInfo intersects the collidable children or proxy of the Collision node.
An ideal implementation computes the exact time of intersection. Implementations may approximate the ideal by sampling the positions of collidable objects and the user. The NavigationInfo node contains additional parameters that control the size of the user's avatar.
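A Collision node that uses a simple surrogate for an elaborate child, as described above, might be written as (the inlined file name is a placeholder):

```vrml
#VRML V2.0 utf8
Collision {
  collide TRUE
  # Collision is tested against this simple, undrawn surrogate...
  proxy Shape {
    geometry Box { size 4 2 4 }
  }
  # ...while this (possibly complex) geometry is what gets drawn.
  children [
    Inline { url "detailed_statue.wrl" }   # file name assumed
  ]
}
```

Leaving children empty while keeping collide TRUE and a proxy yields an invisible collision object, as noted above.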
Color nodes are only used to specify multiple colours for a single geometric shape, such as colours for the faces or vertices of an IndexedFaceSet. A Material node is used to specify the overall material parameters of lit geometry. If both a Material and a Color node are specified for a geometric shape, the colours shall replace the diffuse component of the material. Textures take precedence over colours; specifying both a Texture and a Color node for a geometric shape will result in the Color node being ignored.
Details on lighting equations are described in " 2. The number of colours in the keyValue field shall be equal to the number of keyframes in the key field. Results are undefined when interpolating between two consecutive keys with complementary hues. The Cone node specifies a cone which is centred in the local coordinate system and whose central axis is aligned with the local Y-axis.
The bottomRadius field specifies the radius of the cone's base, and the height field specifies the height of the cone from the centre of the base to the apex. By default, the cone has a radius of 1.0 at its base and a height of 2.0. Both bottomRadius and height must be greater than 0. Figure illustrates the Cone node.
The side field specifies whether the sides of the cone are created, and the bottom field specifies whether the bottom cap of the cone is created. A value of TRUE specifies that this part of the cone exists, while a value of FALSE specifies that this part does not exist (it is not rendered or eligible for collision or sensor intersection tests).
When a texture is applied to the sides of the cone, the texture wraps counterclockwise from above starting at the back of the cone. The bottom cap texture appears right side up when the top of the cone is rotated towards the -Z-axis. TextureTransform affects the texture coordinates of the Cone. The Cone geometry requires outside faces only.
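A complete textured Cone using the default dimensions discussed above might look like this (the texture file name is a placeholder):

```vrml
#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material { }
    texture ImageTexture { url "label.png" }   # file name assumed
  }
  geometry Cone {
    bottomRadius 1.0   # default radius of the base
    height       2.0   # default height, base centre to apex
    side   TRUE        # render the sides
    bottom TRUE        # render the bottom cap
  }
}
```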
This node linearly interpolates among a set of MFVec3f values. The number of coordinates in the keyValue field shall be an integer multiple of the number of keyframes in the key field. The Cylinder node specifies a capped cylinder centred at (0, 0, 0) in the local coordinate system and with a central axis oriented along the local Y-axis. The radius field specifies the radius of the cylinder and the height field specifies the height of the cylinder along the central axis.
Both radius and height shall be greater than 0. Figure illustrates the Cylinder node. Parts which do not exist are not rendered and are not eligible for intersection tests (e.g., collision detection). When a texture is applied to a cylinder, it is applied differently to the sides, top, and bottom. On the sides, the texture wraps counterclockwise from above, starting at the back of the cylinder. TextureTransform affects the texture coordinates of the Cylinder node.
The Cylinder node's geometry requires outside faces only. The CylinderSensor node maps pointer motion (e.g., from a mouse or wand) into a rotation on an invisible cylinder aligned with the Y-axis of the local coordinate system. The CylinderSensor uses the descendent geometry of its parent node to determine whether it is liable to generate events. The enabled exposedField enables and disables the CylinderSensor node.
If TRUE, the sensor reacts appropriately to user events. If enabled receives a TRUE event the sensor is enabled and ready for user activation. A CylinderSensor node generates events when the pointing device is activated while the pointer is indicating any descendent geometry nodes of the sensor's parent group.
Upon activation of the pointing device while indicating the sensor's geometry, an isActive TRUE event is sent. The initial acute angle between the bearing vector and the local Y-axis of the CylinderSensor node determines whether the sides of the invisible cylinder or the caps disks are used for manipulation. The perpendicular vector from the initial intersection point to the Y-axis defines zero rotation about the Y-axis.
If the initial acute angle between the bearing vector and the local Y-axis of the CylinderSensor node is greater than or equal to diskAngle , then the sensor behaves like a cylinder. The shortest distance between the point of intersection between the bearing and the sensor's geometry and the Y-axis of the parent group's local coordinate system determines the radius of an invisible cylinder used to map pointing device motion and marks the zero rotation value.
More details are available in " 2. When the sensor generates an isActive TRUE event, it grabs all further motion events from the pointing device until it is released and generates an isActive FALSE event (other pointing-device sensors cannot generate events during this time). If a 3D pointing device (e.g., a wand) is in use, isActive events typically reflect whether the pointer is within or in contact with the sensor's geometry. If the initial angle results in cylinder rotation (as opposed to disk behaviour) and if the pointing device is dragged off the cylinder while activated, browsers may interpret this in a variety of ways (e.g., clamp the rotation value).
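A minimal CylinderSensor hookup sketching the rotation mapping described above (the dial geometry is illustrative):

```vrml
#VRML V2.0 utf8
Group {
  children [
    DEF Sensor CylinderSensor { diskAngle 0.262 }   # ~15 degrees
    DEF Dial Transform {
      children Shape {
        appearance Appearance { material Material { } }
        geometry Cylinder { radius 1.0 height 0.2 }
      }
    }
  ]
}
# Drive the dial's rotation from the sensor's output
ROUTE Sensor.rotation_changed TO Dial.set_rotation
```

The sensor responds to pointer activity over any descendent geometry of its parent Group, here the dial itself.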
The minAngle and maxAngle fields are restricted to the range [-2π, 2π]. Further information about this behaviour may be found in " 2. The DirectionalLight node defines a directional light source that illuminates along rays parallel to a given 3-dimensional vector.
A description of the ambientIntensity , color , intensity , and on fields is in " 2. The direction field specifies the direction vector of the illumination emanating from the light source in the local coordinate system. Light is emitted along parallel rays from an infinite distance away. A directional light source illuminates only the objects in its enclosing parent group.
The light may illuminate everything within this coordinate system, including all children and descendants of its parent group. The accumulated transformations of the parent nodes affect the light. DirectionalLight nodes do not attenuate with distance.
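The scoping rule can be seen in a sketch like the following, where only geometry inside the light's parent group is lit:

```vrml
#VRML V2.0 utf8
Group {
  children [
    DirectionalLight {
      direction 0 0 -1    # shines along -Z in the local coordinate system
      intensity 0.8
    }
    Shape { geometry Sphere { } }   # lit: inside the light's parent group
  ]
}
Transform {
  translation 5 0 0
  children Shape { geometry Sphere { } }   # not lit by the light above
}
```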
A precise description of VRML's lighting equations is contained in " 2. The geometry is described by a scalar array of height values that specify the height of a surface above each point of the grid. The xDimension and zDimension fields indicate the number of elements of the grid height array in the X and Z directions. Both xDimension and zDimension must be greater than or equal to zero. The vertex locations for the rectangles are defined by the height field and the xSpacing and zSpacing fields.
The color field specifies per-vertex or per-quadrilateral colours for the ElevationGrid node depending on the value of colorPerVertex. The colorPerVertex field determines whether colours specified in the color field are applied to each vertex or each quadrilateral of the ElevationGrid node. The normal field specifies per-vertex or per-quadrilateral normals for the ElevationGrid node. If the normal field is NULL, the browser shall automatically generate normals, using the creaseAngle field to determine if and how normals are smoothed across the surface (see " 2.
The normalPerVertex field determines whether normals are applied to each vertex or each quadrilateral of the ElevationGrid node. The texCoord field specifies per-vertex texture coordinates for the ElevationGrid node. The default texture coordinates range from (0, 0) at the first vertex to (1, 1) at the last vertex. The S texture coordinate is aligned with the positive X-axis, and the T texture coordinate with the positive Z-axis.
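A small ElevationGrid illustrating these fields (the height values are arbitrary):

```vrml
#VRML V2.0 utf8
Shape {
  appearance Appearance { material Material { } }
  geometry ElevationGrid {
    xDimension 3     # 3 x 3 grid of height samples
    zDimension 3
    xSpacing 1.0     # spacing between samples along X
    zSpacing 1.0     # spacing between samples along Z
    height [
      0.0 0.0 0.0,
      0.0 1.0 0.0,   # a single raised point in the middle
      0.0 0.0 0.0
    ]
  }
}
```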
The ccw, solid, and creaseAngle fields are described in " 2. By default, the quadrilaterals are defined with a counterclockwise ordering. Hence, the Y-component of the normal is positive. Backface culling is enabled when the solid field is TRUE. The Extrusion node specifies geometric shapes based on a two-dimensional cross-section extruded along a three-dimensional spine in the local coordinate system.
The cross-section can be scaled and rotated at each spine point to produce a wide variety of shapes. Shapes are constructed as follows. The cross-section is first scaled by the first scale parameter. It is then translated by the first spine point and oriented using the first orientation parameter (as explained later).
The same procedure is followed to place a cross-section at the second spine point, using the second scale and orientation values. Corresponding vertices of the first and second cross-sections are then connected, forming a quadrilateral polygon between each pair of vertices.
This same procedure is then repeated for the rest of the spine points, resulting in a surface extrusion along the spine. The final orientation of each cross-section is computed by first orienting it relative to the spine segments on either side of the point at which the cross-section is placed.
This is known as the spine-aligned cross-section plane (SCP), and is designed to provide a smooth transition from one spine segment to the next (see Figure). The SCP is then rotated by the corresponding orientation value. This rotation is performed relative to the SCP. For example, to impart twist in the cross-section, a rotation about the Y-axis (0 1 0) would be used. Other orientations are valid and rotate the cross-section out of the SCP.
The SCP is computed by first computing its Y-axis and Z-axis, then taking the cross product of these to determine the X-axis.
This results in a plane that is the approximate tangent of the spine at each point, as shown in Figure. First the Y-axis is determined. Once the Y- and Z-axes have been computed, the X-axis can be calculated as their cross-product. If the number of scale or orientation values is greater than the number of spine points, the excess values are ignored. If they contain one value, it is applied at all spine points.
If the number of scale or orientation values is greater than one but less than the number of spine points, the results are undefined. The scale values shall be positive.
If the three points used in computing the Z-axis are collinear, the cross-product is zero so the value from the previous point is used instead.
If the Z-axis of the first point is undefined (because the spine is not closed and the first two spine segments are collinear), then the Z-axis for the first spine point with a defined Z-axis is used. If the entire spine is collinear, the SCP is computed by finding the rotation of a vector along the positive Y-axis (v1) to the vector formed by the spine points (v2).
If two points are coincident, they both have the same SCP. If each point has a different orientation value, then the surface is constructed by connecting edges of the cross-sections as normal. This is useful in creating revolved surfaces. Note: combining coincident and non-coincident spine segments, as well as other combinations, can lead to interpenetrating surfaces which the extrusion algorithm makes no attempt to avoid.
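Putting the spine, scale, orientation, and cap fields together, a tapered, twisted extrusion might be written as follows (all values are illustrative):

```vrml
#VRML V2.0 utf8
Shape {
  appearance Appearance { material Material { } }
  geometry Extrusion {
    crossSection [ 1 1, 1 -1, -1 -1, -1 1, 1 1 ]   # closed square
    spine [ 0 0 0, 0 1 0, 0 2 0 ]                  # straight spine along Y
    scale [ 1 1, 0.8 0.8, 0.5 0.5 ]                # taper towards the top
    orientation [ 0 0 1 0, 0 1 0 0.3, 0 1 0 0.6 ]  # twist about the spine
    beginCap TRUE
    endCap   TRUE
  }
}
```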
Extrusion has three parts: the sides, the beginCap (the surface at the initial end of the spine), and the endCap (the surface at the final end of the spine). When the beginCap or endCap fields are specified as TRUE, planar cap surfaces will be generated regardless of whether the crossSection is a closed curve.
If crossSection is not a closed curve, the caps are generated by adding a final point to crossSection that is equal to the initial point. An open surface can still have a cap, resulting (for a simple case) in a shape analogous to a soda can sliced in half vertically. These surfaces are generated even if spine is also a closed curve. Texture coordinates are automatically generated by Extrusion nodes. The LOD node is typically used for switching between different versions of geometry at specified distances from the viewer. However, if the range field is left at its default value, the browser selects the most appropriate child from the list given. It can make this selection based on performance or perceived importance of the object. Children should be listed with the most detailed version first, just as for the normal case.
This "performance LOD" feature can be combined with the normal LOD function to give the browser a selection of children from which to choose at each distance. In this example, the browser is free to choose either a detailed or a less-detailed version of the object when the viewer is closer than 10 meters as measured in the coordinate space of the LOD.
The browser should display the less detailed version of the object if the viewer is between 10 and 50 meters and should display nothing at all if the viewer is farther than 50 meters. Browsers should try to honor the hints given by authors, and authors should try to give browsers as much freedom as they can to choose levels of detail based on performance.
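The example described above might be expressed with nested LOD nodes (the geometry nodes are stand-ins):

```vrml
#VRML V2.0 utf8
LOD {
  range [ 10, 50 ]
  level [
    # Closer than 10 m: a rangeless inner LOD lets the browser pick
    # either version based on performance.
    LOD {
      level [
        Shape { geometry Sphere { } }   # detailed version (assumed)
        Shape { geometry Box { } }      # less detailed version (assumed)
      ]
    }
    # Between 10 m and 50 m: the less detailed version only.
    Shape { geometry Box { } }
    # Beyond 50 m: draw nothing.
    WorldInfo { }
  ]
}
```

Using an empty WorldInfo node as the last level is a common way to make a distant object disappear entirely.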
For best results, ranges should be specified only where necessary, and LOD nodes should be nested with and without ranges. The TimeSensor is very flexible. Shuttles and pendulums are great building blocks for composing interesting animations.
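A minimal shuttle built from these blocks, sketched with assumed values:

```vrml
#VRML V2.0 utf8
DEF Clock TimeSensor {
  cycleInterval 2.0   # one full back-and-forth every 2 seconds
  loop TRUE
}
DEF Mover PositionInterpolator {
  key      [ 0.0, 0.5, 1.0 ]
  keyValue [ -1 0 0,  1 0 0,  -1 0 0 ]   # shuttle between two points
}
DEF Shuttle Transform {
  children Shape {
    appearance Appearance { material Material { } }
    geometry Sphere { radius 0.3 }
  }
}
ROUTE Clock.fraction_changed TO Mover.set_fraction
ROUTE Mover.value_changed TO Shuttle.set_translation
```

A pendulum is built the same way, substituting an OrientationInterpolator and routing its value_changed to the Transform's set_rotation.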