Term
Frame buffers typically store _____ for each pixel which may be recalled and used to redisplay the image over and over again |
|
Definition
color values |
|
Term
Frame buffers typically store information for each pixel which may be recalled and used to redisplay the image over and over again through _____ technology |
|
Definition
refresh |
|
Term
Raster graphics file formats which are built on the frame buffer concept and are used commercially include... |
|
Definition
TARGA, TIFF, JPEG, GIF, BMP, and other variations |
|
|
Term
The generic term in computer graphics for “creating a picture” from a scene’s models, surfaces, lighting, and camera angles |
|
Definition
Rendering |
|
Term
Basic aspects of any rendering algorithm include... |
|
Definition
Visible surface determination, Shading, and Scanline conversion |
|
|
Term
In rendering, what the camera sees of objects in a scene |
|
Definition
Visible surface determination |
|
|
Term
In rendering, assigning surface properties to objects |
|
Definition
Shading |
|
Term
In rendering, the projection of a 3D scene's world coordinates into the 2D raster image while determining the resulting color of each pixel based on lighting, orientation, and other factors |
|
Definition
Scanline conversion |
|
Term
The rendering methods in most common commercial use are currently based on the _____ concept |
|
Definition
Z-buffer |
|
Term
The visible-surface determination method in which each pixel records (in addition to color) its distance from the camera, its angle, light source orientation, and other information which define the visible structure of the scene |
|
Definition
Z-buffer rendering |
|
Term
Z-buffer rendering solves what three questions? |
|
Definition
1) Is this object (surface) visible from this pixel (point) on the screen using the current point of view?
2) Is it the closest object to that pixel (point) that has been encountered so far?
3) Given the closest visible point on the surface from this pixel (point) on the screen, what color is the surface there? |
|
|
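The three z-buffer questions above reduce to a per-pixel depth comparison. A minimal Python sketch, assuming hypothetical "fragments" of the form (x, y, depth, color) produced by scan conversion:

```python
# A minimal sketch of the z-buffer idea, not a production rasterizer.

WIDTH, HEIGHT = 4, 4

def zbuffer_render(fragments, width=WIDTH, height=HEIGHT):
    depth = [[float("inf")] * width for _ in range(height)]   # farthest possible
    color = [[(0, 0, 0)] * width for _ in range(height)]      # background color
    for x, y, z, c in fragments:
        # Questions 1 and 2: is this surface visible at this pixel,
        # and is it the closest one encountered so far?
        if z < depth[y][x]:
            depth[y][x] = z       # record the new closest distance
            color[y][x] = c       # question 3: shade with that surface's color
    return color

frags = [(1, 1, 5.0, (255, 0, 0)),   # red surface, farther away
         (1, 1, 2.0, (0, 0, 255))]   # blue surface, closer: wins the pixel
img = zbuffer_render(frags)
```

Submission order does not matter: whichever fragment is closest when all have been processed owns the pixel.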
Term
Attributed to Ed Catmull, University of Utah, 1974 |
|
Definition
Z-buffer rendering |
|
Term
The generic term used to describe the optical qualities of a surface such as color and shininess |
|
Definition
Shading |
|
Term
The four most commonly used shading algorithms, in order of complexity |
|
Definition
Lambert, Gouraud, Phong, and Blinn |
|
|
Term
Shading properties are typically calculated based on polygon "_____" |
|
Definition
normals |
|
Term
A vector which is at a right angle (perpendicular) to a polygonal face on a surface and is used to define which way a surface faces for rendering calculations |
|
Definition
Normal vector |
|
Term
A normal vector is also known as...? |
|
Definition
A surface normal |
|
Term
The basic diffuse reflection component found in matte or dull surfaces. Each surface on a 3D model reflects and scatters light equally in every direction, thus creating a faceted or flat look to a polygon mesh model. |
|
Definition
Lambert shading (flat shading) |
|
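The diffuse reflection described above can be sketched as a dot product: intensity is proportional to the cosine of the angle between the surface normal N and the light direction L, clamped at zero. The unit vectors below are hypothetical examples.

```python
import math

# A sketch of the Lambert diffuse term, assuming unit-length vectors.

def lambert(normal, light_dir):
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, n_dot_l)   # back-facing surfaces receive no light

# Light hitting the surface head-on, at a grazing angle, and at 45 degrees:
head_on = lambert((0, 0, 1), (0, 0, 1))   # full intensity
grazing = lambert((0, 0, 1), (1, 0, 0))   # no contribution
angled = lambert((0, 0, 1), (0, math.sin(math.pi / 4), math.cos(math.pi / 4)))
```

Because the whole polygon shares one normal, every pixel on the face gets the same intensity, producing the faceted look the card describes.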
Term
An interpolated shader method adding a smooth alternative to flat shading. It creates the illusion of a continuous smooth surface by interpolating color values across adjacent faces in a polygonal model. |
|
Definition
Gouraud shading |
|
Term
Shader method developed by Henri G. (Ph.D., University of Utah, 1971) |
|
Definition
Gouraud shading |
|
Term
Shader method introducing the specular reflection component of surfaces, giving shiny highlights. It interpolates the vertex normal as opposed to the vertex intensity as in Gouraud shading. It works very well for plastic surfaces. |
|
Definition
Phong shading |
|
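The interpolation difference between the two methods can be sketched on a single edge. The vertex normals and the simple diffuse shade function below are hypothetical: Gouraud shades at the vertices and interpolates the resulting intensities, while Phong interpolates the normals and shades per pixel.

```python
# A sketch contrasting Gouraud and Phong interpolation across one edge.

def lerp(a, b, t):
    return tuple(av + (bv - av) * t for av, bv in zip(a, b))

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def shade(normal, light=(0, 0, 1)):   # simple diffuse shade
    return max(0.0, sum(n * l for n, l in zip(normal, light)))

n0, n1 = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)   # normals at the two vertices
t = 0.5                                      # halfway along the edge

gouraud = shade(n0) + (shade(n1) - shade(n0)) * t   # interpolate intensities
phong = shade(normalize(lerp(n0, n1, t)))           # interpolate normals
```

Halfway along, Gouraud gives 0.5 while Phong gives about 0.707: per-pixel normals preserve highlight shapes that vertex-only shading smears out.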
Term
A shader method developed by Bui Tuong-P. (Ph.D., University of Utah, 1975). |
|
Definition
Phong shading |
|
Term
Shader method that uses diffuse, specular, eccentricity, and refractive index attributes. It allows variations in shading that can incorporate metallic, glass, and other material property attributes. |
|
Definition
Blinn shading |
|
Term
A shader method developed by Jim B. in 1977 |
|
Definition
Blinn shading |
|
Term
Shader that creates surfaces with elliptical to crescent shaped highlights. These highlights are good for modeling hair, glass, or brushed metal. The basic parameters are similar to those for Blinn and Phong shaders, with the exception that the elliptical |
|
Definition
Anisotropic shader |
|
Term
Shader that creates two highlights that consist of two layers, each of them anisotropic. The highlights are transparent to each other. Where they overlap, the shader blends their colors. They may be overlapped, blended or positioned at opposing angles to |
|
Definition
Multi-Layer shader |
|
Term
Shaders that create a cartoon or comic book effect. Rather than the three-dimensional, realistic effect most other materials provide, Ink 'n Paint provides flat shading with “inked” borders. Instead of gradient color display, calculated variations in val |
|
Definition
Ink 'n Paint (toon) shaders |
|
Term
Ink/Paint Shaders aka...? |
|
Definition
Toon (cel) shaders |
|
Term
Refers to the parameters described that define how a surface’s appearance is calculated with a renderer |
|
Definition
Material |
|
Term
Surface attributes such as _____ are generally contained in a material description. |
|
Definition
color, textural roughness, opacity, shininess, reflectivity, and specularity |
|
|
Term
The application of any 2D raster or procedurally generated image to a 3D geometric surface, usually for the purpose of adding detail or realism |
|
Definition
Texture mapping |
|
Term
Most commonly applied as color information, _____ can also be applied to modify an object's surface normals as a bump map or to actually modify or displace the geometry of the surface itself |
|
Definition
texture maps |
|
Term
Rendering technique which builds an image by tracing rays from the observer, bouncing them off the surfaces of objects in the scene and tracing them back to the light sources that illuminate the scene |
|
Definition
Ray tracing (ray traced rendering) |
|
Term
Ray Traced Rendering is the _____ of the Physics principle of bouncing rays of light from their source back to the observer |
|
Definition
reverse |
|
Term
Ray Traced Rendering characteristics can take into account... |
|
Definition
transparency level, refraction, and depth (a finite limit to the number of bounces or secondary rays can be set). |
|
|
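The finite limit on secondary rays mentioned above can be sketched with a toy recursion. The all-mirror scene below is hypothetical (every ray spawns exactly one secondary ray); real tracers also spawn refraction rays and weight each bounce by transparency and reflectivity.

```python
# A sketch of the depth limit on secondary rays in ray tracing.

MAX_DEPTH = 3   # finite limit on the number of bounces

def trace(depth=0):
    if depth >= MAX_DEPTH:
        return 0                  # stop: no further bounces contribute
    reflected = trace(depth + 1)  # follow one secondary (reflection) ray
    return 1 + reflected          # count this ray segment's contribution

bounces = trace()
```

Without the depth test, two facing mirrors would recurse forever; the limit trades a small loss of accuracy for bounded render time.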
Term
Attributed to Bob Goldstein in the late 1960s |
|
Definition
Ray tracing |
|
Term
This is a coordinate system based method of controlling how a 2D raster or procedurally generated image is wrapped, placed, oriented, scaled, or repeated (tiled) on the surface(s) of a 3D geometric object |
|
Definition
Mapping Coordinate Systems |
|
|
Term
Usually referred to as U,V,W coordinates |
|
Definition
Mapping Coordinate Systems |
|
|
Term
First developed by Edwin Catmull at the University of Utah in 1974 |
|
Definition
Mapping Coordinate Systems |
|
|
Term
4 types of Mapping Coordinate Systems |
|
Definition
Planar, Cylindrical, Spherical, Box |
|
|
Term
This is to procedurally repeat an image more than once, as with a brick wall or a pattern of tiles on a vinyl floor |
|
Definition
Tiling |
|
Term
This technique is handy for covering large surfaces which consist of geometrically repeating patterns. It is not well suited for more complex organic or natural textures such as skin, fur, or wood grain, which have a more randomized quality |
|
Definition
Tiling |
|
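Tiling can be sketched as a UV wrap: scale the coordinates by the repeat count and keep only the fractional part, so the same image repeats across the surface. The repeat counts below are hypothetical.

```python
# A sketch of tiling via wrapped UV coordinates.

def tile_uv(u, v, repeats_u=4, repeats_v=4):
    # Scale by the repeat count, then keep only the fractional part,
    # which maps every tile back into the single source image.
    return (u * repeats_u) % 1.0, (v * repeats_v) % 1.0

tile_uv(0.0, 0.0)   # start of the first tile
tile_uv(0.3, 0.3)   # lands roughly 20% into the second tile
```

This is why seams matter: the right edge of the image abuts its own left edge at every wrap.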
Term
A good tileable image can be repeated seamlessly without noticing the _____ of each image |
|
Definition
edges (seams) |
|
Term
The technique for creating surface detail through surface normal perturbation |
|
Definition
Bump mapping |
|
Term
The bump map shader function is defined by _____ data in which black creates the illusion of low areas and white creates the illusion of high or peaked areas on the surface and all values in between create gradient transitions |
|
Definition
8-bit (grayscale) |
|
Term
Bump Mapping only perturbs _____ and does not directly affect _____ the way displacement mapping does |
|
Definition
surface normals, underlying geometry |
|
|
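The normal perturbation described on these cards can be sketched with finite differences over a grayscale height map: the local gradient tilts the normal while the geometry stays flat. The 3x3 sample grids below are hypothetical 8-bit values.

```python
# A sketch of bump mapping: perturb the normal from a height map's gradient.

def perturbed_normal(heights, x, y, strength=1.0 / 255.0):
    # Finite differences approximate the height gradient at (x, y).
    du = (heights[y][x + 1] - heights[y][x - 1]) * strength
    dv = (heights[y + 1][x] - heights[y - 1][x]) * strength
    # Tilt a flat (0, 0, 1) normal against the gradient and renormalize.
    n = (-du, -dv, 1.0)
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

flat = [[128] * 3 for _ in range(3)]
flat_n = perturbed_normal(flat, 1, 1)   # uniform data: normal stays upright

ramp = [[0, 128, 255]] * 3
ramp_n = perturbed_normal(ramp, 1, 1)   # tilts away from the bright (high) side
```

Because only the normal changes, the lighting suggests relief, but the silhouette of the object remains unchanged, unlike displacement mapping.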
Term
Developed by Jim Blinn as a graduate student at the University of Utah in 1976 |
|
Definition
Bump mapping |
|
Term
The complement of transparency |
|
Definition
Opacity |
|
Term
An object that is 20% opaque would be _____ transparent |
|
Definition
80% |
|
Term
Shader function that is defined by 8-bit (grayscale) data in which black renders the surface completely transparent, white renders it completely opaque, and all values in between create gradient variation in opacity. |
|
Definition
Opacity (transparency) map |
|
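An opacity-map lookup can be sketched as a blend: the 8-bit sample is normalized to an alpha in [0, 1] that mixes the surface color over whatever lies behind it. The color and sample values below are hypothetical.

```python
# A sketch of applying one opacity-map sample to a surface color.

def apply_opacity(surface, background, sample):
    alpha = sample / 255.0   # grayscale value -> opacity in [0, 1]
    return tuple(s * alpha + b * (1.0 - alpha)
                 for s, b in zip(surface, background))

red, black = (255.0, 0.0, 0.0), (0.0, 0.0, 0.0)
apply_opacity(red, black, 255)          # white sample: fully opaque
apply_opacity(red, black, 0)            # black sample: fully transparent
half = apply_opacity(red, black, 128)   # mid gray: roughly half-strength red
```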
Term
A _____ is one that is not an applied image but rather a pattern created mathematically by a programming procedure such as fractal or random noise functions |
|
Definition
procedural texture (procedural map) |
|
Term
A few procedural type examples are... |
|
Definition
Checker, Cellular, Wood, Splat, Noise, Tile |
|
|
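The Checker example from the card above can be sketched as pure arithmetic on (u, v) coordinates, with no image file involved; the colors and scale parameter are hypothetical.

```python
import math

# A sketch of a procedural checker texture: the pattern is computed
# from UV coordinates by a formula rather than read from an image.

def checker(u, v, color_a=(255, 255, 255), color_b=(0, 0, 0), scale=1.0):
    # Alternate colors whenever the integer cell index changes parity.
    if (math.floor(u * scale) + math.floor(v * scale)) % 2 == 0:
        return color_a
    return color_b

checker(0.25, 0.25, scale=2)   # cell (0, 0): first color
checker(0.75, 0.25, scale=2)   # cell (1, 0): second color
```

Because the pattern is a function, it has unlimited resolution and needs no storage; noise and fractal textures work the same way with a different formula.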
Term
Can be used on top of otherwise plain surfaces to break up evenly colored areas and to add a more realistic look |
|
Definition
Procedural textures |
|
Term
An image that is literally projected (as if with a slide projector) onto objects in a scene. This image is not dependent upon surface mapping structure such as UV coordinates and is therefore an easy way to apply label maps and animated images to surfaces |
|
Definition
Projection map |
|
Term
The most common type of parallel projection |
|
Definition
Orthographic projection |
|
Term
Also called flat multiview projection |
|
Definition
Orthographic projection |
|
Term
Historically, this is the type of projection used in engineering and architectural drawing |
|
Definition
Orthographic projection |
|
Term
Views are typically classified as top, front, right side, bottom, left side, and back when used in 3D software programs. There is no apparent depth in the view because the scale is constant in the X, Y, and Z directions, thus causing parallel edges and |
|
Definition
Orthographic (parallel) projection |
|
Term
This is the optically correct foreshortening of an edge or feature on an object based on its distance from the observer (viewer) or camera |
|
Definition
Perspective |
|
Term
Parallel surfaces, edges and lines that recede away from the camera eventually converge at a vanishing point. Cameras in 3D software programs utilize this method for representing 3D objects in viewport scenes |
|
Definition
Perspective projection |
|
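The convergence described above comes from the perspective divide: dividing by distance makes far features smaller on screen. A minimal sketch with a hypothetical focal parameter:

```python
# A sketch of perspective projection onto an image plane.

def project(point, focal=1.0):
    x, y, z = point
    return (focal * x / z, focal * y / z)   # perspective divide by depth

# The same 1-unit horizontal offset shrinks on screen as the point recedes,
# which is why parallel edges appear to converge toward a vanishing point:
project((1.0, 0.0, 2.0))    # (0.5, 0.0)
project((1.0, 0.0, 10.0))   # (0.1, 0.0)
```

An orthographic view simply drops the z-divide, which is why its parallel edges never converge.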
Term
Scenes in 3D computer graphics programs are normally viewed in these two different manners... |
|
Definition
1) They are projected into a viewport on a screen display as orthographic (parallel projection) views (top, front, side)
2) They are projected into a viewport screen display as convergent perspective projections based on your eye point (camera location) and where you are looking (the direction in which the camera is pointed) |
|
|
Term
_____ can also be adjusted to simulate various lens types from wide angle through telephoto and all types in between. _____ can be moved and rotated to follow action in a scene.
(same word for both blanks) |
|
Definition
Cameras |
|
Term
Camera rotations are _____ |
|
Definition
pans, tilts, and rolls |
|
Term
Camera linear motions are called _____ |
|
Definition
dollies and trucks |
|
Term
The camera does not move when a _____ occurs. |
|
Definition
zoom (dynamic lens change) |
|
|
Term
To _____ is to roll and translate the camera in the same direction simultaneously, as in banking an airplane into a turn. This gives a more natural transition of direction to a virtual camera than simple rotations and translations alone. |
|
Definition
bank |
|
Term
When setting up a CG camera, the _____ defines how much of the 3D environment you can see |
|
Definition
field of view |
|
Term
In a real camera, the _____ is defined by a lens with a given focal length and film gauge. A wide-angle (12 mm) lens has a much greater _____ than a 200 mm telephoto lens. This is often incorrectly referred to as a field of view.
(same word in both blanks) |
|
Definition
angle of view |
|
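The wide-angle versus telephoto comparison on this card follows from the angle-of-view formula: twice the arctangent of half the film gauge over the focal length. A sketch assuming a hypothetical 36 mm horizontal gauge (full-frame 35 mm film):

```python
import math

# A sketch of angle of view from focal length and film gauge.

def angle_of_view(focal_mm, gauge_mm=36.0):
    return math.degrees(2.0 * math.atan(gauge_mm / (2.0 * focal_mm)))

wide = angle_of_view(12.0)    # 12 mm wide-angle: roughly 113 degrees
tele = angle_of_view(200.0)   # 200 mm telephoto: roughly 10 degrees
```

The same formula drives CG camera settings: narrowing the angle is exactly what a zoom (focal length change) does.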
Term
This is a camera term for varying the focal length of the camera lens, thereby making the subject larger or smaller in the view |
|
Definition
Zoom |
|
Term
In a _____, the camera does not move physically as it does in a dolly or truck move |
|
Definition
zoom |
|
Term
_____ are planes in front of the camera in CG that define the depth boundaries in the scene for allowable viewing and rendering operations |
|
Definition
Clipping planes |
|
Term
_____ are parallel to the viewport containing the camera view |
|
Definition
Clipping planes |
|
Term
Mathematically clipping planes define the allowable ‘_____’ that will be calculated in the rendering and display process |
|
Definition
view volume |
|
Term
This is a traditional camera term denoting the area in front of and beyond the subject that falls within focus |
|
Definition
Depth of field |
|
Term
To achieve a greater depth of field, a _____ aperture (higher f-stop number) can be used with proportionally more light added and/or a longer exposure time to compensate for less light entering the lens. |
|
Definition
smaller |
|
Term
Type of image mapping where different types of images are applied to the same surface to achieve a higher level of realism when rendering objects in a scene |
|
Definition
Hybrid/Combined Image Mapping |
|
|
Term
Method of laying out an image and selectively mapping or "wrapping" the image to specific faces on a 3D model |
|
Definition
UV mapping (UV unwrapping) |
|