3D Glossary

100 3D Terms
3D Audio

An audio technology that creates a three-dimensional sound experience by simulating the perception of sound in a virtual 3D space. 3D audio can provide a more immersive and realistic audio experience by accurately reproducing the spatial location, distance, and movement of sound sources in a 3D scene. It is commonly used in virtual reality, gaming, and multimedia applications to enhance the sense of presence and immersion.

3D Display

A display technology that presents visual content with three-dimensional depth perception, creating the illusion that objects exist in three-dimensional space.

3D Modeling

The process of creating a digital representation of a physical object or scene in three-dimensional space, often used in computer-aided design (CAD), animation, and virtual reality (VR) applications.

3D Printing

The process of creating physical objects from digital models by depositing material layer by layer (additive manufacturing), resulting in three-dimensional objects that can have complex geometries.

3D Rendering

The process of generating a two-dimensional image or animation from a three-dimensional model or scene using computer graphics techniques, often used in visualizations, animations, and simulations.

3D Scanning

The process of capturing the three-dimensional geometry, appearance, and other attributes of real-world objects or scenes using specialized scanning devices, such as laser scanners or depth sensors. 3D scanning can be used to create accurate digital representations of physical objects or environments, which can then be used in various applications such as computer graphics, virtual reality, reverse engineering, cultural heritage preservation, and industrial design.


3D UI/UX

User interface and user experience design for 3D applications and environments. 3D UI/UX involves the design and implementation of interactive elements, controls, menus, and navigation systems in a three-dimensional virtual space. It requires consideration of spatial relationships, depth perception, and user interaction in a 3D environment, and it is commonly used in virtual reality, augmented reality, and other 3D applications to provide intuitive and immersive user experiences.

3D Visualization

The use of computer-generated imagery (CGI) or other techniques to create visual representations of 3D objects, scenes, or data for communication, presentation, or analysis purposes. 3D visualization can involve the use of various rendering techniques, lighting, shading, and texturing to create realistic or stylized images or animations of 3D content. It is widely used in fields such as architecture, product design, medical visualization, scientific visualization, and entertainment to communicate complex information or concepts visually.

3D Web

A term for the use of 3D graphics, animations, and interactive elements on the World Wide Web. 3D web technologies enable immersive, interactive 3D experiences that run directly in web browsers, without the need for additional software or plugins. This can include 3D models, virtual reality (VR) experiences, augmented reality (AR) applications, and other forms of 3D content embedded in websites or web-based applications.

3D Wireframe

A visual representation of a 3D object or scene using only lines and vertices to define its geometry, without any surface or shading information. 3D wireframe models are often used in computer graphics for visualization, design, and modeling purposes. They provide a simplified and uncluttered view of the underlying geometry and structure of 3D objects, allowing for a clear understanding of their form, proportions, and spatial relationships.

360-Degree Video

A type of video that captures the entire spherical view around the camera, allowing viewers to look in any direction and explore the entire scene in a 360-degree field of view. 360-degree videos are typically captured using specialized cameras or rigs that can capture video from multiple angles simultaneously, and they are often used in virtual reality, augmented reality, and other interactive 3D experiences to provide an immersive and interactive viewing experience.

360-Degree Virtual Tour

A virtual tour that allows users to explore and navigate a virtual environment in a 360-degree field of view. 360-degree virtual tours are often used in real estate, tourism, and other industries to showcase properties, destinations, or other locations in a realistic and interactive way. Users can move around, look in any direction, and interact with the virtual environment to get a sense of the space and its surroundings.

4D Printing

An emerging technology that combines 3D printing with the ability to change shape, transform, or adapt over time in response to external stimuli, such as temperature, humidity, pressure, or light. 4D printing involves the use of materials that can undergo controlled changes in their physical properties or structures after being printed, resulting in objects that can self-assemble, self-repair, or change their shape or function over time. This technology has potential applications in areas such as manufacturing, healthcare, robotics, and aerospace.

5D BIM (Building Information Modeling)

A type of building information modeling that adds cost as a fifth dimension, on top of the traditional three dimensions of space (length, width, height) and the fourth dimension of time (construction scheduling). 5D BIM involves the integration of cost-related information, such as cost estimations, quantities, and budgets, with the schedule-linked 3D digital models of building projects. This allows for better visualization, analysis, and management of construction projects, enabling more efficient planning, scheduling, cost control, and coordination among stakeholders.

6DOF (Six Degrees of Freedom)

A term used in virtual reality, robotics, and computer graphics to describe the ability of an object or user to move and interact in a 3D space with six independent degrees of freedom, which are translation along three axes (X, Y, Z) and rotation around three axes (roll, pitch, yaw). 6DOF allows for more natural and immersive movement and interaction in virtual environments, enabling users to move, rotate, and interact with objects in a realistic and intuitive way.
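The six degrees of freedom can be made concrete with a small sketch. The Python snippet below is illustrative only (the function name is made up, and only yaw is shown for brevity; roll and pitch follow the same pattern with rotations about the X and Y axes):

```python
import math

def apply_pose(point, translation, yaw):
    """Apply a rigid pose (rotation + translation) to a 3D point.
    Rotation here is yaw only (about the Z axis); a full 6DOF pose
    composes rotations about all three axes plus the translation."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotate about Z first...
    rx = c * x - s * y
    ry = s * x + c * y
    # ...then translate along X, Y, and Z.
    tx, ty, tz = translation
    return (rx + tx, ry + ty, z + tz)

# Rotate (1, 0, 0) by 90 degrees of yaw, then move it by (5, 0, 2).
moved = apply_pose((1.0, 0.0, 0.0), (5.0, 0.0, 2.0), math.pi / 2)
```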

Anaglyph 3D

A stereoscopic 3D display technique that uses color filters to separate the left and right eye images, creating a perception of depth when viewed with color-filtered glasses.

Augmented Reality (AR)

An interactive technology that overlays digital content, such as virtual objects or information, onto the real world, enhancing the perception of the physical environment by blending virtual and real elements.


Autostereoscopic Display

A type of 3D display that does not require the use of special glasses or other devices to perceive depth, allowing for a more natural viewing experience.

Binocular Parallax

The difference in perspective between the left and right eye views, used in stereoscopic vision to perceive depth and create a sense of three-dimensionality.
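The same geometry is used in stereo cameras: for an idealized pinhole stereo pair, the left/right image offset (the disparity) is inversely proportional to depth, which is why nearer objects appear to shift more between the two eyes. A minimal sketch (the function name and the numbers are illustrative, not from any particular camera):

```python
def disparity_px(focal_px, baseline_m, depth_m):
    """Horizontal disparity between left and right views under the
    standard pinhole stereo relation: disparity = focal * baseline / depth."""
    return focal_px * baseline_m / depth_m

# A nearby object produces a larger left/right offset than a distant one,
# which is exactly the depth cue stereoscopic vision exploits.
near = disparity_px(800.0, 0.065, 0.5)   # object 0.5 m away
far = disparity_px(800.0, 0.065, 5.0)    # object 5 m away
```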

CAD (Computer-Aided Design)

The use of computer software to create, modify, and optimize designs for various industries, including architecture, engineering, product design, and more.

Computer Graphics

The field of study and practice that involves the creation, manipulation, and rendering of visual content using computers, including images, videos, animations, and interactive graphics.

Depth Perception

The ability to perceive the relative distance of objects in three-dimensional space, allowing for the perception of depth and distance.

Digital Sculpting

The process of creating three-dimensional digital models using specialized software that allows for sculpting and shaping of virtual objects with digital tools, similar to sculpting physical objects from clay or other materials.

Digital Twins

Virtual representations of physical objects, systems, or processes that are connected to their real-world counterparts, often used for simulation, analysis, and monitoring purposes.

DLP (Digital Light Processing)

A display technology that uses micro mirrors to project images by reflecting light, often used in projection systems, displays, and digital cinema.

Extended Reality (XR)

A term that encompasses various technologies, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), which blur the line between the physical and virtual worlds to create immersive experiences.

Field of View (FoV)

The extent of the observable or visible world at any given moment, often used in reference to virtual reality (VR) and augmented reality (AR) headsets to describe the extent of the virtual or augmented content that can be seen by the user.
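For a simple pinhole camera or display model, the field of view follows directly from the sensor (or screen) width and the focal length. A sketch under that assumption (names chosen for this example):

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a pinhole model:
    fov = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# A 36 mm-wide sensor behind an 18 mm focal length sees a 90-degree FoV.
fov = horizontal_fov_deg(36.0, 18.0)
```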


Filament

The material used in 3D printing to create physical objects by adding or layering material, often made of plastics, metals, ceramics, or other materials.

Flat Shading

A shading technique used in computer graphics where each polygon or surface is rendered with a single, constant color, without considering lighting or other surface properties.
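The idea reduces to one diffuse (Lambert) calculation per face rather than per pixel. A minimal sketch in Python (illustrative only; real renderers do this on the GPU):

```python
def lambert_flat_color(normal, light_dir, base_color):
    """Flat shading: a single Lambertian intensity for the whole face,
    computed from the face normal and the light direction (unit vectors)."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    intensity = max(0.0, n_dot_l)           # faces pointing away get no light
    return tuple(c * intensity for c in base_color)

# A face pointing straight at the light keeps its full color...
lit = lambert_flat_color((0, 0, 1), (0, 0, 1), (0.8, 0.2, 0.2))
# ...while a face pointing away is rendered black.
unlit = lambert_flat_color((0, 0, -1), (0, 0, 1), (0.8, 0.2, 0.2))
```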

Glasses-Based 3D

A type of stereoscopic 3D display that requires the use of specialized glasses or eyewear to perceive depth and create a three-dimensional image.

Glasses-Free 3D

A technology that enables the viewing of three-dimensional content without the need for special glasses or other external devices. It creates the perception of depth and dimensionality in images or videos without requiring additional accessories.

GPU (Graphics Processing Unit)

A specialized electronic circuit or processor designed to accelerate the creation and rendering of images, videos, and animations in computer graphics. It is responsible for processing and manipulating visual data for display on computer screens or other output devices.

Haptic Feedback

The use of tactile sensations or vibrations to provide feedback to users in virtual or augmented reality environments. It allows users to feel and experience virtual objects or interactions through touch or vibrations, enhancing the sense of immersion and realism.

Head-Mounted Display (HMD)

A wearable device that is worn on the head and typically covers the eyes to provide a virtual reality or augmented reality experience. It often includes displays, sensors, and other components to create an immersive visual experience.

Head Tracking

A technology that allows the tracking of the user's head movements in virtual or augmented reality environments. It enables the virtual or augmented content to respond to the user's head movements, allowing for a more natural and interactive experience.

High Poly

Refers to 3D models with a high number of polygons, which are the individual faces or surfaces that make up a 3D object. High poly models typically have more detail and smoother surfaces, but require more processing power and resources to render compared to low poly models.


Hologram

A three-dimensional image created by the interference of light waves, which appears to be floating in space and can be viewed from different angles. Holograms are used in various applications, including holographic displays, holographic projections, and holographic imaging.

Holographic Display

A display technology that creates three-dimensional images using holograms. It allows for the projection of holographic content without the need for special glasses or other viewing aids, providing a more immersive and realistic viewing experience.


Hyper-Realism

An artistic style or technique that aims to create highly realistic or lifelike representations of objects or scenes. Hyper-realistic artworks often exhibit an extreme level of detail, precision, and accuracy, blurring the line between reality and art.

Immersive Experience

Refers to a virtual reality, augmented reality, or other digital experience that fully engages the user's senses and creates a sense of presence, making the user feel as if they are physically present in the virtual environment. Immersive experiences often involve high-quality graphics, sound, and interactive elements to create a convincing and realistic virtual world.

Inverse Kinematics

A technique used in computer animation that calculates the motion of a character's joints based on the desired position or motion of its end effectors, such as hands or feet. It is used to create realistic and natural movements of characters in animation or virtual reality applications.
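For a planar two-link limb (for example an upper and lower arm), the IK problem even has a closed-form solution. The sketch below is one common analytic formulation, with a forward-kinematics check to confirm the answer (function names are made up for this example):

```python
import math

def two_link_ik(target_x, target_y, l1, l2):
    """Analytic IK for a planar two-link arm: given a desired end-effector
    position, return the shoulder and elbow angles in radians."""
    d2 = target_x ** 2 + target_y ** 2
    cos_elbow = (d2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(target_y, target_x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics: joint angles back to end-effector position."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

# Solve for the joint angles that place the hand at (1, 1), then verify.
s, e = two_link_ik(1.0, 1.0, 1.0, 1.0)
x, y = forward(s, e, 1.0, 1.0)
```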

Laser Scanning

A process of capturing the shape, geometry, and appearance of objects or environments using laser beams. Laser scanning is commonly used in 3D modeling, computer graphics, and virtual reality to create accurate and detailed representations of real-world objects or scenes.

Lenticular Lens

A sheet of material with a lenticular surface that can be used to create 3D or animated images. Lenticular lenses work by directing light differently to different viewing angles, creating the illusion of depth or motion when viewed from different angles.

Light Field Display

A display technology that recreates the light field, which is the distribution of light rays in a scene. Light field displays provide more realistic and interactive visual experiences, as they allow for changes in viewing perspective and depth cues, similar to how the human eye perceives the real world.

Low Poly

Refers to 3D models with a low number of polygons, which are the individual faces or surfaces that make up a 3D object. Low poly models typically have less detail and simpler surfaces, but require less processing power and resources to render compared to high poly models.

Material Mapping

A technique used in computer graphics to apply textures or materials to 3D models. Material mapping involves associating 2D images or patterns with 3D objects to create realistic surface appearances, such as wood, metal, or fabric.

Material Properties

Refers to the visual and physical characteristics of materials used in computer graphics, such as reflectivity, roughness, transparency, and color. Material properties are used to define the appearance and behavior of objects in virtual environments, allowing for realistic rendering and shading.

Motion Capture

A process of recording the movements of real-world objects or human actors and translating them into digital data for use in computer animation, virtual reality, or other applications. Motion capture technology enables realistic and natural movement of virtual objects or characters based on real-world actions.

Multi-View Display

A display technology that allows for the simultaneous viewing of multiple perspectives or angles of a 3D scene or object. Multi-view displays provide a more immersive and interactive experience, allowing users to view objects or scenes from different viewpoints, enhancing the perception of depth and dimensionality.

Normal Mapping

A technique used in computer graphics to create the illusion of high-resolution surface detail on low-resolution 3D models. Normal maps store information about the direction of surface normals, allowing for realistic shading and lighting effects without the need for additional geometry.
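The map itself is just an image in which each texel's RGB channels encode a direction: each component is remapped from [-1, 1] into the storable [0, 255] range. Decoding is a one-liner, sketched here in Python for illustration:

```python
def decode_normal(rgb):
    """Decode a tangent-space normal from a normal-map texel. The common
    'flat surface' texel (128, 128, 255) decodes to roughly (0, 0, 1)."""
    n = tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
    length = sum(v * v for v in n) ** 0.5
    return tuple(v / length for v in n)   # renormalize after quantization

flat = decode_normal((128, 128, 255))
```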

Occlusion Culling

A technique used in computer graphics to optimize rendering performance by determining which objects or parts of objects are not visible to the camera and can be skipped during rendering. Occlusion culling helps reduce the amount of processing required to render a scene, improving real-time performance in 3D graphics applications.

Oculus Rift

A virtual reality (VR) headset developed by Oculus VR, a subsidiary of Facebook (now Meta). The Oculus Rift provides an immersive VR experience by displaying 3D virtual environments to the user and tracking their head movements to update the view in real time.

Parallax Barrier

A technology used in glasses-free 3D displays that creates the perception of depth by using a barrier with small slits or ridges to direct different images to each eye. The parallax barrier allows for the display of stereoscopic 3D content without the need for special glasses, providing a more accessible 3D viewing experience.

Particle System

A technique used in computer graphics to simulate the behavior of particles, such as smoke, fire, water, or dust, in a virtual environment. Particle systems generate and manipulate large numbers of small, independent particles to create realistic and dynamic visual effects in games, simulations, and animations.
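At its core a particle system is just a large set of independent states advanced each frame. A minimal sketch using simple Euler integration under gravity (names and the (position, velocity) tuple layout are choices made for this example):

```python
def step_particles(particles, dt, gravity=(0.0, -9.8, 0.0)):
    """Advance particles one time step with Euler integration.
    Each particle is a (position, velocity) pair of 3-tuples."""
    out = []
    for (px, py, pz), (vx, vy, vz) in particles:
        gx, gy, gz = gravity
        # Accelerate, then move by the updated velocity.
        vx, vy, vz = vx + gx * dt, vy + gy * dt, vz + gz * dt
        out.append(((px + vx * dt, py + vy * dt, pz + vz * dt),
                    (vx, vy, vz)))
    return out

# One particle launched upward: after a step it has risen but slowed down.
state = [((0.0, 0.0, 0.0), (0.0, 5.0, 0.0))]
state = step_particles(state, 0.1)
```

A real system would also spawn new particles and retire expired ones each frame; the update loop above is the part every variant shares.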

Physically Based Rendering (PBR)

A rendering technique that uses physically accurate models to simulate the behavior of light on materials in a virtual environment. PBR aims to achieve realistic and accurate lighting and shading effects by simulating the physical properties of materials, such as reflectivity, roughness, and metallicity, based on real-world physics.


Photogrammetry

A technique used to create 3D models of real-world objects or scenes by analyzing and processing photographs taken from multiple viewpoints. Photogrammetry uses computer vision algorithms to extract depth and geometric information from photographs, allowing for the creation of accurate and detailed 3D models for use in computer graphics, virtual reality, and other applications.


Photorealism

A style of computer-generated graphics or rendering that aims to create images or animations that are visually indistinguishable from photographs or the real world. Photorealistic graphics strive to achieve high levels of detail, accuracy, and visual fidelity to create a realistic and believable visual experience.


Polygon

A basic geometric shape used in computer graphics to represent 3D objects. A polygon is a flat, 2D shape with straight sides that is defined by a series of vertices (points in 3D space) connected by edges. Polygons are the building blocks of 3D models and are used to create complex shapes and surfaces.
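A common computation on a polygon is finding its facing direction. As an illustrative sketch, the normal of a triangle (the simplest polygon) comes from the cross product of two of its edge vectors:

```python
def triangle_normal(a, b, c):
    """Unit normal of a triangle from the cross product of two edges.
    The winding order of the vertices determines which way it points."""
    e1 = tuple(bi - ai for ai, bi in zip(a, b))
    e2 = tuple(ci - ai for ai, ci in zip(a, c))
    nx = e1[1] * e2[2] - e1[2] * e2[1]
    ny = e1[2] * e2[0] - e1[0] * e2[2]
    nz = e1[0] * e2[1] - e1[1] * e2[0]
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# A triangle lying in the XY plane has a normal along +Z.
n = triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
```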

Projection Mapping

A technique used in computer graphics to project visual content onto irregular or non-flat surfaces, such as buildings, objects, or sculptures, to create an illusion of depth, dimensionality, or motion. Projection mapping can be used for artistic, entertainment, or informational purposes and can create visually stunning and immersive experiences.

Ray Tracing

A rendering technique used in computer graphics to generate realistic lighting and shading effects by simulating the behavior of light rays in a virtual environment. Ray tracing calculates the paths of light rays from the light sources to the camera or viewer, accurately simulating how light interacts with materials, reflections, and refractions, resulting in high-quality and physically accurate renderings. Ray tracing is computationally intensive but produces realistic and visually appealing graphics.
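The workhorse of any ray tracer is the intersection test between a ray and a primitive. A minimal ray-sphere test (quadratic formula on the ray equation), sketched in Python rather than shader code for readability:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a ray (unit direction) to the nearest hit on a
    sphere, or None if the ray misses."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                     # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0    # nearer of the two intersections
    return t if t > 0 else None

# A ray fired down +Z hits a unit sphere centered at z = 5 at distance 4;
# a ray fired along +Y misses it.
hit = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
miss = ray_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0)
```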

Real-Time Rendering

The process of generating and displaying computer graphics in real-time, typically at interactive frame rates (e.g., 30 frames per second or higher). Real-time rendering is used in video games, virtual reality, augmented reality, simulations, and other applications where immediate feedback and interactivity are important. Real-time rendering techniques focus on optimizing rendering performance to achieve smooth and responsive graphics in real-time.

Real-Time Tracking

The process of continuously monitoring and updating the position, orientation, or other parameters of objects or elements in a virtual environment in real-time. Real-time tracking is commonly used in virtual reality, augmented reality, motion capture, and other applications where accurate and timely tracking of objects or movements is required for realistic and interactive experiences.


RealSense

A brand of depth-sensing camera technology developed by Intel that enables depth perception and gesture recognition in computing devices. RealSense cameras use infrared sensors and specialized software to capture depth information and track hand gestures, facial expressions, and other movements, allowing for more immersive and interactive experiences in applications such as gaming, virtual reality, and augmented reality.


Reflectance

A property of materials in computer graphics that describes how much light is reflected by a surface. Reflectance determines the amount and direction of light that is reflected from a surface, affecting its appearance and how it interacts with other objects or lighting sources in a virtual environment. Reflectance is often represented using texture maps or material properties in rendering techniques.

Refresh Rate

The number of times per second that a display updates its image or refreshes its pixels. Refresh rate is measured in Hertz (Hz), with higher refresh rates resulting in smoother motion and reduced motion blur in computer graphics and video playback. Higher refresh rates are particularly important in gaming and virtual reality to provide a more immersive and smooth visual experience.

Render Farm

A collection of computer systems or servers that are networked together and used for rendering computer graphics or animations. Render farms are used to distribute rendering tasks across multiple machines, allowing for faster and more efficient rendering of complex and time-consuming graphics or animations. Render farms are commonly used in animation studios, visual effects studios, and other graphics-intensive industries.

Retinal Projection

A technique used in augmented reality and virtual reality to project images or visual information directly onto the retina of the eye. Retinal projection can provide a more immersive and realistic visual experience by directly stimulating the retina, bypassing the need for external displays or screens. Retinal projection is still an emerging technology and is being researched for its potential applications in future AR and VR systems.


Rigging

The process of creating a virtual skeleton or structure for a 3D model to control its movement or deformation. Rigging is commonly used in character animation and involves attaching bones or joints to a 3D model and defining their relationships and movements. Rigging allows for realistic animations of characters, creatures, or objects in computer graphics and animations.

Scanline Rendering

A rendering technique used in computer graphics to generate images by scanning or rasterizing objects or surfaces one scanline at a time, from top to bottom. Scanline rendering calculates the intersections between the scanlines and the objects in a scene to determine the visible portions of the objects and their colors, shading, or textures. Scanline rendering is a fast and efficient rendering technique used in real-time graphics and older rendering pipelines.


Shader

Programs or scripts used in computer graphics to define the appearance or behavior of materials, lighting, or other visual elements in a virtual environment. Shaders are used to calculate the colors, shading, reflections, and other visual effects of 3D objects or scenes in real-time or offline rendering. Shaders can be written in various programming languages and are an important tool for creating realistic and visually appealing graphics in computer games, virtual reality, augmented reality, and other applications.

Shadow Mapping

A technique used in computer graphics to generate shadows by projecting the view of a light source onto a depth buffer or texture, which is then used to determine the areas of a scene that are in shadow. Shadow mapping is commonly used in real-time rendering to simulate realistic shadows cast by objects in a virtual environment, providing depth and dimensionality to the scene.


Silhouette

The outline or contour of an object or scene in computer graphics. The silhouette is the boundary between the object and its background and is often used to define the shape, form, and recognition of objects in a scene. Silhouettes can be used for various purposes, such as object recognition, scene composition, or artistic design in computer graphics and animation.

Skeletal Animation

A technique used in character animation to control the movement and deformation of 3D models by using a virtual skeleton or rigging. Skeletal animation involves attaching bones or joints to a 3D model and animating their movements to create realistic animations of characters or creatures. Skeletal animation is commonly used in video games, movies, and other digital media to create lifelike movements and expressions in characters.

Solid Modeling

A technique used in computer-aided design (CAD) and computer graphics to create virtual 3D models of solid objects with defined geometries and physical properties. Solid modeling is used for designing and simulating physical objects in engineering, architecture, industrial design, and other fields. Solid models are typically represented as closed 3D volumes with surfaces that enclose a defined space and can be manipulated, analyzed, and rendered in various ways.

Specular Mapping

A technique used in computer graphics to simulate the reflective properties of materials by using texture maps or material properties to control the specular or glossy reflections on a surface. Specular mapping is used to create realistic and dynamic reflections on surfaces that are shiny or reflective, such as metal, glass, or wet surfaces. Specular mapping adds visual details and enhances the appearance of materials in rendered images or real-time graphics.
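One common way such a map is consumed is as a per-texel scale on a specular lobe; the sketch below uses the Blinn-Phong lobe as the example model (an assumption for illustration, not the only choice), showing how the same geometry looks matte or shiny depending only on the map value:

```python
def blinn_phong_specular(normal, half_vector, shininess, spec_map_value):
    """Specular highlight intensity: the value read from the specular map
    scales a Blinn-Phong lobe (n . h) ** shininess. Vectors are unit length."""
    n_dot_h = max(0.0, sum(n * h for n, h in zip(normal, half_vector)))
    return spec_map_value * (n_dot_h ** shininess)

# Identical geometry and lighting; only the map value differs.
shiny = blinn_phong_specular((0, 0, 1), (0, 0, 1), 32, 1.0)
matte = blinn_phong_specular((0, 0, 1), (0, 0, 1), 32, 0.1)
```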


Stereogram

A 2D image or pattern that contains hidden 3D information, revealed by focusing or converging the eyes in a specific way. Stereograms, including autostereograms (which require no viewing apparatus at all), create an optical illusion that allows the viewer to perceive depth or 3D shapes from a flat, 2D image.

Stereoscopic 3D

A technique used in computer graphics and display technology to create the perception of depth or three-dimensionality in images or videos. Stereoscopic 3D involves presenting slightly different images or viewpoints to each eye to create the illusion of depth when viewed with special glasses or other devices. Stereoscopic 3D is commonly used in movies, video games, and virtual reality to create more immersive and realistic visual experiences.

Subdivision Surface

A technique used in 3D computer graphics to create smooth surfaces with high levels of detail by recursively subdividing a base mesh or polygonal model. Subdivision surfaces are commonly used in character modeling, organic modeling, and other applications where smooth and detailed surfaces are required. Subdivision surfaces allow for efficient representation of complex shapes and can be further modified or sculpted to create more intricate details.
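The flavor of the refinement step is easiest to see in one dimension. Chaikin's corner-cutting scheme for curves (used here as an illustrative analogue; surface schemes such as Catmull-Clark generalize the same recursive idea to meshes) replaces each segment with points at 1/4 and 3/4 along it:

```python
def chaikin(points, iterations=1):
    """Refine an open polyline toward a smooth curve by corner cutting:
    each segment contributes points at 1/4 and 3/4 of its length."""
    for _ in range(iterations):
        refined = [points[0]]                       # keep the first endpoint
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        refined.append(points[-1])                  # keep the last endpoint
        points = refined
    return points

# One pass over a sharp corner doubles the point count and rounds it off.
curve = chaikin([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], iterations=1)
```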

Subsurface Scattering

A rendering technique used in computer graphics to simulate the scattering of light within translucent or semi-transparent materials, such as skin, wax, or milk. Subsurface scattering is used to create realistic and natural-looking materials that exhibit the soft and diffuse appearance of light scattering through a medium. Subsurface scattering is commonly used in character rendering, product visualization, and other applications where realistic light interaction with translucent materials is important.


Texturing

The process of applying a 2D image or pattern, called a texture map, onto the surface of a 3D model in computer graphics. Texturing is used to add visual details, color, and surface properties to 3D models, such as skin, wood, metal, or fabric. Texture maps are typically created from photographs, hand-painted images, or procedurally generated patterns, and can be applied using various mapping techniques, such as UV mapping or projection mapping.

Texture Mapping

A technique used in computer graphics to apply a 2D image, called a texture map, onto the surface of a 3D model to add visual details, color, and surface properties. Texture mapping involves associating each point on the surface of a 3D model with a corresponding point on a 2D texture map, which determines how the color or properties of the texture are applied to the surface. Texture mapping is a fundamental technique used in computer graphics to create realistic and detailed 3D models.


Transparency

The property of a material or object in computer graphics that allows light to pass through and be transmitted, partially or fully, resulting in the visibility of objects or surfaces behind it. Transparency is commonly used to create realistic rendering of materials such as glass, water, or plastics, and can be controlled by adjusting the opacity or alpha value of the material. Transparency is an important property used in various applications, such as product visualization, architectural rendering, and visual effects.


Unity

A popular game engine and development platform used in computer graphics and interactive 3D applications. Unity provides tools, libraries, and a runtime environment for creating real-time 3D graphics, simulations, virtual reality, and augmented reality experiences. Unity is widely used by game developers, designers, and artists to create a wide range of interactive and immersive applications across different platforms, including desktop, mobile, console, and virtual reality devices.

Unreal Engine

A powerful game engine and development framework used in computer graphics and interactive 3D applications. Developed by Epic Games, Unreal Engine provides a wide range of tools, features, and a real-time rendering engine that allows developers to create high-quality and realistic 3D graphics, simulations, virtual reality, and augmented reality experiences. Unreal Engine is widely used in the game industry, as well as in industries such as architecture, automotive, film and television, and virtual production.

UV Mapping

A technique used in computer graphics to map a 2D texture onto a 3D model by associating each vertex of the model with corresponding coordinates in a 2D texture space. UV mapping allows for the precise placement of texture maps onto a 3D model, defining how the 2D image or pattern is applied to the 3D surface. UV mapping is a common technique used in texturing, as it allows artists to create detailed and realistic surface appearance on 3D models.
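Once UVs are assigned, sampling is a coordinate conversion from the [0, 1] UV square into texel indices. A nearest-neighbor lookup, sketched in Python with a plain nested list standing in for a texture (conventions such as v = 0 at the top are assumptions for this example):

```python
def sample_nearest(texture, u, v):
    """Nearest-neighbor texture lookup: map UV coordinates in [0, 1] to a
    texel in a row-major grid (texture[row][col]), with v = 0 at the top."""
    height = len(texture)
    width = len(texture[0])
    col = min(int(u * width), width - 1)    # clamp so u = 1.0 stays in range
    row = min(int(v * height), height - 1)
    return texture[row][col]

# A 2x2 checker texture: different UVs land on different texels.
checker = [["black", "white"],
           ["white", "black"]]
top_left = sample_nearest(checker, 0.0, 0.0)
top_right = sample_nearest(checker, 0.9, 0.0)
```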


V-Ray

A popular and powerful rendering engine used in computer graphics for creating high-quality and photorealistic 3D images and animations. Developed by Chaos Group, V-Ray is widely used in industries such as architecture, automotive, product design, and visual effects to achieve realistic lighting, shading, and material properties in 3D renderings. V-Ray provides advanced features such as global illumination, ray tracing, and physically-based rendering, making it a popular choice for professional 3D artists and studios.


Vertex

A fundamental element in 3D computer graphics that represents a single point in 3D space. Vertices are used to define the geometry of 3D models, forming the building blocks for creating complex shapes and objects. Vertices are typically connected together to form edges, faces, and polygons, which collectively create the structure and appearance of 3D models.

Vertex Animation

A technique used in computer graphics to animate 3D models by manipulating the position, rotation, or scaling of individual vertices in real-time or during rendering. Vertex animation allows for precise control over the deformation and movement of 3D models, and is commonly used for character animation, facial animation, and other applications where detailed control over the geometry of a model is required.

Vertex Buffer Object (VBO)

A memory buffer used in computer graphics to efficiently store and manage vertex data, such as positions, normals, colors, and texture coordinates, for rendering 3D models. VBOs are used to optimize the performance of 3D graphics rendering by reducing the overhead of transferring vertex data between the CPU and GPU. VBOs are commonly used in modern graphics APIs, such as OpenGL and Vulkan, to achieve high-performance rendering of 3D graphics.

Virtual Reality (VR)

A technology that allows users to experience and interact with computer-generated environments or simulations in a realistic and immersive way. Virtual reality typically involves wearing a head-mounted display (HMD) that tracks the user's head movements and provides stereoscopic visuals, along with other input devices such as controllers or hand-tracking devices for interacting with the virtual environment. VR is used in various applications, such as gaming, training simulations, education, and therapy.

Volumetric Display

A type of display technology that creates three-dimensional visual representations by emitting or scattering light within a physical volume of space. Volumetric displays can produce true 3D images that can be viewed from different angles without the need for special glasses or other devices. Volumetric displays are used in applications such as medical imaging, scientific visualization, and virtual reality to provide more realistic and immersive visual experiences.

Voxel

A volumetric pixel, or 3D pixel, that represents a discrete element in a 3D space, similar to how a pixel represents a discrete element in a 2D image. Voxels are used to represent the volumetric data of 3D objects, such as in medical imaging, computer-aided design (CAD), and video games. Voxels can store various properties such as color, density, and material information, and they are used in techniques such as voxelization, volumetric rendering, and physics simulations.
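A minimal sketch of voxelization (grid and voxel sizes here are illustrative): world-space points are binned into voxel indices, and each occupied voxel in a dense 3D grid is marked:

```python
def voxelize(points, grid_size=4, voxel_size=1.0):
    """Return a grid_size^3 boolean occupancy grid for the given points."""
    grid = [[[False] * grid_size for _ in range(grid_size)]
            for _ in range(grid_size)]
    for (x, y, z) in points:
        # Bin the point's coordinates into integer voxel indices.
        i, j, k = int(x / voxel_size), int(y / voxel_size), int(z / voxel_size)
        if 0 <= i < grid_size and 0 <= j < grid_size and 0 <= k < grid_size:
            grid[i][j][k] = True
    return grid

grid = voxelize([(0.5, 0.5, 0.5), (2.3, 1.1, 3.9)])
print(grid[0][0][0], grid[2][1][3])  # True True
```

Dense grids like this grow as the cube of the resolution, which is why production voxel systems usually switch to sparse structures such as octrees.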

Wireframe

A visual representation of a 3D model or scene that shows only the lines and edges that define the geometry of the objects, without any surface or texture information. Wireframes are commonly used in computer graphics for modeling, visualization, and debugging purposes. They provide a simplified and uncluttered view of the underlying geometry, allowing for a better understanding of the shape and structure of 3D models.
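A sketch of how a wireframe can be derived from triangle faces: collect the unique edges of each triangle, so only lines remain to be drawn (the face list below is illustrative):

```python
def wireframe_edges(faces):
    """Return the set of unique undirected edges from triangle faces."""
    edges = set()
    for (a, b, c) in faces:
        for edge in ((a, b), (b, c), (c, a)):
            # Sort endpoints so (a, b) and (b, a) count as the same edge.
            edges.add(tuple(sorted(edge)))
    return edges

# Two triangles sharing the edge (1, 2): 6 edge slots, 5 unique edges.
faces = [(0, 1, 2), (1, 3, 2)]
print(len(wireframe_edges(faces)))  # 5
```

Deduplicating shared edges this way is what keeps interior edges from being drawn twice in a wireframe view.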

World Building

The process of creating a virtual world or environment for use in a video game, simulation, or other digital media. World building involves designing and constructing the terrain, landscapes, structures, objects, and other elements of a virtual world to create a cohesive and immersive environment. It encompasses various aspects such as level design, environmental storytelling, lighting, and sound design, and it plays a critical role in the overall user experience of the virtual world.

X-Y-Z Axis

A Cartesian coordinate system used in 3D computer graphics to represent spatial positions and orientations. The X-Y-Z axis consists of three perpendicular axes that intersect at a common origin point. The X-axis represents the horizontal direction, the Y-axis represents the vertical direction, and the Z-axis represents the depth or distance from the viewer. The X-Y-Z axis is used to define the position, rotation, and scaling of 3D objects in a 3D virtual space.
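In practice a position in this coordinate system is just a triple (x, y, z), and moving an object is per-axis vector addition, as in this minimal sketch:

```python
def translate(point, dx, dy, dz):
    """Move a point by the given offsets along the X, Y, and Z axes."""
    x, y, z = point
    return (x + dx, y + dy, z + dz)

# Move a point 2 units along X, 1 along Y, and 3 along Z.
print(translate((0.0, 0.0, 0.0), 2.0, 1.0, 3.0))  # (2.0, 1.0, 3.0)
```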

Yaw

A term used in 3D computer graphics to describe the rotation around the vertical Y-axis of an object or camera in a virtual 3D environment. Yaw is one of the three rotational movements, along with pitch (rotation around the horizontal X-axis) and roll (rotation around the depth Z-axis), that define the orientation of a 3D object or camera. Yaw is commonly used in animations, simulations, and virtual reality applications to control the orientation and viewpoint of objects or cameras in a 3D scene.
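Yaw can be sketched as the standard rotation about the Y-axis applied to a point, which leaves the point's vertical (Y) coordinate unchanged; the matrix below follows the common right-handed convention:

```python
import math

def yaw(point, angle):
    """Rotate (x, y, z) about the Y-axis by `angle` radians (right-handed)."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    # Y-axis rotation: X and Z mix, Y passes through unchanged.
    return (c * x + s * z, y, -s * x + c * z)

# A 90-degree yaw swings a point on the +X axis onto the -Z axis;
# its height (Y) is unaffected.
rotated = yaw((1.0, 2.0, 0.0), math.pi / 2)
```

Applying the same pattern around the X-axis gives pitch and around the Z-axis gives roll, the other two rotations named in the definition above.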

Z-Axis

The Z-axis is one of the three orthogonal axes in a three-dimensional Cartesian coordinate system. It is perpendicular to both the X-axis and the Y-axis and extends in the direction commonly referred to as "depth" or "height." The direction of positive Z depends on the handedness of the coordinate system: in a right-handed system with Y up (the convention used by OpenGL, for example), the positive Z-axis points toward the viewer, while in a left-handed system (the convention used by Direct3D) it points away from the viewer, into the screen.

The Z-axis is commonly used to represent the third dimension in 3D graphics, computer modeling, and virtual reality applications, typically standing for the depth or elevation of objects or points in a 3D scene. For example, in a 3D video game, the Z-axis can determine the position of objects along the depth axis, allowing objects to be placed in front of or behind one another to create a sense of depth and perspective. Similarly, in computer-aided design (CAD) or architectural modeling, the Z-axis can represent the elevation or height of objects or structures in a 3D model, allowing accurate representation of the vertical dimension.

Z-Buffering

A technique used in computer graphics to determine the visibility of objects in a 3D scene and to efficiently render them in the correct order. Z-buffering uses a memory buffer (the Z-buffer, or depth buffer) to store the depth (Z) information of each pixel in the scene. As objects are rendered, their Z-depth values are compared with the values in the Z-buffer, and pixels with closer Z-depth values are rendered on top of pixels with farther Z-depth values. This ensures that objects are properly occluded and displayed in the correct order, producing a visually accurate 3D scene.
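The depth test at the heart of Z-buffering can be sketched over a tiny hypothetical 2x2 framebuffer: each incoming fragment is kept only if it is closer than the depth already stored at its pixel:

```python
WIDTH, HEIGHT = 2, 2
FAR = float("inf")
# Color buffer starts cleared; depth buffer starts at the far plane.
color_buffer = [["black"] * WIDTH for _ in range(HEIGHT)]
depth_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x, y, depth, color):
    """Write the fragment only if it passes the depth test."""
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        color_buffer[y][x] = color

# A far "blue" fragment, then a closer "red" one at the same pixel:
draw_fragment(0, 0, depth=5.0, color="blue")
draw_fragment(0, 0, depth=2.0, color="red")    # passes, overwrites blue
draw_fragment(0, 0, depth=9.0, color="green")  # fails the depth test
print(color_buffer[0][0])  # red
```

Because each fragment is tested independently, objects can be submitted in any order and still occlude one another correctly, which is the main advantage of Z-buffering over painter's-algorithm sorting.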

Z-Depth

The depth or distance from the viewer along the Z-axis in a 3D scene. Z-depth is a measure of the position of objects or pixels in a 3D scene relative to the viewer, with objects closer to the viewer having smaller Z-depth values and objects farther away having larger Z-depth values. Z-depth is used in various computer graphics techniques such as depth testing, Z-buffering, and depth of field effects to create realistic depth perception and visual accuracy in 3D scenes.

Zero Parallax

The depth at which the left- and right-eye views of a point coincide in a stereoscopic 3D display or device, such as a stereoscope or a virtual reality headset. Objects at zero parallax have no horizontal offset between the two eye views and appear to lie exactly at the screen plane, creating a comfortable and natural 3D viewing experience without visible depth disparity or eye strain. It is an important parameter in stereoscopic 3D displays for ensuring accurate depth perception and visual comfort for the viewer.

