Content Creation
INTRODUCTION
3D modeling is the process of creating a 3D representation of objects, animals,
machines, or humans. 3D models are used in many industries, including film,
television, video games, architecture, construction, product development, science,
and medicine. 3D modeling can be both an art form and a tool, used in art, design,
planning, testing, simulation, marketing, advertising, and education.
In 3D computer graphics, 3D modeling is the process of developing a mathematical
coordinate-based representation of a surface of an object (inanimate or living) in
three dimensions via specialized software by manipulating edges, vertices, and
polygons in a simulated 3D space.
3D MODELING OUTLINE
The product is called a 3D model, while someone who works with 3D models may
be referred to as a 3D artist or a 3D modeler. A 3D model can also be displayed as
a two-dimensional image through a process called 3D rendering or used in a
computer simulation of physical phenomena.
HISTORY
3D models are now used throughout 3D graphics and CAD, but their history predates the
widespread use of 3D graphics on personal computers. In the past, many computer games
used pre-rendered images of 3D models as sprites before computers could render them in
real-time. A model lets the designer view the object from various directions and angles,
which helps the designer judge whether the object matches the original vision and helps
the designer or company identify changes or improvements needed to the product.
Representation
A modern render of the iconic Utah teapot model developed by Martin Newell (1975). The Utah
teapot is one of the most common models used in 3D graphics education.
Almost all 3D models can be divided into two categories:
Solid – These models define the volume of the object they represent (like a rock). Solid models
are mostly used for engineering and medical simulations, and are usually built with constructive
solid geometry.
Shell or boundary – These models represent the surface, i.e. the boundary of the
object, not its volume (like an infinitesimally thin eggshell). Almost all visual models
used in games and film are shell models.
Solid and shell modeling can create functionally identical objects. Differences
between them are mostly variations in the way they are created and edited and
conventions of use in various fields and differences in types of approximations
between the model and reality.
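A shell model, for instance, can be stored as nothing more than a list of vertices and a list of faces that index them. The following Python sketch is an illustrative toy, not any particular package's file format; it stores a unit cube as a triangle mesh and computes its surface area from the boundary alone:

```python
# A minimal shell (boundary) representation: a unit cube stored as a
# polygon mesh -- a list of vertices and a list of triangular faces.
# Only the surface is described; the interior volume is implicit.

vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top corners
]

# Each face is a triple of vertex indices; two triangles per cube side.
faces = [
    (0, 2, 1), (0, 3, 2),  # bottom
    (4, 5, 6), (4, 6, 7),  # top
    (0, 1, 5), (0, 5, 4),  # front
    (1, 2, 6), (1, 6, 5),  # right
    (2, 3, 7), (2, 7, 6),  # back
    (3, 0, 4), (3, 4, 7),  # left
]

def surface_area(vertices, faces):
    """Sum the areas of all triangles: |cross(b - a, c - a)| / 2."""
    total = 0.0
    for ia, ib, ic in faces:
        a, b, c = vertices[ia], vertices[ib], vertices[ic]
        u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
        v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
        cx = u[1] * v[2] - u[2] * v[1]
        cy = u[2] * v[0] - u[0] * v[2]
        cz = u[0] * v[1] - u[1] * v[0]
        total += 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5
    return total

print(surface_area(vertices, faces))  # a unit cube has surface area 6.0
```

A solid representation of the same cube would instead record that the volume is filled, for example as a boolean combination of primitives in constructive solid geometry.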
Common modeling techniques include:
Polygonal modeling – Building a model by directly manipulating the vertices, edges, and polygons of a mesh
Boolean operations – Combining input objects to make shapes that would otherwise be difficult to model using other techniques
Procedural modeling – Varying and changing existing models based on a sequence of rules, instructions, or algorithms; this includes modular modeling and auto-variation modeling
Digital sculpting –
Digital sculpting, also known as sculpt modeling or 3D sculpting, is the use of
software that offers tools to push, pull, smooth, grab, pinch or otherwise manipulate
a digital object as if it were made of a real-life substance such as clay.
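The push/pull brushes described above amount to displacing nearby vertices with a falloff. The following Python function is an illustrative toy (the parameter names are hypothetical, not any application's API); it pulls every vertex within a brush radius along a direction, fading the deformation toward the rim of the brush:

```python
import math

def pull_brush(vertices, center, direction, radius, strength):
    """Move every vertex within `radius` of `center` along `direction`,
    scaled by a smooth falloff so the deformation fades at the rim."""
    result = []
    for v in vertices:
        d = math.dist(v, center)
        if d < radius:
            falloff = (1 - d / radius) ** 2   # 1 at the center, 0 at the rim
            result.append(tuple(p + strength * falloff * n
                                for p, n in zip(v, direction)))
        else:
            result.append(v)
    return result

# Pull the middle of a flat 3x3 grid upward, like poking clay from below.
grid = [(x, y, 0.0) for y in range(3) for x in range(3)]
sculpted = pull_brush(grid, center=(1, 1, 0), direction=(0, 0, 1),
                      radius=1.5, strength=0.5)
print(sculpted[4])  # the center vertex is raised the most (z == 0.5)
```

A smooth brush works the same way but moves each vertex toward the average of its neighbors instead of along a fixed direction.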
Sculpting technology:
The geometry used in digital sculpting programs to represent the model can vary;
each offers different benefits and limitations. The majority of digital sculpting tools on
the market use mesh-based geometry, in which an object is represented by an
interconnected surface mesh of polygons that can be pushed and pulled around.
This is somewhat similar to the physical process of beating copper plates to sculpt a
scene in relief. Other digital sculpting tools use voxel-based geometry, in which the
volume of the object is the basic element. Material can be added and removed,
much like sculpting in clay. Still other tools make use of more than one basic
geometry representation.
A benefit of voxel-based sculpting is that voxels allow complete freedom over form.
The topology of a model can be altered continually during the sculpting process as
material is added and subtracted, which frees the sculptor from considering the
layout of polygons on the model's surface. After sculpting, it may be necessary to
retopologize the model to obtain a clean mesh for use in animation or real-time
rendering. Voxels, however, are more limited in handling multiple levels of detail.
Unlike mesh-based modeling, broad changes made to voxels at a low level of detail
may completely destroy finer details.
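As a rough illustration of why voxels free the sculptor from topology concerns, the following Python sketch represents a model as a set of filled grid cells; adding and removing material is plain set arithmetic, and the surface topology is free to change at any step:

```python
# Voxel-based sculpting sketch: the model is a set of filled grid cells.
# Adding or removing material is set arithmetic, so holes can open and
# lumps can merge without any mesh bookkeeping.

def ball(cx, cy, cz, r):
    """All voxels within distance r of (cx, cy, cz)."""
    return {(x, y, z)
            for x in range(cx - r, cx + r + 1)
            for y in range(cy - r, cy + r + 1)
            for z in range(cz - r, cz + r + 1)
            if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r}

model = ball(0, 0, 0, 4)          # start from a blob of material
model |= ball(5, 0, 0, 2)         # add a lump (like pressing on clay)
model -= ball(0, 0, 3, 2)         # carve a hollow out of the top

print(len(model), "voxels in the sculpt")
```

A retopology pass would then extract a clean polygon mesh from the boundary of this voxel set for animation or real-time use.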
Uses:
Sculpting can often introduce details to meshes that would otherwise have been
difficult or impossible to create using traditional 3D modeling techniques. This
makes it preferable for achieving photorealistic and hyperrealistic results, though
many stylized results are achieved as well.
Sculpting is primarily used in high poly organic modeling (the creation of 3D models
which consist mainly of curves or irregular surfaces, as opposed to hard surface
modeling). It is also used by auto manufacturers in their design of new cars.
It can create the source meshes for low poly game models used in video games. In
conjunction with other 3D modeling and texturing techniques and Displacement and
Normal mapping, it can greatly enhance the appearance of game meshes often to
the point of photorealism. Some sculpting programs like 3D-Coat, ZBrush, and
Mudbox offer ways to integrate their workflows with traditional 3D modeling and
rendering programs. Conversely, 3D modeling applications like 3ds Max, Maya and
MODO are now incorporating sculpting capability as well, though these are usually
less advanced than tools found in sculpting-specific applications.
High poly sculpts are also extensively used in CG artwork for movies, industrial
design, art, photorealistic illustrations, and for prototyping in 3D printing.
Virtual clothes are digital garments used for video game characters (avatars / 3D
models), in animation films and commercials, and as clothing for digital doubles in
films such as "The Hobbit", for dangerous scenes or when it is simply impossible to
use a real-life actor. Virtual clothing is also commonly used for dressing up a
player's avatar in a virtual world game, and for making and selling virtual clothes in
3D marketplaces like Second Life. Additional uses for digital clothes include VR and
AI technologies, online shop catalogs of fashion retailers, and crime scene
recreation.
Digital sculpture:
Sculptors and digital artists use digital sculpting to create a model (or Digital Twin) to
be materialized through CNC technologies including 3D printing. The final sculptures
are often called Digital Sculpture or 3D printed art. While digital technologies have
emerged in many art disciplines (painting, photography), this is less the case for
digital sculpture due to the higher complexity and technology limitations to produce
the final sculpture.
Sculpting Process:
The best way to learn sculpting is by understanding primary, secondary, and tertiary
forms. First, break the object you want to make down into its basic shapes, such
as a sphere or cube. Focus on making the large, overall shape of the object. After
that, work on the bigger shapes on top of or inside the object. These can be
protrusions or cut outs. Then, do a final detail pass, such as pores or lines to break
up the shape.
Sculpting programs:
There are a number of digital sculpting tools available. Some popular tools for
creating are:
3D-Coat
Adobe Substance
Autodesk Alias
CB Model Pro
Curvy 3D
Geomagic Freeform
Geomagic Sculpt
Kodon
Medium by Adobe
Mudbox
Nomad Sculpt
SculptGL
Shapelab (VR)
SharpConstruct
ZBrush
3ds Max
Blender
Bryce
Cinema 4D
Form-Z
Houdini
Lightwave 3D
Maya
MODO
Poser
Rhinoceros 3D
SelfCAD
Silo
SketchUp
Softimage XSI
Strata 3D
trueSpace
UV MAPPING
Process:
UV texturing permits polygons that make up a 3D object to be painted with
color (and other surface attributes) from an ordinary image. The image is called a
UV texture map. The UV mapping process involves assigning pixels in the image to
surface mappings on the polygon, usually done by "programmatically" copying a
triangular piece of the image map and pasting it onto a triangle on the object. UV
texturing is an alternative to projection mapping (e.g., using any pair of the model's
X, Y, Z coordinates or any transformation of the position); it only maps into a texture
space rather than into the geometric space of the object. The rendering computation
uses the UV texture coordinates to determine how to paint the three-dimensional
surface.
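As a minimal sketch of that computation, the following Python snippet (a toy example; real renderers use filtered sampling, covered later) looks up the texel that a given (u, v) coordinate in [0, 1] falls on:

```python
# A 4x4 "UV texture map" stored as rows of RGB tuples (row 0 is v == 0).
W, H = 4, 4
texture = [[(x * 85, y * 85, 0) for x in range(W)] for y in range(H)]

def sample(u, v):
    """Map a (u, v) coordinate in [0, 1] to a texel and return its color."""
    x = min(int(u * W), W - 1)   # clamp so u == 1.0 stays in range
    y = min(int(v * H), H - 1)
    return texture[y][x]

# The renderer would call this for every covered pixel of a polygon.
print(sample(0.0, 0.0))   # -> (0, 0, 0), the corner texel
print(sample(1.0, 1.0))   # -> (255, 255, 0), the opposite corner
```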
APPLICATION TECHNIQUES:
As an example, a sphere can be given a checkered texture in two ways. Without UV
mapping, the sphere is carved out of three-dimensional checkers tiling Euclidean
space. With UV mapping, the checkers tile the two-dimensional UV space, and points
on the sphere map to this space according to their latitude and longitude.
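The latitude/longitude mapping can be sketched in a few lines of Python; the function names here are illustrative:

```python
import math

def sphere_uv(x, y, z):
    """Equirectangular UVs for a point on the unit sphere:
    u from longitude (atan2), v from latitude (asin), both in [0, 1]."""
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 + math.asin(y) / math.pi
    return u, v

def checker(u, v, tiles=8):
    """Tile the 2D UV square with a black/white checker pattern."""
    return (int(u * tiles) + int(v * tiles)) % 2

# A point on the equator facing +x lands in the middle of the texture.
print(sphere_uv(1.0, 0.0, 0.0))  # -> (0.5, 0.5)
```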
UV unwrapping:
When a model is created as a polygon mesh
using a 3D modeler, UV coordinates (also
known as texture coordinates) can be
generated for each vertex in the mesh. One
way is for the 3D modeler to unfold the triangle
mesh at the seams, automatically laying out
the triangles on a flat page. If the mesh is a
UV sphere, for example, the modeler might
transform it into an equirectangular projection.
Once the model is unwrapped, the artist can
paint a texture on each triangle individually,
using the unwrapped mesh as a template.
When the scene is rendered, each triangle will map to the appropriate texture from
the "decal sheet".
UV coordinates are optionally applied per face. This means a shared spatial vertex
position can have different UV coordinates for each of its triangles, so adjacent
triangles can be cut apart and positioned on different areas of the texture map.
The UV mapping process at its simplest requires three steps: unwrapping the mesh, creating the
texture, and applying the texture to the respective faces of the polygon mesh.
UV mapping may use repeating textures, or an injective 'unique' mapping as a prerequisite for
baking.
TEXTURE MAPPING:
Texture mapping is a method for mapping a texture on a computer-generated
graphic. Texture here can be high frequency detail, surface texture, or color.
HISTORY:
The original technique was pioneered by Edwin Catmull in 1974 as part of his
doctoral thesis.
By 1983 work done by Johnson Yan, Nicholas Szabo, and Lish-Yaan Chen in their
invention "Method and Apparatus for Texture Generation" provided for texture
generation in real time where texture could be generated and superimposed on
surfaces (curvilinear and planar) of any orientation. Texture patterns could be
modeled suggestive of the real world material they were intended to represent in a
continuous way and free of aliasing, ultimately providing level of detail and gradual
(imperceptible) detail level transitions. Texture generating became repeatable and
coherent from frame to frame and remained in correct perspective and appropriate
occultation. Because real-time texturing was first applied to early three-dimensional
flight-simulator CGI systems, many of these techniques were later widely used in
graphics computing, gaming, and other applications for years to follow, as texture
was often the first prerequisite for realistic-looking graphics.
Texture maps:
A texture map is an image applied (mapped) to the surface of a shape or polygon.
This may be a bitmap image or a procedural texture. They may be stored in
common image file formats, referenced by 3D model formats or material definitions,
and assembled into resource bundles.
They may have 1-3 dimensions, although 2 dimensions are most common for visible
surfaces. For use with modern hardware, texture map data may be stored in
swizzled or tiled orderings to improve cache coherency. Rendering APIs typically
manage texture map resources (which may be located in device memory) as buffers
or surfaces, and may allow 'render to texture' for additional effects such as post
processing or environment mapping.
They usually contain RGB color data (either stored as direct color, compressed
formats, or indexed color), and sometimes an additional channel for alpha blending
(RGBA) especially for billboards and decal overlay textures. It is possible to use the
alpha channel (which may be convenient to store in formats parsed by hardware) for
other uses such as specularity.
Multiple texture maps (or channels) may be combined for control over specularity,
normals, displacement, or subsurface scattering e.g. for skin rendering.
Creation:
Texture maps may be acquired by scanning/digital photography, designed in image
manipulation software such as GIMP or Photoshop, or painted onto 3D surfaces
directly in a 3D paint tool such as Mudbox or ZBrush.
Texture application:
This process is akin to applying patterned paper to a plain white box. Every vertex in
a polygon is assigned a texture coordinate (which in the 2D case is also known as a
UV coordinate). This may be done through explicit assignment of vertex attributes,
manually edited in a 3D modelling package through UV unwrapping tools. It is also
possible to associate a procedural transformation from 3D space to texture space
with the material. This might be accomplished via planar projection or, alternatively,
cylindrical or spherical mapping. More complex mappings may consider the distance
along a surface to minimize distortion. These coordinates are interpolated across
the faces of polygons to sample the texture map during rendering. Textures may be
repeated or mirrored to extend a finite rectangular bitmap over a larger area, or they
may have a one-to-one unique "injective" mapping from every piece of a surface
(which is important for render mapping and light mapping, also known as baking).
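The interpolation step can be sketched with barycentric weights, which is effectively what a rasterizer computes for each covered pixel (an illustrative toy, not a full rasterizer):

```python
def interp_uv(bary, uv0, uv1, uv2):
    """Interpolate per-vertex UVs across a triangle using barycentric
    weights (w0, w1, w2); the weights sum to 1 inside the triangle."""
    w0, w1, w2 = bary
    u = w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0]
    v = w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1]
    return u, v

# Triangle with UVs pinned to three corners of the texture square.
uv0, uv1, uv2 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)

# A pixel at the triangle's centroid gets the average of the three UVs.
print(interp_uv((1/3, 1/3, 1/3), uv0, uv1, uv2))  # roughly (0.333, 0.333)
```

For perspective projection, real rasterizers interpolate u/w and v/w and divide back per pixel; this sketch omits that correction.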
Texture space:
Texture mapping maps the model surface (or screen space during rasterization) into
texture space; in this space, the texture map is visible in its undistorted form. UV
unwrapping tools typically provide a view in texture space for manual editing of
texture coordinates. Some rendering techniques such as subsurface scattering may
be performed approximately by texture-space operations.
Multi texturing:
Multi texturing is the use of more than one texture at a time on a polygon. For
instance, a light map texture may be used to light a surface as an alternative to
recalculating that lighting every time the surface is rendered. Micro textures or detail
textures are used to add higher frequency details, and dirt maps may add
weathering and variation; this can greatly reduce the apparent periodicity of
repeating textures. Modern graphics may use more than 10 layers, which are
combined using shaders, for greater fidelity. Another multitexture technique is bump
mapping, which allows a texture to directly control the facing direction of a surface
for the purposes of its lighting calculations; it can give a very good appearance of a
complex surface (such as tree bark or rough concrete) that takes on lighting detail in
addition to the usual detailed coloring. Bump mapping has become popular in recent
video games, as graphics hardware has become powerful enough to accommodate
it in real-time.
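The basic combine used for light mapping is a per-channel multiply of two texels; a minimal Python sketch, with made-up texel values:

```python
def modulate(a, b):
    """Channel-wise multiply of two RGB texels (the classic multitexture
    combine used for light maps), keeping values in the 0..255 range."""
    return tuple(x * y // 255 for x, y in zip(a, b))

base = (200, 120, 80)         # surface color from the diffuse texture
light = (255, 128, 64)        # precomputed lighting from the light map
print(modulate(base, light))  # -> (200, 60, 20)
```

Detail textures and dirt maps are layered the same way, each multiply (or other blend) adding another source of variation per pixel.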
Texture filtering:
The way that samples (e.g. when viewed as pixels on the screen) are calculated
from the texels (texture pixels) is governed by texture filtering. The cheapest method
is nearest-neighbour interpolation, but bilinear interpolation and trilinear
interpolation between mipmaps are two commonly used alternatives which reduce
aliasing or jaggies. If a texture coordinate falls outside the texture, it is
either clamped or wrapped. Anisotropic filtering better eliminates directional
artefacts when viewing textures from oblique angles.
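A bilinear filter blends the four texels nearest the sample point, here with wrapping for out-of-range coordinates; a minimal grayscale sketch (illustrative, not any API's actual filter):

```python
def wrap(i, n):
    """Wrap a texel index so coordinates outside the texture repeat."""
    return i % n

def bilinear(texture, u, v):
    """Bilinear filtering: blend the four texels nearest to (u, v).
    `texture` is a list of rows of grayscale values."""
    h, w = len(texture), len(texture[0])
    x = u * w - 0.5            # texel centers sit at half-integer UVs
    y = v * h - 0.5
    x0, y0 = int(x // 1), int(y // 1)     # floor to the lower-left texel
    fx, fy = x - x0, y - y0               # fractional position within it
    def t(ix, iy):
        return texture[wrap(iy, h)][wrap(ix, w)]
    top = t(x0, y0) * (1 - fx) + t(x0 + 1, y0) * fx
    bot = t(x0, y0 + 1) * (1 - fx) + t(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

tex = [[0, 100],
       [100, 200]]
# Sampling midway between all four texels averages them.
print(bilinear(tex, 0.5, 0.5))  # -> 100.0
```

Trilinear filtering repeats this on two adjacent mipmaps and blends the results; nearest-neighbour simply returns the single closest texel.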
Texture streaming:
Texture streaming is a means of using data streams for textures, where each texture
is available in two or more different resolutions, so that the engine can determine
which version should be loaded into memory based on the texture's draw distance
from the viewer and how much memory is available for textures. Texture streaming
allows a rendering engine to use low-resolution textures for objects far away from
the viewer's camera, and resolve those into more detailed textures, read from a data
source, as the point of view nears the objects.
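The selection logic can be sketched as picking a resolution level from distance; the threshold and sizes below are arbitrary assumptions for illustration:

```python
import math

def pick_mip(mip_sizes, distance, full_detail_at=10.0):
    """Choose which of several stored resolutions to load: each doubling
    of distance beyond `full_detail_at` drops one detail level."""
    if distance <= full_detail_at:
        return 0
    level = int(math.log2(distance / full_detail_at))
    return min(level, len(mip_sizes) - 1)

# Resolutions available on disk, highest detail first (hypothetical sizes).
mips = [2048, 1024, 512, 256, 128]
for d in (5.0, 50.0, 5000.0):
    print(d, "->", mips[pick_mip(mips, d)], "texels wide")
```

A streaming engine runs this kind of test per frame and asynchronously loads or evicts texture data as the chosen level changes.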
Baking:
Baking can be used as a form of level of detail generation, where a complex scene
with many different elements and materials may be approximated by a single
element with a single texture, which is then algorithmically reduced for lower
rendering cost and fewer draw calls. It is also used to take high-detail models from
3D sculpting software and point cloud scanning and approximate them with meshes
more suitable for real time rendering.
RASTERISATION ALGORITHMS:
Various techniques have evolved in software and hardware implementations. Each
offers different trade-offs in precision, versatility and performance.
For the case of rectangular objects, using quad primitives can look less incorrect
than the same rectangle split into triangles, but because interpolating 4 points adds
complexity to the rasterization, most early implementations preferred triangles only.
Some hardware, such as the forward texture mapping used by the Nvidia NV1, was
able to offer efficient quad primitives. With perspective correction (see below)
triangles become equivalent and this advantage disappears.
Applications:
Beyond 3D rendering, the availability of texture mapping hardware has inspired its use for
accelerating other tasks:
Tomography:
It is possible to use texture mapping hardware to accelerate both the reconstruction
of voxel data sets from tomographic scans, and to visualize the results.
User interfaces:
Many user interfaces use texture mapping to accelerate animated transitions of
screen elements, e.g. Exposé in Mac OS X.
1. Unwrapping:
To start the 3D texturing process, you need to unwrap the model first, which
basically means unfolding the 3D mesh. Texture artists will create a UV map for each
3D object as soon as they receive the final models from the 3D modeling
department. UVs are in fact 2D representations of 3D models. UV mapping helps
wrap a 2D image (texture) around a 3D object by directly relating it to the vertices
of a polygon. The resulting map is used directly in the process of texturing and
shading.
Unwrapping a 3D model in the texturing component is most often a must; unless you
want to use other options such as procedural textures. These are 2D or 3D textures
created using a mathematical algorithm (procedure) rather than directly stored data.
Correct display of an object's overall look and its interaction with light is a key step
towards its believability and appeal. The wrong material or surface properties can
end up being rejected by the viewer's mind. This sums up the overall purpose of the
texturing and shading process, going hand-in-hand. The texture is usually a 2D
image and shading is a group of functions that determines the way light affects the
2D image.
The process of defining color information, surface details, and visual properties of a
3D model is called "texture mapping". Texture maps most used by Dream Farm
texture artists include a Base Color map, Normal map, Height map, Diffuse map,
Specular map, Roughness map, and Self-Illumination map. There are many other
texture maps as well, including Ambient Occlusion map, Displacement map,
Specularity/Reflection map, Roughness/Glossiness map, Metalness map, Refraction
map, etc.
In short, the process of calculating the different maps assigned to the object's
shader, together with the lights, is called rendering. Generally speaking, the
texturing, 3D lighting, and rendering processes rely on one another. So it is
important to choose your texture maps based on the preferences of the render
engine you'll be using at the end of the production stage.
4. Texture mapping:
3D texturing software:
The first question that comes to mind is which software is best for texturing work
in animation. While some programs do certain things better than others, no single
program does everything flawlessly. If you want to choose appropriate software,
you should answer some questions:
1. What is the scope of my project? Am I doing it just to get the hang of texturing?
If so, then you should start with beginner-friendly software that makes learning
fun and easy, like Blender.