http://www.arcsynthesis.org/gltut/Basics/Intro%20Graphics%20and%20Rendering.html
Graphics and Rendering
This is an overview of the process of rendering. Do not worry if you do not understand everything right away; every step will be covered in lavish detail in later tutorials.
Everything you see on your computer's screen, even the text you are reading right now (assuming you are reading this on an electronic display device, rather than a printout) is simply a two-dimensional array of pixels. If you take a screenshot of something on your screen, and blow it up, it will look very blocky.
Each of these blocks is a pixel; the word "pixel" is derived from the term "picture element." A two-dimensional array of pixels that has been assigned colors is called an image.
The purpose of graphics of any kind is therefore to determine what color to put in what pixels. This determination is what makes text look like text, windows look like windows, and so forth.
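To make that concrete, here is a minimal sketch (the type names are illustrative, not from any real API) of an image as a two-dimensional array of pixels:

    #include <vector>

    // A hypothetical pixel: one color, stored as red, green, and blue
    // intensities.
    struct Pixel { float r, g, b; };

    // An image is just a two-dimensional array of pixels, flattened
    // here into one vector in row-major order.
    struct Image
    {
        int width, height;
        std::vector<Pixel> pixels; // width * height entries

        Pixel &at(int x, int y) { return pixels[y * width + x]; }
    };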
Since all graphics are just two-dimensional arrays of pixels, how does 3D work? 3D graphics is a system of producing colors for pixels that convince you that the scene you are looking at is a 3D world rather than a 2D image. The process of converting a 3D world into a 2D image of that world is called rendering.
There are several methods for rendering a 3D world. The process used by real-time graphics hardware, such as that found in your computer, involves a very great deal of fakery. This process is called rasterization, and a rendering system that uses rasterization is called a rasterizer.
In rasterizers, all objects that you see are empty shells. There are techniques that are used to allow you to cut open these empty shells, but this simply replaces part of the shell with another shell that shows what the inside looks like. Everything is a shell.
All of these shells are made of triangles. Even surfaces that appear to be round are merely triangles if you look closely enough. There are techniques that generate more triangles for objects that appear closer or larger, so that the viewer can almost never see the faceted silhouette of the object. But they are always made of triangles.
A series of adjacent triangles that defines the surface of an object is often called geometry, a model, or a mesh. These terms are used interchangeably.
The process of rasterization has several phases. These phases are ordered into a pipeline, where triangles enter from the top and a 2D image is filled in at the bottom. This is one of the reasons why rasterization is so amenable to hardware acceleration: it operates on each triangle one at a time, in a specific order. Triangles can be fed into the top of the pipeline while triangles that were sent earlier can still be in some phase of rasterization.
The order in which triangles and the various meshes are submitted to the rasterizer can affect its output. Always remember that, no matter how you submit the triangular mesh data, the rasterizer will process each triangle in a specific order, drawing the next one only when the previous triangle has finished being drawn.
OpenGL is an API for accessing a hardware-based rasterizer. As such, it conforms to the model for rasterization-based 3D renderers. A rasterizer receives a sequence of triangles from the user, performs operations on them, and writes pixels based on this triangle data. This is a simplification of how rasterization works in OpenGL, but it is useful for our purposes.
Triangles and Vertices. Triangles consist of 3 vertices. A vertex is a collection of arbitrary data. For the sake of simplicity (we will expand upon this later), let us say that this data must contain a point in three-dimensional space. It may contain other data, but it must have at least this. Any 3 points that are not on the same line create a triangle, so the minimum information for a triangle consists of 3 three-dimensional points.
A point in 3D space is defined by 3 numbers, or coordinates: an X coordinate, a Y coordinate, and a Z coordinate. These are commonly written with parentheses, as in (X, Y, Z).
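As a sketch (the names are illustrative, not from any real API), the minimum data for a triangle could be written like this:

    // A 3D point: three coordinates, written (X, Y, Z) in the text.
    struct Position { float x, y, z; };

    // For now a vertex carries only a position; later tutorials attach
    // other data to it.
    struct Vertex { Position position; };

    // The minimum information for a triangle: 3 vertices, i.e. 3
    // three-dimensional points that are not on the same line.
    struct Triangle { Vertex vertices[3]; };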
Clip Space Transformation. The first phase of rasterization is to transform the vertices of each triangle into a certain volume of space; everything within this volume will be rendered to the output image, and everything that falls outside of it will not be. In OpenGL parlance, this volume is called clip space. The positions of the triangle's vertices in clip space are called clip coordinates.
Clip coordinates are a little different from regular positions. A position in 3D space has 3 coordinates; a position in clip space has 4: the usual X, Y, and Z, plus a fourth coordinate called W, which defines the extent of clip space for that vertex. Triangles that lie partially outside of clip space undergo a process called clipping. This breaks the triangle apart into a number of smaller triangles, such that the smaller triangles are all entirely within clip space. Hence the name clipping.
Normalized Coordinates. Clip space is useful, but it is inconvenient to work with directly, since its extent depends on W. To make things simpler, clip coordinates are converted into normalized device coordinates.
This process is very simple. The X, Y, and Z of each vertex's position are divided by W to get normalized device coordinates. That is all.
The space of normalized device coordinates is essentially just clip space, except that the range of X, Y and Z is [-1, 1]. The directions are all the same. The division by W is an important part of projecting 3D triangles onto 2D images; we will cover that in a future tutorial.
The cube indicates the boundaries of normalized device coordinate space.
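A minimal sketch of that division in code (the struct names are illustrative, not part of OpenGL):

    // A clip-space position: X, Y, and Z plus the fourth W coordinate.
    struct ClipPos { float x, y, z, w; };

    // A normalized-device-coordinate position; X, Y, and Z are each in
    // [-1, 1] for visible points.
    struct NdcPos { float x, y, z; };

    // The entire conversion: divide each component by W.
    NdcPos toNdc(ClipPos c)
    {
        return { c.x / c.w, c.y / c.w, c.z / c.w };
    }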
Window Transformation. The next phase of rasterization is to transform the vertices of each triangle again. This time, they are converted from normalized device coordinates to window coordinates. As the name suggests, window coordinates are relative to the window that OpenGL is running within.
Even though they refer to the window, they are still three-dimensional coordinates. The X goes to the right, Y goes up, and Z goes away, just as in clip space. The only difference is that the bounds of these coordinates depend on the viewable window. It should also be noted that while these are window coordinates, none of the precision is lost. They are not integer coordinates; they are still floating-point values, and thus they have precision beyond that of a single pixel.
The bounds for Z are [0, 1], with 0 being the closest and 1 being the farthest. Vertex positions outside of this range are not visible.
Note that window coordinates have the bottom-left position as the (0, 0) origin point. This runs counter to what most users expect of window coordinates, where the top-left position is the origin. There are transform tricks you can play to allow you to work in a top-left coordinate space if you need to.
The full details of this process will be discussed at length as the tutorials progress.
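In the meantime, a rough sketch: the transform maps X and Y from the [-1, 1] range of normalized device coordinates into the viewport rectangle (the one handed to OpenGL's glViewport), and Z into the [0, 1] depth range. The names here are illustrative:

    // A normalized-device-coordinate position, each component in [-1, 1].
    struct NdcPos { float x, y, z; };

    // A window-space position: still floating-point, not snapped to pixels.
    struct WindowPos { float x, y, z; };

    // Maps NDC into a viewport rectangle (vx, vy, width, height), using
    // the default [0, 1] depth range.
    WindowPos toWindow(NdcPos p, float vx, float vy, float width, float height)
    {
        return {
            (p.x + 1.0f) * 0.5f * width + vx,  // X grows to the right
            (p.y + 1.0f) * 0.5f * height + vy, // Y grows upward
            (p.z + 1.0f) * 0.5f,               // 0 is closest, 1 is farthest
        };
    }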
Scan Conversion. After converting the coordinates of a triangle to window coordinates, the triangle undergoes a process called scan conversion. This process takes the triangle and breaks it up based on the arrangement of window pixels over the output image that the triangle covers.
The center image shows the digital grid of output pixels; the circles represent the center of each pixel. The center of each pixel represents a sample: a discrete location within the area of a pixel. During scan conversion, a triangle will produce a fragment for every pixel sample that is within the 2D area of the triangle.
The image on the right shows the fragments generated by the scan conversion of the triangle. This creates a rough approximation of the triangle's general shape.
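As an illustration only (the names are made up, and real rasterizers use far more careful fill and rounding rules), scan conversion can be sketched as testing the center sample of every pixel against the triangle's three edges:

    #include <vector>

    struct Point2D { float x, y; };

    // Signed area test: positive when p lies to the left of edge a->b.
    float edge(Point2D a, Point2D b, Point2D p)
    {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // Generates one fragment position for every pixel whose center sample
    // lies inside the triangle (v0, v1, v2), assumed counter-clockwise.
    std::vector<Point2D> scanConvert(Point2D v0, Point2D v1, Point2D v2,
                                     int width, int height)
    {
        std::vector<Point2D> fragments;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                Point2D center = { x + 0.5f, y + 0.5f }; // the pixel's sample
                if (edge(v0, v1, center) >= 0.0f &&
                    edge(v1, v2, center) >= 0.0f &&
                    edge(v2, v0, center) >= 0.0f)
                    fragments.push_back(center);
            }
        return fragments;
    }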
It is very often the case that triangles are rendered that share edges. OpenGL offers a guarantee that, so long as the shared edge vertex positions are identical, there will be no sample gaps during scan conversion.
To make it easier to use this, OpenGL also offers the guarantee that if you pass the same input vertex data through the same vertex processor, you will get identical output; this is called the invariance guarantee. So the onus is on the user to use the same input vertices in order to ensure gap-less scan conversion.
Scan conversion is an inherently 2D operation. This process only uses the X and Y position of the triangle in window coordinates to determine which fragments to generate. The Z value is not forgotten, but it is not directly part of the actual process of scan converting the triangle.
The result of scan converting a triangle is a sequence of fragments that cover the shape of the triangle. Each fragment has certain data associated with it. This data contains the 2D location of the fragment in window coordinates, as well as the Z position of the fragment. This Z value is known as the depth of the fragment. There may be other information that is part of a fragment, and we will expand on that in later tutorials.
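A sketch of that per-fragment data, with illustrative names:

    // The data a fragment carries out of scan conversion: its 2D window
    // position plus its Z value, the fragment's depth. Later tutorials
    // add more to this.
    struct Fragment
    {
        float x, y;  // window-space location of the pixel sample
        float depth; // the fragment's Z position
    };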
Fragment Processing. This phase takes a fragment from a scan converted triangle and transforms it into one or more color values and a single depth value. The order that fragments from a single triangle are processed in is irrelevant; since a single triangle lies in a single plane, fragments generated from it cannot possibly overlap. However, the fragments from another triangle can possibly overlap. Since order is important in a rasterizer, the fragments from one triangle must all be processed before the fragments from another triangle.
This phase is quite arbitrary. The user of OpenGL has a lot of options of how to decide what color to assign a fragment. We will cover this step in detail throughout the tutorials.
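A minimal sketch of this phase, reusing the illustrative Fragment record from above and an RGB color type; what actually gets computed here is entirely up to the user:

    struct Fragment { float x, y, depth; }; // as sketched earlier
    struct Color    { float r, g, b; };     // RGB intensities in [0, 1]

    // The output of fragment processing: one color value (possibly more
    // in advanced setups) and a single depth value.
    struct ShadedFragment
    {
        Color color;
        float depth;
    };

    // The simplest possible choice: paint every fragment solid red and
    // pass the fragment's depth through unchanged.
    ShadedFragment processFragment(Fragment f)
    {
        return { { 1.0f, 0.0f, 0.0f }, f.depth };
    }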
Colors. In computer graphics, a color is usually described as a series of numbers on the range [0, 1]. Each number corresponds to the intensity of a particular reference color, and the final color is a combination of these values. The set of reference colors is called a colorspace. The most common colorspace for screens is RGB, where the reference colors are Red, Green and Blue. Printed works tend to use CMYK (Cyan, Magenta, Yellow, Black). Since we're dealing with rendering to a screen, and because OpenGL requires it, we will use the RGB colorspace.
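For example, a few colors written as [0, 1] RGB intensity triples (the Color type is illustrative):

    struct Color { float r, g, b; }; // intensities of the reference colors

    Color red   = { 1.0f, 0.0f, 0.0f }; // full red, no green, no blue
    Color white = { 1.0f, 1.0f, 1.0f }; // all reference colors at full strength
    Color gray  = { 0.5f, 0.5f, 0.5f }; // an even mix at half intensity
    Color olive = { 0.5f, 0.5f, 0.0f }; // half red plus half green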
Shaders. A shader is a program designed to be run on a renderer as part of the rendering operation. Regardless of the kind of rendering system in use, shaders can only be executed at certain points in that rendering process. These shader stages represent hooks where a user can add arbitrary algorithms to create a specific visual effect.
In terms of rasterization as outlined above, there are several shader stages where arbitrary processing is both economical for performance and offers high utility to the user. For example, the transformation of an incoming vertex to clip space is a useful hook for user-defined code, as is the processing of a fragment into final colors and depth.
Shaders for OpenGL are run on the actual rendering hardware. This can often free up valuable CPU time for other tasks, or simply perform operations that would be difficult if not impossible without the flexibility of executing arbitrary code. A downside of this is that they must live within certain limits that CPU code would not have to.
There are a number of shading languages available to various APIs. The one used in this tutorial is the primary shading language of OpenGL. It is called, unimaginatively, the OpenGL Shading Language, or GLSL for short. It looks deceptively like C, but it is very much not C.
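As a first taste (a minimal sketch, not a shader from these tutorials), here is a trivial GLSL vertex shader, held in a C++ string of the kind that is eventually handed to OpenGL. It passes each vertex position through to clip space unchanged:

    // A trivial GLSL vertex shader in a C++ raw string literal.
    const char *vertexShaderSrc = R"(
    #version 330
    layout(location = 0) in vec4 position; // the vertex's position attribute

    void main()
    {
        gl_Position = position; // output the clip-space position unchanged
    }
    )";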