WebGL can draw three standard types of primitives - POINTS, LINES and TRIANGLES - where POINTS uses 1 vertex per primitive, LINES 2 vertices and TRIANGLES 3 vertices.
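
The vertex-per-primitive counts above can be captured in a small illustration (plain JavaScript, not WebGL API code; the function names are made up for this sketch):

```javascript
// Vertices consumed per primitive for each standard draw mode
// (names mirror the WebGL constants gl.POINTS, gl.LINES, gl.TRIANGLES).
function verticesPerPrimitive(mode) {
  const table = { POINTS: 1, LINES: 2, TRIANGLES: 3 };
  if (!(mode in table)) throw new Error("unknown mode: " + mode);
  return table[mode];
}

// A buffer of 6 vertices therefore yields 6 points, 3 lines, or 2 triangles:
function primitiveCount(mode, vertexCount) {
  return Math.floor(vertexCount / verticesPerPrimitive(mode));
}
```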
        
        
In your WebGL application you provide a set of vertices that make up the object you want to draw. Those vertices first have to be uploaded to your GPU and subsequently run through your vertex shader when the application executes. Once all information has been uploaded to your GPU, the functions drawArrays or drawElements start the execution of the rendering pipeline.
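
As a sketch, uploading vertices and starting the pipeline looks roughly like this (assumes an existing WebGL context `gl` and an attribute location `positionLoc` obtained earlier via `gl.getAttribLocation`; browser-only code):

```javascript
// Three 2D vertices for one triangle.
const positions = new Float32Array([0, 0, 0, 0.5, 0.7, 0]);

// Upload the vertex data to the GPU.
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

// Describe how the vertex shader reads the buffer.
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

// Start the rendering pipeline.
gl.drawArrays(gl.TRIANGLES, 0, 3);
```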
    
The vertex shader is a function you write in GLSL. It gets called once for each vertex. You do some math and set the special variable gl_Position to a clip-space value for the current vertex. The GPU takes that value and stores it internally.
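
A minimal vertex shader might look like this (the attribute and uniform names are assumptions; they would be set up by your application):

```glsl
// Minimal GLSL ES vertex shader.
attribute vec4 a_position;
uniform mat4 u_matrix;

void main() {
  // gl_Position must receive a clip-space value.
  gl_Position = u_matrix * a_position;
}
```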
        
        
Note: Do as much as you can in the vertex shader rather than in the fragment shader. Because fragment shaders run many more times per rendering pass than vertex shaders, any calculation that can be done on the vertices and then interpolated across the fragments is a performance improvement (this interpolation is done "automagically" for you by the fixed-function rasterization stage of the rendering pipeline).
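
Ignoring perspective correction, that interpolation is an ordinary linear blend between per-vertex values, which can be sketched as:

```javascript
// Linear interpolation between two per-vertex values v0 and v1,
// where t in [0, 1] is the fragment's position between the vertices.
function lerp(v0, v1, t) {
  return v0 * (1 - t) + v1 * t;
}
// A fragment halfway between vertices carrying lighting values
// 0.2 and 0.8 receives the blended value 0.5.
```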
    
The WebGL pipeline needs to assemble the shaded vertices into individual geometric primitives such as triangles, lines or points. For each primitive, WebGL then decides whether it lies within the 3D region that is visible on the screen. Primitives located entirely outside that region are discarded and not passed on to the rasterizer; primitives partially inside the region are clipped to their visible part and then passed to the rasterizer.
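
The visibility decision for a single vertex can be sketched as follows: a clip-space vertex [x, y, z, w] lies inside the visible volume when each of x, y and z falls within [-w, +w] (the function name is made up for this sketch):

```javascript
// True when a clip-space vertex [x, y, z, w] is inside the visible
// volume; primitives whose vertices are all outside are discarded,
// partially-inside primitives are clipped.
function insideClipVolume([x, y, z, w]) {
  return Math.abs(x) <= w && Math.abs(y) <= w && Math.abs(z) <= w;
}
```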
        
        
Note: The visible 3D region is specified by a matrix, usually known as the projection matrix, which has to be set within your application and passed to your vertex shader. Common projection matrices are perspective, frustum and ortho.
        
        
Assuming you're drawing TRIANGLES, every time the vertex shader produces 3 vertices, primitive processing & assembly uses them to form a triangle. It works out which screen positions the 3 points of the triangle correspond to. Each visible triangle is then passed to the rasterizer.
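
The grouping step can be sketched as follows: take the shaded vertices three at a time and emit one triangle per group (a simplified illustration, not the actual pipeline code):

```javascript
// Primitive assembly, sketched for the TRIANGLES draw mode:
// group a flat list of shaded vertices into triangles, three at a time.
function assembleTriangles(vertices) {
  const triangles = [];
  for (let i = 0; i + 3 <= vertices.length; i += 3) {
    triangles.push([vertices[i], vertices[i + 1], vertices[i + 2]]);
  }
  return triangles;
}
```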
The rasterizer converts the primitives into fragments. For each fragment it calls your fragment shader, asking what color to give that fragment. You can think of a fragment as a candidate pixel that may finally be drawn on the screen.
An important operation in computer graphics is applying a texture to a surface. You can think of texturing as a process that "glues" images onto geometric objects. These images are called textures. In your WebGL application you can load a texture which then determines the color of the object you want to draw. The texture has to be stored in a texture buffer and uploaded to your GPU.
        
        
Note: In order to apply textures to objects we need to pass in texture coordinates for each vertex. The most common way is to pass them straight through to the fragment shader. Texture coordinates tell WebGL which location within the texture should be applied to a specific vertex.
        
        
All textures are normalized to a 2-dimensional [0,1] picture area, so each pixel of a texture can be addressed within the range [0,1] on each axis.
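
The mapping from a pixel index in a width×height texture to its normalized coordinate can be sketched as (the function name is made up; the +0.5 addresses the texel center):

```javascript
// Map a pixel index (x, y) in a width-by-height texture to its
// normalized [0, 1] texture coordinate, sampling at the texel center.
function texelToUV(x, y, width, height) {
  return [(x + 0.5) / width, (y + 0.5) / height];
}
```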
    
Your textures need to be stored so that the GPU can access them quickly and efficiently. Usually the GPU has a special texture memory to store the textures.
The fragment shader is a function you write in GLSL. It gets called once for each pixel-sized fragment. Your fragment shader has to set the special variable gl_FragColor to the color it wants for each fragment.
        
You may calculate and set the color directly in the fragment shader. Alternatively, you can use a texture to define the color for each fragment. Another way is to declare a varying variable holding a color value for each vertex and pass that data from the vertex shader to the fragment shader.
        
        
Note: WebGL connects a varying in the vertex shader to the varying of the same name and type in the fragment shader. All data passed through a varying variable is interpolated by the rasterizer (this interpolation is done "automagically" for you by the fixed-function rasterization stage of the rendering pipeline).
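
A minimal fragment shader that samples a texture through such a varying might look like this (the uniform and varying names are assumptions; the varying would be written by your vertex shader):

```glsl
// Minimal GLSL ES fragment shader.
precision mediump float;

uniform sampler2D u_texture;
varying vec2 v_texcoord;

void main() {
  // gl_FragColor must receive the fragment's color.
  gl_FragColor = texture2D(u_texture, v_texcoord);
}
```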
    
The frame buffer is a portion of graphics memory that holds the scene data. This buffer contains details such as the width and height of the surface (in pixels), the color of each pixel, and the depth and stencil buffers. You can think of the frame buffer as a digital copy of the image drawn onto your screen, whose pixels are continuously updated by your fragment shader.
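
Since the color buffer stores one color per pixel, its rough memory footprint follows directly from the surface size (a sketch assuming a common 4-bytes-per-pixel RGBA format):

```javascript
// Approximate size of one RGBA8 color buffer: 4 bytes per pixel.
function colorBufferBytes(width, height) {
  return width * height * 4;
}
// A 1920x1080 surface needs about 8.3 MB for its color buffer alone;
// depth and stencil buffers add to that.
```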