Polygon Count and File Size
The two most common factors that contribute to a file's size are the polygon count and the vertex count, and a larger file is harder to run in real time. In a game, for example, a high-end PC could render an object with thousands of polygons, whereas an iPhone could probably only manage a few hundred per object.
When a game artist talks about the polygon count of a game, they normally mean the triangle count. Games almost always use triangles over any other polygons, as modern hardware is optimized for rendering triangles, so it would be foolish to build a game out of anything else. The polygon count in most modelling software is also usually misleading, because it does not count all of the triangles; it is therefore better to switch the polygon counter to a triangle counter, so you get a more accurate representation of how many polygons you have used. That said, quadrilateral polygons are often used during modelling, as they speed up the modelling process and can be converted into triangles later on down the line.
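To illustrate why a polygon counter understates the true cost: each quad triangulates into two triangles, so a "polygon count" that mixes quads and triangles hides the real triangle count. A minimal sketch (the model's face counts below are hypothetical):

```python
def triangle_count(quads, triangles):
    """True triangle count after triangulation: each quad splits into two triangles."""
    return quads * 2 + triangles

# A model reported as "1,000 polygons" (800 quads + 200 triangles)
# actually contains 1,800 triangles once triangulated.
print(triangle_count(800, 200))  # 1800
```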
Vertex count is ultimately more important than triangle count, as it uses far more memory, but artists more commonly use triangle count as a measure of performance. Triangles are connected to one another: 1 triangle uses 3 vertices, 2 triangles use 4 vertices, 3 triangles use 5 vertices, 4 triangles use 6 vertices, and so on. However, seams in UVs, changes to shading/smoothing groups, and material changes from triangle to triangle are all treated as a physical break in the model's surface when the model is rendered by the game. The vertices must be duplicated at these breaks, so the model can be sent in renderable chunks to the graphics card.
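The vertex pattern above (3, 4, 5, 6 vertices for 1, 2, 3, 4 connected triangles) can be sketched as a pair of small helpers; the seam count in the example is hypothetical:

```python
def strip_vertices(triangles):
    """Vertices used by a run of connected triangles that share edges:
    the first triangle needs 3 vertices, each additional one adds just 1."""
    return triangles + 2 if triangles > 0 else 0

def rendered_vertices(shared_vertices, seam_vertices):
    """Every vertex lying on a UV seam, smoothing split, or material change
    is duplicated when the mesh is sent to the graphics card."""
    return shared_vertices + seam_vertices

print(strip_vertices(1))            # 3
print(strip_vertices(4))            # 6
print(rendered_vertices(1000, 150)) # 1150
```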
Overuse of smoothing groups, over-splitting of UVs, too many material assignments (and too much misalignment of these three properties) all lead to a much larger vertex count. This can stress the transform stages for the model, slowing performance. It can also increase the memory cost for the mesh, because there are more vertices to send and store.
Rendering Time
Rendering is the final process of creating the actual 2D image or animation from the scene you have made. There are several different and specialized methods of rendering, ranging from the distinctly non-realistic wireframe rendering, through polygon-based rendering, to more advanced techniques such as scanline rendering, ray tracing, or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited to either photo-realistic rendering or real-time rendering.
Real time
Real-time rendering is used for interactive media such as games. It works by rendering the frames as you see them, and the speed it renders at is measured in FPS (frames per second); the idea is to show information as quickly as the eye can process it. The primary goal is to achieve as high a degree of photorealism as possible at an acceptable minimum rendering speed (usually 24 frames per second, as that is roughly the minimum the human eye needs to perceive the illusion of movement). In fact, the way the eye 'perceives' the world can be exploited, so the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate visual effects such as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye, and they can lend an element of realism to a scene, even if the effect is merely a simulated artefact of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.
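A target frame rate translates directly into a time budget per frame, which is why polygon and vertex counts matter so much in real time. A quick sketch of that arithmetic:

```python
def frame_budget_ms(fps):
    """Milliseconds available to render each frame at a target frame rate."""
    return 1000.0 / fps

# At 24 fps the renderer has about 41.7 ms per frame;
# at 60 fps the budget tightens to about 16.7 ms.
print(round(frame_budget_ms(24), 1))  # 41.7
print(round(frame_budget_ms(60), 1))  # 16.7
```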
Non Real time
Non-real-time rendering is the opposite of real time: instead of rendering as it goes along, the whole sequence is pre-rendered. It is used for non-interactive media and is a very slow process. Non-real-time rendering enables limited processing power to be leveraged in order to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.
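The per-frame times above add up quickly over a whole sequence. A small sketch of the arithmetic (the shot length and per-frame time below are hypothetical):

```python
def total_render_hours(duration_s, fps, seconds_per_frame):
    """Wall-clock hours to pre-render a sequence frame by frame."""
    frames = duration_s * fps
    return frames * seconds_per_frame / 3600.0

# A 10-second shot at 24 fps is 240 frames; at 5 minutes (300 s)
# per frame, the whole shot takes 20 hours to render.
print(total_render_hours(10, 24, 300))  # 20.0
```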
When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).
The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.