Thursday 7 May 2015

Constraints

Polygon Count and File Size
The two most common factors that contribute to a file's size are the polygon count and the vertex count, and a larger file is harder to run in a game. For example, a high-end PC could run an object with thousands of polygons, whereas an iPhone could probably only manage a few hundred per object.

When a game artist talks about the polygon count of a game, they normally mean the triangle count. Games almost always use triangles over any other polygons because modern hardware is optimized for rendering triangles, so it would be foolish to build a game out of other polygons. The polygon count in most modelling software is also usually misleading because it doesn't count all of the triangles, so it is better to swap the polygon counter to a triangle counter to get a more accurate picture of how many polygons you have used. However, quadrilateral polygons are often used during modelling because they speed up the modelling process, and they can be converted into triangles later on down the line.
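As a rough illustration of converting between the two counts (just a sketch in Python, the numbers are made up for the example), each quad simply becomes two triangles:

# Sketch: estimate the triangle count of a mesh built from quads
# and triangles - each quad splits into two triangles.
def triangle_count(num_triangles, num_quads):
    return num_triangles + 2 * num_quads

print(triangle_count(120, 500))   # a model with 120 tris and 500 quads -> 1120 triangles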

Vertex count is ultimately more important than triangle count, as it uses a lot more memory, but artists more commonly use triangle count as a measure of performance. Triangles are connected to one another: 1 triangle uses 3 vertices, 2 triangles use 4 vertices, 3 triangles use 5 vertices, 4 triangles use 6 vertices, and so on. However, seams in UVs, changes to shading/smoothing groups, and material changes from triangle to triangle are all treated as a physical break in the model's surface when the model is rendered by the game. The vertices must be duplicated at these breaks so the model can be sent in renderable chunks to the graphics card.
Overuse of smoothing groups, over-splitting of UVs, too many material assignments (and too much misalignment of these three properties) all lead to a much larger vertex count. This can stress the transform stages for the model, slowing performance. It can also increase the memory cost for the mesh because there are more vertices to send and store.
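As a very rough sketch of why those breaks matter (illustrative Python, not how any real engine counts), compare a connected strip of triangles that share vertices with the same strip where every triangle has been split apart:

# Shared vertices: each triangle after the first adds one new vertex.
# Split vertices: every triangle gets its own three vertices
# (what happens at UV seams, smoothing splits and material changes).
def shared_vertex_count(num_triangles):
    return num_triangles + 2      # 1 tri -> 3, 2 tris -> 4, 3 tris -> 5 ...

def split_vertex_count(num_triangles):
    return num_triangles * 3

for n in (1, 2, 3, 4):
    print(n, shared_vertex_count(n), split_vertex_count(n))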

Rendering Time
Rendering is the final process of creating the actual 2D image or animation from the scene that you have made. There are several different and specialized methods of rendering, ranging from the distinctly non-realistic wireframe rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing, or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited for either photo-realistic rendering or real-time rendering.

Real time
Real-time rendering is used for interactive media such as games. It works by rendering the frames as you see them, and the speed it renders at is measured in FPS (frames per second); the idea is to show information as quickly as the eye can process it. The primary goal is to achieve as high a degree of photorealism as possible at an acceptable minimum rendering speed (usually 24 frames per second, as that is roughly the minimum the human eye needs to create the illusion of movement). In fact, exploitations can be applied in the way the eye perceives the world, and as a result the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate visual effects such as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artefact of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.
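A minimal sketch of the idea in Python (draw_frame here is only a stand-in for the real per-frame rendering work, not an actual graphics call): render frames in a loop and measure how many frames per second are actually achieved.

import time

def draw_frame():
    time.sleep(0.01)              # stand-in for the GPU work done each frame

frames, start = 0, time.time()
while time.time() - start < 2.0:  # run the render loop for two seconds
    draw_frame()
    frames += 1

print("average FPS:", frames / (time.time() - start))

If draw_frame took longer than about 42 ms, the loop would drop below the 24 FPS needed to keep the illusion of movement.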

Non Real time
Non-real-time rendering is the opposite of real time: instead of rendering as it goes along, the whole thing is pre-rendered. It is used for non-interactive media and is a very slow process. Non-real-time rendering enables the leveraging of limited processing power in order to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disc. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.

When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).

The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.
 

3D Development Software

3D Studio Max

Autodesk 3D Studio Max is 3D computer graphics software for making 3D models, animations and images. It was developed by Autodesk Media and Entertainment. It has modelling capability and a flexible plugin architecture, and runs on Microsoft Windows. It is frequently used by video game developers, TV commercial producers and architectural visualization studios, and can also be used for movie effects.

In addition to modelling and animation tools, the latest version of 3D Studio Max also features shaders, dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface and its own scripting language.



Maya
Autodesk Maya is 3D development software that runs on Microsoft Windows, Mac OS and Linux. It is used to create interactive 3D applications such as video games, as well as animated films, TV series and visual effects. It was first released in February 1998 and acquired by Autodesk in 2005.

Lightwave
LightWave is a software package used for rendering 3D images, both animated and static. It includes a rendering engine that supports advanced features such as realistic reflection and refraction. The 3D modelling component supports both polygon modelling and subdivision surfaces. The animation component has features such as forward and inverse kinematics for character animation, particle systems and dynamics. Programmers can expand LightWave's capabilities using an included SDK, which offers LScript scripting (a proprietary scripting language) and common C language interfaces.

Blender
Blender is free and open-source 3D computer graphics software used for creating animated films, visual effects, interactive 3D applications and video games. Blender's features include 3D modelling, UV unwrapping, texturing, rigging and skinning, fluid and smoke simulation, particle simulation, animation, match moving, camera tracking, rendering, video editing and compositing. It also features a built-in game engine.

Cinema 4D
Cinema 4D is a 3D modelling, rendering and animation software package developed by MAXON. It is capable of procedural and polygonal modelling as well as subdivision modelling, animating, lighting, texturing and rendering, along with the common features found in 3D modelling applications.

Four variants are currently available from MAXON: a core CINEMA 4D 'Prime' application, a 'Broadcast' version with additional motion-graphics features, 'Visualize', which adds functions for architectural design, and 'Studio', which includes all modules. It is available on both Windows and Mac OS.

ZBrush
ZBrush is a digital sculpting tool that combines both 3D and 2.5D modelling. It is used for sketch modelling, where you actually draw the object you are trying to sculpt rather than manipulating objects. It doesn't allow you to be as precise as you can be with other modelling software, but it lets you model a lot faster, and if you get quite good at using it you can make models that are just as good as what you can make with the other software packages, a lot quicker.


ZBrush is used as a digital sculpting tool to create high-resolution models (up to ten million polygons) for use in movies, games, and animations. It is used by companies ranging from ILM to Electronic Arts. ZBrush uses dynamic levels of resolution to allow sculptors to make global or local changes to their models. ZBrush is most known for being able to sculpt medium to high frequency details that were traditionally painted in bump maps. The resulting mesh details can then be exported as normal maps to be used on a low poly version of that same model. They can also be exported as a displacement map, although in that case the lower poly version generally requires more resolution. Or, once completed, the 3D model can be projected to the background, becoming a 2.5D image (upon which further effects can be applied). Work can then begin on another 3D model which can be used in the same scene. This feature lets users work with extremely complicated scenes without heavy processor overhead.


Sketchup
SketchUp is a 3D modelling program for a broad range of applications such as architectural, civil, mechanical, film and video game design, and it is available in free as well as 'professional' versions.

The program highlights its ease of use, and an online repository of model assemblies (e.g., windows, doors, automobiles, entourage, etc.) known as 3D Warehouse enables designers to locate, download, use and contribute free models. The program includes a drawing layout functionality, allows surface rendering in variable "styles", accommodates third-party "plug-in" programs enabling other capabilities (e.g., near photo-realistic rendering) and enables placement of its models within Google Earth.

File Formats
Each 3D application allows the user to save their work, both objects and scenes, in a proprietary file format and export in open formats.
A proprietary format is a file format where the mode of presentation of its data is the intellectual property of an individual or organisation which asserts ownership over the format. In contrast, a free format is a format that is either not recognised as intellectual property, or has had all claimants to its intellectual property release claims of ownership. Proprietary formats can be either open if they are published, or closed, if they are considered trade secrets. In contrast, a free format is never closed.
Proprietary formats are typically controlled by a private person or organization for the benefit of its applications, protected with patents or as trade secrets, and intended to give the license holder exclusive control of the technology to the (current or future) exclusion of others.
Examples of proprietary formats: AutoCAD - .dwg, 3D Studio Max - .max, Maya - .mb, LightWave - .lwo.
Examples of open formats: .obj and .dae.
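The .obj format is a good example of why open formats are easy to pass between packages: it is plain text, with 'v' lines for vertex positions and 'f' lines for faces that index into the vertex list. A minimal sketch in Python (writing a single triangle by hand rather than using any exporter):

# Write one triangle as a Wavefront .obj file.
# Face indices start at 1 in the .obj format.
with open("triangle.obj", "w") as f:
    f.write("v 0 0 0\n")
    f.write("v 1 0 0\n")
    f.write("v 0 1 0\n")
    f.write("f 1 2 3\n")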


Mesh Construction

Polygonal Modelling 
A polygon mesh is a collection of vertices, edges and faces that make up the shape of an object. The faces usually consist of triangles, quadrilaterals or other simple convex polygons.
 
Although you can create a mesh entirely manually, placing each polygon yourself, it is more common to use a variety of tools when creating a mesh, and there is a wide variety of software packages that can be used for creating meshes.
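As a small illustration (plain Python lists, not any particular package's own data structures), a mesh is normally stored as a list of vertex positions plus a list of faces that index into it:

# A unit square built from two triangles that share the edge 0-2.
vertices = [
    (0.0, 0.0, 0.0),   # 0
    (1.0, 0.0, 0.0),   # 1
    (1.0, 1.0, 0.0),   # 2
    (0.0, 1.0, 0.0),   # 3
]
faces = [
    (0, 1, 2),   # first triangle
    (0, 2, 3),   # second triangle
]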

Primitive Modelling 

A common method of modelling that connects together primitive shapes in order to create the model. These shapes are usually created by the software and built into the system. It's a very simple form of modelling but can be used quite effectively.
Primitive shapes include cubes, cones, cylinders, pyramids and spheres, as well as 2D primitives such as squares, triangles and disks.
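Primitives are normally generated by the software from a handful of parameters. A rough sketch of the idea in Python (just the ring of vertices for one end cap of a cylinder, purely illustrative):

import math

# One end cap of a cylinder primitive: radius and number of sides
# are the only parameters the user has to supply.
def cylinder_cap(radius, sides, z=0.0):
    ring = []
    for i in range(sides):
        angle = 2 * math.pi * i / sides
        ring.append((radius * math.cos(angle), radius * math.sin(angle), z))
    return ring

print(cylinder_cap(1.0, 8))   # an 8-sided cap, like a low-poly cylinder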

Box Modelling
Box modelling is one of the more popular methods of modelling, where you start with a box and create a model out of it. You do this using two simple tools:

The Subdivide tool splits faces and edges into smaller pieces by creating new vertices. For example, a square would be subdivided by adding one vertex at the centre and one at the midpoint of each edge, creating four smaller squares.
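A sketch of what that does to a single square face (working directly on coordinates in Python rather than inside a real modelling package):

# Subdivide an axis-aligned square into four smaller squares by adding
# a centre vertex and a vertex at the midpoint of each edge.
def subdivide_square(x0, y0, x1, y1):
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    return [
        (x0, y0, mx, my),   # bottom-left quarter
        (mx, y0, x1, my),   # bottom-right quarter
        (x0, my, mx, y1),   # top-left quarter
        (mx, my, x1, y1),   # top-right quarter
    ]

print(subdivide_square(0, 0, 2, 2))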

The Extrude tool is applied to a face or a group of faces. It creates a new face of the same size and shape, connected to each of the existing edges by a new face, so extruding a square face would create a cube attached to the surface at the location of that face.
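A sketch of extrusion on one square face (again plain Python coordinates, not a modeller's actual tool): the face is copied, pushed out along its normal, and new side faces join the old edges to the new ones.

# Extrude a square face lying in the z = 0 plane by a given depth.
def extrude_square(face, depth):
    new_face = [(x, y, z + depth) for (x, y, z) in face]
    sides = []
    for i in range(len(face)):
        j = (i + 1) % len(face)
        sides.append([face[i], face[j], new_face[j], new_face[i]])
    return new_face, sides

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
top, sides = extrude_square(square, 1.0)   # the faces of a unit cube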

Extrusion Modelling
Extrusion modelling is another common form of modelling. It is done by creating a 2D shape that traces the outline of an object in a photograph or drawing, using another picture to look at the object from a different angle, then bringing the 2D shape into 3D software and using the extrude tool to make the 2D shape 3D. Sometimes this is done with multiple shapes rather than just one, if certain parts of the object stick out more than others.

Sketch Modelling
Sketch modelling is a specialized form of modelling where, instead of creating shapes and manipulating them, you draw 2D shapes and the software transforms them into 3D for you. It is good for making quick, low-detail models; however, if you get good with it and put in a lot of practice, you can make some very highly detailed models a lot quicker than you could using more conventional 3D software such as LightWave.





3D Scanners
3D scanners can be used to make high-detail meshes of existing real-world objects in an almost automatic way: they just scan the object you want and create a mesh of it. However, these devices are very expensive and are only really used by industry professionals and researchers.


Geometric Theory

The Cartesian Coordinates System
The Cartesian coordinate system was invented in the 17th century, and it revolutionized mathematics by providing the first link between Euclidean geometry and algebra. The Cartesian coordinate system specifies that each point in a plane can be represented by a pair of coordinates; these coordinates are found by looking at the two axes (x and y) and reading off the point's position along each axis.

2D and 3D Coordinates
When creating 2D vector artwork, the computer draws the image by plotting points on the x and y axes and joining these points with paths. The shapes you make with this can be filled with colour, and the sides can be given a stroke (border).

3D coordinates exist on a grid that uses basically the same axes as before but introduces a z axis; this allows you to plot points in three dimensions.
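As a trivial illustration in Python, a 2D point is just an (x, y) pair and a 3D point adds the z value:

point_2d = (3, 4)        # x and y only
point_3d = (3, 4, 5)     # the z axis adds depth

x, y, z = point_3d
moved = (x, y, z + 2)    # the same point pushed 2 units along z
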
Geometric Theory and polygons
The simplest object within mesh modelling is known as a point (vertex); this is a single point in three-dimensional space. Two points connected by a straight line form an edge, and three edges create the simplest polygon possible, which is a triangle. More complex polygons can be built out of multiple triangles, or you can create quads, which are polygons with 4 vertices; however, triangles are the most common shape used in polygonal modelling. A group of polygons connected by shared vertices is known as an element, and each of the polygons making up an element is known as a face.

In Euclidean geometry, any three non-collinear points determine a plane, therefore triangles always lie on a single plane; this is not true of more complex polygons. The flat nature of triangles makes it simple to determine their surface normal, a three-dimensional vector perpendicular to the triangle's surface. Surface normals are useful for determining light transport in ray tracing.
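A small sketch of computing that surface normal in Python: take the cross product of two of the triangle's edges and scale it to length one.

import math

# Normal of a triangle given its vertices in counter-clockwise order.
def triangle_normal(a, b, c):
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]   # edge a -> b
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]   # edge a -> c
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # (0.0, 0.0, 1.0)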

A group of polygons which are connected by shared vertices is called a mesh; it is also often referred to as a wireframe model.
In order for a mesh to look attractive when rendered, it should be non-self-intersecting, meaning that no edge should pass through a polygon; in other words, the mesh shouldn't be able to pierce itself unless this is intended. It is also desirable that the mesh doesn't contain errors such as duplicated edges, vertices or faces.
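One of those checks is easy to sketch in Python: find vertices that sit in exactly the same position (duplicates) so they can be merged or reported.

# Report pairs of vertices that occupy exactly the same position.
def find_duplicate_vertices(vertices):
    seen, duplicates = {}, []
    for index, position in enumerate(vertices):
        if position in seen:
            duplicates.append((seen[position], index))
        else:
            seen[position] = index
    return duplicates

verts = [(0, 0, 0), (1, 0, 0), (0, 0, 0)]   # vertex 2 duplicates vertex 0
print(find_duplicate_vertices(verts))        # [(0, 2)]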

Primitives 
In 3D applications, pre-made objects can be used as the basis for models of various shapes. The most simple shapes are the standard primitives; these include boxes, cubes, spheres, cylinders, pyramids and cones, and they are used as a starting point for modelling.

Surfaces
Polygons can be designated as specific surfaces and then have colour, texture or photographic maps added to them to make them look how you want.

Displaying 3D Polygon Animations


API
Games use software known as an API (Application Programming Interface), which is a set of tools for building software applications. A good API makes the software easier to develop because it gives you all the building blocks you need to make the software you're trying to make.

Most operating systems, such as Windows, provide an API so that programmers can make applications for that operating system. Although APIs are aimed at programmers, they benefit users too, because applications built on the same API will probably have similar interfaces.

Direct 3D

Direct3D is an API designed for manipulating and displaying 3D objects. It was developed by Microsoft. Direct3D provides programmers with a way to utilize any graphics card in a PC and use it to display objects, and almost all PCs are compatible with Direct3D.

Open GL
OpenGL was developed by Silicon Graphics in the early 90s and has become one of the most widely used graphics APIs in the world. It is very similar to Direct3D; however, it is an open standard, meaning the specification is published and anyone can implement or extend it, whereas Direct3D limits you to what Microsoft says you can do.

Graphics Pipeline
The graphics pipeline is the way a computer turns the mathematical data it holds about an object into the object we see on the screen. The 3D graphics pipeline typically takes a 3D object as data and converts it into a 2D raster image. OpenGL and Direct3D both have very similar graphics pipelines.

Stages of the graphics pipeline 
First the scene is created out of geometric primitives. This is usually done using triangles, as they are well suited to this because they always lie on a single plane.


Modelling and Transformation 
This stage transforms the local object coordinates into the 3D world coordinate system.


Camera Transformation
Next it transforms the 3D world coordinates into 3D camera coordinates, with the camera as the origin.

Lighting 
The scene is illuminated according to the lighting and the reflectiveness of the objects; for example, if the room is pitch black, the objects will be seen as black.

Projection Transformation
This stage transforms the 3D coordinates into the 2D view of the camera. An object further away from the camera looks smaller, and ones that are closer look larger; this is caused by the x and y coordinates of each object being divided by its z coordinate (which represents its distance from the camera). In orthographic projection, objects retain their original size regardless of distance from the camera.
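A rough sketch of that divide in Python (a toy pinhole projection, not the full matrix maths a real pipeline uses):

# Toy perspective projection: camera at the origin looking down +z.
# The screen position is just x and y divided by the distance z.
def project(x, y, z, focal_length=1.0):
    return (focal_length * x / z, focal_length * y / z)

print(project(1.0, 1.0, 2.0))    # a near object -> (0.5, 0.5)
print(project(1.0, 1.0, 10.0))   # same object further away -> (0.1, 0.1)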


Clipping  

At this stage anything that cannot be seen by the camera is discarded and not shown.

Scan Conversion or Rasterization
This is the stage where the 2D image is converted into a raster format and the resulting pixel values are determined.

Texturing, Fragment Shading
At this stage the individual pixels are assigned a colour based on values interpolated from the vertices during rasterization, from a texture held in memory, or from a shader program.
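A sketch of that interpolation for a single pixel (simple blending of the three vertex colours with barycentric weights; real hardware does this for every fragment):

# Blend three vertex colours using weights w0 + w1 + w2 = 1.
def interpolate_colour(c0, c1, c2, w0, w1, w2):
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)
# a pixel an equal distance from all three vertices comes out grey-ish
print(interpolate_colour(red, green, blue, 1/3, 1/3, 1/3))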

Display 
At this point the final coloured pixels are displayed on the screen


Applications of 3D

3D in games
3D in games has evolved a lot over the years. Games started using 3D in 1981 with a game called 3D Monster Maze. This game would now be considered extremely basic, but back then it was considered revolutionary as it was the first game to use 3D. The game itself was a 3D maze with monsters in it, and you get more points the longer you survive; if you get caught by a monster you die, kind of like a 3D Pac-Man but without the pills. It was quite basic in terms of gameplay and not remarkable, but it opened up the idea of 3D games, which is now an industry standard practice, with 2D games becoming rarer by the year and most 2D games being made as 2.5D games.



Despite 3D Monster Maze, 3D in games didn't really catch on until the fifth generation of consoles. The big consoles of this generation were the PlayStation, the Sega Saturn and the N64. These consoles were the driving forces of 3D, with games such as Super Mario 64, the first Mario game in 3D, which a lot of people regard as one of the best Mario games. It took the traditional Mario formula of platforming and put it into a 3D environment, adding features such as wall jumping and a fully narrated story line, and changed the way you progress from going through worlds until you beat Bowser to beating different stages to unlock stars so that you can beat Bowser. They were also able to add a lot more bonuses for completionists, as they could include far more exploration than they could in a 2D game.
Another big 3D game on the N64 was GoldenEye. This game was one of the games that created the first-person shooter genre that we know today, and it's a game that lots of people look back on and still think is very good. Admittedly, if you played it today it would feel clunky and horrible, but for the time this game was very impressive, had some very impressive texture work, and was one of the fifth-generation games that used 3D to the fullest of its abilities.
Now, many years on, 3D graphics have advanced a lot and become an industry standard practice, with games such as Skyrim boasting a fully open-world 3D game with graphics that were mind-blowing for the time, especially on PC. Skyrim shows just how much the technology to run games has improved and how much better people have got at using 3D technology. Back when people were astounded by GoldenEye they would never have dreamed of a game such as Skyrim; they wouldn't even have thought a fully open-world 3D game was possible, and it wasn't on that hardware. But with the evolution of gaming, people are able to do more and more with 3D, and people are still inventing new 3D-specific game mechanics to this day.



3D in films and TV
The first time 3D was used in a film was in the 1976 film Futureworld, which featured a 3D animated hand, the first 3D computer animation, rendered in 1972 by Ed Catmull and Fred Parke. This wasn't exactly revolutionary in itself, but it was a small taste of what could be done with 3D.



The first film to use 3D to remarkable effect was the 1993 film Jurassic Park. This film featured fully 3D animated dinosaurs, almost all of which were added into the live-action scenes using CGI. This was a big part of the reason why the film was so successful, because people had never seen anything like it before.
A more recent example of 3D animation being used in films is Avatar. This film is largely held up by its use of CGI, and the whole movie is basically a showcase of what can be done with CGI; it has some very impressive effects in it.



Animation
Animation is a style of film made possible by 3D software. Studios like Pixar use animation to make some very high-quality films that are often said to be better than a lot of live-action films.


TV
3D was first used in TV in a show called ReBoot in 1994, the first 3D animated programme to air on television; it ran from 1994 to 2001. Because of this show, 3D animated programmes on television are now very commonplace and make up a large part of children's TV.

3D in education
3D is used in education to create models, such as a model of a heart for students to study if they're doing biology, or models of the globe for geography.

3D in Medicine 
3D is used in medicine for CT scans: they can create a model of the inside of whatever they scan, then look inside it and determine what is wrong with the patient.




3D in engineering
3D is used in engineering to create a model of what they want to make before they make it, and to test certain pressures on the construction to see if it holds up.




3D in architecture 
3D is used in architecture in order to make a model of the building before they actually build it

3D in product design

3D is used in product design to design the product they want to make before they actually make it. It allows them to consider things such as proportions and which materials would be best for what they are making.

3D Printing

3D printing is becoming a much bigger thing as time goes on, with its own trade shows, and soon it's possible that we will have home 3D printers that become as common as the standard printer. People are already able to 3D print a lot of things, including art, food and even cars; it's becoming a very big thing and will completely change the way things are made as it grows.

Review

Overview
For this project I had to create a sidekick companion for the game Second Life. To do this I did some research into existing companions and decided that I wanted to do a robot or a mech, so I collected images of both into a mood board. I then drew up some brainstorms of the features I wanted my companion to have, and from this I decided it would be better to do a mech. From there I drew up some designs for a mech, went into Modeler and made it, then set it up in Layout and created some renders of it. Overall I think my mech was quite good; however, there are some improvements that I could make.

One of the best parts of my mech, in my opinion, is the minigun. I spent quite a lot of time making it and deciding what material to surface it with. It was quite difficult to do and there were a lot of polygons to cut down on after it was done, but the final result was quite good and I'm quite impressed by it.

I also think that my choice of materials to surface the mech with was quite strong, as it all blends together quite well, all of it except for the drill. This was a little bit too shiny compared to the rest of the mech and didn't really blend amazingly with it, and I should have dulled it down before finalizing my mech; however, it does still sort of work with the mech. The drill also doesn't really look like a drill, as it doesn't have the spiral around it like a drill would and looks too smooth to be a drill.


I also feel that my mech looks a little bit too square. Although this was how he was drawn in the original design, it doesn't really look very good on the finished mech; I feel it needs a bit more complexity and fewer boxes.


I also feel that the rocket pod needs a lot of improvement. At the moment it looks tacky and terrible and I really don't like it, but I couldn't think of a better way of doing it, and therefore I had to use the one the mech has on it. Looking back at it, I could have taken inspiration from Rumble from League of Legends and put three missiles in a holder on his shoulder.
The window on the front of the mech was okay. I feel I could have done it better; however, with the surfacing it works for its purpose and looks the part, it's not exactly ugly, and the glass gives it a nice effect.
The shoulder pads also add quite a nice effect and remove a little bit of the square problem that I mentioned earlier; however, I should have made the whole shoulder joint a ball, and I feel this would have made it look a little better.

Overall I feel that my companion fulfils the project brief and lives up to my design; however, it is quite lacking in its looks and could be improved upon quite a lot, such as the improvements to certain parts that I have mentioned, as well as generally reducing the number of polygons within the mech that can't be seen to improve performance, and properly connecting a lot of the parts together, as currently not everything is properly connected.