With the availability of 3D rendering options and the complexity of traditional 2D cel animation, it is not surprising that many new cartoons, such as Star Wars: The Clone Wars and Iron Man: Armored Adventures, are being made in 3D rather than 2D.  However, 2D animation has a certain charm that pure 3D lacks, such as the use of squash-and-stretch techniques and its generally stylised look.

However, there are fewer and fewer pure 2D animations nowadays.  Where complex elements are needed that would take too long to draw by hand, frame by frame, a composite is used.  One of the earliest examples is Disney's Aladdin (1992), which used 3D CGI to create the carpet ride through the Cave of Wonders, the intricately patterned carpet itself, and the tiger-head cave (Disney Archives).

3D has become an increasingly influential part of animation since then, often trying to imitate the imperfections of hand-drawn imagery, as in Warner Bros' 1999 adaptation of The Iron Giant.  To give the giant's lines an imperfect, hand-drawn look, a program was written to generate imperfections.

DreamWorks' Spirit: Stallion of the Cimarron used 3D predominantly for landscapes and special effects, including the opening shots of the Grand Canyon.

The Futurama series also combines 2D and 3D graphics, using software called PowerAnimator.  Matt Groening's style creates challenges for rendering in 3D because of its off-perspective, stylised nature: the characters are rarely, if ever, seen from the front, so fully three-dimensional characters simply would not maintain the look of the series.

To say that pure 2D animation does not have a place in the future of animation would be sad.  Although the combination of the two can sometimes look slightly awkward, it is the inevitable future if animation is to sustain the level of complexity expected by a modern audience.  With hundreds of TV channels and internet shows, media needs to be created quickly, which is something traditional 2D animation would struggle to do.  3D animation is simply quicker and more easily edited: a single model can be reused instead of every frame being drawn one by one; sets, props and characters can be viewed from any angle; textures are easily changed; and different rendering styles can apply a whole new treatment and mood to the composition, sending out a completely different message with very little effort.


Development of Computing

Computers have become the primary tool in almost any workplace, let alone the media industry; without them, society would almost certainly collapse.  Even on a social level, computers and the internet have played a massive part in the way we communicate.  During March 2010, the number of hits on Facebook overtook Google in the US for the first time, with a rise of 185% on the same week in the previous year, compared to a 9% rise for Google (Nick Clark, The Independent, 18 March 2010).  Other statistics point to a figure of 400 million for Facebook in March 2010, compared to 20 million in April 2009, according to Hitwise (Carrie-Ann Skinner, Network World, 18 March 2010).  Together the two sites accounted for around 14% of hits in both the US and the UK.

The principle of a computer as a programmable machine is not a new idea; water clocks have been around since ancient times, being developed in Greece, Arabia and India.  The monumental water clock designed by Badi' al-Zaman Abu-'l-'Izz Ibn Isma'il Ibn al-Razzaz al-Jazari was one of the most elaborate early timekeeping devices: not only would it track the passage of time from hour to hour, it would also record the phase of the moon and serve a drink from a reservoir.

Water Clock design by Al Jazari

Arabic science and mathematics have had a major influence on the fundamentals of computing.  Leonardo Fibonacci introduced the Indo-Arabic numeral system to Europe in 1202 in his book Liber Abaci.  This numeral system was far simpler than traditional Roman numerals for multiplication and division, and the positional principle behind it has been important to the development of binary.

The concept of nothing in a mathematical sense is also important for computing, as computers work on true or false values, represented as 1s and 0s.  The number 0 is often credited to the Indian astronomer Aryabhata, who, among other things, managed to calculate the value of pi to four decimal places and suggested that the Earth spins on its axis.
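The link between positional notation and the 1s and 0s computers work on can be sketched in a few lines of Python (the function name `to_binary` is just an illustrative choice):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeatedly
    dividing by 2 and collecting the remainders (0s and 1s)."""
    if n == 0:
        return "0"  # zero needs its own digit - the 'nothing' placeholder
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))

print(to_binary(13))  # prints 1101, i.e. 8 + 4 + 0 + 1
```

The same positional idea that made Indo-Arabic numerals easier than Roman ones for arithmetic is what makes base 2 workable for machines.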

According to Charles Seife, in his book Zero: The Biography of a Dangerous Idea: “…with the division of 0, you can prove mathematically, anything in the universe. You can prove that 1+1=42, and from there you can prove that J. Edgar Hoover is a space alien, that William Shakespeare came from Uzbekistan, or even that the sky is polka-dotted. (See appendix A for a proof that Winston Churchill was a carrot.)”
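The kind of “proof” Seife alludes to hinges on a hidden division by zero. A classic version runs:

$$
\begin{aligned}
\text{let } a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b \quad \text{(dividing both sides by } a-b\text{)} \\
2b &= b \\
2 &= 1
\end{aligned}
$$

Every step is valid algebra except one: since $a = b$, the quantity $a - b$ is 0, and dividing by it is the illegal move. Once that single division is allowed, any equation, and hence any absurdity, follows.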

In most programming languages, numbering starts at 0, which is important for the use of arrays.  However, this causes problems when we start to think of values in terms of their nth position: with 0 as the first index, index 15 is actually the 16th element, so the element at index 0 has to be thought of as the zeroth element.
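A minimal Python example makes the off-by-one relationship concrete:

```python
# Zero-based indexing: index n holds the (n+1)th element.
letters = ["a", "b", "c", "d"]

print(letters[0])    # the "zeroth" index gives the first element: a
print(letters[3])    # index 3 gives the fourth (last) element: d
print(len(letters))  # 4 elements in total, at indices 0 through 3
```

Indexing from zero keeps the arithmetic simple for the machine (an element's address is the start of the array plus index times element size), at the cost of the mismatch with everyday counting described above.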

Computers are very good at complex mathematics that humans cannot do, but they are very bad at things we do well, for instance walking or improvising in unfamiliar situations.  Computers do not cope well with ambiguity: they must be told exactly what to do and exactly when to do it.  This is controlled through programming, using constructs such as if statements and arrays.

This can be seen in Isaac Asimov's Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
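The Three Laws are essentially a strict priority ordering, exactly the kind of thing a program expresses with ordered if statements. A toy sketch (the `Action` fields and function name here are hypothetical, purely for illustration):

```python
# Toy sketch: Asimov's Laws as priority-ordered checks on a proposed action.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # would carrying this out injure a human?
    ordered_by_human: bool # was this ordered by a human?
    endangers_robot: bool  # does it put the robot itself at risk?

def permitted(action: Action) -> bool:
    # First Law takes absolute priority: never harm a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders (First Law already cleared above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, unless overridden by the laws above.
    return not action.endangers_robot

print(permitted(Action(False, True, True)))   # ordered, harmless -> allowed
print(permitted(Action(True, True, False)))   # would harm a human -> refused
```

The order of the checks is the whole point: each law only applies once every higher-priority law has been satisfied, mirroring the "except where such orders would conflict" clauses in the text.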

It is ultimately up to humans to provide the laws that computers will abide by and to set up a foolproof system.  That comes down to understanding how humans function and make decisions; after all, our own moral code is based on a set of boundaries, rules, exceptions, decisions and processes.  The future of computing may be based on an organic model with the ability to adapt and act on new information according to learning and the current situation.  Put simply, the future of computing is not artificial intelligence, but actual intelligence.