More to graphics than gaming

Anand Parthasarathy
Photos: Intel’s Larrabee architecture for visual computing, slated for 2009 release; the Radeon HD 4800 series graphics card, the world’s first fuelled by a teraflop processor, launched in India recently by AMD (photo: Anand Parthasarathy); the GeForce graphics unit powered by nVidia’s GTX 280 chips.

In the pecking order of personal computer chips, co-processors were children of a lesser breed. Remember the heyday, two decades ago, of the Intel 80386 central processing unit? Customers who wanted to quicken the number-crunching capabilities of their PCs invested in an additional maths co-processor, the 80387, which accelerated what were known as floating point operations by performing them directly in hardware. As the main processor became more powerful, such add-ons became unnecessary.

Then came a new era in computing, fuelled by the gaming fever of the world’s young and restless customers, who looked to the PC not for productivity but for picture power: fast action and mind-blowing graphics. No general-purpose processor could deliver the realistic gaming such users demanded.... and so was born the new niche of graphics cards, driven by a new class of graphics processing units, or GPUs, optimised for the superior speeds demanded by PC and video games.

nVidia, Asus, Creative.... new brands emerged that created a separate hardware category, running on chips that, in many cases, outperformed general-purpose processors, megaflop for megaflop. Indeed, the Cell processor created by IBM for this burgeoning market was so good at what it did that scientists were quietly slipping it into mission-critical and military systems.

Enter the GP-GPU

When the tail outperforms the dog, it ends up wagging the dog. In recent months, leading makers of graphics cards and accelerators such as nVidia and ATI found so much supercharged processing power under the hoods of their products that they thought the unthinkable: why not take on the general-purpose processors on their own turf? What emerged is a trend being called the GP-GPU: the general-purpose graphical processing unit, which lays claim to being a one-stop computing shop, handling all your normal productive PC tasks while also offering superior games-class graphics. (For every new trend, its own Web site: do check out www.gpgpu.org to understand the state of the art in this new niche: General Purpose Computing using Graphics hardware.)

The scientific community has a special stake in this development, since so much of high-performance computing (HPC) is graphically intensive: 2D and 3D modelling, finite element analysis, graphical simulation.... A typical 3D modelling task in the biological sciences could take days using a cluster of a hundred standard PCs. Now, many researchers find that the job shrinks to a few hours if they use six to eight graphics card-fuelled machines.

The difference has become even more dramatic in recent days: the two big names in the graphics card business, nVidia and ATI (the latter now a division of AMD), made almost simultaneous announcements in June this year, unveiling new graphics processors — and the graphics cards and assemblies fuelled by them — that took their sheer number-crunching capability to dizzying heights.

AMD, in its India launch of new ATI graphics products in Mumbai, chose to highlight the power of its new offerings to bridge the gap between cinema and gaming: giving cinema the interactive power of games, and enhancing gamers’ experience with the ultra-real visuals that only cinema could deliver — till now.

The new products are the ATI Radeon HD 4850 and HD 4870 graphics cards, fuelled by AMD’s new RV770 processor, the first chip to burst through graphical computing’s virtual ceiling and enter the teraflop computing arena. In fact, the faster 4870 delivers 1.2 teraflops, where a teraflop is one trillion (1,000 billion) computing operations per second.

“This is Bollywood 2.0,” says AMD’s Chief Technology Officer for Graphics Processing, Raja Koduri. “You won’t just watch movies, you will play in them... put yourself in the driver’s seat of a racing car... participate in a gun duel... even change the story line to include your point of view, or select from alternative endings... the possibilities are endless.”

Cynics might say film makers already do this when they want to hedge their bets. When releasing the 1998 Malayalam movie Harikrishnans, director Fazil created two endings: in one, Hari (Mammootty) wins the girl Meera (Juhi Chawla); in the other, it is Krishnan (Mohanlal). The former version was released in northern Kerala, believed to be fiercely loyal Mammootty territory, while his equally high-profile co-star was perceived to be a stronger draw in the south. Everyone was happy, except the critics. (They might have forgotten that even a director like Hitchcock created multiple endings for his thriller Topaz: in one, the villain gets away; in the other, he shoots himself. Both endings can be seen in recent DVD releases.)

Is the ability to put viewers in the director’s chair good or bad for cinema? We might argue over the aesthetics.... Meanwhile, film makers are revelling in the ability offered by tools such as teraflop graphics cards to create entire libraries of background footage, and then mix in shots of protagonists enacting their drama in front of a blue screen. The AMD announcement included clips of a robot rampaging across the New York skyline, created virtually in advance, giving the director the ability to switch the point of view or the ‘camera’ angle with a single mouse click.... an awesome advance over what has hitherto been possible in digital cinema.

From hours to minutes

nVidia’s own offering in this niche is its new family of GeForce graphics processors, the GTX 200 series, which includes the 260 and 280. These seem to work at slightly higher processor clock speeds than the ATI Radeon offerings, though their output has not been rated in teraflops. nVidia dramatises the difference the new graphics hardware makes: converting a video film for viewing on an iPod, which used to take five hours, can now be accomplished in 35 minutes. A partner company, Elemental Technologies, has created a special tool that does just this: the BadaBoom Media Converter, a consumer video application. It takes advantage of the highly parallel architecture of the GeForce GPU to re-code video 18 times faster than would have been possible with general-purpose CPUs.
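Where does such a speed-up come from? Each pixel of every frame can be handed to its own GPU thread. Purely as an illustrative sketch, and emphatically not Elemental’s actual code (the kernel name and data layout here are assumptions), one stage of such a pipeline — converting the luma, or brightness, channel from RGB — might look like this in nVidia’s CUDA framework (more on CUDA below):

    // Illustrative CUDA kernel, not BadaBoom's actual code: one stage of a
    // video transcode pipeline. Each GPU thread converts one RGB pixel to luma.
    __global__ void rgb_to_luma(const unsigned char *rgb, unsigned char *luma,
                                int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's column
        int y = blockIdx.y * blockDim.y + threadIdx.y;   // this thread's row
        if (x >= width || y >= height) return;           // guard the frame edge
        int i = (y * width + x) * 3;                     // 3 bytes per RGB pixel
        // Standard ITU-R BT.601 weights for computing luma from R, G and B
        float value = 0.299f * rgb[i] + 0.587f * rgb[i + 1] + 0.114f * rgb[i + 2];
        luma[y * width + x] = (unsigned char)value;
    }

Launched over a single high-definition frame, that one kernel runs as roughly two million simultaneous threads; a general-purpose CPU would have to walk the same pixels a handful at a time.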

Using such graphics engines, other respected names in graphics have built added value: Asus has just launched what it calls the “world’s most intelligent graphics card,” the Asus ROG (for Republic of Gamers) Matrix, promising 26 per cent better 3D performance, total user control over the gaming experience — and 26 per cent less power demand.

Wanting ‘next chunk of meat’

Clearly, while answering the demands of hardcore gamers whose constant refrain is ‘gimme more’, graphics leaders such as Asus, nVidia and ATI are looking to carve out some of the business of general-purpose computers... which has led some in the tech media, such as The Guardian newspaper’s Chris Edwards, to dub them ‘processing piranhas’.

That is because, unlike the PC’s main CPU, which might be a dual-core or at most a quad-core chip — that is, effectively 2 or 4 processors — a single graphics accelerator card might contain a hundred processors, “chomping through tens of gigabytes of data in a second.”

Adds Edwards: “Almost as soon as they have started working, the GPU piranhas will wait for the next chunk of meat. Managing that is hard....”

He has hit on the crux of the problem: unlike the general-purpose computing environment, graphical processing is challenged by multiple programming alternatives. Granted, nVidia is evangelising CUDA, the C-language programming framework that allows graphics processors to tackle complex compute-intensive problems. But AMD’s competing software development kit (or SDK) for its ATI graphics processors is called CTM (Close To Metal), and it has also achieved compatibility with Microsoft’s DirectX 10, a de facto standard for graphics programming.
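For a flavour of what CUDA programming involves, here is a minimal, self-contained sketch (an assumed, textbook-style example, not taken from nVidia’s own material): adding two large arrays, with each element handled by its own GPU thread.

    // Minimal CUDA sketch (illustrative): add two million-element arrays on the GPU.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Kernel: each GPU thread computes one element of c = a + b.
    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's index
        if (i < n) c[i] = a[i] + b[i];                   // guard against overrun
    }

    int main() {
        const int n = 1 << 20;                           // one million elements
        size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes);              // host (CPU) buffers
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;                             // device (GPU) buffers
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;        // enough blocks to cover n
        add<<<blocks, threads>>>(da, db, dc, n);         // run on the graphics card
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);                    // expect 3.000000
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Note the choreography: the host copies data over to the graphics card, launches the kernel across thousands of threads, then copies the results back. That to-and-fro between CPU and GPU memory is precisely the kind of integration headache the next paragraph describes.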

Integrating all this with general-purpose CPUs still remains a challenge.... and an opportunity for GP chip makers such as Intel, which can approach the task from the other end.... putting graphics functions on their mainstream PC chips.

That is exactly what Intel promises in its next-generation ‘Larrabee’ project: a new multicore processor that dedicates some of its cores to graphical processing. By retaining IA, or Intel Architecture, it will ease the pain of merging the twin worlds of numeric and graphical processing and, incidentally, enable Intel to stave off some of the challenges posed by graphics processors aspiring to graduate to GP-GPU status.

Is Intel playing catch-up with the graphical guys nibbling at its traditional turf — or is it cannily holding back till it has a compelling product to offer at the sangam, the confluence, of general-purpose and graphical applications? Time will tell — mid to late 2009, to be exact.

But one thing is certain: tomorrow’s computers will seamlessly address all the tasks thrown at them, without stopping to distinguish between graphical and computationally intensive routines.

As consumers increasingly demand hyper realistic visual and audio experiences from their PCs, to match what the TV and the high-definition home theatre systems offer, processors will deliver what the customer asks for — and more.

We may not know it, but the desktop computers at the heart of our home entertainment applications will be supercomputers under the skin, addressing tasks every bit as complex as the best of high performance computers serving the mission-critical needs of science and industry.
