A Closer Look At NVIDIA's GeForce GTX 680
Posted on March 22, 2012 by Ross Lincoln
Last week, we had the opportunity to get a close-up look at NVIDIA's new GeForce GTX 680. The Internet being what it is, much of what I intended to report to Game Front readers has already been spoiled by the timely leak of an NVIDIA video describing the amazing new GPU. Even so, I can confirm that yes, it is quite amazing, and since what I saw is no longer a secret, I can share a bit more information. To sum things up: it's going to be expensive, but if you care at all about how your games look and have the cash to burn, it'll be worth every penny.
The GTX 680's capabilities were evident a few weeks back at GDC, when Epic showed off what it can do with a demo called Samaritan. In 2011, Epic ran that demo on three GeForce GTX 580s; this year, it accomplished the same thing on a single GTX 680. Simply put, everything about the GTX 680 is impressive. It's more powerful and more functional, and it even manages to use less power, emit less heat, and generate less noise.
The reason the 680 can pull off feats like this is the tech NVIDIA has crammed onto it. Built on the new Kepler architecture unveiled a few weeks back, it boasts a new version of the GTX series' streaming multiprocessor, called SMX, with 2 SMX units in each graphics processing cluster (there are 4 GPCs total). It has an astonishing 1536 stream processors (up from the 580's 512), 32 ROPs, and 128 texture units. Its core clock is 1006MHz, its memory clock 6.008GHz, and its Boost Clock 1058MHz. Frankly, this thing is a tiny monster.
* GPU Boost
The GTX 680 already boasts what NVIDIA claims are “the highest memory clock speeds of any GPU in the industry,” but its biggest innovation is GPU Boost, which could be described as the GPU's cruise control. It automatically adjusts clock speeds in real time. No, really. Most graphical applications you're likely to run will never approach the limits of the card's thermal design power; in those cases, GPU Boost raises the clock speed to an optimal level. The 680's minimum 3D frequency, the Base Clock, is 1006MHz; the Boost Clock, 1058MHz, is the average frequency the GPU can sustain when less power is being consumed. Yes, with a low-power application you automatically get better-than-average performance, and you don't need to overclock to do it.
But in case you're wondering, it is fully compatible with overclocking, not that you'll need it (unless you really want to). GPU Boost is also managed via a downloadable application that lets you manually adjust settings, turn it on and off, or apply it on a per-application basis, ensuring you always get exactly the output you want or need.
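The idea is simple enough to sketch in a few lines. This is purely illustrative: the names, thresholds, and power figures below are invented for the example (the 195W TDP matches the 680's published board power, but the real boost algorithm lives in NVIDIA's hardware and driver and is not public).

```python
# Illustrative sketch of the GPU Boost idea: raise the clock when
# there is power headroom, fall back to the base clock near the limit.
# All names and thresholds here are invented for illustration.

BASE_CLOCK_MHZ = 1006   # guaranteed minimum 3D frequency
BOOST_CLOCK_MHZ = 1058  # typical boosted frequency
TDP_WATTS = 195         # board power limit

def adjust_clock(power_draw_watts):
    """Pick a clock speed based on current power draw."""
    if power_draw_watts < TDP_WATTS * 0.9:
        return BOOST_CLOCK_MHZ   # headroom available: boost
    return BASE_CLOCK_MHZ        # near the power limit: base clock

print(adjust_clock(140))  # light load  -> 1058
print(adjust_clock(190))  # heavy load  -> 1006
```

In the real card this adjustment happens continuously and in finer steps, which is why NVIDIA describes the Boost Clock as an average rather than a fixed ceiling.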
* Adaptive VSync
Another cool feature is Adaptive VSync. Under normal conditions, when your framerate dips below the display's refresh rate, VSync snaps the output from 60Hz down to 30Hz, producing stutter even on the best of cards. To get around this in the past, one would disable VSync entirely; the GTX 680 is designed to automatically disable VSync whenever the framerate drops, resulting in smoother transitions to lower framerates, and thus reduced stutter.
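A rough sketch of why that helps, with invented function names (the real feature is handled entirely inside the driver): plain VSync can only present at integer divisors of the refresh rate, so a renderer managing 55 fps gets forced down to 30, while Adaptive VSync simply switches VSync off and lets the 55 fps through.

```python
# Illustrative sketch of Adaptive VSync vs. plain VSync.
# Names and structure are invented for the example.

REFRESH_HZ = 60

def effective_fps(render_fps, adaptive):
    """Return the frame rate the player actually sees."""
    if render_fps >= REFRESH_HZ:
        return REFRESH_HZ        # VSync caps output at the refresh rate
    if adaptive:
        return render_fps        # VSync off below refresh: no snap to 30
    # Plain VSync: fall to the largest divisor of the refresh
    # rate that the renderer can sustain (60 -> 30 -> 20 -> ...)
    divisor = 2
    while REFRESH_HZ // divisor > render_fps:
        divisor += 1
    return REFRESH_HZ // divisor

print(effective_fps(55, adaptive=False))  # plain VSync -> 30 (stutter)
print(effective_fps(55, adaptive=True))   # adaptive    -> 55
```

The trade-off is that with VSync disabled you can get tearing, but in practice a brief dip to 55 fps tears far less noticeably than a hard snap to 30 fps stutters.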
* Less Power, Less Noise, Less Heat
The 680 includes a dedicated H.264 video encoder called NVENC that achieves roughly four times the performance of previous CUDA-based encoders while consuming less power. This opens up a wider range of consumer applications, like video conferencing or transmitting video from your desktop to your HD television. The card's noise reduction is also very noticeable; under load, I can verify it produces little more than a soft hum. Finally, because of the efficient way the components are arranged on the board (the image above is a representation), including the way the power inputs are stacked, significantly less heat is emitted, meaning you won't cook your machine, wake up your roommate, or jack up your power bill just from playing Battlefield 3.
* Better Gaming Graphics.
The upshot is that your games are going to look incredible. I saw this firsthand in two demos that showed off how subtle and complex things can get. The first used the latest version of NVIDIA's PhysX technology on a yeti/gorilla monster whose fur was incredibly detailed: each hair appeared to move independently and realistically as virtual wind blew against it. The other demonstrated destruction physics so fine-grained you could pulverize a virtual marble pillar into sand – seriously, the detail was practically down to the pixel level. I also saw Battlefield 3 running on the 680, and I can report it looks fantastic.
The thing to bear in mind is that the visual differences between the 680 and its predecessor, while large, are expressed subtly. Colors are crisper, details are more intense, and the image is richer in general, but it's in the minute details where things really improve. NVIDIA's FXAA technology is put to good use here, and the 680 boasts vastly improved anti-aliasing that smooths images considerably, even compared to the 580. From far away you might not notice, but zoom in close and you'll see how impressive it is. The image above compares the latest FXAA to MSAA.
It also supports a technique called TXAA, a combination of hardware anti-aliasing, a customized film-style AA resolve, and (in TXAA 2) an optional temporal component, which results in better image quality all around. TXAA will be available in several upcoming games, the first of which is Borderlands 2, designed with it specifically in mind.
Better still, the 680 can drive up to 4 displays at once, with 3 devoted to top-quality graphics and the 4th serving as an accessory display, meaning you can check email, chat, and so forth without sacrificing visual quality in your game. Useful, if for nothing else, for bragging to your friends about how awesome everything looks.
* Cost And Availability
Finally: you want it, yes? You're in luck. Despite rumors of supply problems, the 680 will ship without trouble at launch, which happens today, at $499. Not cheap, but as you can see, it's completely worth it. More information is available from NVIDIA.