A Closer Look At NVIDIA’s GeForce GTX 680

Last week, we had the opportunity to get a close-up look at NVIDIA’s new GeForce GTX 680. The Internet being what it is, the majority of what I intended to report to Game Front readers has been spoiled by the timely leak of an NVIDIA video describing the amazing new GPU. Even so, I can confirm that yes, it is quite amazing, and since what I saw is no longer a secret, I can share a bit more information. To sum things up, it’s going to be expensive, but if you care at all about how your games look, and have the cash to burn, it’ll be worth every penny.

* Details

The GTX 680’s capabilities were evident a few weeks back at GDC, when Epic showed off what it can do with its Samaritan demo. In 2011, Epic ran that demo on three GeForce GTX 580s. This year, the same demo ran on a single GTX 680. Simply put, everything about the GTX 680 is impressive. It’s more powerful, more functional, and even manages to use less power, emit less heat and generate less noise.

The reason the 680 is able to pull off feats like this is the tech NVIDIA has crammed onto it. Built on the new Kepler architecture unveiled a few weeks back, it boasts a new version of the GTX series’ streaming multiprocessor called SMX, with two SMX units in each of its four graphics processing clusters (GPCs). It has an astonishing 1536 stream processors (up from the 580’s 512), 32 ROPs and 128 texture units. Its core clock is 1006MHz, its memory clock 6.008GHz, and its Boost Clock 1058MHz. Frankly, this thing is kind of a tiny monster.
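For a sense of what those numbers add up to, here’s a rough back-of-the-envelope estimate of peak single-precision throughput using the standard cores × clock × 2 formula (one fused multiply-add per core per cycle). The Python below is just that estimate, not an NVIDIA-published benchmark.

```python
# Rough estimate of GTX 680 peak single-precision throughput.
# Formula: cores x clock x 2 (a fused multiply-add counts as 2 FLOPs).
cores = 1536               # stream processors / CUDA cores
base_clock_hz = 1.006e9    # 1006MHz base clock
flops_per_core_per_cycle = 2

peak_flops = cores * base_clock_hz * flops_per_core_per_cycle
print(f"Peak FP32 throughput: {peak_flops / 1e12:.2f} TFLOPS")  # ~3.09 TFLOPS
```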

* GPU Boost

The GTX 680 already boasts what NVIDIA claims are “the highest memory clock speeds of any GPU in the industry,” but its biggest innovation is GPU Boost, which could be described as the GPU’s cruise control. It automatically adjusts clock speeds in real time. No, really. Most graphical applications you’re likely to run will never approach the limits of the card’s thermal design power (TDP). In those cases, GPU Boost raises the clock speed to take advantage of the unused headroom. The 680’s minimum 3D frequency, the Base Clock, is 1006MHz; the Boost Clock, 1058MHz, is the average frequency the GPU reaches when an application isn’t drawing the card’s full power. Yes, in a less demanding application you automatically get higher clock speeds, and you don’t need to overclock to do it.

But in case you’re wondering, it is totally compatible with overclocking, not that you’ll need it (unless you really want to). GPU Boost is also managed via a downloadable application that allows you to manually adjust settings, turn it on and off, or apply it on a per-application basis, ensuring that you always get exactly the output you want or need.
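NVIDIA hasn’t published the exact algorithm behind GPU Boost, so treat the following as a loose sketch of the general idea described above: trade unused power headroom for higher clocks, and step back toward the Base Clock as the power target is approached. The power target, step size and sample readings below are illustrative, not NVIDIA’s real numbers.

```python
# Illustrative sketch of a GPU-Boost-style dynamic clock controller.
# The power target, clock steps and sample readings are made up; the real
# algorithm also factors in temperature and other board-level telemetry.

BASE_CLOCK_MHZ = 1006    # guaranteed minimum 3D frequency
MAX_CLOCK_MHZ = 1110     # hypothetical top boost bin
STEP_MHZ = 13            # hypothetical size of one clock step
POWER_TARGET_W = 170     # hypothetical board power target

def next_clock(current_mhz: float, board_power_w: float) -> float:
    """Raise the clock while there is power headroom; lower it (never
    below the Base Clock) once the power target is reached."""
    if board_power_w < POWER_TARGET_W and current_mhz < MAX_CLOCK_MHZ:
        return min(current_mhz + STEP_MHZ, MAX_CLOCK_MHZ)
    if board_power_w >= POWER_TARGET_W and current_mhz > BASE_CLOCK_MHZ:
        return max(current_mhz - STEP_MHZ, BASE_CLOCK_MHZ)
    return current_mhz

# A light workload leaves headroom, so the clock climbs past the Boost Clock;
# a heavy one pushes power toward the target and the clock steps back down.
clock = BASE_CLOCK_MHZ
for watts in [120, 125, 130, 135, 140, 150, 175, 180]:
    clock = next_clock(clock, watts)
    print(f"power={watts:3d}W -> clock={clock:.0f}MHz")
```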

* Adaptive VSync

Another cool feature is Adaptive VSync. With traditional VSync, when your framerate dips below the refresh rate, synchronization drops from 60Hz straight to 30Hz, resulting in stutter even with the best of cards. To get around this in the past, one would disable VSync entirely; the GTX 680 is designed to automatically disable VSync whenever the framerate drops below the refresh rate and re-enable it when the framerate recovers, resulting in smoother transitions and thus reduced stutter.
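The decision rule behind this is simple enough to sketch: keep VSync on while the GPU can sustain the display’s refresh rate, and switch it off the moment it can’t. The snippet below is only an illustration of that rule; the real driver works from measured frame times rather than an fps number handed to it.

```python
# Illustrative sketch of the Adaptive VSync rule: VSync stays on while the
# framerate can match the refresh rate (no tearing), and turns off when it
# can't (avoiding the hard 60 -> 30 fps drop that causes stutter).

REFRESH_HZ = 60

def vsync_enabled(current_fps: float) -> bool:
    """Enable VSync only when the GPU is keeping up with the display."""
    return current_fps >= REFRESH_HZ

for fps in [75, 61, 59, 45, 62]:
    print(f"{fps:3d} fps -> VSync {'on' if vsync_enabled(fps) else 'off'}")
```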

* Less Power, Less Noise, Less Heat

The 680 utilizes a dedicated H.264 video encoder called NVENC that manages roughly four times the performance of previous CUDA-based encoders while consuming less power. This opens up a wider array of applications for consumers, like video conferencing or streaming video from your desktop to your HD television. The card is also noticeably quieter; I can verify that the noise it makes is essentially a soft hum. Finally, because of the efficient way the components are arranged on the unit (the image above is a representation), including the way the power inputs are stacked, significantly less heat is emitted, meaning you won’t burn up your machine as fast, wake up your roommate or jack up your power bill just from playing Battlefield 3.

* Better Gaming Graphics

The upshot is that your games are going to look incredible. I saw this firsthand in two demos that showed off how subtle and complex things can get. The first utilized the latest version of NVIDIA’s PhysX technology with a yeti/gorilla monster whose fur was incredibly detailed. Each hair appeared to move independently and realistically as virtual wind blew against it. The other demonstrated destruction physics so minute you could pulverize a virtual marble pillar into sand – seriously, the detail was practically down to the pixel level. I also saw Battlefield 3 running on the 680, and I can report it looks fantastic.

The thing to bear in mind is that the visual differences between the 680 and its predecessor, while large, are expressed subtly. Colors are crisper, details are sharper and the overall image is richer, but it’s in the minute details where things really improve. NVIDIA’s FXAA technology is put to good use here, and the 680 boasts vastly improved anti-aliasing that smooths images considerably, even compared to the 580. From far away you won’t even notice, but zoom up close and you’ll see how impressive it is. The image above compares the latest FXAA to MSAA.

It also supports a technique called TXAA, a combination of hardware anti-aliasing, a customized AA resolve that resembles film-style filtering, and an optional temporal component (in TXAA 2), which results in better image quality all around. The technique will be available in several upcoming games, the first of which, Borderlands 2, was designed with TXAA specifically in mind.
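NVIDIA hasn’t detailed the resolve itself, but the temporal half of the idea boils down to blending each new, slightly jittered frame into a running history so edges smooth out over time. The sketch below shows only that accumulation step; the film-style filter and motion-vector reprojection the real technique relies on are left out, and the numbers are purely illustrative.

```python
import numpy as np

# Heavily simplified sketch of temporal anti-aliasing accumulation: each new
# frame (sampled with sub-pixel jitter) is blended into a history buffer, so
# a hard edge converges toward a smoothed, intermediate value over time.

def temporal_blend(current: np.ndarray, history: np.ndarray,
                   alpha: float = 0.1) -> np.ndarray:
    """Fold the new frame into the accumulated history buffer."""
    return alpha * current + (1.0 - alpha) * history

# A 4-pixel scanline crossing a hard edge; alternating sub-pixel jitter makes
# the edge pixel land on either side of the geometry on successive frames.
jittered_frames = [np.array([0.0, 0.0, 1.0, 1.0]),
                   np.array([0.0, 1.0, 1.0, 1.0])]

history = jittered_frames[0].copy()
for frame in range(1, 64):
    history = temporal_blend(jittered_frames[frame % 2], history)

print(history.round(2))  # edge pixel settles near 0.5: a softened edge
```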

Better still, the 680 is capable of driving up to four displays at once, with three devoted to top-quality gaming and the fourth serving as an accessory display, meaning you can check email, chat and so forth without sacrificing visual quality in your game. Useful, if nothing else, for bragging to your friends about how awesome everything looks.

* Cost And Availability

Finally, you want it, yes? You’re in luck. Despite rumors of supply problems, the 680 will have no shipment problems at launch, which happens today. It’s available for $499.00. Not cheap, but as you can see, it’s completely worth it. More information is available from NVIDIA.


17 Comments on A Closer Look At NVIDIA’s GeForce GTX 680

MadMax

On March 22, 2012 at 8:50 am

…. i already run all the games out there with a geforce gtx 275………. why should i pay 500$ to do the same thing ?

user

On March 22, 2012 at 9:40 am

Maybe cause you don’t need it?

SXO

On March 22, 2012 at 11:17 am

Saying you “run all the games out there with a geforce gtx 275” is very different than being able to say “I can run all the games out there with max detail on beyond HD resolutions.” Obviously this type of hardware is not for you. And this is not a slight against you at all, so please don’t take it that way. I have friends who play PC games and do not care what quality of graphics they get out of their games as long as they’re playable, but I have others that want every last setting in the game to be maxed and want to run at 120fps on their 1900×1200 screens, or others that have 2560×1600 monitors. Personally I run 3 screens, so this kind of hardware appeals to me.

Kevin

On March 22, 2012 at 3:54 pm

To the one who claims he can run everything out there on such and such a card: as SXO said, the issue isn’t just “running everything out there.” I’d say it’s about even more than “I can run these games with max detail.” This kind of card is for the discerning customer who wants to not only run everything in max detail, on very high resolutions, but also wants to do it with almost seamless framerates.

Steve

On March 23, 2012 at 12:21 pm

Already not a fan of “Nvidia Turbo Boost”. This could effectively kill off overclocking when implemented with no user option to turn it off (which nvidia has unfortunately done here). Since this card is targeted at the “enthusiast” crowd, taking away a feature your targeted audience expects is nothing short of lame.

The performance per watt is the selling feature here. Nvidia has taken a selling point away from AMD now. No surprise here, the card has pretty much taken the recent AMD design path anyway.

user

On March 23, 2012 at 3:43 pm

“Already not a fan of “Nvidia Turbo Boost”. This could effectively kill off overclocking when implemented with no user option to turn it off (which nvidia has unfortunately done here).”

I read somewhere that OCing still works with it….

Jimmy S.

On March 23, 2012 at 5:59 pm

I love PC Gaming and I *do* want to run everything maxed out and have almost seamless framerates on my rig at 1920 x 1200! >: ) This card really appeals to me… I was just about to purchase the ATI HD 7970 but am holding off now for this one. Thanks for the review and comments by all.

SXO

On March 23, 2012 at 9:16 pm

Hey Steve, to my understanding you can still OC the card manually, but you’re right in that you can’t disable GPU boost. I’ll reserve judgement on whether or not that’s a good thing until I see a lot more overclocking tests done on the cards.

Steve

On March 24, 2012 at 11:37 am

What bothers me is that a lot of people are going to assume GPU Boost is some kind of auto-overclocking feature. It is anything but that. It shouldn’t even be compared to overclocking at all. GPU Boost is more like auto-clocking or auto-downthrottling.

The point being, you’re not going to be “pushing” the card past a certain threshold (namely the card’s TDP). In the case of the 680, there is a hard limit set there by Nvidia that keeps you from pushing onward. Yes, currently they let you tweak some sliders. But don’t fool yourself into thinking Nvidia is letting you overclock your card via GPU Boost. You are just working within the confines of what Nvidia thinks is “fair” for the card.

And just to be clear, TDP IS NOT THE MAXIMUM POWER YOU CAN EVER DRAW.

It is also not the maximum heat or “thermtrip”.

See a pattern here?

To be perfectly fair, AMD did a similar thing with PowerTune. What TDP means and how manufacturers go about calculating it is an ongoing discussion. Let’s just say that skeptics like me know that AMD’s TDP is not the same as Nvidia’s TDP (or Intel’s, for that matter). For all intents and purposes, it’s a BS figure they pull out of their ass. Put simply, there is no standard for calculating TDP.

And if you still buy that GPU Boost is a good thing, then you fail to understand what overclocking is all about.

One thing most reviewers fail to point out is that GPU Boost will make benchmarking numbers a little bit fuzzy. Who’s to say two identical 680s will hit their max thresholds under identical environments? I’ve yet to see any review actually benchmark a 680 vs another 680. From what little I’ve gathered so far, there are going to be some 680s that clock themselves more aggressively than others, all thanks to GPU Boost. The quality of the chip & cooling are going to foil any attempts at “flat” benchmarking. This is probably nitpicking though, as it’s already common knowledge that the “better” part is the one that can clock itself higher. Nonetheless, the “fairness” of benchmarking should be under scrutiny when GPU Boost is involved.

As it stands right now, without any means to disable GPU Boost, there will be no true overclocking. From my point of view, this is Nvidia’s attempt at:

- stopping users from (un)intentionally killing their cards with OC tools
- lowering the average RMA per customer (direct result of above)
- killing off the whole ‘OC’ branding from 3rd party vendors (bye bye OCFTWWTFBBQ edition brandings)
- control, control, control

I’m not claiming the sky is falling here. There are some nice things about GPU Boost (and I seem to be pointing out all the bad ones). But if Nvidia thinks it can dictate what is overclocking and we all buy into it, then you can just kiss the classic concept of overclocking goodbye. Intel was smart enough to introduce K parts with Sandy Bridge. AMD at least allows the end user to disable PowerTune. For the love of god, Nvidia, let us disable GPU Boost altogether. We the consumer should have the last say in getting the most out of a product.

/rant

SXO

On March 24, 2012 at 3:44 pm

Steve, I honestly think you’re misunderstanding what GPU Boost does. It does NOT downthrottle the card unless you give it a target FPS. So if you set your target FPS to 60, and play an older game where you’re getting over 200fps, then yes, it will downthrottle. Otherwise GPU Boost works very similarly to Intel’s Turbo Boost. It just adds more juice whenever possible (as long as temps and such are fine).

Steve

On March 24, 2012 at 5:18 pm

No, SXO, GPU Boost works the other way around. GPU Boost will actually downclock the card when the card reaches its TDP. So when you are playing something intense, the card will clock down to its base clock. In the case of an older, less stressful game, the card will have headroom because it is not at TDP, and will thus clock itself back up toward that TDP (or stop short of it).

GPU Boost is a pessimistic approach: Nvidia has determined the base clock to be the maximum clock a GPU can sustain in the most stressful situation (at TDP).

PowerTune is the exact opposite. It is an optimistic approach whereby when an AMD GPU hits its TDP, it will downclock the card to stay within TDP.

Both technologies are reaching the same result from opposite directions.

And as I said above, it’s the fact that we are confined to a “determined” TDP that bothers me. Who is to say that Nvidia’s TDP for a specific GPU is the true maximum threshold that it can take? We all know not all GPUs are cut from the same die. Some do better than others. So why can’t I decide this for myself? It’s akin to putting a 160mph limiter on a Bugatti Veyron and trying to sell it as “the fastest car in the world”.

Steve

On March 24, 2012 at 5:26 pm

After reading that, I realize I made PowerTune sound too much like the same thing. The way PowerTune works is that there is no base clock. Meaning, there is no clock that AMD says the card should run at under full load. Instead, AMD ships the cards at some chosen clock, and if PowerTune is enabled, it will push that card until it hits TDP.

The big difference between the two is that AMD doesn’t determine a base clock as “worst case scenario” like Nvidia does.

Steve

On March 24, 2012 at 6:33 pm

Update: It looks like you can raise the TDP threshold and offset the base clock envelope using the latest version of EVGA Precision, although there is still a hard limit of 132% of TDP. Still shackled, but at least there’s some wiggle room.

After looking at some extreme OC’ing sites, it seems like with a wire mod and some hacks you can unleash the card’s potential. So it looks like there is some hope for us enthusiasts after all :)

SXO

On March 25, 2012 at 6:00 am

I’ll hold out for some non-reference cards to see if this issue persists, though I’m still doubtful it really is an issue. I haven’t read every review of the cards out there, but none of the ones I’ve read have mentioned GPU Boost working in the way you described. Would you mind providing a link where they go over GPU Boost in greater detail? Perhaps it’s time for me to add another hardware site to my daily views.

SXO

On March 25, 2012 at 6:26 am

I’ve read a few more reviews that explore GPU Boost more thoroughly, and I see what you mean about it limiting itself based on the TDP threshold, but at the same time, as you just said, they mention that OC tools designed around GPU Boost are being made. Even so, you can still raise clocks on the card and get a noticeable performance boost. In fact, over at HardOCP they saw their card reach some really high clock rates. AnandTech, on the other hand, pointed out that their review card would step down the clockspeed as soon as the card hit 70°C. With an aftermarket cooler or watercooling, this shouldn’t be an issue. At least that’s what I take from it. Personally, I’m still waiting for 4GB versions before I even consider spending on this, but that’s only if rumors of the GK110 turn out to be just that, rumors.

Steve

On March 25, 2012 at 6:28 am

Of the 20 or so reviews I’ve read, Anand seems to explain GPU Boost best:

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/4

I’ve yet to run across a really good OC guide for the 680. Hopefully that will be remedied soon.

Steve

On March 25, 2012 at 7:20 am

If I recall, the card [H] saw hitting 1300MHz was an engineering sample, and it only did so briefly, never sustained. Anand’s findings prove that OC tools will only get you so far on air cooling.

The whole dynamic clock concept is very app-specific. You’re going to see high clocks on things that don’t stress the GPU (menus, or staring at a wall in a 3D shooter, for example). It’s the scenarios where the GPU is truly being stressed where you are almost guaranteed to see GPU Boost downclocking (possibly down to the minimum clock, aka the base clock). Cooling might help, it might not. GPU Boost is using some complex algorithms based on power, temps, and internals that we are probably unaware of. We really have no idea how GPU Boost works with any certainty… Right now, we’re relying on vague Nvidia literature & user experimentation.

I just now noticed Anand noting that the boost target of a 680 is actually LESS than the TDP. So when we’re talking about GPU Boost hitting a certain “threshold” before throttling, it’s happening well before TDP. I guess it’s just easier to say TDP, even though that’s evidently being too generous.

Yeah, it is confusing. If this is the future of overclocking, then we’re going to have to throw out everything we’ve known about setting clock speeds and keeping them stable through the benchmarking process. No wonder [H] is praising GPU Boost, as it pretty much vindicates their long-running hatred of canned benchmarking. Quantitative benchmarking (frames per second) will be obsolete. Qualitative benchmarking (the user experience) will be the new king.