Kepler

Based on the changes from Fermi, do y'all think it'll do better or be like an ATI card now?
 
No, but the thing I worry about most is the whole boost feature and what effect it will have on folding performance.
 
You make a point, but from what I gather it goes off the heat signature, right? In that case it should kinda max out at one steady level without much flux, since folding is a pretty steady workload. I think it should be safe as long as it doesn't jump around much; I've over/underclocked mid-unit and it doesn't crash these days.

Who knows, maybe there is some CUDA code that says when we use it as a GPGPU for CUDA stuff it won't boost. That's actually a good question for the boost review, 'cause I'm sure people use their cards for folding and BOINC and video encoding and tons of other things these days. I hope [H] looks into that, but I doubt they will; it's just one more thing for a small crowd like us.
 
Based on the power usage numbers, the clocks seem to change with GPU demand, but I'll wait for Brent's article on the dynamic boost feature since I don't really know how it works.

You can always ask Brent; sometimes he'll check stuff for people after reviews are posted and whatnot.
 
Well, at the very least there will probably be some workaround if the option can't be disabled. What that workaround would be, I couldn't tell you at this time lol.
 
I saw over at AnandTech that you can set a setpoint for the boost based on the wattage being drawn. At least in their launch review, they claimed that the way they had it set, the GPU wouldn't boost if the draw was over ~140W, which is probably less than the GPU would use running F@H at 100% load.

It'll be interesting to see how it performs in F@H since nV definitely made some concessions on the compute side in order to see better efficiency in gaming scenarios.
 
I don't think it will perform well. They neutered the hell out of it based on what AnandTech reported; the scheduler has basically been cut out. I think ATI will do better. Just my 2 cents though.
 
The reason ATI cards can't fold as well as NVIDIA cards has a lot to do with the programming side of things, not the architectural differences.
Kepler still has CUDA, so it shouldn't suffer significantly. Of course, the folding cores will have to be tweaked to work on the new architecture (remember the Fermi transition?), but that's far easier than having to write the code from scratch. AFAIK, NVIDIA is working with PG toward this end.
 
It will be fun to see what they do.

I'm going to take a guess (and that's all it is) that it is about the same as a 580.

I also wonder if this will let PG come out with even larger GPU WUs. (Have the client do a shader-count test and request WUs based on that.)
 
It will be fun to see what they do.

I'm going to take a guess (and that's all it is) that it is about the same as a 580.

I also wonder if this will let PG come out with even larger GPU WUs. (Have the client do a shader-count test and request WUs based on that.)

It will probably depend on whether FAH uses FP64 on the GPU side of things.
If it's FP32, we should see some nice boosts. Otherwise, 680 == 580 is probably the best we can hope for.
 
After reading Anand's review, I think the theoretical max performance should be around 1.5 times GTX580. Not sure though...
 
After reading Anand's review, I think the theoretical max performance should be around 1.5 times GTX580. Not sure though...

True, but AMD has proven theoretical and realistic performance are two different things, even though the 7970 still wipes the floor with anything NVIDIA has outside of F@H.
 
True, but AMD has proven theoretical and realistic performance are two different things, even though the 7970 still wipes the floor with anything NVIDIA has outside of F@H.

Yeah, but AMD's case is about the coding (software) side of things as I said above. AMD's computing power is unfortunately not being utilized, and AMD is among those to blame if you wanted to blame someone. CUDA really makes a difference...
 
Yeah, but AMD's case is about the coding (software) side of things as I said above. AMD's computing power is unfortunately not being utilized, and AMD is among those to blame if you wanted to blame someone. CUDA really makes a difference...

No, I'm talking about other things outside of F@H. AMD completely destroys NVIDIA and its CUDA when it comes to distributed computing (F@H being the exception), but their actual performance is still nowhere near the theoretical performance.

But given that GPGPU wasn't the priority for Kepler, my guess is it's not going to be all that much faster than the 580 even with the extra shaders. My reasoning is that they went the AMD way and are now running 1:1 core/shader clocks, unlike previous generations. So great, they increased the shader count to 1536, but those shaders now run at half the speed of the GTX 580's.
 
No, I'm talking about other things outside of F@H. AMD completely destroys NVIDIA and its CUDA when it comes to distributed computing (F@H being the exception), but their actual performance is still nowhere near the theoretical performance.

But given that GPGPU wasn't the priority for Kepler, my guess is it's not going to be all that much faster than the 580 even with the extra shaders. My reasoning is that they went the AMD way and are now running 1:1 core/shader clocks, unlike previous generations. So great, they increased the shader count to 1536, but those shaders now run at half the speed of the GTX 580's.

The comment about AMD cards simply isn't true. The only compute work that AMD cards before the 7 series were good at was simple hashing; anything more complicated would suffer on their simple shaders. There just happen to be a lot of projects that aren't doing much more than hashing.

The GTX 680 will likely be slower at FAH than a 580 because of the fixed hardware scheduler. However, GK110 should redeem the series with its CUDA-centric design. :)
 
The comment about AMD cards simply isn't true. The only compute work that AMD cards before the 7 series were good at was simple hashing; anything more complicated would suffer on their simple shaders. There just happen to be a lot of projects that aren't doing much more than hashing.

The GTX 680 will likely be slower at FAH than a 580 because of the fixed hardware scheduler. However, GK110 should redeem the series with its CUDA-centric design. :)

The shaders on the 7000 series (GCN) are just as capable as the NVIDIA ones; on the older architectures with the wide VLIW designs it was quite a bit different. I just barely got some BOINC stuff running on my 7970, but I'm kinda new to that stuff so I don't have anything to reference against.
 
The shaders on the 7000 series (GCN) are just as capable as the NVIDIA ones; on the older architectures with the wide VLIW designs it was quite a bit different. I just barely got some BOINC stuff running on my 7970, but I'm kinda new to that stuff so I don't have anything to reference against.

I believe that's why he said "before the 7 series"... i.e. pre-GCN cards.
 
I've been testing a GTX 680 vs GTX 580 vs HD 7970, concentrating on BOINC for now as FAH has no client that supports it yet.
 
It's my understanding Nvidia and Stanford are working on this now.

From what I've read, I'll keep my 580s, thanks.

Just like Fermi, I'll wait till they produce the true Kepler.

:D
 
Well, I'm seeing reports of a 680 doing 40k+ on certain WUs.....

I'm wondering if anyone can confirm, as I don't have one.
 
Well, I'm seeing reports of a 680 doing 40k+ on certain WUs.....

I'm wondering if anyone can confirm, as I don't have one.

If that's the case, I'm calling bullshit. There's no bloody way Kepler should be able to do 40k PPD.
 
If that's the case, I'm calling bullshit. There's no bloody way Kepler should be able to do 40k PPD.

Never mind, the guy got "confused".

I didn't think this was the case, and that's why I asked if someone could verify.
 
Never mind, the guy got "confused".

I didn't think this was the case, and that's why I asked if someone could verify.

OK good, I was going to be pissed if that ended up being true, because if Kepler magically got 40k PPD then PG had better post the code for the Kepler core so we can find where they cheated. But since that's not the case, I'm happy now. :D
 
No chance of 40k until maybe they're optimized.

You can get V7 running on 680s by running with the client-type:beta configuration on the slot. My 680s are pulling 22k each but they're running cool and not using 100%.
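In case anyone wants to try it, the slot entry in config.xml looks roughly like this. This is from memory, so treat the exact option names as a guess and double-check against the FAHClient docs; the slot id will depend on your setup.

Code:
<config>
  <!-- GPU slot flagged to pull beta work units (slot id is just an example) -->
  <slot id='1' type='GPU'>
    <client-type v='beta'/>
  </slot>
</config>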
 
22k on a 680? Doesn't seem that impressive honestly.
 
Well, the cards ain't working @ 100%. Probably not using all 1536 cores of the new cards. I think someone on the folding forum was saying they only use 384 cores because it gets picked up as a 460ti. Don't know how true that is.
 
I don't think that's true either, since a 460 can't do 22k PPD. My 470 doesn't even do that. According to FAHMon, my 470 is currently rocking out 15k PPD (the WU it's working on is currently at 19%). Not sure how accurate that is.
 
I don't think that's true either, since a 460 can't do 22k PPD. My 470 doesn't even do that. According to FAHMon, my 470 is currently rocking out 15k PPD (the WU it's working on is currently at 19%). Not sure how accurate that is.


It's using more than 384 shaders; the number of shaders a WU uses has nothing to do with what card is detected. It will use all the shaders required based on the atom count of the WU. All the client can do is read what the graphics drivers report the card as in the system info.

Either way, 22k is a bit more than expected. Kepler is not built for GPGPU performance like Fermi was, so the performance isn't going to be a huge jump over Fermi.
 
I never said it was only using 384 shaders, only said 22k didn't seem that impressive to me. Currently I'm showing 17k PPD on my 470 GTX. I'd imagine a 570 or 580 can do more than 17k PPD.
 
I never said it was only using 384 shaders, only said 22k didn't seem that impressive to me. Currently I'm showing 17k PPD on my 470 GTX. I'd imagine a 570 or 580 can do more than 17k PPD.
Meh. It's got more shaders, but they run at half the clocks. As has been stated, they aren't built for GPGPU. The 680 has more in common architecturally with the 460 than the 580, so it might be best to adjust expectations based on 460 results rather than 580/570 results. I certainly don't expect the 680 to be a folding monster.
 
Meh. It's got more shaders, but they run at half the clocks. As has been stated, they aren't built for GPGPU. The 680 has more in common architecturally with the 460 than the 580, so it might be best to adjust expectations based on 460 results rather than 580/570 results. I certainly don't expect the 680 to be a folding monster.

If it does come to pass, it is going to be a huge wet blanket to everyone who jumped on the train early.

Kinda funny how the 680 might be similar to AMD's 62xx chips getting outperformed in folding by the 61xx chips.
 
If it does come to pass, it is going to be a huge wet blanket to everyone who jumped on the train early.

Kinda funny how the 680 might be similar to AMD's 62xx chips getting outperformed in folding by the 61xx chips.

That would suck and I think I'm gonna hold off on buying the 6xx series cards now if that's the case. Granted I don't need the extra GPU power since my 470 handles what I need fine for now. Though if it did give a good boost in PPD, then that's a good enough reason for me. Looks like I'm gonna wait on IB to see how well it performs and just go with a CPU upgrade instead. Cheaper all around anyway.
 
That would suck and I think I'm gonna hold off on buying the 6xx series cards now if that's the case. Granted I don't need the extra GPU power since my 470 handles what I need fine for now. Though if it did give a good boost in PPD, then that's a good enough reason for me. Looks like I'm gonna wait on IB to see how well it performs and just go with a CPU upgrade instead. Cheaper all around anyway.

I think the cards to look for are the 660 and 670. The performance-to-power ratio in F@H should be pretty damn good with those cards, making them worth replacing a 570/560 or 470/460 with.
 
If it does come to pass, it is going to be a huge wet blanket to everyone who jumped on the train early.

Kinda funny how the 680 might be similar to AMD's 62xx chips getting outperformed in folding by the 61xx chips.


I don't think this is even in the same league as that. I mean, nV was pretty up-front about the fact that the 680 is optimized for low power, smaller die size, and gaming more than anything else, and it shows in the details regarding GPGPU. Anyone who bought one thinking it was going to be a killer folding card did so pretty much on faith. I'm not saying it might not turn out to be a damn quick card; we know that CPU folding scales pretty linearly with both cores and clocks, and the 680 has more than 2x the cores running at more than 1/2 the clocks, so the equation might work out in the 680's favor anyway. Of course, that's making a big assumption that GPU folding scales the same way as CPU, but hey.
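Just to put rough numbers on that equation, here's a quick back-of-the-envelope sketch. The linear-scaling assumption (PPD roughly proportional to shader count times shader clock) is mine, not anything PG has published; the clocks are the launch-review numbers.

Code:
# naive FP32 throughput comparison, assuming PPD ~ shaders * shader clock
gtx680 = 1536 * 1006   # 1536 shaders at ~1006 MHz base clock
gtx580 = 512 * 1544    # 512 shaders hot-clocked at ~1544 MHz
print(f"naive ratio: {gtx680 / gtx580:.2f}x")  # ~1.95x in the 680's favor

So on paper the 680 comes out nearly 2x ahead; the real question is how much the scheduler and other compute cuts eat into that.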

AMD's situation, on the other hand, is just a flop, plain and simple.
 
I said in folding ;)

I know in games the 680 is a wonderful card.
The 62xx chips from AMD were built for VM and cloud operations, and do quite well there.
 