Gee, I sure hope those Anonymous guys visit Gaza City sometime and get their props from the happy-go-lucky Islamic fundamentalists they want to be super best friends with.
Why not? Why can't you drop the CPU multiplier to whatever you want and have a correspondingly lower TDP?
And as defaultluser already mentioned, Intel have configurable TDP (cTDP) now so just dial in whatever you want...
Anand: Ivy Bridge Configurable TDP Detailed
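To illustrate why a lower multiplier also means a lower power draw: dynamic CPU power scales roughly as P = C·V²·f, and since lower clocks also allow a lower voltage, power falls faster than frequency does. A rough back-of-the-envelope sketch (this is the textbook switching-power approximation, not Intel's actual cTDP mechanism, and the voltage/clock figures are made-up illustrative values, not real VIDs):

```python
def dynamic_power(capacitance, voltage, freq_ghz):
    """Classic switching-power approximation: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * freq_ghz

# Hypothetical chip: stock 3.4 GHz at 1.20 V vs. dialed-down 2.5 GHz at 1.05 V.
stock = dynamic_power(1.0, 1.20, 3.4)
dialed_down = dynamic_power(1.0, 1.05, 2.5)

# The clock dropped to 2.5/3.4 ~ 74% of stock, but power drops further
# because the V^2 term shrinks too.
print(f"relative power: {dialed_down / stock:.2f}")
```

So dialing the multiplier down buys you more than a proportional power saving, which is exactly what cTDP exploits.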
If you're running stock speeds then just leave the CPU vcore setting on 'auto'. The auto setting will use the CPU's VID (voltage identification) signal to set the correct vcore for that particular chip.
Yeah, I guess he meant to say Ivy Bridge. Even though IVB was a 'tick' (a process change rather than a new architecture), it had architectural improvements as well, so '10% faster than SNB' sounds no better than what IVB already delivers.
I haven't got a clue what sort of performance AMD is offering compared to Intel these days but this might help...
http://www.anandtech.com/bench/CPU/2
http://www.anandtech.com/bench/Product/363?vs=147
Thanks for the heads-up pxc, I always enjoy David Kanter's articles. As someone who digs energy-efficient designs, I was already looking forward to Haswell. Now I'm looking forward to the performance gains too. I'll be sure to shop around for a new laptop with Haswell when they ship.
I'm running an Intel Xeon E3 processor on my MSI Z77 board and am very pleased with it. I'm also not bothered by overclocking these days so the Xeon E3 was a good choice for me. This rig has great performance, is silent and idle power consumption is about 44W. :)
That's the beauty of being so vague and using highly ambiguous terms like 'pipeline' and shader 'somethings', you can make what you say match any truth.
You're now saying that a pixel pipeline is the same thing as a shader unit (aka. pixel processor, fragment shader, shader pipeline)? That's...
The general consensus seems to be that R580 will have 3 shader pipes per pixel pipeline. So, that's 16 pixel pipes (like R520) and 48 shader pipes (3x R520). In shader-intensive games it should be quite a bit quicker than R520....... duh. :p
Just not many? Yes, I suppose zero isn't many is it? Sounds like you've got a bad memory so let me refresh it.....
X1800XT launched on October 5th.
X1800XT available (according to ATi) on November 9th.
Were you asleep for the month between X1800XT launch and availability? As far as I'm...
This one is a no-brainer as far as I'm concerned - all your games will run a lot faster with the X1800XT+1GB option. Having 2GB of system RAM instead of 1GB is nowhere near as important as your video card is to the speed of your games. They might load a bit quicker with the extra memory but if...
48 pipes on 90nm?? R520 has 321 million transistors with just a third of those pipes! The die size, transistor count and power consumption would be through the roof. I can't even imagine how bad yields would be. 24 pipes and a bit more speed sounds doable but 48 pipes on 90nm sounds like pure...
That's good news but remember that R520 taped-out well over a year ago and look how long that's been available for. Then there's R420, that taped-out in December '03 and when was that available in decent quantities? R480 was on the SIG list by early November '04 but you couldn't buy them for...
The following are changes made and issues resolved since driver version 81.94:
Single GPU Issues Resolved
There is a possible incompatibility between the ForceWare graphics driver and the current WDM driver.

GeForce 7 Series: Modifying any Performance and Quality Setting using the...
Funny, ATi approved the release of benchmarks for the unreleased (at the time) X850 Crossfire technology.....
http://www.anandtech.com/video/showdoc.aspx?i=2477
Why are they so shy about performance figures this time? :confused:
I've seen the Point of View GTX512 for £479 (inc. VAT) in the UK but I suppose you could get the GTX256 and X1800XT cheaper too if you shopped around.
As for X1800XT XFire and R580 - I'll be impressed when they're on sale and in stock and not a moment sooner.
:)
There is a patch to enable HDR with ATi's X1000-series cards in Far Cry but it doesn't allow HDR+AA and it's not available yet. Here are some benchies using the beta patch....
http://www.xbitlabs.com/articles/video/display/geforce7800gtx512_14.html
Edit: It may or may not have much impact...
Hmmm, 470MHz is also the stock core clock for the Quadro FX 4500 - which also uses the same GPU and cooler. 470 is pretty conservative considering how high the Leadtek Extreme GTX256 is overclocking with that same dual-slot cooler - 533MHz average from the 5 reviews I've seen so far. I would...
I checked out a few of the 6800GS reviews to see how well it was overclocking, here are the overclocks the reviews reached........
425/1.00 (Stock, for reference)
480/1.32
515/1.18
480/1.11
505/1.18
540/1.23
521/1.19
500/1.08
522/1.20
Average is 508MHz/1.19GHz (core/mem) or about...
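A quick sanity check of those averages (the eight core/mem results from the reviews, with the 425/1.00 stock clocks excluded; the percentage line is just derived arithmetic over stock, added here for illustration):

```python
# Core clocks in MHz, memory clocks in GHz, as listed above.
results = [(480, 1.32), (515, 1.18), (480, 1.11), (505, 1.18),
           (540, 1.23), (521, 1.19), (500, 1.08), (522, 1.20)]

core_avg = sum(core for core, _ in results) / len(results)
mem_avg = sum(mem for _, mem in results) / len(results)

print(f"core: {core_avg:.0f} MHz, mem: {mem_avg:.2f} GHz")  # 508 MHz, 1.19 GHz
print(f"core gain over 425 MHz stock: {(core_avg / 425 - 1) * 100:.0f}%")
```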
Nope, NV42 is a native 12-pipe design with 5 vertex shaders.
It'll be interesting to see what sort of overclocks the NV42 can reach considering how well the G70 overclocks on the same process with a lot more transistors. :)
Well, the GTX256 was $599 at launch and can be found for $450-470 now so I expect the GTX512 will drop down to the $549-599 range within a few weeks. After all, this card is supposed to be NVIDIA's answer to the X1800XT 512MB and it's not going to be stealing sales when it costs a hundred bucks...
I'm not sure ATi can triple the number of pipes and also significantly increase the clockspeed all on the same 90nm process as R520. Remember the sort of changes we saw going from R430 (X800) to R480 (X850) on the same process? With R580, I think we'll either see a speed bump into the 700-800MHz...
With the GTX256 going for $455-470 I'm guessing NV will put the GTX512 into the $599 price slot in competition with the X1800XT 512MB (MSRP $549). Plenty of folks bought the GTX for that price so I see no reason why plenty more won't spend that sort of amount again for the new card.
If NVIDIA can make a 302 million transistor, 24-pipe GPU that can clock to 550-580MHz then I can only imagine what they will achieve with the switch to 90nm low-k. ATi's X800XL is fabbed on the same process, has only 16 pipes and 160 million transistors yet struggles to hit 450MHz. I think 2006...
The real news is the clocks...........
3D : 580/1730MHz
Throttle : 500/1730MHz
2D : 275/1730MHz
The same guys who got hold of the picture also got hold of the new XFX card's BIOS and extracted the clocks from it. If this XFX card represents an 'extreme' (highly-overclocked as standard)...
Intel's 45nm process should be a big improvement over their 90nm and 65nm techs. The use of high-k dielectrics and tri-gate transistors means current leakage will be vastly reduced. You can read more about the new transistor techs here.
The only thing you're bringing up is ridiculous anti-NVIDIA bile. You're welcome to do that but make sure you do it in the right place.....
ATi Flavor
Thanks.
HT certainly makes Windows feel more snappy and responsive when opening multiple windows/apps. You can't really benchmark the benefit HT delivers, but it's a significant one nonetheless. HT is probably the only reason I didn't jump ship to a single-core A64 a long time ago. :)