Problem? Trading as in spending $30 for an adapter vs. $600 for another card?
Don't the adapters run a LOT more than $30? I thought they were more like $100ish.
Problem? Trading as in spending $30 for an adapter vs. $600 for another card?
Oh, yeah, that was another minor issue that hasn't been mentioned. Is anyone bothered by this?
Don't the adapters run a LOT more than $30? I thought they were more like $100ish.
I didn't pore over every slide, but I assume there was no pricing info in this release?
No. This wasn't a launch. We don't even know if the chips are in the partners' hands yet.
Problem? Trading as in spending $30 for an adapter vs. $600 for another card?
That was the overpriced piece of cr*p Apple adapter. AMD says a specially built $30 adapter will be out shortly.
So in other words.... $100
Even if they stay in the $50-100 price range, it's still far more accessible for the majority of gamers than buying two cards, even here on [H].
That was the overpriced piece of cr*p Apple adapter. AMD says a specially built $30 adapter will be out shortly.
The list is already out. HardOCP should post it on the front page so everyone knows: http://support.amd.com/us/eyefinity/Pages/eyefinity-dongles.aspx
Hell, buying 3 monitors costs more than I can afford. Although I do like that it works with the card I already have, so I could just add an extra one for cheap off eBay. But I can't stand the bezels in my view.
All of the work put into making the GF100 a Geometry powerhouse makes me raise my eyebrows in extreme interest.
Very interesting read...
Disappointed that they aren't releasing more information (speeds, power, temps, product lineup).
After all the time that has passed, I'm getting a feeling that NVIDIA is STILL trying to buy itself more time...
I didn't pore over every slide, but I assume there was no pricing info in this release?
This really bothers me. I have recently purchased a Dell P2310H monitor (DisplayPort) and am planning on ordering another 2 with the intention of running an Eyefinity setup. Should I still spring for the HD5870 or wait for the new GF100? I don't think I can afford two cards at this time... not unless I could reuse my existing GTX260.
We have enough information to know that Fermi is fast, considerably faster than the current competition. And it supports the coolest new features (Eyefinity) that our competitor has. That was really the purpose of the press release. Just enough to whet our appetite and perhaps slow down AMD's momentum just a bit.
But now it's time for some more details; however, this is only the beginning. Cards are either just getting into production now, or that will soon need to be the case, in order to get these puppies out by late March / early April.
I want to see what the future holds for these cards. This part looks EXTREMELY interesting now and I just have to wait and see before I make my next major GPU upgrade. It will be interesting to say the least.
If you want multi-monitor gaming on a single card, it looks like we do have enough information to know that the 5000 series is still your only option for that: http://hardocp.com/image.html?image=MTI2MzYwODIxNHh4VHN0ekRuc2RfMl8yOV9sLmdpZg==
Actually, we don't know that Fermi will be "considerably" faster than AMD's current offering. We only have information that suggests it could be faster. There are still too many unknown variables to make any kind of assumption about actual gaming performance.
GF100 will have 512 CUDA cores, which more than doubles the GeForce GTX 285 GPU's 240 cores. There are 64 texture units, down from the GTX 285's 80, but the texture units have been moved inside the third-generation Streaming Multiprocessors (SMs) for improved efficiency and clock speed. In fact, the texture units will run at a higher clock speed than the core GPU clock. There are 48 ROP units, up from 32 on the GTX 285. The GF100 will use 384-bit GDDR5, so depending on the clock speeds it actually operates at, there is potential for high memory bandwidth. These changes seem logical and encouraging, but without knowing clock speeds, actual shader performance is anyone's guess.
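Since clock speeds are the big unknown, here's a quick back-of-the-envelope sketch of what that 384-bit GDDR5 bus could mean. The GTX 285 numbers are its real specs; the GF100 data rate below is purely a guess on my part:

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate.
def bandwidth_gb_s(bus_width_bits: int, data_rate_mt_s: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and effective data rate (MT/s)."""
    return bus_width_bits / 8 * data_rate_mt_s / 1000

# GTX 285 for reference: 512-bit GDDR3 at ~2484 MT/s effective.
print(bandwidth_gb_s(512, 2484))  # ~159 GB/s

# GF100: 384-bit GDDR5 -- 4000 MT/s here is hypothetical, nothing is announced.
print(bandwidth_gb_s(384, 4000))  # ~192 GB/s
```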
"Do I wait for GF100 or do I purchase a Radeon 5000 series card now?" In my opinion, the answer is quite simple right now. With all these unknown variables I would buy a Radeon 5000 series video card right now and enjoy gaming with the fastest current GPU for gaming, and enjoy an Eyefinity experience. If when GF100 is released, it turns out to offer more than the Radeon HD 5000 series for the factors that matter most to me, then I would sell my Radeon HD 5000 series video card and upgrade to the GF100. If however, it turns out it doesnt offer what I need, then I would rest happy that I made a good buying decision.
I have to disagree a little with this. nVidia did release a couple of benchmarks beating the 5870 by a third. I'm not saying that's what GF100 will actually do, but I am saying that nVidia has now kind of set a bar. GF100 had better come in closer to beating the 5870 by 30% than by 10%, for nVidia's sake. I think what they're really saying with this release is that GF100 is simply going to rock. The details about the rest are yet to come.
Putting that much faith in benchmark scores released by the IHV is a little naive. As I've said in another thread, Nvidia is in the market to sell more cards than the competition. As history has shown, both AMD and Nvidia are more than willing to "fudge" numbers in order to accomplish this. Right now, all Nvidia cares about is stopping AMD from getting a larger share of the market due to Nvidia's delays. If that means overstating their performance, then that's exactly what they'll do.
But, as many others have said, this is all speculation till someone trustworthy (hello Kyle/Brent) actually gets production-level hardware in their hands and answers everyone's questions once and for all. I will say this: I truly hope their tessellation implementation is really that big of an advance. It'll prove that tessellation can make a large difference, and game developers will take advantage of it. I'm sure AMD's next release will fix any shortcomings in their current design, and it should be out by the end of this year / very early next year.
Old and fake pic. If you read today's article you would know there will be 64 TMUs, not 96 or 128.
Nice info. So based on the Dark Void benchmark, my next card will be 80% faster than my current GTX 285. NVIDIA has said numerous times that this will be cost-competitive with similarly-performing ATI solutions, so I hope it'll be a good value.
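For what it's worth, that 80% figure is just chained marketing math. Here's the arithmetic; the GF100-vs-5870 ratio is from NVIDIA's own slides, and the 5870-vs-GTX 285 ratio is my rough ballpark from typical reviews, so take it with salt:

```python
# Chained relative-performance estimate -- both ratios are rough.
gf100_vs_5870 = 1.33      # "beating the 5870 by a third" (NVIDIA's slides)
r5870_vs_gtx285 = 1.35    # my ballpark for a 5870 over a GTX 285

gf100_vs_gtx285 = gf100_vs_5870 * r5870_vs_gtx285
print(f"GF100 vs GTX 285: ~{(gf100_vs_gtx285 - 1) * 100:.0f}% faster")  # ~80%
```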
Makes this pic (leaked here first) look true?
"quite simple" for a rich eyefinity user who can buy an expensive video card to use it for 3 months then sell it, just look at the views number and comments in this topic and similar topics at other tech sites, it is not a "quite simple" answer, in my opinion.
+1 here. As Anand says:
"In short, heres what we still dont know and will not be able to cover today:
1.Die size
2.What cards will be made from the GF100
3.Clock speeds
4.Power usage (we only know that its more than GT200)
5.Pricing
6.Performance"
..........
That's only everything a buyer might find worth knowing about Fermi.
+1 here.
Though PCPerspective said the die size will be over 500 mm² and that it uses a 384-bit memory bus... a traditional nVidia monolithic-beast GPU design.
No doubt these are probably going to be faster than the 5870, but they ain't going to be cheap to produce, especially when we take TSMC's 40nm yields into consideration.
Honestly, this looks like another GTX 280 vs. 4870 déjà vu (except nVidia is 6 months late):
nVidia for absolute best performance, ATi for price/performance affordability.
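To put rough numbers on "ain't going to be cheap to produce", here's a sketch of dies per 300 mm wafer with a simple Poisson yield model. The 500 mm² figure is PCPerspective's; the defect density is pure speculation on my part:

```python
import math

def gross_dies(wafer_diameter_mm: float = 300, die_area_mm2: float = 500) -> float:
    """Approximate candidate dies per wafer: wafer area / die area,
    minus a standard edge-loss correction term."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

dies = gross_dies()                    # ~112 candidate dies per 300 mm wafer
good = dies * poisson_yield(5.0, 0.4)  # 0.4 defects/cm^2 is a guessed number
print(f"~{dies:.0f} gross dies, ~{good:.0f} good dies per wafer")  # ~112 / ~15
```

Even if the real defect density is half that guess, a 500 mm² die on an immature process leaves very few sellable chips per wafer, which is the whole GTX 280 vs. 4870 pricing story over again.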
This really bothers me. I have recently purchased a Dell P2310H monitor (DisplayPort) and am planning on ordering another 2 with the intention of running an Eyefinity setup. Should I still spring for the HD5870 or wait for the new GF100? I don't think I can afford two cards at this time... not unless I could reuse my existing GTX260.
Buy a used GTX 260 and SLI it instead. Nvidia Surround can only support two displays per card (including Fermi), but support should be added to GT200 chips later; after all, GT200-based Quadros have SLI Mosaic already.
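For context on why a single card struggles with that setup, here's the pixel math (assuming the P2310H's native 1920x1080):

```python
# Pixel load of a 3x1 Eyefinity group vs. a single 1080p display.
single_1080p = 1920 * 1080         # 2,073,600 pixels
eyefinity_3x1 = (3 * 1920) * 1080  # 6,220,800 pixels across three panels

print(f"{eyefinity_3x1 / single_1080p:.0f}x the pixels per frame")  # 3x
# Bezel compensation can add a few hundred hidden columns per gap on top of this.
```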