390X coming soon, within a few weeks

Zarathustra[H];1041558093 said:
Would you mind explaining what you mean by "support" in this example? I'm not trying to be difficult, I honestly don't understand what you mean.

I haven't owned an AMD video card since I sold off my two 3-slot Asus DirectCU2 HD6970's, but my Kaveri A10-7850K HTPC has AMD graphics, and I haven't noticed any support problems there.

Most complaints I see are against crossfire and lack of support there. Probably what he's talking about. I've never seen complaints about single cards IIRC that didn't affect nVidia too.

I don't think the 390x with 1,000+ more cores than the 290x, HBM and AIO (if that's all true) will launch at $550. Just a hunch.
 
I'm talking about NV being ahead of AMD in the following way

- Introduced frame metering into Kepler
- The obvious faster support of SLI profiles
- Shadowplay - saves you a $40 FRAPs license and performs far better
- I don't know if AMD has unique AA modes, but at least Nvidia tries to release new ones (TXAA sucked, but MFAA is getting good reviews). Having options is better than not.
- DSR, and while AMD responded with VSR, DSR gives you a smoothing option if you're still seeing jaggies. Not the best of solutions but again having options is better than not.
- G-Sync, and while AMD responded with FreeSync, FreeSync took way too long and it isn't supported by mGPU configurations like G-Sync is. I'm also pretty sure that having hardware-based variable sync will allow for G-Sync + ULMB in the future.
- On the professional side, using officially supported and optimized CUDA libraries is very nice, not to mention how easy it is to get CUDA going because of their extended support into popular compilers.
- Having EVGA is very nice. They are the only vendor that has given me no questions asked replacements. You could take a piss on your GPU and still get a replacement from EVGA as long as you're within the warranty period. Ok maybe that's a little bit exaggerated. :p
- And all of this, without "coming soon TM"

But it has also reached the point where I feel NV quality is starting to degrade (Maxwell issues) because they've gotten too far ahead of AMD. It's like they've captured so much market with Kepler, they just said, "we've now got them so take it easy with Maxwell - it's not like we have any competition or much market share left to obtain". Like in any market, the first mover gets the advantage and recognition, whereas the copycats don't really get much second thought (point being, AMD needs to stop "responding" and be more proactive). Kepler may have been a tipping point.

I don't have anything against AMD specifically. Unless the 390X is a flop I do plan to get one. If it is a dual die chip oriented for VR... well I do plan to get a Rift sooner than later. ;)
 
Well isn't AMD being proactive with the 390X/395X2? They will be the first to bring HBM to the table, no following there?
 
I'm talking about NV being ahead of AMD in the following way

- Introduced frame metering into Kepler
- The obvious faster support of SLI profiles
- Shadowplay - saves you a $40 FRAPs license and performs far better
- I don't know if AMD has unique AA modes, but at least Nvidia tries to release new ones (TXAA sucked, but MFAA is getting good reviews). Having options is better than not.
- DSR, and while AMD responded with VSR, DSR gives you a smoothing option if you're still seeing jaggies. Not the best of solutions but again having options is better than not.
- G-Sync, and while AMD responded with FreeSync, FreeSync took way too long and it isn't supported by mGPU configurations like G-Sync is. I'm also pretty sure that having hardware-based variable sync will allow for G-Sync + ULMB in the future.
- On the professional side, using officially supported and optimized CUDA libraries is very nice, not to mention how easy it is to get CUDA going because of their extended support into popular compilers.
- Having EVGA is very nice. They are the only vendor that has given me no questions asked replacements. You could take a piss on your GPU and still get a replacement from EVGA as long as you're within the warranty period. Ok maybe that's a little bit exaggerated. :p
- And all of this, without "coming soon TM"

But it has also reached the point where I feel NV quality is starting to degrade (Maxwell issues) because they've gotten too far ahead of AMD. It's like they've captured so much market with Kepler, they just said, "we've now got them so take it easy with Maxwell - it's not like we have any competition or much market share left to obtain". Like in any market, the first mover gets the advantage and recognition, whereas the copycats don't really get much second thought (point being, AMD needs to stop "responding" and be more proactive). Kepler may have been a tipping point.

I don't have anything against AMD specifically. Unless the 390X is a flop I do plan to get one. If it is a dual die chip oriented for VR... well I do plan to get a Rift sooner than later. ;)

I'm gonna go ahead and refute a couple of your points.

ATI has tried different types of AA before, and they had a Temporal AA that was back in the X800 series. Not many games would support it so they dropped it. They also have edge-detect anti-aliasing in the current drivers which does a fairly good job for image quality, IMO. But again, it needs game support, and unfortunately that's where NVIDIA throws more of their money. NVIDIA isn't the only one that tries, however.

Also, DSR smoothing is bad. Like, MLAA kind of bad for text, etc. and is blurry. It is no good, in my honest opinion. I would rather not have the smoothing option.
 
Well isn't AMD being proactive with the 390X/395X2? They will be the first to bring HBM to the table, no following there?

I feel like this is somewhat overlooked. I'm sure that AMD could have just as easily released the 390x months ago with your standard 4-6GB of GDDR5, at a lower price point, and still be somewhat competitive with Nvidia. Instead, they're (metaphorically) shooting for the stars, and hoping that they hit a home run. If HBM proves to be as big of a game changer as it's rumored to be (yes, we all know the numbers and the theory behind it, but up until now we haven't had a viable, working product from either team), then AMD may very well have made a smart move by delaying their offering in order to launch a far superior product.

Essentially, you have two options: Do you take longer and invest more in developing a superior product, hoping to strike gold? Or, considering your current financial position, do you play it safe, release a mediocre but still competitive card, and remain on life support for the time being? It's a much harder decision when you consider the amount of money, jobs, etc. that are on the line.

It's easy to sit in your comfy arm chair and pontificate when you have no skin in the game.
 
All companies have short-term and long-term goals and projections, but short-term goals affect long-term goals, so delaying their offering because it would be better to wait for a superior product isn't really a consideration in a competitive market unless we are talking about a difference of less than a quarter.

Also I don't think it would have been possible to release a GDDR5 version of the r390 because the bus for HBM would have been totally different from GDDR5.
 
I'm gonna go ahead and refute a couple of your points.

ATI has tried different types of AA before, and they had a Temporal AA that was back in the X800 series. Not many games would support it so they dropped it. They also have edge-detect anti-aliasing in the current drivers which does a fairly good job for image quality, IMO. But again, it needs game support, and unfortunately that's where NVIDIA throws more of their money. NVIDIA isn't the only one that tries, however.

Also, DSR smoothing is bad. Like, MLAA kind of bad for text, etc. and is blurry. It is no good, in my honest opinion. I would rather not have the smoothing option.

Plus,

- Frame metering in GCN
- Sufficient CF support in games that they aren't locked out of.
- AMD has their own Shadowplay.
- SMAA, no need to discuss anything else.
- VSR.
- G-Sync has taken longer to come to market in any significant way than FreeSync.
 
I feel like this is somewhat overlooked. I'm sure that AMD could have just as easily released the 390x months ago with your standard 4-6GB of GDDR5, at a lower price point, and still be somewhat competitive with Nvidia. Instead, they're (metaphorically) shooting for the stars, and hoping that they hit a home run. If HBM proves to be as big of a game changer as it's rumored to be (yes, we all know the numbers and the theory behind it, but up until now we haven't had a viable, working product from either team), then AMD may very well have made a smart move by delaying their offering in order to launch a far superior product.

Essentially, you have two options: Do you take longer and invest more in developing a superior product, hoping to strike gold? Or, considering your current financial position, do you play it safe, release a mediocre but still competitive card, and remain on life support for the time being? It's a much harder decision when you consider the amount of money, jobs, etc. that are on the line.

It's easy to sit in your comfy arm chair and pontificate when you have no skin in the game.

Not sure why people get so fired up about HBM. I think AMD should have waited until 2016 if it's slowing them down by months. To quote a few pages back.

4096 bits ÷ 8 bits/byte = 512 bytes per transfer; at the rumored 1000-1250MHz effective memory clock, that works out to 512-640GB/sec.

The Titan X with compression is around 440 GB/s. Most Titan X's can OC the memory to 500GB/s. HBM might help the 144Hz guys past 4GB, but with a 4GB max I personally don't see a need for more bandwidth... It wasn't a limiting factor. Now if they launch at 8GB VRAM the higher Hz guys may benefit from a multiGPU setup. If AMD would launch the damn thing we'd find out....
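For anyone who wants to sanity-check the numbers quoted above, here's a minimal sketch of the arithmetic. The 1000-1250MHz effective rate for HBM is the thread's rumor, not a confirmed spec, and the GDDR5 comparison figures assume the commonly cited bus widths and effective data rates for those cards:

```python
def bandwidth_gb_s(bus_width_bits: int, effective_rate_mhz: float) -> float:
    """Peak memory bandwidth = bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8          # e.g. 4096 bits -> 512 bytes
    return bytes_per_transfer * effective_rate_mhz / 1000  # MHz -> GB/s

# Rumored HBM range from the post above
print(bandwidth_gb_s(4096, 1000))   # 512.0 GB/s
print(bandwidth_gb_s(4096, 1250))   # 640.0 GB/s

# For comparison, assumed GDDR5 configs (bus width x effective data rate)
print(bandwidth_gb_s(384, 7000))    # 336.0 GB/s -- Titan X class
print(bandwidth_gb_s(512, 5000))    # 320.0 GB/s -- 290X class
```

The point of the wide-bus design is visible here: HBM gets its bandwidth from a huge bus at a low clock, rather than a narrow bus at a very high clock, which is also where the power savings come from.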
 
Not sure why people get so fired up about HBM. I think AMD should have waited until 2016 if it's slowing them down by months. To quote a few pages back.

The Titan X with compression is around 440 GB/s. Most Titan X's can OC the memory to 500GB/s. HBM might help the 144Hz guys past 4GB, but with a 4GB max I personally don't see a need for more bandwidth... It wasn't a limiting factor. Now if they launch at 8GB VRAM the higher Hz guys may benefit from a multiGPU setup. If AMD would launch the damn thing we'd find out....

The only issue that would be affecting Fiji in regard to HBM might be availability.
The actual implementation is rather straight forward.
Co-developing HBM means they are rather familiar with it and especially the packaging side.

I don't know why compression is being talked about so much... Fiji will have +700-900GB/s taking compression into account.
 
The only issue that would be affecting Fiji in regard to HBM might be availability.
The actual implementation is rather straight forward.
Co-developing HBM means they are rather familiar with it and especially the packaging side.

I don't know why compression is being talked about so much... Fiji will have +700-900GB/s taking compression into account.

The poster I quoted mentioned that as a point of why it was taking so long. To me I don't see the benefit at such low rumored VRAM capacities. Some more recent rumors are saying 8GB but I thought that wasn't available until around Q3 or Q4....

I'd still love to see a curve ball like hitting 980/Titan X clocks. I am not sure what enabled nVidia to get so high, whether it was architecture or process.
 
Not sure why people get so fired up about HBM. I think AMD should have waited until 2016 if it's slowing them down by months. To quote a few pages back.


I don't think people are fired up about HBM for anything other than future reasons (bus width, bandwidth, and capacity will be irrelevant 2016+). For AMD I think it has more to do with shrinking down the components that produce heat and saving some extra juice.

AMD's biggest problems are heat, stock cooling solutions, and power draw. Nvidia invested millions in cooling alone after the 400/500 Series debacle. AMD needs their own Titan cooler, but they also need to correct TDP and they aren't going to do that with these minor GCN revisions that have proved relatively fruitless compared to Maxwell.
 
The poster I quoted mentioned that as a point of why it was taking so long. To me I don't see the benefit at such low rumored VRAM capacities. Some more recent rumors are saying 8GB but I thought that wasn't available until around Q3 or Q4....

I'd still love to see a curve ball like hitting 980/Titan X clocks. I am not sure what enabled nVidia to get so high, whether it was architecture or process.


It's architecture and process; when designing semiconductors they would have had target goals for possible clock rates.
 
So they are sticking to their crackpot "Fiji is 2 GPU's" theory, eh?
Ridiculous.

And then of course they contradict their own title.

So... There is one Fiji coming in June? And then a dual GPU version of that sometime later?
Based on this article the $700+ theory goes out the window. It was supposed to be a Titan killer at that price.

$550 it is.

This is just Fuad covering up for the article last month that said Fiji was going to be a dual GPU card. Now, when it's released he has articles claiming both.

As far as projected Fiji performance goes, judging from the specs it should be faster than the Titan-X. It should also use more power too, unless AMD has some magic pixie dust to sprinkle on the GCN arch.

There is the possibility that AMD will "under clock" the card for efficiency, I suppose, and lose out in overall performance. It's not what I would do, but who knows?
 
Zarathustra[H];1041557963 said:
And the fans continue to kill AMD by expecting lower prices.

The company is money starved and because of this can't spend what Nvidia does on R&D and product development.

If they are also expected to sell their products at a lower price than the competition, they will never get back in the game.

They need to be able to charge price parity with Nvidia at the same level of performance. If it ties the Titan X, they need to be able to charge $1,000 for it. If it beats it, they need to be able to charge more.

At this point, since their market share is low, maybe they should drop that a little bit to grow market share, but still, these "$700 price tags for a potential product that outperforms Nvidia's $1000 product" are killing AMD.

They can't make something out of nothing.

Yes, top end cards are more expensive than they were 10-15 years ago.

There are many reasons for this:

  • Lack of competition at the top end (and please, no, the 295x doesn't count, as it is a dual GPU card, with all the problems and drawbacks that implies)
  • Increasing difficulty of either shrinking nodes, or trying to squeeze more performance out of existing ones.
  • High demand for limited supply of small node contract manufacturing (yes, even 28nm is considered a small node)
  • Reduced demand for GPU's. Before, R&D costs could be spread out over more low end parts, but today, with almost all CPU's having built-in GPU's, that isn't really the case anymore.
  • and part of it is also inflation....


The Geforce 2 Ultra cost $500 in 2000. Corrected for inflation that is almost $700 today.

And lets not forget the 8800 Ultra which cost $830 in 2007, approximately $950 today corrected for inflation.


AMD wasn't shy about charging more when they had the performance crown either during the Geforce FX disaster, but they failed to take advantage of it as much as Nvidia would when the 8800 series came around:

The 9800XT was $499 in 2003, $650 today corrected for inflation

The X850 Platinum Edition was $549 in 2005, $675 today corrected for inflation.

Maybe if they had charged more, they would have had larger cash reserves, been able to spend more on successive generations' R&D, and had higher performing products to compete with, and wouldn't be in the dire straits they are today?


I'm hoping for a great 390x, but I'm also hoping - for AMD's sake - that they will be able to charge what it is really worth, cause god knows they need the money.

Just a couple of facts for your comparison.

1) AMD didn't buy ATI until 2006. It wasn't AMD charging those prices at all.

2) There was no inflation on high end graphics cards until Kepler. Add inflation to the $500usd release price of the 580 today and you get $552usd. When Titan came out it was $1000. That wasn't even a full fat chip. It was just nVidia seeing an opportunity to gouge us, and they did. Trying to justify pricing by using inflation from a decade ago is so disingenuous. :rolleyes:
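Both sides of this argument are just applying a CPI multiplier to a launch price. A quick sketch of that math; the multipliers below are my own rough to-2015 approximations for illustration, not figures from either post, so treat the outputs as ballpark:

```python
# Rough cumulative US CPI multipliers to 2015 (approximate, illustrative only)
CPI_TO_2015 = {2000: 1.38, 2003: 1.30, 2005: 1.23, 2007: 1.15, 2010: 1.10}

def adjust_for_inflation(price_usd: float, launch_year: int) -> int:
    """Launch price converted to approximate 2015 dollars."""
    return round(price_usd * CPI_TO_2015[launch_year])

print(adjust_for_inflation(500, 2000))  # GeForce 2 Ultra -> 690, "almost $700"
print(adjust_for_inflation(830, 2007))  # 8800 Ultra -> ~954
print(adjust_for_inflation(500, 2010))  # GTX 580 -> ~550
```

This also shows why both posters can be "right": cards from 2000-2007 adjust up substantially, while a 2010 price like the 580's barely moves, so inflation alone can't explain the jump to $1000.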
 
I don't think people are fired up about HBM for anything other than future reasons (bus width, bandwidth, and capacity will be irrelevant 2016+). For AMD I think it has more to do with shrinking down the components that produce heat and saving some extra juice.

AMD's biggest problems are heat, stock cooling solutions, and power draw. Nvidia invested millions in cooling alone after the 400/500 Series debacle. AMD needs their own Titan cooler, but they also need to correct TDP and they aren't going to do that with these minor GCN revisions that have proved relatively fruitless compared to Maxwell.

It's not just bandwidth, it's latency. HBM is necessary for VR. AMD is once again ahead of the curve. AMD is working hard on lowering TDP. They don't need to drop/move on from GCN to do it. They are working on being able to adjust the power usage of the individual transistors in a chip. It's coming out of their APU development. We might see it to some degree with Fiji. If not, then by next year, from what little I've seen about it. I'm not an engineer, so I can't explain it properly. Anyone who knows more about it can explain please.
 
There is the possibility that AMD will "under clock" the card for efficiency, I suppose, and lose out in overall performance. It's not what I would do, but who knows?

I wondered why nVidia didn't clock their cards higher. It could be because they wanted the process margin... But I think it helps perception.

If AMD underclocked and undervolted to match the Titan X instead of surpassing it, the lower TDP would make them look good. Then the massive OCs in reviews help sell cards IMO. With the 970/980 I think people really got off on the large OCs. I certainly did. :) It's like everyone won the lottery!

I think what people take away from Maxwell is "wow these run cool" then when they OC them "wow these are beast" and completely ignore the massive TDPs when OC'd. My Titan X pulls 388W at 1525....

Would the 290x have sold better if it was underclocked and undervolted and didn't sound like a jet turbine? Then when people OC'd them 20% they'd be happy with the OC, accepting the "excessive sound is expected with this OC!" kind of thing?

Look at the Titan X. It's -100 MHz and 1.15V stock vs the 980 at 1.225-1.25V. If you add on the 100Mhz and voltage back in it's pretty damn loud. There's definitely enough OC margin to do it as well... I think it's a perception thing nVidia has nailed. Hot + noisy = bad for ref even if there are custom coolers that fix it.
 
They are working on being able to adjust the power usage of the individual transistors in a chip. It's coming out of their APU development. We might see it to some degree with Fiji. If not, then by next year, from what little I've seen about it. I'm not an engineer, so I can't explain it properly. Anyone who knows more about it can explain please.

Power gating. As you progress down to lower nodes, power gating finer and finer areas of the chip becomes less expensive and depending on the process and power targets, necessary due to leakage.
I think it is likely we see a more fine grained approach to power gating on Fiji, as it sounds like they have done a massive overhaul with Carrizo when comparing it to Kaveri.
 
I wondered why nVidia didn't clock their cards higher. It could be because they wanted the process margin... But I think it helps perception.

If AMD underclocked and undervolted to match the Titan X instead of surpassing it, the lower TDP would make them look good. Then the massive OCs in reviews help sell cards IMO. With the 970/980 I think people really got off on the large OCs. I certainly did. :) It's like everyone won the lottery!

I think what people take away from Maxwell is "wow these run cool" then when they OC them "wow these are beast" and completely ignore the massive TDPs when OC'd. My Titan X pulls 388W at 1525....

Would the 290x have sold better if it was underclocked and undervolted and didn't sound like a jet turbine? Then when people OC'd them 20% they'd be happy with the OC, accepting the "excessive sound is expected with this OC!" kind of thing?

Look at the Titan X. It's -100 MHz and 1.15V stock vs the 980 at 1.225-1.25V. If you add on the 100Mhz and voltage back in it's pretty damn loud. There's definitely enough OC margin to do it as well... I think it's a perception thing nVidia has nailed. Hot + noisy = bad for ref even if there are custom coolers that fix it.

The downside to AMD under clocking Fiji is nVidia still has the 980ti up its sleeve and AMD can't afford to have it beat the 390X.

I would say it's a matter of balance. If they can't get competitive perf/W then they would probably be better to go for outright performance. Assuming they can beat Titan/980ti by enough to make the perf/W acceptable.
 
Just a couple of facts for your comparison.

1) AMD didn't buy ATI until 2006. It wasn't AMD charging those prices at all.

2) There was no inflation on high end graphics cards until Kepler. Add inflation to the $500usd release price of the 580 today and you get $552usd. When Titan came out it was $1000. That wasn't even a full fat chip. It was just nVidia seeing an opportunity to gouge us, and they did. Trying to justify pricing by using inflation from a decade ago is so disingenuous. :rolleyes:

There is no such thing as price gouging, not in video cards, not anywhere, with any product or resource.

A good is worth what people are willing to pay for it, and apparently people are willing to pay $1k+ for a Titan, thus that is what it is worth.

If a company couldn't charge more for a superior product, where is their incentive to make said superior product?

No one owes you a video card at a price you like.

AMD should be charging more for their cards. They need every bit of margin they can get right now. The fact that they don't is a bad thing.
 
Yeah, the bottom line is what matters, so they are going to do what they can to maximize their yields while maintaining a competitive product. The intentional underclocking is not going to happen unless it dramatically increased their viable dies, and no one will know that but AMD.
 
Zarathustra[H];1041559094 said:
There is no such thing as price gouging, not in video cards, not anywhere, with any product or resource.

A good is worth what people are willing to pay for it, and apparently people are willing to pay $1k+ for a Titan, thus that is what it is worth.

If a company couldn't charge more for a superior product, where is their incentive to make said superior product?

No one owes you a video card at a price you like.

AMD should be charging more for their cards. They need every bit of margin they can get right now. The fact that they don't is a bad thing.

Obviously you couldn't dispute anything in my post you quoted except for the opinion whether or not the Titan is gouging. I guess we just disagree on that one point then.
 
AMD is once again ahead of the curve.
Ya know now that I think of it, this is another huge problem for AMD...

Sure they're ahead of, well, not the curve, but a curve, in this case the GPU memory bandwidth curve, just like their CPU division is ahead of the core count curve. The problem is they picked the wrong curves to get ahead of. Sure there will be a time when GPU memory bandwidth is a limiting factor, just like eventually even grandma will need an 8 "real" core CPU.

But not today.

Like many in this thread have recently stated, a 4GB 390X is basically DOA; it needs to have more than that, and due to architecture that probably means 8GB is the next step up. Maybe there is truth to the rumors that AMD pushed it back to give the HBM process time to scale to where they could put 8GB on it. Maybe had AMD not tried to get ahead of a curve (that didn't really benefit them much) and just used GDDR5 on this generation (because HBM won't really be needed until Pascal/R9 4xx series), the 390X could have been released a few weeks from when this thread started as opposed to June.

But if they are waiting for 8GB to become available that poses another troubling possibility, that the June launch will be one of those paper launches and the cards will be in very short supply until the fabs can reliably pump out 8GB HBM in bulk.
 
Ya know now that I think of it, this is another huge problem for AMD...

Sure they're ahead of, well, not the curve, but a curve, in this case the GPU memory bandwidth curve, just like their CPU division is ahead of the core count curve. The problem is they picked the wrong curves to get ahead of. Sure there will be a time when GPU memory bandwidth is a limiting factor, just like eventually even grandma will need an 8 "real" core CPU.

But not today.

Like many in this thread have recently stated, a 4GB 390X is basically DOA; it needs to have more than that, and due to architecture that probably means 8GB is the next step up. Maybe there is truth to the rumors that AMD pushed it back to give the HBM process time to scale to where they could put 8GB on it. Maybe had AMD not tried to get ahead of a curve (that didn't really benefit them much) and just used GDDR5 on this generation (because HBM won't really be needed until Pascal/R9 4xx series), the 390X could have been released a few weeks from when this thread started as opposed to June.

But if they are waiting for 8GB to become available that poses another troubling possibility, that the June launch will be one of those paper launches and the cards will be in very short supply until the fabs can reliably pump out 8GB HBM in bulk.

HBM has been in mass production since Q4 '14.
You always need more bandwidth and lower latency, add into that equation power savings and you should be able to figure out why they are going with it.

Nvidia just caught up to AMD on memory and memory controller technology with Kepler. It took them about 6 years to "catch" up.
They are trailing with HBM by at least a year, if not more.
 
http://www.fudzilla.com/news/graphics/37566-two-amd-fiji-cards-coming-in-june

If this is true that the 390x is "unlikely that it will end up faster than the Geforce Titan," I'm going to pull the trigger on the Titan X. Normally, when there are no previews ahead of an imminent release, somebody is trying to hide something. My confidence in the 390x is at an all time low.

That story is bunk... my sources have never seen a next-gen dual-GPU card, "Fiji VR" or whatever. Only Fiji Pro and Fiji XT, both ASICs w/ 4GB HBM vram (for sure not 8, at least in current form). And the cooling needed for one Fiji is already a lot; I can't imagine the cooling needed for two Fijis, lol (w/ two watercooling radiators, heh). And there are so many issues with the cards that I seriously doubt they will be out in June. Bermuda/395X2 is expected to be the dual-GPU card, probably coming holidays, or next year.
 
Just a couple of facts for your comparison.

1) AMD didn't buy ATI until 2006. It wasn't AMD charging those prices at all.

2) There was no inflation on high end graphics cards until Kepler. Add inflation to the $500usd release price of the 580 today and you get $552usd. When Titan came out it was $1000. That wasn't even a full fat chip. It was just nVidia seeing an opportunity to gouge us, and they did. Trying to justify pricing by using inflation from a decade ago is so disingenuous. :rolleyes:

I don't look at it as price gouging. It's a video card, not some vital resource. Enough people bought Titans for nVidia to roll the dice again. Don't like it? Don't buy one.
 
I don't look at it as price gouging. It's a video card, not some vital resource. Enough people bought Titans for nVidia to roll the dice again. Don't like it? Don't buy one.

Price gouging can also occur when someone takes advantage of the competitive environment to raise prices.

Don't worry about me buying one. I still have the right to voice my opinion, as do you.
 
Price gouging is a free market reaction that self-limits the use of supplies.
 
I wondered why nVidia didn't clock their cards higher. It could be because they wanted the process margin... But I think it helps perception.

If AMD underclocked and undervolted to match the Titan X instead of surpassing it, the lower TDP would make them look good. Then the massive OCs in reviews help sell cards IMO. With the 970/980 I think people really got off on the large OCs. I certainly did. :) It's like everyone won the lottery!

I think what people take away from Maxwell is "wow these run cool" then when they OC them "wow these are beast" and completely ignore the massive TDPs when OC'd. My Titan X pulls 388W at 1525....

Would the 290x have sold better if it was underclocked and undervolted and didn't sound like a jet turbine? Then when people OC'd them 20% they'd be happy with the OC, accepting the "excessive sound is expected with this OC!" kind of thing?

Look at the Titan X. It's -100 MHz and 1.15V stock vs the 980 at 1.225-1.25V. If you add on the 100Mhz and voltage back in it's pretty damn loud. There's definitely enough OC margin to do it as well... I think it's a perception thing nVidia has nailed. Hot + noisy = bad for ref even if there are custom coolers that fix it.

Hmmm, this rings true.. smart move also.. to sell perception..
 
Ya know now that I think of it, this is another huge problem for AMD...

Sure they're ahead of, well, not the curve, but a curve, in this case the GPU memory bandwidth curve, just like their CPU division is ahead of the core count curve. The problem is they picked the wrong curves to get ahead of. Sure there will be a time when GPU memory bandwidth is a limiting factor, just like eventually even grandma will need an 8 "real" core CPU. But not today.

I have the same opinion... The last time they were saying they were forcing the future, changing reality, and bringing a product that was ahead of its curve, it was their "real cores" flop. Right now, they claim they offer HBM so people can use VR.

Show me how many consumer VR sets are on the market and how many VR games are out right now. It's the same case as Bulldozer: only now, in 2015/2016, might DX12 actually use real cores, and by that time Bulldozer is obsolete (it was obsolete at launch too), and even a mainstream Intel i5 will be better in DX12 than Bulldozer.

So, right now we have a card that is VR-ready, and when VR sets and games hit the market in 2016 (like Oculus), we will have Pascal and possibly a 490X to tackle the new tech, while the 390X will be yesterday's tech.

It's the Panther case once again. Yes, the Panther was a far superior medium tank compared to Soviet vehicles, but there were too few of them to stop the less advanced T-34, which was produced in much greater numbers.
 
https://translate.google.se/transla...vartal-avtacks-pa-computex&edit-text=&act=url

We already knew it'll be launched @ computex and now we know it's a paper launch. Which isn't surprising since most people have their summer vacation around that time and will be on the beach drinking pina coladas rather than buying GPU's. But..

http://www.tweaktown.com/news/44684/amd-radeon-r9-390x-rumored-very-short-supply-launch/index.html

Does that mean some will be available in June, making it a semi paper launch, or does that mean short supply in Aug-Sep when it's supposed to be available in stores?
 
They can put whatever price they want to on it. I just think people are foolish to pay it.

Agreed. Some people now consider Titan video cards to be status symbols. I wonder how many people bought Titans simply to be able to say "I have a one thousand dollar video card (or two) in my rig. What are you running?". It's like the Apple crowd has moved on to buying video cards now. The more expensive it is, the better it is for them, as it means fewer other people will be able to afford one.
 
Agreed. Some people now consider Titan video cards to be status symbols. I wonder how many people bought Titans simply to be able to say "I have a one thousand dollar video card (or two) in my rig. What are you running?". It's like the Apple crowd has moved on to buying video cards now. The more expensive it is, the better it is for them, as it means fewer other people will be able to afford one.

You know the performance of two 980s can be equaled or surpassed (in certain games) with one Titan X, right?

A single-GPU solution is always better. The performance is there and the price is right.
 
https://translate.google.se/transla...vartal-avtacks-pa-computex&edit-text=&act=url

We already knew it'll be launched at Computex, and now we know it's a paper launch. Which isn't surprising, since most people have their summer vacation around that time and will be on the beach drinking pina coladas rather than buying GPUs. But..

http://www.tweaktown.com/news/44684/amd-radeon-r9-390x-rumored-very-short-supply-launch/index.html

Does that mean some will be available in June, making it a semi-paper launch, or does it mean short supply in Aug-Sep when it's supposed to be available in stores?

Good to see that some sites have taken the BS rumors to heart and decided to write up their own articles with them.
Makes it far easier to weed out who actually has sources and who doesn't.
 
Agreed. Some people now consider Titan video cards to be status symbols. I wonder how many people bought Titans simply to be able to say "I have a one thousand dollar video card (or two) in my rig. What are you running?". It's like the Apple crowd has moved on to buying video cards now. The more expensive it is, the better it is for them, as it means fewer other people will be able to afford one.

I think we see this more than anything.
 
Agreed. Some people now consider Titan video cards to be status symbols. I wonder how many people bought Titans simply to be able to say "I have a one thousand dollar video card (or two) in my rig. What are you running?". It's like the Apple crowd has moved on to buying video cards now. The more expensive it is, the better it is for them, as it means fewer other people will be able to afford one.

Find me another single GPU on this planet that brings the performance and VRAM the Titan X does and we'll talk. I know for certain that isn't AMD with their outdated 290/290x and joke 295x2 that relies on Misfire drivers.
 
Obviously you couldn't dispute anything in my post you quoted except for the opinion whether or not the Titan is gouging. I guess we just disagree on that one point then.

I responded to the aspect of your post I found the most necessary to dispute :p

But if you really want to hear my thoughts on the rest of what you wrote, here goes :p

Just a couple of facts for your comparison.

1) AMD didn't buy ATI until 2006. It wasn't AMD charging those prices at all.

You are right about this fact, and I am well aware of it, but it isn't horribly relevant to the discussion.

Regardless of whether it was an independent ATi at that point or an AMD owned organization, it illustrates that this is how healthy competitors should work. Charging what the market will bear. Only a fool would base anything on a "cost plus" type model of pricing, as that type of pricing model just doesn't reflect reality.

Here is what any healthy business should do to set pricing (overly simplified but you get the point):

Try to predict:

1.) How many units will I sell at price X

2.) How much will it cost me to produce this many units (per unit)

3.) (Price X) * (Units able to sell at price X) - (cost to produce this many units) = Profit


Now, repeat this exercise for many different values of X. Pick the X that is the most profitable and construct your manufacturing, supply chain and sales channels around it.
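The exercise above can be sketched in a few lines of Python. The demand curve and cost figures here are made-up, purely illustrative numbers (not anything from AMD's or Nvidia's books); the point is just the mechanics of stepping through candidate prices and picking the one that maximizes profit:

```python
# Hypothetical demand curve: fewer buyers at higher prices.
def units_sold(price):
    return max(0, 100_000 - 120 * (price - 300))

# Hypothetical per-unit cost that falls with volume (economies of scale).
def unit_cost(units):
    return 250 + 5_000_000 / max(units, 1)

# Step 3 from the post: Profit = price * units - (per-unit cost) * units
def profit(price):
    units = units_sold(price)
    return price * units - unit_cost(units) * units

# Repeat the exercise for many values of X; pick the most profitable one.
candidates = range(300, 1001, 50)
best = max(candidates, key=profit)
print(best, round(profit(best)))
```

With these made-up numbers the sweet spot lands in the middle of the range, not at the highest price the market would technically bear, which is exactly the trade-off the steps above describe.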

Your argument that it was ATi, not AMD, is irrelevant. It's not as if AMD is some sort of white-knight company existing only to put GPU performance in the hands of customers. AMD exists for the same reason EVERY corporation exists: to make a profit for its shareholders. (Even though AMD has been doing a terrible job at this for years.) And they - just like any other company - will set whatever price for their goods maximizes shareholder profit, even if it prices some fanboy out of the market. They do this for two reasons. Firstly, because fiduciary responsibility to shareholders is the legal duty of any corporation. And secondly, because if they don't, shareholders (who invested their money in the business expecting profit) will get pissed and fire their asses.

In a way, there are no GPU companies or CPU companies - or even car companies or fashion companies - that truly care about their product. They are all money-making companies that only care about the products they make insofar as those products make them money. They make rational, emotionless decisions about which products to make, which product lines to continue or cut, and how to price everything, strictly off what the balance sheets say and for no other reason. AMD is one of them.

There are no exceptions to this rule. There are no "good" companies in it for their appreciation of the greater cause, agreeing to take a cut to what they are making in order to make fanboys happy. This is not how the world works.

Now, back to why AMD is not charging as much as Nvidia for the same performance.

It is probably a combination of things: worse multi-GPU support compared to Nvidia, the impression (true or not) that AMD has unstable drivers, and also a cheap fanbase that keeps posting unreasonable stuff like "performance better than Nvidia's $1,000 flagship at under $700 or no sale".

A 30% price difference in any one generation for the same thing is bad enough. That can mean the difference between making and losing money, but do it year over year and generation over generation and each generation AMD has less to spend on R&D compared to its competitor, so the next generation they may have to take even more of a price cut, resulting in even less R&D spending for the next generation, etc. etc. A lovely death spiral.



2) There was no inflation on high end graphics card until Kepler. Add inflation to the $500usd release price of the 580 today and you get $552usd. When Titan came out it was $1000. That wasn't even a full fat chip. It was just nVidia seeing an opportunity to gouge us, and they did. Trying to justify pricing by using inflation from a decade ago is so disingenuous. :rolleyes:

That's not how inflation works. Inflation is on the currency basis, not on the product basis.
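The adjustment the other poster is attempting is just a ratio of price indices. A minimal sketch in Python, using approximate US annual-average CPI-U values from memory (ballpark figures, not official BLS data):

```python
# Approximate annual-average US CPI-U index values (ballpark, illustrative).
CPI = {2010: 218.1, 2015: 237.0}

def adjust_for_inflation(price, from_year, to_year):
    """Scale a nominal price by the ratio of consumer price indices."""
    return price * CPI[to_year] / CPI[from_year]

# GTX 580's $499 launch price (Nov 2010) expressed in 2015 dollars:
print(round(adjust_for_inflation(499, 2010, 2015)))  # roughly $542 with these index values
```

With these rough index numbers the 580's launch price comes out in the low-to-mid $500s in 2015 dollars, which is in the same ballpark as the figure quoted above - and nowhere near $1,000.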

Also, a few more qualifying statements here:

When the 580 was released it was no Titan. It was their halo product, but it was based on the top-level consumer part with all cores enabled (as opposed to the 480). AMD was also more competitive at the time, so Nvidia likely didn't feel as confident that they could sell as many of them if they raised the price like they can today.

The Titan X (like the Titan and Titan Black before it) is a step above what the 580 was, based on Nvidia's professional series of GPUs repurposed for gaming use (think the GPU equivalent of Socket 2011 vs. Socket 1150). Nvidia likely doesn't even care how many of them they sell. From a profit perspective they'd make way more money using those same chips as Tesla chips and not having a Titan at all. They are just throwing this bone to the gaming community in order to have a halo-product marketing win, pricing it high to effectively make it a low-volume product, and keeping the majority of the chips for Tesla products, which they usually have a shortage of as it is.
 
HBM has been in mass production since Q4 '14.
You always need more bandwidth and lower latency; add the power savings into that equation and you should be able to figure out why they are going with it.

HBM in general, sure, but not the kind of HBM required to put 8GB of RAM on the 390X. And I never doubted whether it would be necessary or why they might want it; I'm just doubting that pushing their card back 6 months, possibly more, to wait on the tech for 8GB of HBM was the right call when GDDR5 would have done them just fine this round.

Nvidia only caught up to AMD on memory and memory controller technology with Kepler. It took them about 6 years to "catch up".
They are trailing with HBM by at least a year, if not more.

And for how many of those 6 years was Nvidia behind overall? It's neat that AMD was so far ahead on memory tech for those 6 years, but it's not like that one piece of the puzzle being so far advanced bought them any sort of overwhelming advantage that whole time. One might say that Nvidia advanced their memory tech only as fast as was necessary to keep up with the demands of the rest of their product.


The fact is, the first GM200 thread I can find on these forums started 3 days after this one, the Titan X didn't even get its name leaked until mid-February, and there are consumers out there who have had a real Titan X in their systems for over a month now - and we know exactly how it performs using GDDR5. So if the 390X does end up with 8GB, and that's the reason it's going to be 6+ months late relative to the title of this thread, it's going to be hard to say that the memory bandwidth and latency curve was the correct one for them to be out in front of.
 