Another 6 months before VEGA?

my point was that AMD keeps competing with Nvidia's mid-range cards and can come close enough...if AMD can actually go for it and produce a really high-end Vega card, then with their improved async compute architecture and DX12/Vulkan performance they can compete with Nvidia on the high end...especially when taking into account AMD's aggressive pricing

Doom is the only new game that has Vulkan support so there's nothing else to compare it with

http://www.eurogamer.net/articles/d...n-patch-shows-game-changing-performance-gains

everything you can find on the web right now is going to be useless since the introduction of Nvidia's latest driver (378.78), which includes DX12 performance optimizations. According to their claims it's up to a 33% performance improvement in Rise of the Tomb Raider, another 23% in Hitman, 9% in AoTS, and another 9% in Gears of War 4.. that by itself can pretty much mean the actual performance difference versus AMD is mostly nullified, to put it mildly..
 
everything you can find on the web right now is going to be useless since the introduction of Nvidia's latest driver (378.78), which includes DX12 performance optimizations. According to their claims it's up to a 33% performance improvement in Rise of the Tomb Raider, another 23% in Hitman, 9% in AoTS, and another 9% in Gears of War 4.. that by itself can pretty much mean the actual performance difference versus AMD is mostly nullified, to put it mildly..

those driver claims have already been debunked...

https://www.guru3d.com/news_story/q..._66_versus_378_78_directx_12_performance.html
 
You keep going back to APUs and again I have to repeat, the OP IS NOT ABOUT FRIGGING APUS.
If I call them CPUs and dGPUs that are connected, will that make you happy? Same damn thing.

Drink the Kool-Aid Anarchist, really drink the Kool-Aid, sigh.
Hardly my fault you don't like the engineering of it. Plenty of documents showing it, it makes sense from an engineering standpoint, yet you call it Kool-Aid because you don't understand it?

One last point: the reason all the articles from decent tech journalists say 4S is not possible is that it needs to be a mesh, and the PCIe fabric solution is a limiting design in this regard.
They have shown it is impossible with the current design because the number of lanes required for a 4S coherent mesh fabric cannot be provided within the per-CPU lane limit of Naples when the CPUs need to be fully meshed, let alone also meshed with direct-attached dGPUs.
And that is ignoring all the articles showing further complexities to doing this.
Well then they got it wrong. Surely wouldn't be a first for a journalist. If they drop to a single node per socket, as opposed to 2/4, there's more than enough room for lanes in a 4S design. The ONLY reason they likely avoid it is that packing all the nodes into fewer sockets makes more sense. Why use 4 when you can use 2?
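For the record, the disputed lane arithmetic is easy to lay out. A back-of-the-envelope sketch, assuming the 64 lanes per coherent link implied by AMD's 2S Naples slides; the 32-lane case is pure speculation on my part, and neither width is confirmed for 4S:

```python
# Hypothetical lane budget for a fully meshed multi-socket system.
LANES_PER_CPU = 128  # total lanes per Naples package, per AMD's slides

def mesh_lane_cost(sockets: int, lanes_per_link: int) -> int:
    # full mesh: every socket links directly to every other socket
    return (sockets - 1) * lanes_per_link

for lanes_per_link in (64, 32):
    for sockets in (2, 4):
        cost = mesh_lane_cost(sockets, lanes_per_link)
        remaining = LANES_PER_CPU - cost
        if remaining >= 0:
            print(f"{sockets}S @ {lanes_per_link} lanes/link: {cost} lanes "
                  f"per CPU, {remaining} left for I/O and dGPUs")
        else:
            print(f"{sockets}S @ {lanes_per_link} lanes/link: {cost} lanes "
                  f"per CPU, {-remaining} over the {LANES_PER_CPU}-lane budget")
```

At 64 lanes per link a 4S full mesh blows the budget, which is the journalists' point; halve the link width and it fits, which is the counter-argument. The whole dispute turns on that one unconfirmed number.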

NVLink can FULLY mesh 6 dGPUs with 2S, and also dynamically split load separately between the 2S environment over different pathways, all at substantially higher bandwidth and lower latency than PCIe, while also having scale-out capabilities.
And you think the PCIe Infinity Fabric is going to do that and more because of 1 line in the Naples presentation saying direct-attached GPU.
First, as there are already benchmarks indicating this, the links look like they exceed PCIe speeds (22GB/s benchmarked). Using PCIe bandwidth and latency for a comparison isn't valid. Second, a consumer part likely isn't designed for that level of scaling. No reason to make the parts super expensive for a capability that will seldom be used. They should function just fine in a more limited capacity. Naples appears like it will be 2S and 4 GPUs, maybe 8 GPUs if the links are cut to 8x. AMD have stated they can route around congested links, so taking them at their word, "dynamic load splitting" should be possible. The bigger point here is that a full mesh likely isn't required. Clusters and nodes probably make more sense for a lot of workloads. Embedded parts that won't have the socket restrictions can probably implement whatever topology they find practical. Transition to a Kautz graph or a different network and you still have a mesh, but with far fewer links required. I'd imagine they can reuse some lightly utilized links as well while still maintaining the appearance of a mesh.
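The link-count point is easy to illustrate. A sketch comparing a full mesh to a fixed-degree network; degree 3 is an arbitrary illustration of mine, not anything AMD has published, and a Kautz graph K(d, k) likewise keeps per-node degree fixed at d as the node count grows:

```python
# Total links in a full mesh versus a fixed-degree network.

def full_mesh_links(nodes: int) -> int:
    # every pair of nodes gets a dedicated link
    return nodes * (nodes - 1) // 2

def fixed_degree_links(nodes: int, degree: int = 3) -> int:
    # each node keeps `degree` links; each link is shared by two nodes
    return nodes * degree // 2

for nodes in (4, 8, 16, 32):
    print(f"{nodes:>2} nodes: full mesh {full_mesh_links(nodes):>3} links, "
          f"degree-3 network {fixed_degree_links(nodes):>2} links")
```

Full-mesh links grow quadratically while fixed-degree links grow linearly, which is why sparser topologies get attractive fast as node counts climb.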
 
If I call them CPUs and dGPUs that are connected will that make you happy? Same damn thing.


The APU itself will probably take up PCIe lanes, so no, I don't think that will help in this case.......


Well then they got it wrong. Surely wouldn't be a first for a journalist. If they drop to a single node per socket, as opposed to 2/4, there's more than enough room for lanes in a 4S design. The ONLY reason they likely avoid it is that packing all the nodes into fewer sockets makes more sense. Why use 4 when you can use 2?


nah, to have more CPUs you need more lanes, no way around that. Just look at a 4-socket Intel motherboard; there is a reason why they have so many PCIe lanes, and it's not for more peripherals, the CPUs eat them up because they need heavy data communication between them.

The rest of what you guys are talking about looks to me like going off on tangents which won't have anything to do with desktops. So yeah, there is no way AMD's Infinity Fabric for Ryzen is going to match Naples' overall feature set. But even if it was limited, if it could show off its prowess with Vega they would have demonstrated it, so by them not showing it I am assuming it won't matter much. And really we all should assume the same, cause why wouldn't they show it if it was so good for them?

I can't think of a single reason at this point. Even if it's just a demo that isn't fully functional, cause no one will have Vega yet, they would have had time to fix the bugs.
 
If I call them CPUs and dGPUs that are connected, will that make you happy? Same damn thing.


Hardly my fault you don't like the engineering of it. Plenty of documents showing it, it makes sense from an engineering standpoint, yet you call it Kool-Aid because you don't understand it?


Well then they got it wrong. Surely wouldn't be a first for a journalist. If they drop to a single node per socket, as opposed to 2/4, there's more than enough room for lanes in a 4S design. The ONLY reason they likely avoid it is that packing all the nodes into fewer sockets makes more sense. Why use 4 when you can use 2?


First, as there are already benchmarks indicating this, the links look like they exceed PCIe speeds (22GB/s benchmarked). Using PCIe bandwidth and latency for a comparison isn't valid. Second, a consumer part likely isn't designed for that level of scaling. No reason to make the parts super expensive for a capability that will seldom be used. They should function just fine in a more limited capacity. Naples appears like it will be 2S and 4 GPUs, maybe 8 GPUs if the links are cut to 8x. AMD have stated they can route around congested links, so taking them at their word, "dynamic load splitting" should be possible. The bigger point here is that a full mesh likely isn't required. Clusters and nodes probably make more sense for a lot of workloads. Embedded parts that won't have the socket restrictions can probably implement whatever topology they find practical. Transition to a Kautz graph or a different network and you still have a mesh, but with far fewer links required. I'd imagine they can reuse some lightly utilized links as well while still maintaining the appearance of a mesh.

And yet this is only in the Naples slides, not in any Ryzen presentations, and Naples is not aimed at retail consumers.
Please show me the documentation of Infinity Fabric direct-attaching GPUs as you infer with line 2.
The only documented solutions/techs are the ones I mentioned earlier pertaining to coherent pathways beyond 2S use as IO pathways.
And so according to you respected HPC tech journalists got it wrong because they are journalists.....
I'm not talking about the usual journalists but those that specialise in the HPC/server/deep learning segment.
BTW there are no benchmarks showing the Naples Infinity Fabric even in its use as a CPU-CPU connection with its dedicated pathway, which as I keep telling you is much simpler than a true coherent 'interconnect' pathway/mesh.
You mistake my point about NVLink and PCIe; I am stating just how advanced it is relative to PCIe in both bandwidth and latency, while Naples still uses PCIe 3 lanes but with 'HyperTransport' over them for lower latency on the dual-server connection; each CPU must reserve 64 lanes.
As an example, CAPI started from a similar position, required to transmit over PCIe, but has evolved beyond that now with OpenCAPI (no products expected until 2018 at the earliest).
IBM has 300GB/s aggregate it can use with NVLink2 and BlueLink, while NVLink individually will provide a minimum of 40GB/s (it is going to increase a bit with NVLink2), which importantly increases with each mesh connection-pathway (it will handle at least 6 accelerators with 2S CPUs and mesh connections, for a very large amount of bandwidth and efficient coherent GPU-GPU-CPU-CPU communication). PCIe 4 as an example with 16 lanes will provide 31.5GB/s bidirectional, but this is not in Naples nor Xeon Skylake.
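As a sanity check on the PCIe side of those figures, here's a sketch from the standard signalling rates and 128b/130b encoding; the NVLink numbers above are the quoted per-link figures, not something I can derive here:

```python
# Theoretical PCIe throughput from lane count and generation.
PCIE_GTS = {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0}  # GT/s per lane
ENCODING = 128 / 130                             # 128b/130b line coding

def pcie_gb_per_s(gen: str, lanes: int) -> float:
    # one direction; PCIe links are full duplex, so double for bidirectional
    return PCIE_GTS[gen] * ENCODING * lanes / 8

for gen in PCIE_GTS:
    one_way = pcie_gb_per_s(gen, 16)
    print(f"{gen} x16: {one_way:.1f} GB/s each way, "
          f"{2 * one_way:.1f} GB/s bidirectional")
```

That puts PCIe 3.0 x16 at about 15.8 GB/s each way (31.5 GB/s bidirectional) and PCIe 4.0 x16 at about 31.5 GB/s each way, which is worth keeping straight when per-direction and bidirectional numbers get mixed in these comparisons.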

And so which benchmark has been released by AMD expressly showing the performance of the Infinity Fabric (with its evolved HyperTransport)?
They mention memory bandwidth/capacity/IO density in the Naples presentation.

Anyway, this is probably one of the better documents out there regarding such pathway interconnection and what is involved at all layers of communication, physical through to application, even if CAPI is now out of date/superseded. CAPI interface with IBM Power8: http://www.nallatech.com/wp-content/uploads/Ent2014-CAPI-on-Power8.pdf

And so this brings us back yet again to the OP and the Ars article, where the suggestion is that it is similar to NVLink with dGPUs (it is not), and the OP's question of whether we will see this in a consumer product (it is not even clear how it is going to work for HPC, but we will not see it in consumer retail product lines because it is complex to do and will not be cheap).
Cheers
 
The APU itself will probably take up PCIe lanes, so no, I don't think that will help in this case.......
Adding a second CPU with Naples doesn't take up PCIe lanes despite requiring 64 for the additional socket. Vega has 32 lanes, so even with 16 direct lanes, 16 more are added. So the impact is neutral.

IBM has 300GB/s it can use with NVLink2 and BlueLink as an example, while NVLink will provide a minimum of 40GB/s (it is going to increase a bit with NVLink2) that increases for each mesh connection-pathway (it will handle at least 6 accelerators with 2S CPUs and connections). PCIe 4 as an example with 16 lanes will provide 31.5GB/s bidirectional, but this is not in Naples nor Xeon Skylake.
These are fundamentally different designs though. The node counts also differ a bit. Each CPU socket is 4 nodes, with GPU nodes likely paired with CPU nodes. They likely won't mesh so much as be directly attached to a specific CPU node. A 2S Naples I'd imagine is more like 8 APU nodes, even with dGPUs thrown in. Each 4-core CPU would have a full dGPU associated with it.

And so which benchmark has been released by AMD expressly showing the performance of the Infinity Fabric (with its evolved HyperTransport)?
Several of the Ryzen reviews showed the link/cache bandwidth between CCXs. That should be your Infinity Fabric right there, with just a single link. Likely not even the full speed, as it was set by memory frequency.
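A rough sketch of why that measured figure would track memory frequency, assuming the fabric clocks at MEMCLK (half the DDR transfer rate) and a 32-byte-per-cycle link width; both assumptions are mine rather than published specs, and real measured copy bandwidth will land below these peaks:

```python
# Peak CCX-link bandwidth if the fabric runs at the memory clock.
LINK_BYTES_PER_CYCLE = 32  # assumed link width, not an AMD-published figure

def fabric_peak_gb_s(ddr_rate_mts: int) -> float:
    fabric_mhz = ddr_rate_mts / 2  # fabric clock = MEMCLK, half the DDR rate
    return fabric_mhz * 1e6 * LINK_BYTES_PER_CYCLE / 1e9

for ddr in (2133, 2666, 3200):
    print(f"DDR4-{ddr}: ~{fabric_peak_gb_s(ddr):.1f} GB/s peak over the CCX link")
```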

Look through AMD's old HSA stuff and just update the terminology if you need some documents.
 
Adding a second CPU with Naples doesn't take up PCIe lanes despite requiring 64 for the additional socket. Vega has 32 lanes, so even with 16 direct lanes, 16 more are added. So the impact is neutral.
Doesn't take up PCI-E lanes? What? Also, source for Vega having 32 lanes, that statement certainly needs it.
Several of the Ryzen reviews showed the link/cache bandwidth between CCXs. That should be your Infinity Fabric right there, with just a single link. Likely not even the full speed, as it was set by memory frequency.
Physical level implementation of fabric between CCXs and fabric between CPUs is way different to compare them like that, so please, quit that.
 
Adding a second CPU with Naples doesn't take up PCIe lanes despite requiring 64 for the additional socket. Vega has 32 lanes, so even with 16 direct lanes, 16 more are added. So the impact is neutral.


These are fundamentally different designs though. The node counts also differ a bit. Each CPU socket is 4 nodes, with GPU nodes likely paired with CPU nodes. They likely won't mesh so much as be directly attached to a specific CPU node. A 2S Naples I'd imagine is more like 8 APU nodes, even with dGPUs thrown in. Each 4-core CPU would have a full dGPU associated with it.


Several of the Ryzen reviews showed the link/cache bandwidth between CCXs. That should be your Infinity Fabric right there, with just a single link. Likely not even the full speed, as it was set by memory frequency.

Look through AMD's old HSA stuff and just update the terminology if you need some documents.
Naples has to reserve PCIe lanes in a 2S setup; it is neutral because you're effectively doubling PCIe lanes with half taken up for the 2S interconnect, as shown in their slides.
If Vega has 32 PCIe 3 lanes (I really cannot think they would do this on consumer Vega) that is substantially less than NVLink2, which has over 6 'interconnects' each with 40GB/s per dGPU along with an efficient way to scale; how is Vega+Naples going to scale in a 2S+2dGPU setup (a very basic setup for deep learning)? Without a coherent mesh you add latency and decrease the transmission/communication efficiency.
In other words it is going to be simpler than NVLink.
Let's see who is right when it launches, and how simple/complex the solution AMD provides for their direct-attached dGPU compared to NVLink, in terms of both bandwidth/communication-mesh efficiency and scalability.
Anyway, the reason I would say they only talk about direct attach in the Naples presentations and not Ryzen is the 128 PCIe lanes (8x16 gen 3 in their presentation), but these have to be shared by all devices, a further limitation that NVLink does not suffer.

The positive is it is better than Intel, which refuses to accommodate either AMD or Nvidia, and Intel refuses to even be a member of OpenCAPI or CCIX, but that is a separate topic from the OP and context.
I have seen my fill of AMD slides and papers at B3D now, thanks.
 
my point was that AMD keeps competing with Nvidia's mid-range cards and can come close enough...if AMD can actually go for it and produce a really high-end Vega card, then with their improved async compute architecture and DX12/Vulkan performance they can compete with Nvidia on the high end...especially when taking into account AMD's aggressive pricing

Doom is the only new game that has Vulkan support so there's nothing else to compare it with

http://www.eurogamer.net/articles/d...n-patch-shows-game-changing-performance-gains

It's funny you mention Doom and Vulkan. This is at 4K, usually a winning resolution for AMD (read: Fury).
http://techreport.com/review/31562/nvidia-geforce-gtx-1080-ti-graphics-card-reviewed/5
[attached: Doom Vulkan 4K benchmark chart]


The DX12/Vulkan myth is busted for good.
 

How is this debunked exactly?

NVIDIA: "we have registered X% performance improvement from the game's release until now"

G3D: "let's compare performance of last week's driver with this week's because.... Reasons"

You "NV's claim of performance improvements compared to game release ha seen debunked by testing last week's driver, except in hitman, where there's a 11% gain I don't mention because it doesn't fit with my narrative that the performance improvements don't exist "

I talked about a 9% improvement in AotS back in August when Pascal launched, I even posted my numbers here lol.

Per Nvidia's statement the performance difference is between the 1080 release driver 368.81 and 378.78. Anandtech's Hitman delta was +24% on a 980 Ti and +26% on a 1080 when comparing against the 1080 release driver.

Quoting Kyle's words of wisdom... Reading is essential
(1) Figure averages the percentage increase of benchmark numbers in the following: GeForce GTX 1080 at 3840x2160 with launch driver 368.81 vs 378.74 on an Intel Core i7 5930K, 16GB DDR4 using Win10 x64. Ashes of the Singularity, Crazy Preset (46.5, 50.9 or 9%), Tom Clancy's The Division 1.6, Max Settings + 1x SMAA Ultra (31.5, 32.7 or 4%) Hitman, High Settings + High SSAO (50.6, 62.1 or 23%), Rise of the Tomb Raider, Very High + 2x SSAA (20.5, 27.2 or 33%), and Gears of War 4, Ultra Preset (41.2, 45.2 or 10%).
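Anyone who wants to check the arithmetic can do it straight from the fps pairs in that footnote; a quick sketch:

```python
# Recomputing the percentage gains from the fps pairs Nvidia's footnote lists
# (GTX 1080, 4K, driver 368.81 vs 378.74).
claims = {
    "Ashes of the Singularity": (46.5, 50.9),
    "The Division 1.6":         (31.5, 32.7),
    "Hitman":                   (50.6, 62.1),
    "Rise of the Tomb Raider":  (20.5, 27.2),
    "Gears of War 4":           (41.2, 45.2),
}
for game, (before, after) in claims.items():
    gain = (after / before - 1) * 100
    print(f"{game:<26} {before:>5.1f} -> {after:>5.1f} fps  (+{gain:.0f}%)")
```

The computed gains round to 9%, 4%, 23%, 33%, and 10%, matching the footnote; the claim is specifically launch driver versus current driver, not week over week.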

So polonyc2, it seems your claims have been debunked
 
Also critical to that are the settings for those games and potentially resolution; none of these seem to have been matched by Hilbert, nor given deeper analysis, which is quite disappointing from him as he is usually more thorough.
Case in point: look how DX12/async compute performance varies in Gears of War 4 for AMD with the Polaris 480 depending upon resolution, settings, and also scene.
The HardOCP benchmark is interesting to read for Gears of War 4 and DX12/async compute, as gains for the AMD 480 can be negligible or around 7% depending upon settings and section (it will also be influenced by resolution).
Cheers
 
Nice. And that's the 8GB version. So double the scores for the 16GB version, less a reduction in clock speed?


NO 16GB for consumers man

AMD already stated this or alluded to it; that is why they have been banging on about 8GB being good enough since a quarter ago.

What you are seeing here is a downclocked Vega 10, probably the one to go against the GTX 1070 or in between the 1070 and 1080.
 
NO 16GB for consumers man

AMD already stated this or alluded to it; that is why they have been banging on about 8GB being good enough since a quarter ago.

What you are seeing here is a downclocked Vega 10, probably the one to go against the GTX 1070 or in between the 1070 and 1080.

Nice. And that's the 8GB version. So double the scores for the 16GB version, less a reduction in clock speed?
 
NO 16GB for consumers man

Too expensive, they can't do it unless ya want a card that performs like this at Titan X prices lol. The extra memory would be useless in performance terms.
 
NO 16GB for consumers man

Too expensive, they can't do it unless ya want a card that performs like this at Titan X prices lol. The extra memory would be useless in performance terms.

Nice. And that's the 8GB version. So double the scores for the 16GB version, less a reduction in clock speed?

One of these times you might get around to addressing the actual question. lol ;)
 
Also worth noting Vega was shown only as a 2-stack solution, meaning 8GB max for now, as SK Hynix only has the 4GB density advertised for HBM2.
The MI25 may be 4-stack, or again for now stuck at what has been publicly shown; guess we will have to wait and see.
You will not see 8-die HBM2 stacks (8-Hi) anytime soon in accelerators/dGPUs as they are a nightmare for the cooling solution, and to date the highest offering is 4-Hi 4GB stacks from the memory manufacturers.
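For reference, the capacity arithmetic behind all the stack counts is simple; a sketch assuming today's 8Gb (1GB) HBM2 dies:

```python
# HBM2 card capacity = stacks x dies per stack x die density.
GB_PER_DIE = 1  # 8Gb HBM2 dies, the density shipping today

def capacity_gb(stacks: int, dies_per_stack: int) -> int:
    return stacks * dies_per_stack * GB_PER_DIE

print(capacity_gb(2, 4), "GB - Vega as shown: 2 stacks, 4-Hi")
print(capacity_gb(2, 8), "GB - would need 8-Hi stacks that nobody ships yet")
print(capacity_gb(4, 4), "GB - the GP100 route: 4 stacks, 4-Hi")
```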
Cheers
 
Nice. And that's the 8GB version. So double the scores for the 16GB version, less a reduction in clock speed?

One of these times you might get around to addressing the actual question. lol ;)


You aren't making any sense. English grammar isn't your forte, I take it?

NO 16GB for consumer cards. Memory amount has nothing to do with performance, and memory has nothing to do with GPU clock speeds, so why are you even trying to put GPU clock speed/performance and memory amounts together? What, you think it's going to end up like Fury X did, where double the bandwidth was supposed to equal double the performance, but it failed to keep up with the 980 Ti without an AIO? Is that what you are trying to say? What is your parallel here?
 
Also worth noting Vega was shown only as a 2-stack solution, meaning 8GB max for now, as SK Hynix only has the 4GB density advertised for HBM2.
The MI25 may be 4-stack, or again for now stuck at what has been publicly shown; guess we will have to wait and see.
You will not see 8-die HBM2 stacks (8-Hi) anytime soon in accelerators/dGPUs as they are a nightmare for the cooling solution, and to date the highest offering is 4-Hi 4GB stacks from the memory manufacturers.
Cheers


Well they can't even go 4-stack, there isn't enough space on the interposer and shim for it; this was shown to us by AMD. So if they want to go to 16GB, they will have to do it with 8GB stacks, and those are just not available, and even if they were, it's not for any type of mass-produced product. This is why nV went the 4-stack route with a 4096-bit bus, but their die is smaller, so they did have the space.

Just for you peppercorn.

[attached: AMD-Vega-GPU.jpg, photo of the Vega package]


you tell me where they have space to put 2 more stacks of HBM2 on there, and how they can pull 8GB stacks mysteriously out of their ass when neither Samsung nor Hynix have them yet?

AMD also has Unicorns that make their chips right? Maybe Vega has a TARDIS incorporated in it.
 
You aren't making any sense. English grammar isn't your forte, I take it?

NO 16GB for consumer cards. Memory amount has nothing to do with performance, and memory has nothing to do with GPU clock speeds, so why are you even trying to put GPU clock speed/performance and memory amounts together? What, you think it's going to end up like Fury X did, where double the bandwidth was supposed to equal double the performance, but it failed to keep up with the 980 Ti without an AIO? Is that what you are trying to say? What is your parallel here?

You mean like this shining example of English grammar?

So what would your recommendation be if we these leaks as come to pass?

:D

Nice pic of that Vega beauty. :) Looks like there are 2 8GB stacks; as to why NV didn't go that route, they probably don't have the engineering know-how. They are quite late to the HBM game after all.
 
You mean like this shining example of English grammar?



:D

Nice pic of that Vega beauty. :) Looks like there are 2 8GB stacks; as to why NV didn't go that route, they probably don't have the engineering know-how. They are quite late to the HBM game after all.
I think you must be trying to bait him :)
As I mentioned, unfortunately there are no 8GB stacks and there will not be any for quite a while; the only Q1 2017 offering from SK Hynix is the 4GB density stack.
It will not be easy to do, as you need to double the density of each die in the stack to attain the 8GB version, so not anytime soon for either, unless Samsung gets there (they currently have the longest experience manufacturing HBM2), and Nvidia has a large order contract with them and would probably be in there as a priority client.
Cheers
 
I think you must be trying to bait him :)
As I mentioned, unfortunately there are no 8GB stacks and there will not be any for quite a while; the only Q1 2017 offering from SK Hynix is the 4GB density stack.
It will not be easy to do, as you need to double the density of each die in the stack to attain the 8GB version, so not anytime soon for either, unless Samsung gets there (they currently have the longest experience manufacturing HBM2), and Nvidia has a large order contract with them and would probably be in there as a priority client.
Cheers

Nvidia doesn't have the engineering know-how to integrate nonexistent 8Hi stacks of HBM in their designs, bro.

NVidia gotta git gud
 
I think you must be trying to bait him :)
As I mentioned, unfortunately there are no 8GB stacks and there will not be any for quite a while; the only Q1 2017 offering from SK Hynix is the 4GB density stack.
It will not be easy to do, as you need to double the density of each die in the stack to attain the 8GB version, so not anytime soon for either, unless Samsung gets there (they currently have the longest experience manufacturing HBM2), and Nvidia has a large order contract with them and would probably be in there as a priority client.
Cheers

Haha well just trying to figure out what he thinks of the Vega 10x2 on the slide, maybe you can answer. Do you think that will be full Vega with lower clocks?
 
Haha well just trying to figure out what he thinks of the Vega 10x2 on the slide, maybe you can answer. Do you think that will be full Vega with lower clocks?
It's obvious that it is a dual GPU card. So, you are definitely baiting.
 
You mean like this shining example of English grammar?



:D

Nice pic of that Vega beauty. :) Looks like there are 2 8GB stacks; as to why NV didn't go that route, they probably don't have the engineering know-how. They are quite late to the HBM game after all.


Baiting me isn't going to work man, cause I will keep saying the same things I always have. Unless you can add to the conversation with well-thought-out posts, baiting won't work lol. That is why I stated I will correct your false thoughts, cause that is all you do: you think with what you feel things should be and not what reality has shown us in the past.

Please, HBM means jack shit in the consumer market right now, just added expense to the IHVs which they can't recover; just ask Raja, he even stated that they haven't yet recovered the expense of just setting up the HBM pipeline.

So what was smarter: making a memory technology that, by the time it would be useful, regular GDDR (5X) could match, or spending a fraction of that cost and improving the GPU's capabilities to fully utilize existing bandwidth via compression, tiled rendering etc?
 
Haha well just trying to figure out what he thinks of the Vega 10x2 on the slide, maybe you can answer. Do you think that will be full Vega with lower clocks?


Dude, Vega x2 doesn't matter; dual-GPU cards don't make good halo products, nor are they even commercially viable. Fiji x2, did we see anything good out of it? We have maybe 2 or 3 games in the past year that can actually use multi-GPU properly (scaling). So now you want people to wait for Vega x2 if Vega isn't what you hoped it to be? Are you already starting up an excuse to wait for more AMD products?

And no, dual-GPU cards can't utilize all memory as a single pool yet, it doesn't work yet, so that is why your 16GB double-the-performance idea didn't make any sense even in this context; it will still be an 8GB card x2.

You should be right up on stage with Raja, trying to explain how 2 RX 480s compare against a GTX 1080 while the 2 RX 480s are supposedly only 50% utilized. BS they were; they made crap up that was so easy to see through.
 
Baiting me isn't going to work man, cause I will keep saying the same things I always have. Unless you can add to the conversation with well thought out posts, baiting won't work lol. That is why I stated, I will correct your false thoughts, cause that is all you do, you think with what you feel things should be and not what reality has shown us in the past.

Please HBM means jack shit in the consumer market right now, just added expense to the IHV's which that can't recover, just ask Raja, he even stated it that just setting up the HBM pipeline, they haven't even recovered that expense yet.

So what was smarter, making a memory technology that by the time it would be useful, regular GDDR (5x) could match it, or spending a fraction of that cost and improving the GPU capabilities to fully utilize existing bandwidth via compression, tiled renderers etc?

Relevant part: "right now". Of course that's because Vega hasn't launched in the consumer market yet. A few more weeks. Then we'll see NV follow AMD into producing consumer HBM SKUs. ;)
 
Relevant part: "right now". Of course that's because Vega hasn't launched in the consumer market yet. A few more weeks. Then we'll see NV follow AMD into producing consumer HBM SKUs. ;)


What if it doesn't launch in a few more weeks, are you going to give lolfail9001 100 bucks? I will refer to this thread for this conversation.

https://hardforum.com/threads/radeon-rx-vega-discussion-thread.1926069/page-5#post-1042872806

And this post

Sure, but you owe me $100 bucks if it is not out by March 25th.

Deal?

Anyways, considering that it is the middle of March when the first leaks started to happen, compared to early April last year, a May launch looks very likely.

Please, nV doesn't care about HBM in the consumer market; it makes no sense for them because they have other technologies that just do better monetarily with the same bandwidth benefits!

What will your recommendation be at EOM, when Vega doesn't appear? Wait for May, which is where the current rumors look to put it? Or are you going to sit around and tout fabled HBM crap up our collective bung holes?
 
Relevant part: "right now". Of course that's because Vega hasn't launched in the consumer market yet. A few more weeks. Then we'll see NV follow AMD into producing consumer HBM SKUs. ;)

AMD had consumer HBM-based SKUs almost two years ago; a lot of good it did them.

NV has had an HBM2-based product with 4 stacks of HBM and a 600mm^2 die footprint since a year ago.

If NV was in the business of losing money we'd see HBM2 across the whole lineup, no problem.
 
What if it doesn't launch in a few more weeks, are you going to give lolfail9001 100 bucks? I will refer to this thread for this conversation.

https://hardforum.com/threads/radeon-rx-vega-discussion-thread.1926069/page-5#post-1042872806

And this post



Please, nV doesn't care about HBM in the consumer market; it makes no sense for them because they have other technologies that just do better monetarily with the same bandwidth benefits!

What will your recommendation be at EOM, when Vega doesn't appear? Wait for May, which is where the current rumors look to put it? Or are you going to sit around and tout fabled HBM crap up our collective bung holes?

Sure, if for some reason NV happens to raise their prices $100. Although that is certainly a possibility with Nvidia, I think they are spooked, and that is why their prices are curiously lower than usual. I give it a 99.9999% chance they will lower prices before they raise them. :)
 
Sure, if for some reason NV happens to raise their prices $100. Although that is certainly a possibility with Nvidia, I think they are spooked, and that is why their prices are curiously lower than usual. I give it a 99.9999% chance they will lower prices before they raise them. :)


Oh, now you are giving reasons not to take his challenge? What, don't have confidence in what you post?

If you are so in tune with how AMD is doing things and know what is happening, you should have no problem taking his challenge.

Show us you're a man of your word lol.



As long as you aren't like the joker I guess you will be fine.
 
Oh, now you are giving reasons not to take his challenge? What, don't have confidence in what you post?

If you are so in tune with how AMD is doing things and know what is happening, you should have no problem taking his challenge.

It seems you suffer from 'reading fail'. Since the original topic was about saving $100 by waiting to see what Vega brings to the table, it only stands to reason that there is nothing lost if it doesn't deliver. If prices go UP $100, however, then consumers would have been better off buying today. However, that is highly unlikely, and the whole concept is something you probably lack the ability to understand. Maybe before you butt into a conversation with someone else, you should know what you are talking about first? ;)
 
It seems you suffer from 'reading fail'. Since the original topic was about saving $100 by waiting to see what Vega brings to the table, it only stands to reason that there is nothing lost if it doesn't deliver. If prices go UP $100, however, then consumers would have been better off buying today. However, that is highly unlikely, and the whole concept is something you probably lack the ability to understand. Maybe before you butt into a conversation with someone else, you should know what you are talking about first.


It's not a reading fail; he challenged you. If it doesn't come out on March 25th as you stated, will you pay him 100 bucks?

That is what he stated to you. If you have a problem with that because of your previous statements, clarify it with him lol.

I even asked you what your recommendations would be if it doesn't show up in March, or when it likely will in May, if it doesn't deliver the performance you are looking for; you still haven't answered those questions either... Those should be simple, they are if-then questions.
 
Not sure why people think HBM is the best thing since sliced bread. It is clearly not ready for consumer primetime.
 
Not sure why people think HBM is the best thing since sliced bread. It is clearly not ready for consumer primetime.


Because AMD says so, that is why ;), people still believe it even after Fiji got its ass handed to it.
 
It's not a reading fail; he challenged you. If it doesn't come out on March 25th as you stated, will you pay him 100 bucks?

That is what he stated to you. If you have a problem with that because of your previous statements, clarify it with him lol.

I even asked you what your recommendations would be if it doesn't show up in March, or when it likely will in May, if it doesn't deliver the performance you are looking for; you still haven't answered those questions either... Those should be simple, they are if-then questions.

You asked something, but it was completely indecipherable. If you could clarify, that would be great.
As for his comment about 100 bucks, I'll let him clarify that, since it clearly is none of your business. Y'all on the same team or something? lol

Get back to me when you get your thoughts together. ;)
 
Not sure why people think HBM is the best thing since sliced bread. It is clearly not ready for consumer primetime.

Because it is the best thing since sliced bread for graphics. And NV will adopt it as well, only later than their competition.
 
You asked something, but it was completely indecipherable. If you could clarify, that would be great.
As for his comment about 100 bucks, I'll let him clarify that, since it clearly is none of your business. Y'all on the same team or something? lol

Get back to me when you get your thoughts together. ;)

You had two others that understood the question I posed and redirected the question with their views, but you can't understand it? See where the problem is coming from?

This is the question

OK, fair enough, hypothetical situation: May comes around, Vega is launched, and it comes out to what the leaks and rumors show, 1080 performance or even 10% higher let's say, at 225 watts, and its price is 500 bucks. Which card would you recommend: GTX 1080, Vega, or the Ti? Now keeping in mind Volta is probably end of this year lol? Shit, people waited for Vega for close to a year now, why not wait another 7 months right?

These were their rebuttals

Volta, if I am not mistaken, is the big architecture change after Pascal, which was primarily a node shrink (although it clearly was not just a node shrink), but due to it supposedly being a big change I can see it taking longer.
Every news item or solid rumor I have read puts Volta at Q1 2018. Maybe they will, but I am not too hopeful that will be the case given what the word around the internet has been. Given how well Pascal is doing I really doubt Nvidia is in too much of a hurry to release Volta. They can easily wait until Q1 2018 and AMD will still be behind.

So let's see: you have two people that understood perfectly what my question was, but the person it was directed to can't understand it. The problem is with you man, you can't understand what others do......
 
As for his comment about 100 bucks, I'll let him clarify that, since it clearly is none of your business. Y'all on the same team or something? lol
There is nothing to clarify. You claim I am to refund people who lost their $100 on getting their 1080 Ti 2 weeks too early. I claim you are to back that up with your own $100 instead, if it does not come out in a couple of weeks. I.e. by March 25th. Deal?
Sure, if for some reason NV happens to raise their prices $100.
Oh, do not dodge the bet, are you afraid it won't come out in 2 weeks? Or 4 for that matter?
 