Vega Rumors

Speculating on coin prices is too hard; there is absolutely nothing backing these coins. It's like doing Forex without knowing the market of the currency you want to trade in.

At least with mining, ya know approximately what you will get no matter what, and if the price tanks, you can sell off the hardware.

But you are speculating, is my point. Yes, underlying it is some capital asset that can be sold at a loss to recoup some of the investment. If coin markets crash to zero, expect more than a 30% loss, maybe 50%, as GPUs flood the market. So my point is that with mining you are essentially speculating with 30%-50% of the capital you put into hardware anyway, and you are speculating that prices will rise. So just buy the coin. And you don't need mining to geek out, there are plenty of other ways to scratch that itch.
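
A rough back-of-the-envelope version of that point (the 30-50% haircut is the figure from the post; the rig cost is a made-up example):

```python
# Hypothetical numbers: rig cost is invented, the haircut range is the post's own 30-50% figure.
hardware_cost = 3000.0               # example GPU rig cost in USD
resale_haircut = (0.30, 0.50)        # resale loss if coin markets crash and used GPUs flood the market
low, high = (hardware_cost * h for h in resale_haircut)
print(f"capital effectively being speculated with: ${low:.0f} to ${high:.0f}")
```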
 
Your margin of error in speculating will be minimized. Let's say right now you get a rig for ETH with 1070s; you can expect to pay that rig off in 6 months, and even if the coin goes down or up on speculation, there will be about a 1 month variation to either side. Now, if this was 3 months ago, the payout was a 4 month turnaround to pay the rig off, and when Eth dropped, that is when it went to 6 months, and it has been staying steady since then. And during the Bitcoin and Litecoin craze, 6 months per rig was pretty much what we were looking at as a turnaround. All these guys making the algorithms and coins are tuning difficulty to stay around that 6 month time frame.
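
A minimal sketch of that payback math, with a made-up rig cost just to show how the turnaround shifts with monthly revenue:

```python
# Rig cost and revenues are illustrative; only the ~4 vs ~6 month shape matches the post.
rig_cost = 3000.0                                   # hypothetical cost of a 1070-based ETH rig
for label, monthly_revenue in [("3 months ago", 750.0), ("after the Eth drop", 500.0)]:
    months_to_pay_off = rig_cost / monthly_revenue
    print(f"{label}: ~{months_to_pay_off:.0f} month payback")
```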

Now let's say I bought Eth at 380 bucks expecting it to go much higher. Yeah, it went past 400, but that 10% gain would have turned into a 50% loss when Eth immediately went back down during the DDoS attack on Kraken after a major sell-off of Eth. There are no regulations to stop things like this. Once something like that happens, if you're investing in coins you can lose your shirt; many people that day lost a good chunk of change on their positions. At least in the stock market or Forex there are balancing forces that the government implements to stop those things from happening. With coins there are no checks and balances.....
 

Yes there is, it's called a stop-loss order.

Also, I'm not seeing the logic of arguing that the volatility of coin prices is a con to buying the coin but a pro to mining. It's a con to both. As I pointed out, you are speculating with mining whether you see it or not. You are also speculating even if your hardware is already paid for in full, as you have the option of cashing the hardware out for top value today vs. risking 30%-50% of its worth on coin markets not crashing in the future.
 


If you are selling your coins off in a timely manner, then you don't need to worry about it.

I sell everything other than about 15% of my Eth per month. I don't use stop orders nor even speculate.... I expect to get around $9k per month across all my rigs. The 15% I save as coins; that is the portion I'm willing to speculate on. The rest is just cashed out.

That 15% adds up fast and even if the price drops I don't need to worry about it. If it goes up, great, even better for me.

Purely buying and selling coins, there is no cushion, 'cause even with a stop limit, if the coin doesn't recover, you are still out that money.
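
Something like the split described above, as a toy calculation (the $9k/month is the post's number, the rest is just mechanics):

```python
# Toy model of the cash-out strategy: bank most of each month's mining income, keep ~15% in coin.
monthly_income_usd = 9000.0      # figure from the post
hold_fraction = 0.15             # share kept as coins (the speculative part)
cashed_out = monthly_income_usd * (1 - hold_fraction)
held = monthly_income_usd * hold_fraction
print(f"cashed out: ${cashed_out:,.0f}/mo, riding the market: ${held:,.0f}/mo")
```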
 

You make me want to dabble but it's kinda late I think... although winter is coming...
 
I would do Zcash if I were starting now. Out of all my systems (the ones I'm not doing the penny coins on), half are doing Zcash and the other half Eth. I know Eth mining is going away soon, so gotta have backup plans lol. Zcash is just as profitable right now too.
 
Since I have virtually all the hardware except a power supply, I am building a separate rig to dabble with, using 3-4 cards. I can't see paying for a whole rig at this time; maybe that view will change as well. It's not worth it using multiple computers with like a card each (what I did before), but one dedicated rig not interfering with the other machines should be OK.
 
GCN is only superior in one category: it's really good at using more power and giving less performance while doing so. Great achievement by AMD.
I hope you realize that is primarily a software feature of tiled rasterization, which Vega also possesses but which isn't currently enabled. It's not an architectural difference, but Nvidia simply doing less work.

Volta does not look like GCN AT ALL
If you mean beyond adding ACEs, hardware scheduling, scalar like behavior with the INT pipeline, and the memory model I suppose you could be right. Just think, Paxwell was so successful Nvidia decided to wholesale abandon the architecture for Volta. It was simply that amazing!

That post is just utter nonsense and I have to think you know it.
Then perhaps you would care to explain how packed math (+30%), binning (+20-40%), HBCC (up to 15% tested), and a bindless model that Pascal has to emulate (only exposed a month or so ago) won't lead to your conclusion? Among a host of other features like intrinsics that "shouldn't be compared", yet are included in the upcoming shader model. You know, instructions that largely don't exist in Nvidia hardware. But please, don't let all the inconvenient facts and common sense stop you.
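
For what it's worth, here is the arithmetic that line of reasoning implies if you assume the quoted gains stack multiplicatively (a big assumption; real workloads rarely let features stack cleanly):

```python
# Figures are the ones quoted above; 30% is used as a midpoint of the 20-40% binning range.
gains = {"packed math": 0.30, "binning": 0.30, "HBCC": 0.15}
combined = 1.0
for feature, g in gains.items():
    combined *= 1.0 + g
print(f"combined uplift if everything stacked: +{combined - 1.0:.0%}")   # roughly +94%
```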

My only takeaway from this whole argument is just how stupid the current gamer is if they believe even the slightest bit of all this forum marketing, but then the average consumer never was intelligent.

AMD is not better at compute; any program that uses CUDA as its base runs circles around its OpenCL version.
Except for Blender I guess? Where comparable AMD hardware is twice as fast with CUDA vs OpenCL.

No, I think in some productivity benchmarks it beat the 1080 Ti handily.
OpenGL on Linux had Fiji almost beating it with respectable scaling by all hardware. Different driver stack there as they recently rebuilt much of it. Just need to wait for the Vulkan driver to open up. Only recently did the Linux kernel expose huge pages which AMD needed.
 
I am not going to go point by point through all of the wonderful little things you graced us with here, but the idea that Pascal wasn't successful doesn't border on delusion; it is a wagon filled with people suffering from dysentery fording the river of delusion.
 
I hope you realize that is primarily a software feature of tiled rasterization, which Vega also possesses but which isn't currently enabled. It's not an architectural difference, but Nvidia simply doing less work.

Doing less work and getting the same results is also efficiency lol. Don't know what world you live in, but a person who is capable of doing that in the workplace is more valuable!


If you mean beyond adding ACEs, hardware scheduling, scalar like behavior with the INT pipeline, and the memory model I suppose you could be right. Just think, Paxwell was so successful Nvidia decided to wholesale abandon the architecture for Volta. It was simply that amazing!

Where did they add ACEs? They added more schedulers than what they have now lol. How do you go from that to ACEs? Whoops, didn't know that, did ya?

Scalar-like behavior with the INT pipeline? Are you talking about the tensor cores? Those won't be there for the gaming versions of the chip...... not that I know of. Pascal kills anything AMD has right now; if nV was remotely threatened they would release Volta now. They have that capability. And if you don't think they do because of what Jensen stated: the cost Jensen was stating is for V100. V104 will not cost that much, about a quarter of that cost, and would beat anything out there by 70% if they have hit their mark.

Then perhaps you would care to explain how packed math (+30%), binning (+20-40%), HBCC (up to 15% tested), and a bindless model that Pascal has to emulate (only exposed a month or so ago) won't lead to your conclusion? Among a host of other features like intrinsics that "shouldn't be compared", yet are included in the upcoming shader model. You know, instructions that largely don't exist in Nvidia hardware. But please, don't let all the inconvenient facts and common sense stop you.

Ya know you just asked a guy that made chips for one of the three major GPU and CPU makers, right? Common sense says...... he knows more about these things than you do.........

My only takeaway from this whole argument is just how stupid the current gamer is if they believe even the slightest bit of all this forum marketing, but then the average consumer never was intelligent.
The average consumers are sheep, you on the other hand seem to be worse than that.


Except for Blender I guess? Where comparable AMD hardware is twice as fast with CUDA vs OpenCL.

LOL, Blender has both OpenCL and CUDA. If the test was done with OpenCL vs OpenCL, nV would lose; if it was CUDA vs OpenCL, they win in Blender (for equivalent cards). Now if you want to see a benchmark on that:


http://www.gamersnexus.net/hwreviews/2973-amd-vega-frontier-edition-reviewed-too-soon-to-call/page-3

Titan Xp crushes Vega FE. This is without any professional driver unlocks with the latest driver!
  • Titan Xp: 15.1 minutes
  • Vega FE: 19.8 minutes
Now come again? You were just talking about general consumers (and what not about the forum members here) as sheep, so how should we categorize your statement about Blender performance of the two cards? Hmm, yeah, worse than sheep.....
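
Putting those two times side by side (simple arithmetic on the numbers quoted above):

```python
# Render times in minutes from the linked GamersNexus Blender test.
titan_xp, vega_fe = 15.1, 19.8
print(f"Vega FE took {vega_fe / titan_xp - 1:.0%} longer than the Titan Xp")       # ~31% longer
print(f"Titan Xp finished in {1 - titan_xp / vega_fe:.0%} less time than Vega FE")  # ~24% less
```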

OpenGL on Linux had Fiji almost beating it with respectable scaling by all hardware. Different driver stack there as they recently rebuilt much of it. Just need to wait for the Vulkan driver to open up. Only recently did the Linux kernel expose huge pages which AMD needed.

Yes, it's always a waiting game and it always will be. Why not just say what it will end up being anyway by the time you figure out nothing is going to change: WAIT FOR NAVI, and be done with it.
 
Blender benchmarks! Below is direct from the article. If something is wrong or misrepresented, contact the Blender developers. :) I just copied and pasted.
https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.79/Cycles

GPU Rendering test
Below are the timings for the official Benchmark Files, using reference systems in Blender Institute. AMD and Nvidia GPUs are now giving comparable performance.

[Blender 2.79 Cycles GPU render time charts from the linked release notes]
 


They look about right. That's why the Titan Xp lays waste to the Vega FE: it has close to the same shader power (excluding boosts on the FE, since it doesn't boost that well or consistently, while the Titan Xp has no issue with that).

All cards that are equally priced, like the 1060 against the RX 480, are close, but one has 30% less shader power lol. CUDA doing its magic. There are scenes where the 1060 lags behind, like Koro; since that one also affects the 1080, it could be something specific that hurts them.
 
I asked IAMSPOON (Team NVIDIA twitch streamer) what he thought of Vega 64 while he was playing Destiny 2:



Confirmed, delicious burritos > hamburgers.
 
I managed to buy four Vega 56 today - without any kind of script, and I even purchased two of them via my cell phone browser. I use the nowinstock alerts to be notified. I got two at Best Buy and two at Newegg. All four cost $500 (+ tax/shipping). There was a PowerColor $500 Vega 56 model in stock at Newegg for probably at least 45 minutes today. I and two of my friends bought one of those, and they just kept lingering in stock. Newegg only let you buy 1 card of each type. My cousin got five Vega 56 today (2 from Best Buy, 3 from Newegg) and he could be buying more - but his single rig is now full too.

Yes, I'm planning to mine with them -- in complement to the eight 1080 Tis I'm mining with. That'll fill out my Biostar TB250-BTC Pro motherboard's 12 GPU config. I'm not buying any more cards for a long while to come. My credit card is sore, and one mining rig is enough.....for now.

Should turn ~$1000 a month on an $8500 investment. My risk is all but eliminated in two to three months - 'cause I'll have covered the 20-25% loss in hardware resale value by then with mining profits (if for some reason I have to bail out in a couple/three months???).
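
Roughly, the break-even math behind that (using the post's own figures):

```python
# Figures from the post: ~$1000/month profit on $8500 of hardware, 20-25% resale haircut.
investment = 8500.0
monthly_profit = 1000.0
for haircut in (0.20, 0.25):
    exposed = investment * haircut                  # the part resale would not recover today
    months = exposed / monthly_profit
    print(f"{haircut:.0%} resale loss covered after ~{months:.1f} months of mining")
```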

Invest those coins... I dropped 2 Eth on MCO earlier this month; it's gone from $2 to $25 so far. 31 Aug is the launch date for its app, and maybe a few weeks after that for the Visa-approved card.
 
I am not going to go point by point on all of the wonderful little things you graced us with here, but the idea that Pascal wasn't Succesful doesn't border on delusion it is a wagon filled with people suffering from dysentery forging the river of delusion.
For DX11 it was successful, I never denied that. Seems odd Nvidia would ditch it completely for Volta with DX12 does it not? Or that SM6 which is essentially a GCN2 feature set and present on consoles already hasn't released for PC. I can only imagine what is holding up that effort, but I'm sure it will be ready by the time Volta arrives.

Doing less work and getting the same results is also effeciency lol, don't know what world you live in but a person that is capable of doing that in the work place is more valuable!
Driver efficiency, doesn't matter much in general compute where it doesn't work anymore. Some hardware work to allow that, but not what I'd call the architecture as it doesn't translate universally.

Where did they add ACE's? They added more schedulers than what they have now lol. How do you go from that to ACE's? Woops didn't know that did ya?
Volta blog, pretty sure I linked it before, so feel free to go look it up. Of course they aren't called ACEs, just provide nearly identical features. And the hardware schedulers you describe are something else, nothing to do with dispatch and you should know that.

Scalar like behavior with the Int pipeline? Are you talking about the tensor cores, those won't be there for the gaming versions of chip...... not that I know of. Pascal kills anything AMD has right now, if nV was remotely threatened they would release Volta now. They have that capability, if you don't think they don't or because of Jensen stated, Jesen's cost he was stating is for V100, V104 will not cost that much, about quarter of that cost and would kill anything out there by 70% if they have hit their mark.
Scalar in that it may be used for flow control and addressing. Rather essential for the bindless resource models.

Nvidia won't be releasing consumer volta for a bit if they need 7nm or GDDR6. I'd guess early to middle of next year. The pro version with HBM2 sure, and I suppose they could make a HBM2 consumer variant for the high end sooner.

Ya know you just asked a guy that made chips for one major 3 chip makers for GPU and CPU right? Common sense says...... he knows more about these things then you do.........
Who and what? Not sure what I'd have asked that was refuted in any way recently.

The average consumers are sheep, you on the other hand seem to be worse than that.
Intelligent consumer that doesn't buy marketing narratives? Not exactly difficult to poke holes in all the agendas out there. Just think, AMD seems to have spent little to no effort marketing Vega, yet they are gaining market share and apparently selling product as fast as it hits shelves.

Heck it's almost like companies pay individuals to go on forums and talk up their products while putting down the competition. Difficult to imagine anyone out of high school doing that though.

LOL Blender has both Open Cl and Cuda, if the test was done with open cl vs open cl, nV would loose, if it was cuda vs open cl, they win in blender. (for equivalent cards). Now if you want to see benchmark on that
Guess you haven't seen the tests lately where AMD is ahead with OpenCL like what was linked above. I think GN or one of the other sites had better graphs, and lower is better in case you missed that.
 
That's the one I was leaning towards also. Great minds think alike! Now let me blow your mind a bit more.
http://www.swiftech.com/komodo-rx-le-vega.aspx

[Image: Swiftech Komodo RX-LE Vega water block]


Cageymaru, this is what I want to know:

You've spent ~$500 on Radeon RX Vega 64 which is a mediocre product at best.

AMD basically overclocked Radeon RX Vega 64 close to its limits right out of the box (and that's why the power consumption is so high) to compete with NVIDIA's Geforce GTX 1080.

There isn't much room to overclock because of that.

Regardless, you spent another ~$170 for a water block.

So, in total, you spent ~$670 for lackluster performance

...but, why?
 
Well, you could use the lower-power BIOS and then overclock. A water block would help get a decent overclock while keeping heat and power levels a bit lower. Or he just really loves him some AMD.
 
For DX11 it was successful, I never denied that. Seems odd Nvidia would ditch it completely for Volta with DX12 does it not? Or that SM6 which is essentially a GCN2 feature set and present on consoles already hasn't released for PC. I can only imagine what is holding up that effort, but I'm sure it will be ready by the time Volta arrives.

SM 6.0 is also a feature set of DX12; all DX12 cards will be capable of doing it. Again, don't think of tiers as DX versions, it will get you confused. Don't know why you are mixing the two together, unless you don't know the difference.

Driver efficiency, doesn't matter much in general compute where it doesn't work anymore. Some hardware work to allow that, but not what I'd call the architecture as it doesn't translate universally.

Are you making this up as you go along? That is a total flip flop from what ya said before, you said the drivers weren't ready for Vega so now they don't make a difference because why?

Volta blog, pretty sure I linked it before, so feel free to go look it up. Of course they aren't called ACEs, just provide nearly identical features. And the hardware schedulers you describe are something else, nothing to do with dispatch and you should know that.

Yeah, you didn't see the information from the Hot Chips conference. Too bad.
Scalar in that it may be used for flow control and addressing. Rather essential for the bindless resource models.

LOL, more BS right? nV has been using scalar for how long?

Nvidia won't be releasing consumer volta for a bit if they need 7nm or GDDR6. I'd guess early to middle of next year. The pro version with HBM2 sure, and I suppose they could make a HBM2 consumer variant for the high end sooner.

How much do you want to bet GP104 won't use GDDR6? And at this point I can 100% say they don't need to go to 7nm for Volta, not for the gaming versions, nor the professional versions with HBM.

Who and what? Not sure what I'd have asked that was refuted in any way recently.

The person that you quoted lol, yeah. The guy that pretty much said it's not going to happen the way you stated. Pretty straightforward who you quoted..... If you are going to continue on with this, it's a perfect representation of who you think you are: you think you know more than a person that is an EE and has worked, or is still working (I know he has worked), with one of the big three GPU and CPU makers. If you read through the last few pages, you can figure out who he worked for.

Intelligent consumer that doesn't buy marketing narratives? Not exactly difficult to poke holes in all the agendas out there. Just think, AMD seems to have spent little to no effort marketing Vega, yet they are gaining market share and apparently selling product as fast as it hits shelves.

Right..... Takes one to know em I guess.......

AMD didn't spend time on it because it won't do shit for them. They know what they have; that is why, a month before launch, they pretty much stopped talking about Vega as much as possible.

Heck it's almost like companies pay individuals to go on forums and talk up their products while putting down the competition. Difficult to imagine anyone out of high school doing that though.

Pretty bold statement from a person that says Vega will match up with Volta, sorry not buying crack today.

Guess you haven't seen the tests lately where AMD is ahead with OpenCL like what was linked above. I think GN or one of the other sites had better graphs, and lower is better in case you missed that.

In case you missed it, Blender has two rendering backends; hmm, you don't know your 3D software, do you? Did you miss the point that even a GTX 1060 keeps up with an RX 480 in Blender in 4 out of 5 tests with 30% fewer flops...... Come on man, I saw the graphs; it seems like you are NOT reading them right.......
 
Well, you could use the lower-power BIOS and then overclock. A water block would help get a decent overclock while keeping heat and power levels a bit lower. Or he just really loves him some AMD.

Well, it doesn't matter how much he overclocks: he isn't going to get near the performance that he would get from a Geforce GTX 1080 Ti

This is, of course, considering that he's spending close to the price of the Geforce GTX 1080 Ti (~$30 less)

...and of course, let's not forget that the Geforce GTX 1080 Ti can be overclocked too
 
But then he'd need to spend $170 on a block and plate too...
Perhaps it's because it gives him the performance he wants for a price he wants and from a team he supports. Isn't that how you buy GPUs?
 
But then he'd need to spend $170 on a block and plate too...

No, he doesn't.

Geforce GTX 1080 Ti on air would easily beat Radeon RX Vega 64 on liquid any day of the week.

Perhaps it's because it gives him the performance he wants for a price he wants and from a team he supports. Isn't that how you buy GPUs?

No, that is not how I choose which GPUs to buy.
 
Well, it looks like he's doing a full loop, so he probably would want to.
OK, well if you buy the absolute fastest available, that's good for you. Not everyone does. Oh, you have a 480, OK... well that's your opinion. Others have other reasons. I'm sure he has his.
 
Cageymaru, this is what I want to know:

You've spent ~$500 on Radeon RX Vega 64 which is a mediocre product at best.

AMD basically overclocked Radeon RX Vega 64 close to its limits right out of the box (and that's why the power consumption is so high) to compete with NVIDIA's Geforce GTX 1080.

There isn't much room to overclock because of that.

Regardless, you spent another ~$170 for a water block.

So, in total, you spent ~$670 for lackluster performance

...but, why?

I imagine whatever he upgraded from, this will be a huge upgrade for him. As much as I don't see the value of Vega 64, that doesn't mean others won't, since some have a FreeSync monitor that Vega will go well with, or PLP is much more robust on AMD than Nvidia, or they just like AMD in general. At the end of the day, it is his money, and he can choose to spend it however he wants to.
 
ok, well if you buy the absolute fastest available that's good for you. not everyone does. oh, you have a 480 ok... well that's your opinion. others have other reasons. I'm sure he has his.

I consider by order:

1. price

2. performance

3. Power consumption/heat



Compared to the Geforce GTX 1080, the Radeon RX Vega 64 costs the same (or more), performs worse, and uses more power.

What's the saving grace?
 
For DX11 it was successful, I never denied that. Seems odd Nvidia would ditch it completely for Volta with DX12 does it not? Or that SM6 which is essentially a GCN2 feature set and present on consoles already hasn't released for PC. I can only imagine what is holding up that effort, but I'm sure it will be ready by the time Volta arrives.


Driver efficiency, doesn't matter much in general compute where it doesn't work anymore. Some hardware work to allow that, but not what I'd call the architecture as it doesn't translate universally.


Volta blog, pretty sure I linked it before, so feel free to go look it up. Of course they aren't called ACEs, just provide nearly identical features. And the hardware schedulers you describe are something else, nothing to do with dispatch and you should know that.


Scalar in that it may be used for flow control and addressing. Rather essential for the bindless resource models.

Nvidia won't be releasing consumer volta for a bit if they need 7nm or GDDR6. I'd guess early to middle of next year. The pro version with HBM2 sure, and I suppose they could make a HBM2 consumer variant for the high end sooner.


Who and what? Not sure what I'd have asked that was refuted in any way recently.


Intelligent consumer that doesn't buy marketing narratives? Not exactly difficult to poke holes in all the agendas out there. Just think, AMD seems to have spent little to no effort marketing Vega, yet they are gaining market share and apparently selling product as fast as it hits shelves.

Heck it's almost like companies pay individuals to go on forums and talk up their products while putting down the competition. Difficult to imagine anyone out of high school doing that though.


Guess you haven't seen the tests lately where AMD is ahead with OpenCL like what was linked above. I think GN or one of the other sites had better graphs, and lower is better in case you missed that.

Why don't you take your analysis of the average consumer elsewhere? It's bad enough having to read your bullshit predictions and boasts about your impartiality day after day.

What's the latest? Volta is copying Vega? Okay. I guess these are your coping mechanisms.

There's nothing remotely resembling ACEs anywhere in the Volta information that has been released, but I guess when your level of understanding is low enough, all of these fairly technical subtleties are lost on you and it all seems like the same thing.
 
Actually, there was an article a few months back saying Volta was following GCN in how it operates. It alluded to an ACE-like setup but stopped short of in-depth detail. I think it was following some event where this information was based on Nvidia's presentation, maybe.
 

It was probably written by anarchist; Volta has nothing to do with how GCN operates whatsoever. People said the exact same thing re Pascal & async.

Edit:

[attached screenshot]


You were right anarchist !!!!!! (/s)
 
I consider by order:

1. price

2. performance

3. Power consumption/heat



Compared to the Geforce GTX 1080, the Radeon RX Vega 64 costs the same (or more), performs worse, and uses more power.

What's the saving grace?

For some people, there's that emotional factor that rationalizes their purchases. lol

Still, their money, their rules and it's none of my business.
 
Cageymaru, this is what I want to know:

You've spent ~$500 on Radeon RX Vega 64 which is a mediocre product at best.

AMD basically overclocked Radeon RX Vega 64 close to its limits right out of the box (and that's why the power consumption is so high) to compete with NVIDIA's Geforce GTX 1080.

There isn't much room to overclock because of that.

Regardless, you spent another ~$170 for a water block.

So, in total, you spent ~$670 for lackluster performance

...but, why?

If I bought a GTX 1080 I would put it under water also, as I have a custom loop. If Pioneer started selling video cards and I bought one, I would figure out how to put it under water. ;)

The water block to me is a part of the case cooling and not video card cost. The only GTX 1080 Tis I was considering were the FE and the Aorus that came with a water block already installed for $849. Everything else was fodder.
 

My dream has always been to have a Peltier-cooled main water loop which is in turn serviced by another, non-Peltier water loop to cool the Peltier units. I don't know if I'll ever do it, but it's a very attractive idea ;)
 

This is what I WANT next. I just can't justify the cost in my head yet. :) The Hailea chillers are European and 220v normally I believe. These have been switched to 110v for the American market. Honestly everything on this page gets me excited. :)
http://www.performance-pcs.com/water-chillers

Koolance is my favorite water cooling company. I wish they made a block for Vega. I would order it today.

[Image: Koolance EXC-800 water chiller]
 
Ayyy, that is indeed very sexy lol. It's expensive as hell, but at least it's a self-contained unit, no need for additional pumps or a reservoir.
 
Actually, there was an article a few months back saying Volta was following GCN in how it operates. It alluded to an ACE-like setup but stopped short of in-depth detail. I think it was following some event where this information was based on Nvidia's presentation, maybe.

Yeah, and it's wrong lol.

Problem is, I think people that listen to those types of articles really should take a step back and see what Vega really is (GCN vs Maxwell and Pascal); the efficiency difference is staggering, and it all comes from the control silicon. To go to separate scheduler blocks like GCN would kill everything that is advantageous to nV right now; its occupancy and throughput give it a lot of advantages that GCN just can't compete with, then add in the power-usage efficiency on top of that. They won't go towards a "GCN-like" architecture; they would lose 50% of their advantage.

Can anyone link this article please? There was no article; it was forum members posting about it after reading nV's blog, which was really generalized.

It's like taking what AMD stated about Vega a year ago and saying it's the best thing since sliced bread, but turned around since it's an nV blog.

https://devblogs.nvidia.com/parallelforall/inside-volta/

Volta transforms this picture by enabling equal concurrency between all threads, regardless of warp. It does this by maintaining execution state per thread, including the program counter and call stack, as Figure 11 shows.

This small quote should be looked at as what the new schedulers are capable of. GCN can't do this. No GPU can do this right now.

The Pascal SIMT execution model maximizes efficiency by reducing the quantity of resources required to track thread state and by aggressively reconverging threads to maximize parallelism. Tracking thread state in aggregate for the whole warp, however, means that when the execution pathway diverges, the threads which take different branches lose concurrency until they reconverge. This loss of concurrency means that threads from the same warp in divergent regions or different states of execution cannot signal each other or exchange data. This presents an inconsistency in which threads from different warps continue to run concurrently, but diverged threads from the same warp run sequentially until they reconverge. This means, for example, that algorithms requiring fine-grained sharing of data guarded by locks or mutexes can easily lead to deadlock, depending on which warp the contending threads come from. Therefore, on Pascal and earlier GPUs, programmers have to avoid fine-grained synchronization or rely on lock-free or warp-aware algorithms.

Prior to that quote on Pascal SIMT, they talk about convergence and divergence. GCN also has the problem, though not as severe as Pascal. Again, block-level control gives GCN some advantages over current nV products, but it can't overcome all the other advantages nV products have.
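
To make the lock/deadlock point concrete, here is a tiny toy simulation (plain Python, not real GPU code; the scheduling model and names are simplified illustrations, not how the hardware actually works). Under lockstep SIMT, the spinning threads of a warp keep getting scheduled while the warp-mate that holds the lock waits for a reconvergence that never comes; with per-thread progress, the holder can run, release, and everyone finishes:

```python
def run_warp(independent_scheduling, n_threads=4, max_steps=1000):
    """Toy model of one warp where every thread must take a shared lock once."""
    lock_holder = None               # index of the thread that owns the lock, or None
    done = [False] * n_threads
    steps = 0
    while not all(done) and steps < max_steps:
        steps += 1
        if independent_scheduling:
            # Volta-style: any resident, unfinished thread may make progress.
            runnable = [t for t in range(n_threads) if not done[t]]
        else:
            # Lockstep SIMT: the diverged "spin" path keeps executing, while the
            # lock holder sits on the other side of the branch waiting to reconverge.
            runnable = [t for t in range(n_threads) if not done[t] and t != lock_holder]
        for t in runnable:
            if lock_holder is None:      # analogue of a successful atomicCAS
                lock_holder = t
            elif lock_holder == t:       # holder runs: critical section, then release
                done[t] = True
                lock_holder = None
            # otherwise: spin and try again next step
    return f"finished in {steps} steps" if all(done) else "deadlocked (gave up)"

print("lockstep SIMT:      ", run_warp(independent_scheduling=False))
print("independent threads:", run_warp(independent_scheduling=True))
```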

Now, if a person actually understood the blog, they would know Volta is quite a bit different from GCN.

And it was summed up in that blog prior to all this too
The Volta architecture is designed to be significantly easier to program than prior GPUs, enabling users to work productively on more complex and diverse applications. Volta GV100 is the first GPU to support independent thread scheduling, which enables finer-grain synchronization and cooperation between parallel threads in a program. One of the major design goals for Volta was to reduce the effort required to get programs running on the GPU, and to enable greater flexibility in thread cooperation, leading to higher efficiency for fine-grained parallel algorithms.

NO GPU DOES THIS RIGHT NOW.

Whoever started this rumor of it being GCN-like just doesn't read the pertinent things.

To do that, the scheduling block (which is the GigaThread engine for nV) has to be way different from what they have now, and way different from what is out there right now too.

GCN, like all other current unified pipelines, schedules its threads at once; this is why understanding utilization and occupancy is so damn important for GCN, Maxwell, Pascal, etc. With Volta that will change to a large degree. I don't think it will be removed entirely, but it will matter much less. There were HPC devs talking about Volta who stated how different the programming model is, and then you had nV employees saying this is a much different architecture than before.

Now we can see there are pretty big differences: the shader array at a high level (threads) is quite different. That tells us a bit about the lower levels, like how the instructions will be different. By getting finer granularity at the thread level, all those problems with the static partitioning and dynamic partitioning of Maxwell and Pascal (respectively) will disappear.

Where AMD decided to do their instruction assignment and thread assignment at a block level, nV with Volta is doing it at an ALU level, much finer granularity than what we see now, even finer than in GCN. GCN is stuck at a block level (one CU block).

Is this understandable?

Now, for people that call others sheep and talk about forum marketing and whatnot, but don't back up a single thing they say, or read things wrong, that is kinda fucked up, don't you think?
 
The memory to the SSD will have that type of bandwidth, not to the GPU ;) The memory is the part that acts as an L3 or L4 cache.

And those oil and gas people: you think, spending 100k vs 10k, to get a Tesla system which, hell, can crunch their numbers quite a bit faster than a single SSG... what would they want to spend their money on? Mind you, the software is already done too; that market is pretty much a monopoly right now, as nV has been in that market for, oh, 3 to 5 years.




20mV I would say.

Tesla can't crunch the data fast if it can't fit... that's why oil/gas was reporting 10x faster speeds with AMD solutions last time I looked. That's the whole reason they have the SSG market underway.


P.S. I hope next generation they make expensive-as-fuck cards sold direct from Nvidia/AMD that are just for mining, and disable the needed compute functionality on gaming cards.
This shit is getting ridiculous.
 