Just released info.
Note: the MI25's 25 TFLOPS figure is FP16; that will likely catch some out, as its FP32 throughput is 12.5 TFLOPS.
Looks like the first set of info is aimed at the HPC world rather than prosumer or PC gaming; however, that does not necessarily mean it will launch first (it may, though).
Another aspect: it looks like it may be a large-ish die Vega (which fits with what I have mentioned in other threads), though there is still no clarification on core count (full die, or cut down for a smaller GPU).
If it were a small-die Vega x2, the FP32 TFLOPS would be higher, and more notably the MI8 would be a single-die Vega, but it is not.
Vega looks HBM-only to me as well, just like Fiji (not being critical, just something to consider when discussing gaming or prosumer use).
Where it fits against Nvidia will be interesting, as there is no full overlap between them. Although the MI25 takes the crown for FP32/FP16, Nvidia seems to think the market is moving towards dedicated GPUs and nodes for either training or inference, and is now pushing Int8 for the latter. I still feel Nvidia missed a point, but there are pros and cons to both manufacturers' approaches.
The Vega GPU seems focused on FP32/FP16, while Nvidia broadens its range to FP64/FP32/FP16 with the P100, or FP32/Int8 (4x rate) with the P40 and Titan Pascal.
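The relationship between those headline numbers can be sketched in a few lines. This is just illustrative arithmetic for the rate multipliers discussed above (packed FP16 at 2x the FP32 rate, Nvidia's Int8 path at 4x); the shader count and clock in the example are hypothetical figures I've picked to land near the MI25's quoted 12.5 TFLOPS, not confirmed specs:

```python
def peak_tflops(shaders, clock_ghz):
    """Theoretical single-precision TFLOPS: 2 ops (FMA) per shader per cycle."""
    return shaders * clock_ghz * 2 / 1000.0

# MI25 headline figures: 25 TFLOPS FP16 implies 12.5 TFLOPS FP32 (2:1 packed FP16).
fp32 = 12.5
fp16 = fp32 * 2  # packed FP16 doubles the FP32 rate -> 25.0
int8 = fp32 * 4  # 4x Int8 rate, as Nvidia offers on the P40 -> 50.0

# Hypothetical example: 4096 shaders at ~1.526 GHz gives ~12.5 TFLOPS FP32.
print(peak_tflops(4096, 1.526))
```

The point is simply that one base FP32 figure generates all the marketing numbers, which is why quoting FP16 TFLOPS without saying so can mislead.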
http://videocardz.com/64677/amd-ann...erator-radeon-instinct-mi25-for-deep-learning
Worth noting the first two cards do not say Vega: one is Fiji and the smaller model is Polaris. This is what WhyCry reports.
Also, regarding Nvidia as a reference: the P100 is cheaper than the Tesla P40 (the P40 is Nvidia's top FP32 card, but with 4x Int8 functions instead of fast FP16), while the P100's strength is its high FP64 along with good FP32 and FP16. The exact figures depend on whether it is the PCIe model (slightly lower numbers) or NVLink.
Edit:
Going back and re-reading the WhyCry article, it seems he has the Nano and Polaris the wrong way round.
Cheers