AMD MI100 $1000

Andrew_Carr

Ok, so... technically these aren't listed as Buy It Now at this price, but I've bought two now for $1,000 each by doing the following:
Plan A
1.) View all Buy It Now listings
2.) Offer $1k
3.) ???
4.) Profit


Plan B
If your offer isn't accepted, watch the item; the seller will usually send you a steep discount (if you're not in a rush, I recommend doing this on all eBay listings. Sellers are desperate lately).


*Note: These may devalue heavily and turn into e-waste someday like the lower-tier Instinct GPUs, but they're a good price if you need their specific strengths/power efficiency.
**Note 2: My last one was DOA, and these are a pain to get working in Linux (there's no Windows support), so just be aware of that.

https://www.ebay.com/itm/1755399715...Ck4uLbu6nEQbAAxsQ/lffnc9TMq4|tkp:BFBMqsX5wt9h
 
Nice card. How do you plan to cool it?

The FP64 rate is 11.54 TFLOPS versus the Titan V's 7.45 TFLOPS. It should be an awesome card for crunching the MilkyWay DC project. Care to help us evaluate its performance ;)?

P.S. Holdolin wrote a good guide on how to install amdgpu on Ubuntu for DC, just FYI.
 
I bought one of these shrouds for it and it worked pretty well (https://www.ebay.com/itm/225130736574?var=524033099499). The smaller ones were pretty weak when I tried them on my MI25 GPUs.

Hopefully it'll work in my GPU server, which has tons of suction, but that server only has PCIe 3.0 on the risers, so I'm not sure whether that will be an issue; first I'm going to try it in something newer with the shroud. I'm also going to try it in one of my larger GPU mining cases that has a lot of airflow as another possible long-term solution, but that will require a riser to get the card in line with the airflow.

Definitely planning on giving it a shot on MilkyWay. I'm hoping it'll pay off with some crypto mining, but I'm getting it more as an experiment to see if there's a good use case for it there, with DC as the backup plan.
 
For cooling, I made a fan shroud from recycled cardboard and attached a 90 mm PWM server fan that I cannibalized from an unused Dell T3600 workstation. This is on my Tesla P100 accelerator with a passive heatsink. Unfortunately, this takes up a lot of space on your motherboard unless you get a PCIe extender and mount it outside the PC case, or use a mining frame. The GPU server case is just too noisy for my liking ;).



[Photos: P100-shroud-2.jpg, P100-fan-2.jpg]
 
That is sick! I don't have experience with the MI100, but the MI25 was basically a Vega 64/Vega Frontier Edition/Radeon Pro WX9100 with a different BIOS. I was able to 3D-model and print a standard shroud for it, although I need to revisit this project because some of the screw holes need adjusting in Fusion 360.

Fusion is really easy to learn. It's free for personal use, and you can find tons of tutorials for free on YouTube.

I wonder if the MI100 has a fan header available like the MI25 did. Probably not, since this is a CDNA product and not just a tweaked consumer card like the MI25 was.

The MI25 has a mini DisplayPort connector hidden behind the PCIe bracket and can be cross-flashed to the WX9100 BIOS in order to function as a normal GPU. I discovered this project here: https://forum.level1techs.com/t/ins...n-a-tesla-m40-for-normal-people-anyway/190104

Edit: Here is the fan I bought for my card: https://www.aliexpress.us/item/2255800216342613.html

 

I have 5 of these lying around; I guess I should flash them (after I basically threw out my dead reference Vega 56 GPUs and fans, of course...).
 
Ok, so the first card was DOA and had to be returned; the second card plugged in and works fine (assuming the latest version of Ubuntu is running and the MI100 drivers from AMD's website are installed in advance). Trying to mess with overclocking on Linux is annoying, so right now all my test data is at stock clocks for the most part :(
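As a quick sanity check after the driver install (my own suggestion, not something from the original post): once AMD's amdgpu/ROCm stack is in place, rocm-smi should list the card. A minimal sketch, assuming rocm-smi is installed and on the PATH:
Code:
# Minimal check that the MI100 is visible after the amdgpu/ROCm install.
# Assumes the rocm-smi utility from AMD's driver stack is on the PATH.
import subprocess

# Print the product name and current temperature for all detected AMD GPUs.
subprocess.run(["rocm-smi", "--showproductname", "--showtemp"], check=True)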
 
Ready to try some DC benchmarking such as Milkyway@Home? It can run under Linux. There is a Milkyway forum where you can share your results, or you can create a new post in our DC forum here or there. Don't want to hijack an eBay auction thread ;)

A few years ago, when the AMD FirePro S9150 was a decent bargain, there was interest, as discussed in one of the Milkyway forums here.

Since you have a high-end card, I wouldn't bother benching with just 1 task running per card. I would suggest running 2, 4, 6, 8, etc. tasks per card (multitasking) and taking the average completion time per task. If you have any questions or need help, feel free to post in the DC forum. To run multiple tasks per GPU card, try here. You just need to create an app_config.xml file with the code below, modifying <gpu_usage>; for example, setting it to 0.10 means you can run 1 / 0.10 = 10 tasks per card. For <cpu_usage>, you can allocate fewer resources since the Milkyway task does not use much CPU.
Code:
<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <gpu_usage>0.10</gpu_usage>
      <cpu_usage>0.05</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
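A usage note from me, not from the post above (paths are the usual BOINC defaults, so adjust for your install): app_config.xml goes in the project's folder under the BOINC data directory, typically something like projects/milkyway.cs.rpi.edu_milkyway/, and the client picks it up after "Options → Read config files" in the Manager or a client restart. And a minimal sketch of the tasks-per-card arithmetic described above:
Code:
# Restates the concurrency math above: tasks per card = 1 / gpu_usage.
def tasks_per_card(gpu_usage: float) -> int:
    return round(1 / gpu_usage)

for usage in (0.50, 0.25, 0.10):
    print(f"gpu_usage={usage:.2f} -> {tasks_per_card(usage)} tasks per card")
# gpu_usage=0.10 -> 10 tasks per card, matching the example config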

Thanks!
 
Yeah, going to try setting it up this weekend. So far it's performing about 50% better than a Radeon VII. I'll probably leave it running Milkyway@home for the foreseeable future. Just have to look up my old BOINC credentials.
 
about 50% better than a Radeon VII
Theoretically, the Radeon VII's FP64 at stock is 3.36 TFLOPS while the MI100's is 11.54 TFLOPS. On paper, the MI100 is 3.43 times faster in FP64 compute than the VII. The Milkyway GPU task uses FP64 heavily. If you can figure out the optimal number of tasks to run per card, I think 3x the performance of a VII is achievable. Right now, one VII can produce about 1.6M PPD, so 4.8M PPD may be achievable. We just need you to verify, lol.
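For reference, the same back-of-the-envelope math as a quick sketch (the numbers are the ones quoted in this thread, not new measurements):
Code:
# Back-of-the-envelope scaling estimate using the figures quoted above.
radeon_vii_fp64 = 3.36    # TFLOPS, stock Radeon VII
mi100_fp64 = 11.54        # TFLOPS, MI100

ratio = mi100_fp64 / radeon_vii_fp64
print(f"MI100 / Radeon VII FP64 ratio: {ratio:.2f}x")              # ~3.43x

vii_ppd = 1.6e6           # ~1.6M points per day on one Radeon VII
print(f"Projected MI100 PPD at ~3x scaling: {3 * vii_ppd:,.0f}")   # ~4,800,000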
 