Discussion in 'AMD Flavor' started by grtitan, May 10, 2017.
I am using some private pools too, but the public ones (Zpool, MPH, Hash Refinery) are all good.
Those are all multi-mine pools? Auto-switching?
Any point in getting a 480 for mining at this point (a used, cheap 4GB one)?
Yep, all of those are.
No, they won't do multi-mining with a 480; they need to be CUDA-based.
Intel will announce Intel CPUs with Radeon Graphics at CES according to VideoCardz, and apparently WikiChip leaked the processor models, speeds, etc.
OK. I won't read the article then. LOL!
How that chip will not throttle like hell is beyond me. Seems like the CPU is basically a lower-clocked version of the i7-7700. Let's say 35W, since it's (I'm assuming) Coffee Lake. So there's 65W left for the Vega GPU.
According to this, performance is expected to be better than a 1060 Max-Q, which is around 60-70W TDP, within the same TDP envelope.
I'm really curious as to how they managed to achieve this feat, considering how far behind they are in efficiency.
The 1060 wasn't the best perf/watt chip in nV's lineup; in nV's lineup, the performance-tier chips give the best perf/watt.
When chips are designed, there is a certain range of MHz, voltage, and power usage that is best for them; going lower or higher than that envelope kills efficiency. For Vega, they are still using HBM to manage this feat, if they manage it at all. I still think it won't beat the GTX 1060 Max-Q outright.
But if it does, that just shows the Vega architecture wasn't made for higher clocks and voltages. Just look at Polaris: percentage-wise, it's closer to the GTX 1060 in perf/watt than Vega is to the 1070 or 1080.
Yeah, it was well known that AMD had to push Vega clocks like hell to get them competitive with the 1070 and 1080, and the perf/watt fell off a cliff - but at least the chips could keep clocking higher and higher, at the expense of needing ever more cooling.
With lower clocks the efficiency jumps way up, so they may be keeping the mobile version in the sweet spot.
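A quick back-of-the-envelope sketch of why backing off clocks helps perf/watt so much. It assumes the standard first-order model (dynamic power scales with C·V²·f, performance scales with f); the voltage/frequency pairs are made-up illustrative values, not real Vega numbers.

```python
# Rough model: dynamic power ~ C * V^2 * f, performance ~ f.
# Voltage/frequency pairs below are hypothetical, not measured Vega data.

def perf_per_watt(freq_mhz, volts, cap=1.0):
    power = cap * volts**2 * freq_mhz   # arbitrary units
    return freq_mhz / power             # reduces to 1 / (cap * V^2)

high = perf_per_watt(1600, 1.20)  # pushed clocks need extra voltage
low  = perf_per_watt(1300, 0.90)  # sweet-spot clocks run at lower voltage

print(round(low / high, 2))  # -> 1.78: the lower V/f point is ~78% more efficient
```

Since perf/watt collapses to 1/V² in this model, even a modest voltage drop (1.20V to 0.90V here) buys a large efficiency gain, which is the whole argument for a mobile sweet-spot part.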
They didn't, Intel's numbers are crap. Let's see this thing in a proper review.
I can run my LC Vega at ~1300MHz @ 0.900V, so I can only assume efficiency is quite good in that range.
Ah here we go. HBCC in action.
Needs more CPUs.
That is marvelous.
_mockingbird was correct,
the reference design is being phased out.
It always gets phased out; he just stated the norm.
Well, it seems that I need to read a mining guide or flip my card on eBay for a ridiculous price. What coin can I mine by myself? I have a Vega 64 and a Ryzen 1700. There is an RX 480 somewhere around here. Can I install them together and mine? I think solo mining would be fun if I can track my coin prices over time. But if a pool is the way to go, then I guess I can do that.
Any proof of work coin that isn't dominated by ASICs will do. Right now that means anything other than SHA256 (Bitcoin algo) and Scrypt (Litecoin, Dogecoin algo). Play around with altcoin mining calculators like WhatToMine to get an idea of what your daily revenue will be. You definitely want to be on a mining pool with a single GPU. You'd be waiting weeks for a payout as a single GPU solo miner.
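The "waiting weeks for a payout" point is easy to put in numbers: a solo miner's expected time per block is (network hashrate / your hashrate) × average block time. A small sketch, with hypothetical hashrates and block time rather than live network stats:

```python
# Expected solo-mining interval between block rewards.
# All numbers are hypothetical examples, not current network figures.

def expected_days_per_block(my_hashrate, network_hashrate, block_time_s):
    blocks_per_day = 86400 / block_time_s        # blocks the network finds daily
    my_share = my_hashrate / network_hashrate    # fraction of blocks I'd win
    return 1 / (my_share * blocks_per_day)

# e.g. one GPU doing ~2 kH/s on a CryptoNight-style network at 50 MH/s
# with a 2-minute block time:
days = expected_days_per_block(2_000, 50_000_000, 120)
print(round(days, 1))  # -> 34.7 days between blocks, on average
```

And that is just the average; variance means you could easily wait far longer, which is exactly why a single-GPU miner wants a pool's steady small payouts instead.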
Yes, you can install two different cards in the same desktop and it works fine as long as your power supply can handle the load. They don't even need to be on x16 PCI-E ports. Lesser known altcoins will usually generate more revenue for small scale miners since they don't get widespread attention from the altcoin mining calculators. Less mining competition = more rewards.
Any Eth-based coin or XMR-based coin will do just fine.
PS: if you have any questions about mining, PM me, I'm more than happy to help. Getting things working for the first time can be frustrating, because it's not like miners were made for ease of use, lol. Well, outside of the NiceHash miner; the others take time to get used to and to get the best out of them. Having multiple miners going at once is a strain on the network; small bits of data from multiple miners "clog" up modems, network cards, and adapters, so look into auto-restarting using task settings or other programs.
Even using NiceHash, if you only have one or two rigs mining, I would download the free version of Awesome Miner and use that instead of the NiceHash miner. It will give you a much more transparent view of what is going on, and you can compare what it says against what you get from NiceHash to see your pool luck and other stats.
With my HTPC, which has a Radeon FE and a 1700X (no OC and no XFR, 3.4GHz), NiceHash gets ~$13/day at the moment. I use Awesome Miner too, but I recommend just using NiceHash right off the bat as you explore other mining methods. I had to spend a considerable amount of time learning the ins and outs of Awesome Miner beyond YouTube videos.
~15 min mining guide for a Vega/Ryzen setup; as in, you will be benchmarking in 15 minutes, then mining when done:
Download the NiceHash legacy miner (this works better with Nvidia cards as well)
No need to download any of the other files
Unzip folder to wherever you want it - no need to install, everything is self-contained
Create an account at Nicehash
I recommend using the NiceHash wallet, meaning they will hold your mining rewards until you transfer them. Yes, they were hacked, but now you can transfer daily to Coinbase for free (no fees)
If I remember right, you create the wallet after logging in, under the Wallet menu at the top; you will use that wallet address as your mining address in NiceHash
Open up the directory you saved NiceHash in and run the NiceHash program; it will automatically download the miners and install them
Depending on the antivirus software you use, it may flag some of the items. Windows Defender no longer flags anything, but other antivirus programs may erase files, etc., so you will need to exempt the folder if that is the case and start over (some people do not feel comfortable with this and use another machine for more sensitive usage; I have no problem running NiceHash and opening up my bank account, I may pay the price lol)
At the top of the NiceHash program is the server you want to use for mining. I just use the USA one, but if you are in Europe, etc., pick the one closest to you
Put in the wallet address right below that, the same one given to you at Nicehash.com
Go to benchmarks and hit Start (I just use the Standard setting rather than Precise most of the time)
It will benchmark all the algorithms applicable to your devices, so it knows which one gives you the highest rate and you get paid more (so will they)
You can configure which devices you are going to mine with via the checkboxes on the opening screen, like the 1700 and RX Vega
Start mining by hitting the Start button, which will open individual mining console windows that show information as they mine
You can see $/day etc right on the program interface
You can go to Nicehash.com and see your stats, mining rigs, devices etc.
Currently I am making $59/day at Nicehash according to Nicehash.com but that will vary.
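Conceptually, the benchmark/auto-switch step in the guide above boils down to: each algorithm has a pay rate per unit of hashrate, so the client mines whichever algorithm maximizes hashrate × pay rate. A sketch of that selection logic, with invented hashrates and pay rates (not real NiceHash figures):

```python
# Hypothetical benchmark results for one device (units differ per algorithm).
benchmarks = {
    "daggerhashimoto": 38e6,   # 38 MH/s
    "cryptonight":     2.0e3,  # 2 kH/s
    "equihash":        440.0,  # 440 Sol/s
}

# Hypothetical pay rates in BTC per hash-unit per day.
pay_rates = {
    "daggerhashimoto": 3.0e-12,
    "cryptonight":     5.5e-8,
    "equihash":        2.9e-7,
}

def best_algo(benchmarks, pay_rates):
    # Pick the algorithm with the highest expected daily payout.
    return max(benchmarks, key=lambda a: benchmarks[a] * pay_rates[a])

print(best_algo(benchmarks, pay_rates))  # -> equihash
```

The real client re-evaluates this as pay rates move, which is why the console windows switch algorithms over the day.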
I just got into using NiceHash. It's definitely the easiest way to get started.
Right now I have two rigs running (the 2 in my sig) and I'm getting about $32/day or $950/month.
Only been running for less than 2 days (and for part of one day there was a brown-out and my machines were off until I got home) and I've earned $40 so far.
My room is burning hot now, though. Maybe have to consider leaving the A/C on all day (with the associated electricity cost) but I guess it's not that dangerous right now.
Put a fan in the window and blow the air out; that is the cheapest way to keep things cool. Using your AC just burns profits.
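The fan-vs-AC point in numbers: net profit is revenue minus electricity, and whatever cooling you add on top of the rig's own draw comes straight out of the margin. All of the wattages, rates, and revenue below are hypothetical:

```python
# Net mining profit after electricity, including cooling overhead.
# All figures are hypothetical examples.

def net_profit(revenue_per_day, rig_watts, cooling_watts, usd_per_kwh):
    kwh_per_day = (rig_watts + cooling_watts) * 24 / 1000
    return revenue_per_day - kwh_per_day * usd_per_kwh

fan = net_profit(13.0, 400, 50, 0.12)    # window fan: ~50 W extra
ac  = net_profit(13.0, 400, 900, 0.12)   # window AC: ~900 W extra

print(round(fan, 2), round(ac, 2))  # -> 11.7 9.26
```

At these example rates the AC eats about $2.40/day more than the fan, which adds up fast on a small rig's margin.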
Well, there are other things there too: Vega on 12nm, and Navi being pushed out till after 7nm Vega, so Navi is pretty much 7nm now too. In all likelihood we will not see it out this year, since 7nm Vega will be end of this year. AMD/RTG will be a non-factor in gaming this coming year. All AMD has for the foreseeable future is Vega, and that is exactly what he stated. Scott even went on to say the roadmap is not a hard roadmap; in other words, things have changed and are still changing. That is not confidence, that is a lot of uncertainty when they are already waist-deep in these designs, something else _mockingbird stated too.
I stopped watching when he said "Vega's performance has gotten stronger uh compared to the competition over time.." I would probably have punched him in the nuts and left if I was the guy with the mic...
Well considering you can't find any Nvidia cards above a 1050 Ti in stock, maybe there is no longer competition.
This thread is so far off track now. Put it in the shitter. Fk this.
Can't find any 1050 Ti under $250 either.
This shit is getting ridiculous.
So will there be a 12nm Vega? We heard there will be a 7nm Instinct Vega at CES (no timeline). Last year AMD said there would be 12nm Vegas, but nothing has been revealed at CES so far. I guess there is zero need for any hype since they can't make enough of anything to saturate the market. The first of Nvidia or AMD to give the OEMs specs for a much-lower-cost mining card will probably make out big. Also, I would think there would be more failed Vegas that still work with fewer than 56 compute units (like 52, 48, or whatever) that could be used for discounted mining cards. In fact, AMD/Nvidia could make cards that do not even fit into a typical PCIe slot: just a 1x PCIe connector, with mounting not even on the ends, in other words making it easier to put several together (maybe even hooking them together).
Anyway, if AMD and Nvidia do not protect the gaming market with available hardware, it will be stifled and potentially lost. Right now there are virtually no gaming cards; they are mining cards first and gaming cards second.
Or, you know, AMD/NV can just produce enough cards to meet demand.
Can't do that overnight...
Obviously they can't, unless they just want to lose money by not selling more cards. It could be memory or other components limiting supplies; for Vega, HBM2 and the packaging are probably very limiting.
It also takes time to ramp up production; there are many companies involved in getting the allocation of parts......
AMD kinda stated that at CES... there will be a 12nm Vega
I'm not sure if they mean 12nm as in Vega + Intel, but the way I took it is that desktop Vega will be shrunk to GF 12nm. If the process is good and similar in price to 14nm, it would be wise for them to do it, as per-chip cost drops.
razor1, I'm pretty sure GF 12nm is just a refined 14nm and there's not much of a shrink there to lower chip cost. I may be wrong, but the real shrink is to pseudo 7nm later on.
Holy hell... I wouldn't want to see the amount of electricity, let alone the heat output, of that beast, but sexy it is ^.^
It does; it has fewer layers, so it should be vastly cheaper per chip, we're talking about 25% less. It also has a better power profile. So that will really help Vega in the short term (not against nV's next gen, but the current one).
Cool. Bring on cheaper video cards!
There is no 12nm Vega, just the 7nm Instinct one.
Only AMD knows this. All I know is I want a new GPU; RX 500 cards are not currently available, at least the ones I was eyeballing, and Vega is beyond my budget range (as well as what I can effectively use without bottlenecking the crap out of it).
I want to "replace" my Radeon 7870, so even if they were available, I don't want to drop a tier, i.e. RX 460/560, though the cost was right; a 470/570 would be a replacement in a higher performance bracket, but also a good chunk more money (instead of around the same price level as I paid way back when ^.^).
I am hopeful AMD releases a "modern" 7870, i.e. 1280 shaders, 80+ TMUs / 32+ ROPs, on a 256-bit bus with 4+GB of memory, at ~150W or less loaded. For its time, it really was a well-engineered "performance" GPU.
As for the fewer-layers bit, that could mean (in theory) the ability to clock a bit higher (less susceptibility to electrical noise, and less "metal" probably has an impact too, albeit likely a small one). Hopefully they don't reduce too many layers; Intel already did that, and those buying the chips suffer, IMO, from chips bending that should not be, or possibly higher-than-they-should-be temps (not enough wiring to keep voltage delivery from being too centralized?).
GF 12nm is probably no different from what AMD did many years ago going from the 4870 to the 4890: cut some of the redundancy so they could increase the transistor count and clock rates... I would really like to see one of these makers truly utilize a die shrink to remake the exact same chip.
For example, an RX 580: keep the same transistor count etc., but the shrink would probably allow higher clocks at a reduction in power and heat (lower cost, since not as much die is "wasted" on transistors).
Everyone (Intel, TSMC, GF, Samsung, etc.) always seems to say "a die shrink from Y to X allows Z amount of performance increase or lower power consumption at a reduced price," yet they always seem to end up using more power because they pack in a bunch of extra transistors, don't usually end up significantly faster, just pricier, seem to get "hotter", and rarely end up with any noticeable extra features either.