AMD still clawing share from Intel

That's assuming people will need vast amounts of compute beyond future desktop CPUs, which I doubt.

Right now, what are most people taxing their CPUs with?

I am still running an ancient C2Q, and outside of playing games and doing an occasional video encode, it usually idles below 10% usage while I surf the web, watch videos, do taxes, or do personal productivity work (LibreOffice).

I really only need a new CPU to play more modern games.

You don't have a spouse who fancies himself or herself an amateur photographer and wants to edit large image files and clean them up by hand using a tool like GIMP, or who wants to re-encode video footage to add transitions and effects after the fact. Trust me, those people want CPU power but just don't know it.

And really, why are you even participating in this thread? I don't mean that to sound derogatory, but I fail to see your buy-in on why you would care about these CPUs in the least. You clearly are missing out on some serious upgrades but don't really care. I appreciate that. It's awesome, really. But I scratch my head over why you would give half a turd about the new CPUs until you hit an OS incompatibility.
 
And really, why are you even participating in this thread? I don't mean that to sound derogatory, but I fail to see your buy-in on why you would care about these CPUs in the least. You clearly are missing out on some serious upgrades but don't really care. I appreciate that. It's awesome, really. But I scratch my head over why you would give half a turd about the new CPUs until you hit an OS incompatibility.

That does sound derogatory. I am always very interested in technology, whether I am buying soon or not is irrelevant. I also participate in car forums discussing new cars, where no one questions my participation when I have a 10+ year old car.
 
That's assuming people will need vast amounts of compute beyond future desktop CPUs, which I doubt.

Right now, what are most people taxing their CPUs with?

I am still running an ancient C2Q, and outside of playing games and doing an occasional video encode, it usually idles below 10% usage while I surf the web, watch videos, do taxes, or do personal productivity work (LibreOffice).

I really only need a new CPU to play more modern games.


Bruh..... I have a Ryzen 1700 and was amazed at how much faster it was at encoding than an 8350 and a Haswell i7.
I now use x265 even though it's slower than x264 in encoding speed, but I have a Ryzen to make up for that....
I also used a Dell Precision T7400 at work for the last 3 years (still have it at home). It is at least 10 years old now: C2Q-era dual-socket quad-core Xeons with 32GB RAM. That thing is a workhorse with 8 cores (no HT), but it sucks power with its 1000W PSU and two 130W CPUs....
Old quads still run pretty well on an SSD. Can't say the same for dual cores...

You don't know what you're missing out on until you get one.

Also, most client PCs are bogged down with AV, etc.
At my new job, the i7-8650U (4 cores/8 threads) in my laptop runs at 100% every time you open a program or extract a file, and randomly when idle, because it can't keep up with the demands of every single McAfee enterprise product in existence scanning the laptop's SSD. That thing needs 6 cores.
 
An older, larger node is not cheaper to produce on. You get fewer items per wafer compared to a newer node; that's a simple fact. You can argue yields, but yield always improves over time and is an expected, normal cost early on, and it doesn't even remotely mitigate the cost of a new node. If refining a node were as wonderful as you're trying to convince people it is, there wouldn't be a race to new nodes. History and facts refute your argument.

Yes and no.

There are competing factors.

Operating costs are going to be slightly higher on an older process because larger feature sizes mean fewer parts per wafer. On the other hand, the higher yields of a more mature process partially offset this. That said, silicon wafers, while not cheap due to the refining process, still only cost about $500 per 300mm unit. Out of that wafer you get MANY chips, 500-800 units depending on what you are making. That brings the raw-material wafer cost down to $1-2 per functioning CPU, based on yield. So it's not a huge part of the cost.
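As a sanity check on that wafer math, here is a minimal sketch; the wafer price and die count come from the estimates above, and the yield figure is my own placeholder assumption:

```python
# Rough per-die wafer cost, using the post's numbers.
wafer_cost = 500        # USD per 300mm wafer (estimate above)
gross_dies = 650        # candidate dies per wafer (post cites 500-800)
yield_rate = 0.80       # assumed fraction of dies that work (placeholder)

good_dies = gross_dies * yield_rate
cost_per_die = wafer_cost / good_dies
print(f"Raw wafer cost per working die: ${cost_per_die:.2f}")
# -> about $0.96, consistent with the $1-2 range above
```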

Then there are labor and maintenance, which have their own costs.

So, on average your operating costs for your older process are slightly higher, no doubt. The thing is, operating costs are only a tiny portion of the equation.

The real cost, and why CPUs are expensive when they are made from one of the most abundant elements on Earth, is in capital and development. It takes lots of expensive, qualified engineers YEARS to develop a new process, and when they do, there are lots of purchases of expensive equipment to factor in.

The huge cost benefit an older process has is that it is already paid off. With the R&D and capital costs already amortized, that older process is now dirt cheap. You've removed the largest single cost from the equation.
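A toy model makes the point concrete. Every figure below is an illustrative assumption, not real foundry economics; the only point is that the amortized development and equipment charge dwarfs the per-unit operating cost:

```python
# Toy cost model: why a paid-off node is cheap.
def cost_per_chip(rd_capex, chips_over_life, opex_per_chip):
    """Total cost per chip = amortized R&D/equipment + per-unit operating cost."""
    return rd_capex / chips_over_life + opex_per_chip

# Hypothetical numbers: a new node still carrying $5B of R&D/capex,
# vs. an old node that is fully paid off but has slightly worse opex.
new_node = cost_per_chip(rd_capex=5e9, chips_over_life=500e6, opex_per_chip=8.00)
old_node = cost_per_chip(rd_capex=0,   chips_over_life=500e6, opex_per_chip=10.00)

print(f"new node: ${new_node:.2f}/chip, old node: ${old_node:.2f}/chip")
# new node: $18.00/chip, old node: $10.00/chip -- the mature node's higher
# per-unit opex is swamped by the absent capex charge
```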
 
Bruh..... I have a Ryzen 1700 and was amazed at how much faster it was at encoding than an 8350 and a Haswell i7.
I now use x265 even though it's slower than x264 in encoding speed, but I have a Ryzen to make up for that....
I also used a Dell Precision T7400 at work for the last 3 years (still have it at home). It is at least 10 years old now: C2Q-era dual-socket quad-core Xeons with 32GB RAM. That thing is a workhorse with 8 cores (no HT), but it sucks power with its 1000W PSU and two 130W CPUs....
Old quads still run pretty well on an SSD. Can't say the same for dual cores...

I know it would encode much faster, but a couple of points.

1: I used to encode OTA TV, but I've since moved to the boonies with no viable OTA channels, so I might encode once a month now instead of almost daily.
2: Even when I encoded regularly, it ran in the background with no real impact on any other usage except gaming. Encoding ran at low priority, so it really only absorbed idle CPU cycles, and I would often batch it overnight.
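For what it's worth, that kind of overnight low-priority batch is easy to script. A minimal sketch, assuming a POSIX system with ffmpeg built with libx264; the folders and encoder settings are hypothetical:

```python
import os
import subprocess
from pathlib import Path

SRC = Path("~/recordings").expanduser()  # hypothetical input folder
DST = Path("~/encoded").expanduser()
DST.mkdir(exist_ok=True)

def drop_priority():
    os.nice(19)  # runs in the child process: lowest CPU priority

for clip in sorted(SRC.glob("*.ts")):
    out = DST / (clip.stem + ".mkv")
    subprocess.run(
        ["ffmpeg", "-i", str(clip), "-c:v", "libx264", "-crf", "20",
         "-c:a", "copy", str(out)],
        preexec_fn=drop_priority,  # POSIX only; encoder absorbs idle cycles
        check=True,
    )
```

Kick it off before bed and the encodes never fight the foreground for CPU.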
 
That does sound derogatory. I am always very interested in technology, whether I am buying soon or not is irrelevant. I also participate in car forums discussing new cars, where no one questions my participation when I have a 10+ year old car.

Right, and I get that. Again, my apologies if I came across as derogatory. That wasn't the intent.

I question it because I truly think you don't know what you're missing. I went from a heralded i7-2600K to my i7-7700K and it was night and day, even with the same video card. All of the subsystems were upgraded and it literally doubled performance in the things I cared about.

That, and I'm responsible for larger server farms and VMware infrastructure, so all of this hits close to home. We approach this from different avenues and that isn't a bad thing. Just trying to understand where you are coming from is all.

I'm glad you follow tech. But unless you are using it and feeling it and experiencing it day to day, you can't really understand it. I have architects trying to tell me XYZ all the time because they read a sales brochure. Get some hands-on time and build an understanding. You'll only do yourself favors in the long run.

Not to say you are doing that! Just sharing where I am coming from.
 
I'm glad you follow tech. But unless you are using it and feeling it and experiencing it day to day, you can't really understand it.

Ridiculous.

It isn't hard to look at some benchmarks and see how much faster encodes would be, or, even more clearly, to experience that there are games I want to play (Witcher 3) that my system can't handle at all.

For the home user, we are largely arguing about differences between AMD and Intel CPUs that can't be felt, only realized by running a benchmark and proclaiming "mine is better than yours" because of X benchmark.

These arguments are really more in the realm of academic interest than something experienced when comparing state-of-the-art chips.

As far as what your average home user is doing, I would bet it is closer to what I described than running VMware farms. The average home computer likely spends most of its time with its CPU close to idle, locked on Facebook or doing some personal productivity work.
 
Good for AMD, but from a professional perspective I can't chance it, and know many people who feel the same way.

As much as it's no fun, I constantly have to remind myself, "No one's ever been fired for buying Intel." AMD needs a good decade-plus of performance and reliability on par with or better than Intel's before those aren't words to live by in the enterprise world (where the real money is).

I would love to play with more TR and Epyc stuff but outside of a few prosumer jobs it hasn't been feasible.
 
Good for AMD, but from a professional perspective I can't chance it, and know many people who feel the same way.

As much as it's no fun, I constantly have to remind myself, "No one's ever been fired for buying Intel." AMD needs a good decade-plus of performance and reliability on par with or better than Intel's before those aren't words to live by in the enterprise world (where the real money is).

I would love to play with more TR and Epyc stuff but outside of a few prosumer jobs it hasn't been feasible.
Mindshare is difficult, but remember Intel has a long list of spectacular failures, especially in the last few years. C2000 ring a bell? I can't imagine having invested in a cluster of servers or network appliances based on that chip, only to see them all bricked right after warranty has expired. And then you have the continual stream of security issues with their CPUs that still do not have a hardware fix. From a reliability standpoint, Intel isn't doing too well right now. What recent spectacular problems have been plaguing AMD server products lately?
 
Mindshare is difficult, but remember Intel has a long list of spectacular failures, especially in the last few years. C2000 ring a bell? I can't imagine having invested in a cluster of servers or network appliances based on that chip, only to see them all bricked right after warranty has expired. And then you have the continual stream of security issues with their CPUs that still do not have a hardware fix. From a reliability standpoint, Intel isn't doing too well right now. What recent spectacular problems have been plaguing AMD server products lately?

Who knows? When you only have 3% market share, it's hard to tell if you actually have fewer problems or if no one uses your products, so no one finds your issues.

I'm not saying Intel is the best or even better, but your example of Intel incidents kind of proves my point. No one is getting in trouble for having recommended and implemented Intel servers with Spectre and Meltdown vulnerabilities. If those same admins had stuck their necks out and pushed for AMD-based systems, and then they had this issue, they would be in for a world of backlash: "Why didn't you just use Intel? Everyone uses Intel. This is your fault."

I'm not saying it's right, but it is the reality of the business.
 
Good for AMD, but from a professional perspective I can't chance it, and know many people who feel the same way.

As much as it's no fun, I constantly have to remind myself, "No one's ever been fired for buying Intel." AMD needs a good decade-plus of performance and reliability on par with or better than Intel's before those aren't words to live by in the enterprise world (where the real money is).

I would love to play with more TR and Epyc stuff but outside of a few prosumer jobs it hasn't been feasible.

I understand not switching to a brand that would be outright better in every performance metric when you have higher-ups who swear by Intel or who watch everything you do to find reasons to give you grief, or when you would have to do extensive testing to make sure it would work without any compatibility issues in your environment. But where those issues aren't present, like for home users or small businesses, you would be stupid to pay more for less. That phrase should not be a reason at all to only buy Intel; look at other reasons... If non-tech execs can't trust IT to make tech decisions, then why have IT?
Like if I was just starting a business and I needed a high-end, cost-effective server, I would go with a 7nm Rome, because there will probably be nothing that can come close to its price, performance, and power efficiency (assuming that this Intel 56-core monster even exists by then; if not, then it's even worse for Intel, and assuming AMD doesn't price their 7nm Rome CPUs as absurdly as Intel does).
Intel failed hard too (Netburst). If I had an "IT" guy recommend a user get a P4 over an Athlon 64, I'd fire him (and make that phrase false, lol!).
Same goes for AMD: if I had an "IT" guy recommend a Bulldozer over a 2600K for only playing games, I would get rid of him.

If I am a home user encoding videos, multitasking, gaming, all of the above..... I am going to be hitting the F5 button on the day Zen 2 comes out.....

I mean, come on, Microsoft and Amazon are rolling out EPYC cloud servers. They wouldn't if they weren't worth it.
If I were a large enterprise, I'd be pissed at Intel right now: Meltdown, Spectre, plus the others, losing lots of server CPU performance to those "fixes" that force me to upgrade sooner than planned, and the new CPUs are still vulnerable.
That would make me think long and hard about 7nm Rome (which I believe has none of those vulnerabilities), and I would be moving mountains to switch to AMD.
 
Who knows? When you only have 3% market share, it's hard to tell if you actually have fewer problems or if no one uses your products, so no one finds your issues.

I'm not saying Intel is the best or even better, but your example of Intel incidents kind of proves my point. No one is getting in trouble for having recommended and implemented Intel servers with Spectre and Meltdown vulnerabilities. If those same admins had stuck their necks out and pushed for AMD-based systems, and then they had this issue, they would be in for a world of backlash: "Why didn't you just use Intel? Everyone uses Intel. This is your fault."

I'm not saying it's right, but it is the reality of the business.


A good answer to that would be "Because Intel sucked balls at the time for our needs," lol!
Let's not play the "Why didn't you just..." game.

I'd say Intel has overall been "better" longer than AMD; I'd also say that AMD is about to pass them up in being "better" too.
Businesses move slower than consumers, but businesses are seeing the pitfalls Intel keeps stepping in and are buying AMD.
I guess time will tell... Will Intel step up, or will they resort to anti-competitive practices?
 
I get where you are coming from. It's completely logical and spoken like an IT person who hangs out on [H] forum. In an environment where that is my target audience I'm all on board with the best product for your needs.

Unfortunately, in my experience that's not the customer who comes to me. "You need to replace equipment along with 97% of the world because a flaw was identified in the chips everyone uses" goes over way better than "You need to replace equipment along with 3% of the world because I convinced you it was better, even though 97% of the world knew better."
 
I get where you are coming from. It's completely logical and spoken like an IT person who hangs out on [H] forum. In an environment where that is my target audience I'm all on board with the best product for your needs.

Unfortunately, in my experience that's not the customer who comes to me. "You need to replace equipment along with 97% of the world because a flaw was identified in the chips everyone uses" goes over way better than "You need to replace equipment along with 3% of the world because I convinced you it was better, even though 97% of the world knew better."

It seems like the only way AMD can come back or be on par with Intel in market share is if people can get fired for buying Intel.
However, Intel is on a sharp downward trend as of late... (not on products, but as a company, as far as trust goes)
 
It seems like the only way AMD can come back or be on par with Intel in market share is if people can get fired for buying Intel.

Here's a fun hypothetical, since the ARM crowd hasn't shown up yet........ Even if AMD does well, what are the odds ARM or RISC-V takes over the world before AMD manages to grab relevant market share from Intel? MUAHAHAHAH
 
I would love to add their EPYC chips to our VMware environment, but then we couldn't vMotion (live migrate) between the older Intel hosts and the new AMD ones.

Plus, I don't even know if Cisco is going to release AMD chips into their UCS blade environment.

I sure as hell would be making a case for them to my boss if we were in the market for another complete forklift.
 
Yes, but it will be hard to justify for PC consumers. Really, I bet in a couple of years we will be back to stagnation in the PC CPU market, though we are probably already past overkill for most people.
For 500 of my 600-odd users, an Intel N3350 is more than enough. It is far cheaper and easier to manage heavy lifting done on large servers through Citrix or RDP than it is to deploy individual expensive, fragile machines to them.
 
I would love to add their EPYC chips to our VMware environment, but then we couldn't vMotion (live migrate) between the older Intel hosts and the new AMD ones.

Plus, I don't even know if Cisco is going to release AMD chips into their UCS blade environment.

I sure as hell would be making a case for them to my boss if we were in the market for another complete forklift.
What we did to migrate was just create a new VM on the new EPYC server with the same configuration, then copy the VHDD from the old Xeon server to a NAS, then copy it from the NAS to the new server. For most of them it took little more than 30 minutes, and they lit right up on the new host with no problems, outside of a few programs that registered a change in the hardware MAC address, which required us to reapply our licenses.

EDIT*

We were also able to successfully do this with our Hyper-V hosts, moving from one to the other, except instead of doing an export we did a replication from the Xeons to the EPYCs. After a day or two of successful replications, we just waited until after hours and did an unplanned failover, and they took right on going. One of the VMs in that case didn't work correctly, but we were able to fix it by copying the VHDD to a USB HDD and pasting it over top of the one the replication process made. Once that was done, we just had to move them from the replication folder into the live folder and update the VMs' settings accordingly, and there were zero issues from that point onward.
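If anyone wants to script the cold-copy step, here is a minimal sketch of staging a VHDD through a NAS with a hash check to catch corruption in transit. All paths and names are hypothetical, and creating the VM on the new host (and re-licensing anything that notices the new MAC) is still manual:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path, chunk=1 << 20) -> str:
    """Hash a large file in 1MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

src   = Path(r"\\old-xeon-host\vms\app01.vhdx")    # hypothetical UNC paths
stage = Path(r"\\nas\migration\app01.vhdx")
dst   = Path(r"\\new-epyc-host\vms\app01.vhdx")

shutil.copy2(src, stage)   # old host -> NAS
shutil.copy2(stage, dst)   # NAS -> new host
assert sha256(src) == sha256(dst), "copy corrupted in transit"
```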

Cisco already has EPYCs available in the UCS C series, in the C125 M5.
 
It certainly did with Bulldozer. Not sure why AMD believed it could win the game of low IPC + high frequency.

Thankfully they abandoned that idea before it bankrupted the company.
 
Anyone else remember Netburst? History repeats itself.

Let's hope not, because the last thing we want is another Intel bribe to manufacturers. (History repeating itself, I mean.) I mean, Intel has some really good processors this time around compared to the Netburst era.
 
It certainly did with Bulldozer. Not sure why AMD believed it could win the game of low IPC + high frequency.

Thankfully they abandoned that idea before it bankrupted the company.

Honestly, I do not think they expected the FX series of processors to have such low IPC compared to what they should have had. However, by that point it was too late and they had to go with what they had.
 
Honestly, I do not think they expected the FX series of processors to have such low IPC compared to what they should have had. However, by that point it was too late and they had to go with what they had.

I still like the idea of having two hardware cores share an FPU; of course they marketed those cores as full cores, which, whatever... but I earnestly expected each module to be far faster than it turned out to be. When it turned out that those modules were not only slower than a single Intel core but also slower than the STARS cores they were meant to replace, well, there went nearly a decade of shipping uncompetitive products for pennies.
 
Cisco already has EPYCs available in the UCS C series, in the C125 M5.

https://www.cisco.com/c/en/us/produ...s-c4200-series-rack-server-chassis/index.html

Yup. But we've invested in the 'B' blades.

https://www.cisco.com/c/en/us/produ...mputing/ucs-b-series-blade-servers/index.html

If you're using vCenter and a SAN with datastores that both your AMD and Intel clusters can see, it should be as easy as powering off the VM and forcing it to migrate compute resources. Depending on your vMotion network speed, you could potentially have a downtime of only a few seconds before being able to power the VM back on.

If we had a really large environment with many clusters of both AMD and Intel, I wouldn't have a problem. But our current production setup is just 9 blades in 2 clusters, so we really want everything to be able to migrate without downtime.
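For what it's worth, the power-off-and-relocate approach can be scripted against vCenter. A minimal sketch using pyVmomi (VMware's Python SDK); every hostname, VM name, and credential below is hypothetical, and it assumes shared datastores both clusters can see, as described above:

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Hypothetical vCenter and credentials; depending on pyVmomi version you
# may also need to pass an sslContext for self-signed certs.
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="secret")
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vms.view if v.name == "app01")           # hypothetical VM

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
target = next(h for h in hosts.view if h.name == "epyc01.example.local")

if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    WaitForTask(vm.PowerOffVM_Task())                         # downtime starts
WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(host=target)))
WaitForTask(vm.PowerOnVM_Task())                              # downtime ends
Disconnect(si)
```

The VM is only down for the power cycle plus the relocate, since the disks never leave the shared datastore.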
 
https://www.cisco.com/c/en/us/produ...s-c4200-series-rack-server-chassis/index.html

Yup. But we've invested in the 'B' blades.

https://www.cisco.com/c/en/us/produ...mputing/ucs-b-series-blade-servers/index.html

If you're using vCenter and a SAN with datastores that both your AMD and Intel clusters can see, it should be as easy as powering off the VM and forcing it to migrate compute resources. Depending on your vMotion network speed, you could potentially have a downtime of only a few seconds before being able to power the VM back on.

If we had a really large environment with many clusters of both AMD and Intel, I wouldn't have a problem. But our current production setup is just 9 blades in 2 clusters, so we really want everything to be able to migrate without downtime.

Mine are just standalone 2U units, no blades for me. Maybe some time in the future, but I am finding we are migrating more to data centres than to in-house stuff. Microsoft is making Azure too cheap for 90% of our processes to bother with running them in house any more. So unless that changes, my next upgrade cycle will only be the GPU rendering farms for CAD and animation work.
 
Let's hope not, because the last thing we want is another Intel bribe to manufacturers. (History repeating itself, I mean.) I mean, Intel has some really good processors this time around compared to the Netburst era.

Yes, let's hope it doesn't go there. Intel got spanked for that one, but I'm sure they made more than they paid out. I'm not so sure the market could support that now; who knows.

And yeah, the Netburst architecture was pretty terrible. Remember how bad the original ones were with the RDRAM? Yuck.
 
And yeah, the Netburst architecture was pretty terrible. Remember how bad the original ones were with the RDRAM? Yuck.

Fun part: until AMD put the memory controller onto the CPU (Athlon 64), Netburst was trading blows with AMD's DDR-based chips. Even more fun? The boards being released for AMD CPUs, and I don't blame this on AMD, were horrific. BIOS bugs and driver bugs and general instability could be had at a roll of the dice.

I ran a few of those mid-era Netburst systems for a while just to get away from VIA chipsets.

But as soon as the Athlon 64 was released, I was out.
 
What we did to migrate was just create a new VM on the new EPYC server with the same configuration, then copy the VHDD from the old Xeon server to a NAS, then copy it from the NAS to the new server. For most of them it took little more than 30 minutes, and they lit right up on the new host with no problems, outside of a few programs that registered a change in the hardware MAC address, which required us to reapply our licenses.

EDIT*

We were also able to successfully do this with our Hyper-V hosts, moving from one to the other, except instead of doing an export we did a replication from the Xeons to the EPYCs. After a day or two of successful replications, we just waited until after hours and did an unplanned failover, and they took right on going. One of the VMs in that case didn't work correctly, but we were able to fix it by copying the VHDD to a USB HDD and pasting it over top of the one the replication process made. Once that was done, we just had to move them from the replication folder into the live folder and update the VMs' settings accordingly, and there were zero issues from that point onward.

Cisco already has EPYCs available in the UCS C series, in the C125 M5.


Wouldn't it be easier to just non-live migrate the VM, since it'll have to be powered off anyway?

Virtual stuff is pretty cool when it's done right.... (At my last job, the boss loved using a Netgear ReadyNAS for the SAN at 1Gb with half the drive bays populated... those sucked... SSDs with 10Gbit ran much better.)
I've never managed environments with more than like 5 hosts...
What do you guys use for storage? SSD or HDD SANs, iSCSI, NFS, 10Gbit, etc.?
 
I've never managed environments with more than like 5 hosts...
What do you guys use for storage? SSD or HDD SANs, iSCSI, NFS, 10Gbit, etc.?

Cisco UCS for compute: 10Gb/s Fabric Interconnects and 10Gb/s Nexus 3K switching.

Each chassis has 40Gb/s of bonded bandwidth, and each host can get up to 20Gb/s of that. All of it is FCoE.

NAS/backups are a Rubrik with 4 bonded 10Gb/s links.

Storage is a Pure all-flash array with (I think) 8 bonded 8Gb FC links.

It's a really dense rack. We've got room for 7 more blades before our 2 chassis are full, with enough space to even add a new chassis. We've also got room to add another pack of disks into our existing Pure AND enough space to add another shelf, plus space for another shelf on the Rubrik if needed.


The DR/HA setup in TN is a motley collection of older 1U Dell and HP hosts, with the same Pure and Rubrik for SAN/backups.

It moves along perfectly well. I'm in no hurry to upgrade to 40Gb/s links. Maybe some more hosts so we can keep our over-provisioning percentage down, but that's about it.
 
Wouldn't it be easier to just non-live migrate the VM, since it'll have to be powered off anyway?

Virtual stuff is pretty cool when it's done right.... (At my last job, the boss loved using a Netgear ReadyNAS for the SAN at 1Gb with half the drive bays populated... those sucked... SSDs with 10Gbit ran much better.)
I've never managed environments with more than like 5 hosts...
What do you guys use for storage? SSD or HDD SANs, iSCSI, NFS, 10Gbit, etc.?
They wouldn't migrate directly; it would toss an error saying that the destination server's architecture was different. The host servers just run 12 2TB SAS drives in RAID 10. I don't have the budget for SSDs from Dell; they cost more than the rest of the system.
 