Jim Keller leaves AMD

Problem: legacy compatibility. Virtualisation/emulation would hurt performance a lot.

If big game developers supported it, gamers would be the first to switch, especially if there were performance benefits over x86. From there everyone else would follow.
 
Intel, with the backing of both Microsoft and HP, was not able to successfully transition us to Itanium. A non-x86 future ain't in the cards any time soon.
 
VIA still has a license. Odds are that if AMD goes belly up, VIA could start to give Intel a run for their money sooner rather than later.
How do you figure that's even in the realm of possibility? VIA was never even close to competitive, and they haven't worked on any high-performance x86 designs ever, as far as I'm aware.
 
VIA still has a license. Odds are that if AMD goes belly up, VIA could start to give Intel a run for their money sooner rather than later.

As far as embedded x86 goes, DM&P and ZF Micro could also take up the slack.

And then there is IBM, who has been working very closely with nVidia on NVLink for their Power 9 systems.

Not to mention, maybe it's about time to end x86 and transition away from it.

Funniest shit I've read all week. Look out, Intel - here comes VIA!!
 
If all these other companies have the potential to be competitive, why would they wait? Why not make their move now?

Would it really be beneficial for them to wait until AMD dies and Intel has a monopoly? Short of them getting hit with anti-competition rulings, Intel would only end up in a stronger market position to squeeze any newcomers.
 
If big game developers supported it, gamers would be the first to switch, especially if there were performance benefits over x86. From there everyone else would follow.

There wouldn't be performance benefits, at least not if ARM were the alternative. Gaming alone is not enough, and I doubt gamers would enjoy playing slightly older games that slowly. It must be possible to run old x86 applications at acceptable speed before there's any chance.
 
The whole friggen game is and has been rigged for a long time anyway. AMD has had superior tech to their counterparts in the CPU arena before. It's quite the shame that the operating systems and software weren't really there to take advantage.

We are grossly behind where we should be with programming that heavily exploits multithreading and multicore CPUs in everyday software.

Gee... I wonder why...
 
What happened was that Intel and AMD basically turned their CPUs into RISC chips internally while still holding a monopoly on the market, because they were x86 on the outside. No modern CPU is pure x86.

RISC will come back. PowerPC has always been the architecture of the future, and Linux will surpass Windows for gaming and enthusiast use.

Today's PowerPCs are so powerful that they can fully emulate Windows and x86 without a gaming-significant performance hit. The hit would likely be about enough to reduce a (theoretical) CPU as fast as a 12-core Haswell at 4-5 GHz to a 4560K at ~3 GHz.

But people would not need to emulate most things. People who game or use other Windows-only software would do that in the emulator, and it would work comparably to an x86 CPU of the same price. Everything else could run without emulation. And if people started switching to PowerPC in significant numbers, it would probably get game developers and software makers to port their work.

We heard all that during the Pentium 4 debacle, and again when the Apple G5-based workstations hit 12 years back.

I believe you are nothing but a remnant of that same era who would bend over for Apple and AMD.
 
The whole friggen game is and has been rigged for a long time anyway. AMD has had superior tech to their counterparts in the CPU arena before. It's quite the shame that the operating systems and software weren't really there to take advantage.

We are grossly behind where we should be with programming that heavily exploits multithreading and multicore CPUs in everyday software.

Gee... I wonder why...

Yeah.. that's why... lol
 
I wouldn't be surprised if he's looking for a higher role now, but AMD probably couldn't afford him. Zen is probably already done, since it's scheduled to come out next year, so he probably decided to call it at AMD.
 
We are grossly behind where we should be with programming that heavily exploits multithreading and multicore CPUs in everyday software.

I keep hearing this criticism here but I can never get an answer:

What application that isn't already multi-threaded do you think should be multi-threaded, and how much are you willing to pay to make it happen?


*crickets*
 
I keep hearing this criticism here but I can never get an answer:

What application that isn't already multi-threaded do you think should be multi-threaded, and how much are you willing to pay to make it happen?


*crickets*
Yeah, pretty much this. Here is what most people are doing on their PC:

- Internet
- Gaming
- Email
- Word processing
- Spreadsheets

By and large most people are not doing intensive rendering, CAD, or other applications that would benefit from heavily multi-threaded code, and most of those applications are already multi-threaded. If you are doing those workloads you can buy a 6-core Intel CPU for another $300-400 over the regular i5/i7, and for professional use that's not a huge difference.
 
It's OK. You'll have a bigger effect posting bad AMD press in the GPU forum. ;)

Yeah, the CPU division has done so well for so long I hate to see any long-term CPU engineers leave. Something might change that way. /sarc
I have no agenda. I've actually been looking forward to Zen for some time, and hopefully it delivers. There's really not a lot of places to discuss this, and the AMD forum is mostly dead.
 
I keep hearing this criticism here but I can never get an answer:

What application that isn't already multi-threaded do you think should be multi-threaded, and how much are you willing to pay to make it happen?


*crickets*

You keep hearing it because it is true. Multi-threaded, multi-core performance on any application will make a real difference. Otherwise, why do we even bother using more than one core of a GPU at a time?
 
You keep hearing it because it is true. Multi-threaded, multi-core performance on any application will make a real difference. Otherwise, why do we even bother using more than one core of a GPU at a time?

What application do you use that's still single-threaded?

I can only think of Prison Architect at the moment, lol.
 
Plenty of games have launched without multithreaded CPU support. ARMA 3 was a perfect example: I got 12-15 fps on the lowest settings at launch. After enough people complained, the developers patched multithreaded CPU capabilities into the engine. Now I get perfectly acceptable frame rates on the highest settings.

Most games today can only talk to the GPU with one thread at a time. That's why DX12 was invented. There is a ton of room left for innovation in the games industry.
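To make that concrete: the DX12-style fix is to let every CPU thread record its own command list independently, then submit the whole batch at once. Here's a minimal C sketch of just that threading pattern, using plain pthreads and a made-up CommandList type rather than the real Direct3D API:

```c
/* Sketch of the DX12-style pattern, NOT actual Direct3D code:
   each worker thread records commands into its OWN buffer with no
   locking, and the main thread submits them in one batch. Under the
   old model this recording step was effectively single-threaded. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS     4
#define CMDS_PER_THREAD 256

typedef struct {                 /* stand-in for a per-thread command list */
    int commands[CMDS_PER_THREAD];
    int count;
} CommandList;

static CommandList lists[NUM_THREADS];

static void *record_commands(void *arg)
{
    CommandList *cl = arg;
    /* No shared state is touched here, so no mutex is needed:
       this is what lets the recording phase scale across cores. */
    for (int i = 0; i < CMDS_PER_THREAD; i++)
        cl->commands[cl->count++] = i;   /* pretend draw call */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (int t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, record_commands, &lists[t]);
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);

    /* Single submission point, analogous to handing all command
       lists to the GPU queue at once. */
    int total = 0;
    for (int t = 0; t < NUM_THREADS; t++)
        total += lists[t].count;
    printf("submitted %d commands from %d threads\n", total, NUM_THREADS);
    return 0;
}
```

The point is that recording touches no shared state, so it scales with core count; only the final submission is serialised.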
 
You keep hearing it because it is true. Multi-threaded, multi-core performance on any application will make a real difference. Otherwise, why do we even bother using more than one core of a GPU at a time?

Like I asked, tell me the applications that would benefit from multi-threading that aren't already multi-threaded.

Then we can look at the cost and determine if people are willing to pay for it.
 
Concurrency is frequently a hard problem. Yes, there are numerous situations where things can be readily multi-threaded (as the lineage of dependencies is fairly straightforward), but given the option, I still want a processor that's a beast in single-threaded operations. (Or something like a big.LITTLE architecture, but that makes for tough scheduling!)
 
Concurrency is frequently a hard problem. Yes, there are numerous situations where things can be readily multi-threaded (as the lineage of dependencies is fairly straightforward), but given the option, I still want a processor that's a beast in single-threaded operations. (Or something like a big.LITTLE architecture, but that makes for tough scheduling!)

This is true. As someone who's done some of it in C for classwork, and also dabbled briefly in low-level distributed computing (matrix manipulations on a cluster at my college), single-threaded code is several orders of magnitude simpler than concurrent code. Frankly, some things can't even be made multithreaded; it just won't fit the way the program works. In general, anything that can currently benefit from multithreading is likely already there; after all, if competitors have it in their program and you don't, you're obviously at a disadvantage.
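The standard way to quantify that limit is Amdahl's law: if a fraction $s$ of a program is inherently serial, the best speedup you can ever get from $n$ cores is

$$S(n) = \frac{1}{s + (1-s)/n} \le \frac{1}{s},$$

so a program that is 50% serial tops out at 2x no matter how many cores you throw at it.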

It's not like a magic switch where you just change a thing here or there and, wham, it's multithreaded. It's often an entire paradigm switch, and you need several asynchronous parts that can work independently of each other. Any communication between them carries a high cost (in parallel and distributed computing, interprocess communication can be a huge bottleneck), and if you have too many sections that require shared resources you may end up with a slower program. It's a top-down design difference.

Frankly, when I code I prefer single-threaded if at all possible. And if it's something that can be done entirely asynchronously, then I just spawn multiple processes that are each single-threaded and let the OS handle where each one goes. For communication between tasks there are databases, MPI, SysV IPC, Unix sockets, sockets in general, etc., but those all generally carry a high communication cost. If you need true speed, then you need a multithreaded C program with critical sections and all this other crap. It needs to be well-planned, too. Not easy. >_>;
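For what it's worth, here is a toy version of that "multithreaded C program with critical sections": four threads increment a shared counter under a mutex. The lock is what keeps the result correct, and it is also exactly where all the threads pile up, which is the communication cost described above. (Thread and iteration counts are arbitrary.)

```c
/* Toy critical-section example: without the mutex the final count is
   wrong (a data race); with it, the lock itself becomes the serialising
   bottleneck. Build with: cc -pthread demo.c */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define ITERATIONS  1000000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock);   /* all threads serialise here */
        counter++;                   /* the critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (int t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, worker, NULL);
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);

    printf("counter = %ld (expected %ld)\n",
           counter, (long)NUM_THREADS * ITERATIONS);
    return 0;
}
```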
 