My calculations were normalized to be per-core. 48 MB of L3 cache across 6 cores means 8 MB per core. Of course, a single core might use more, and reuse can happen per core. That's because there are six cores per socket, and 384 MB / 6 = 64 MB, and 64 / 6 is 10.67 megabytes. Sure. Either way, each CPU socket doesn't have 200-some megabytes of cache available to it. Yep, that's closer to the truth. Thing is, only 10,400 KB is in the processor package, and that's about 1/20th of the 200-something MB number you were using when you made your claim about IBM's engineers and their budget management. Now that you've learned some of the facts, I wonder if you're thinking of re-evaluating your position.

This is a very strong assertion for which you offer no real evidence. There are six sockets per book, and the number of books in the system depends on how many were ordered or installed in the machine. The Redbook for the system describes its architecture; I'm surprised to think you haven't read it.

You're putting words into my mouth in support of your own position. I've made no claim about the success or failure of zEC12 machines. In fact, I'm asking you why you think they're such a failure, in order to educate myself.

I've written plenty of code -- certainly enough to know that someone who's trying to minimize lines of code is working to minimize the wrong thing. (And we're talking about hardware anyway, not software.) You'll want to familiarize yourself with my resume before you make further foolish presumptions.

I'm not trying to understand the paper at AnandTech -- there's no paper at AnandTech, just an article reporting that someone made a blog post. I'm trying to understand the claims you've made in your posts. I'm disappointed that your responses to my questions don't bring any clarity.

What is "the IBM mainframe myth"? I guess I haven't asked about that because it hasn't come up before this point.
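For what it's worth, the per-core arithmetic above is trivial to check. Here's a quick sketch using only the figures quoted in this thread (48 MB L3 per chip, 6 cores per chip, 384 MB of shared cache, 6 sockets per book -- treat these as the thread's numbers, not as a substitute for the Redbook):

```python
# Figures as quoted in this thread (assumptions, not Redbook-verified).
L3_PER_CHIP_MB = 48   # L3 shared across one chip
CORES_PER_CHIP = 6    # cores per socket/chip
SHARED_MB = 384       # the 384 MB figure discussed above
SOCKETS = 6           # sockets per book

l3_per_core = L3_PER_CHIP_MB / CORES_PER_CHIP   # L3 normalized per core
per_socket = SHARED_MB / SOCKETS                # shared cache per socket
per_core = per_socket / CORES_PER_CHIP          # then per core

print(l3_per_core)            # 8.0
print(per_socket)             # 64.0
print(round(per_core, 2))     # 10.67
```

Which is the point: normalized per core, you get about 8 MB of L3 and about 10.67 MB of the shared pool, nowhere near 200-some megabytes per socket.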