5.1 GHz Bulldozer OC on Air.

Why is all of this centered around single threaded benchmarks when almost nothing is single threaded anymore?

Because when a 4-core processor goes up against another 4-core processor at the same price point, it comes down to performance per thread as to which one is better.

Multi-threaded benches are cool, but it wouldn't be a direct comparison if you put the 8-core BD against a 4-core (non-HT) SB, price- or performance-wise.

If you strip the comparison down to price/performance, then you are pitting 4v4 and 8v8 (theoretical), so knowing who performs better PER CORE becomes relevant. At that point it all really boils down to IPC and clock rate.

With multi-threaded stuff it might also boil down to whether the threads break down and run evenly on both platforms, but that's really more application dependent than hardware dependent unless the hardware design is really borked.
 
Just saw your post after posting mine. I looked at doing the lower clock but figured the 2500k turbos to 3.7, so that would be the better (more reliable) mark to check. I also only picked the single-thread benchmark to emphasize IPC and what it would take to generate equal performance. If you feel the 3.4 PII would be a better route to measure then please let me know why you feel that way. Thanks for the post :D

980be to 2500k makes sense for single threaded - I hadn't thought about turbo ratios.

I thought the PII 3.4 GHz versus 2500k was a better comparison since they are (almost) equally clocked quad cores, so it's a 4-core for 4-core comparison without those pesky turbo multipliers kicking in. :) For IPC, I think looking at a full suite of benchmarks, both threaded and non-threaded, works better than picking one single-threaded benchmark to judge IPC.
 
Yeah agreed. AMD had a massive lead back then. A64 was eating P4s and Pentium Ds for breakfast, lunch, and dinner. But it seems they got lazy and just kept using the same architecture thinking it would beat anything Intel had. I mean, I remember AM2 coming out and the 939 CPUs beat the AM2 CPUs (the FX-59 on 939, I think, beat the FX-60 or 62 on AM2 quite easily). AMD really could have kept dominating if they had come up with a better architecture back then.

Why do people think AMD was lazy? When a company has a small (compared to Intel) budget for R&D, you can't expect there to be a lot of concurrent design projects. If something goes wrong on one of the projects, you can either:
A. Scrap it entirely (the original K9 and K10 projects) and start on something else
B. Pull resources from one project to another (which is what happened during Phenom development, with resources shifted to Bulldozer).

I could see it if AMD came out and said, "We're halting development on new core technologies right now, because what we currently have is good enough." That would be lazy.

And then, you also have problems with the manufacturing side, like what happened to Thoroughbred A (metal layer fix) and Brisbane (slower cache than Windsor).
 
980be to 2500k makes sense for single threaded - I hadn't thought about turbo ratios.

I thought the PII 3.4 GHz versus 2500k was a better comparison since they are (almost) equally clocked quad cores, so it's a 4-core for 4-core comparison without those pesky turbo multipliers kicking in. :) For IPC, I think looking at a full suite of benchmarks, both threaded and non-threaded, works better than picking one single-threaded benchmark to judge IPC.

Aye, but that's another reason why I picked the single thread instead of multi: I could be reasonably assured that it would be 3.7 vs 3.7 instead of "turbo doesn't kick in with more than 2 cores used" or some other funky variable. Also, the Intel proc beat AMD up pretty badly in that particular BM, at about 35% higher performance. So I figured if I could "worst case scenario" the situation just to figure out what it would take for BD to catch up to SB, that would give me a better idea of what kind of overall clock I would need in BD to match SB performance.

Chances are that with the architecture change BD could be significantly faster or slower than PII in multi-threaded applications, with the resource sharing and all. To that effect I would think it premature to try and compare multi-threaded scores until we find out how BD even handles multi-threaded apps.

Going by worst-case, if PII is currently running about 35% behind SB in IPC and BD cuts that down to 15%, then I think it would be quite reasonable to expect the 4-core BD to be very competitive with SB at stock clocks. **** Edit for emphasis: AT STOCK CLOCKS, NOT CLOCK FOR CLOCK ****

Interestingly, Tom's just put out a new set of charts where a bunch of different CPUs were tested as single-core parts and normalized to 3.0 GHz. As a general observation, it seems that the SB arch runs about 20-35% faster than PII on average. It stands to reason, then, that if BD improves performance by 20% and they can get higher stock clocks than Intel, they will have a competitive product.
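To put rough numbers on that, here's a back-of-the-envelope sketch in C, just to make the arithmetic explicit. The IPC ratios are hypothetical placeholders, since nobody outside AMD has real BD figures yet:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical illustration only -- none of these are measured BD numbers. */
    double sb_clock_ghz = 3.7;   /* SB clock to match (e.g. 2500k turbo)         */
    double sb_ipc_rel   = 1.35;  /* SB IPC relative to PII = 1.00 (the ~35% gap) */
    double bd_ipc_rel   = 1.20;  /* assumed BD IPC if it closes most of that gap */

    /* Per-thread performance ~ IPC x clock, so the clock BD would need is: */
    double bd_clock_needed = sb_clock_ghz * sb_ipc_rel / bd_ipc_rel;

    printf("BD would need roughly %.2f GHz to match SB at %.1f GHz\n",
           bd_clock_needed, sb_clock_ghz);
    return 0;
}
```

With those made-up ratios it comes out to around 4.2 GHz, which is why stock-clock competitiveness looks plausible to me while clock-for-clock parity doesn't.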

Of course at [H] we tend to judge it a little differently, so BD is going to be assessed as "not that exciting" unless it clocks closer to 6 GHz (based on my previous swag of what it would take to equal an Intel 5.0). The good thing is that by and large the general computer-using population doesn't care about overclocking, only that it works competitively and is a good price. That, I believe, is within the realm of possibility for AMD. For us it tends to be more about (performance + price) or (performance + headroom) depending on where your priorities are, so for the most part I think people here are going to be disappointed in BD unless it has a lot of headroom to OC.

Truth be told I haven't needed anything more than my 2-core PII and didn't even OC that until earlier this year. Guess I'm not as cool as I used to be, but it also probably has something to do with hardware being well ahead of software these days unless you get freaky with resolution and settings.
 
Why do people think AMD was lazy? When a company has a small (compared to Intel) budget for R&D, you can't expect there to be a lot of concurrent design projects. If something goes wrong on one of the projects, you can either:
A. Scrap it entirely (the original K9 and K10 projects) and start on something else
B. Pull resources from one project to another (which is what happened during Phenom development, with resources shifted to Bulldozer).

I could see it if AMD came out and said, "We're halting development on new core technologies right now, because what we currently have is good enough." That would be lazy.

And then, you also have problems with the manufacturing side, like what happened to Thoroughbred A (metal layer fix) and Brisbane (slower cache than Windsor).

AMD was probably internally demoralised from when they had the fastest CPU by far and Intel paid companies to use its CPUs instead, which in turn caused disproportionately small market growth compared to what they really could have offered people.
 
AMD was probably internally demoralised from when they had the fastest CPU by far and Intel paid companies to use its CPUs instead, which in turn caused disproportionately small market growth compared to what they really could have offered people.

Yeah that's a shitty situation to be in.
 
AMD was probably internally demoralised from when they had the fastest CPU by far and Intel paid companies to use its CPUs instead, which in turn caused disproportionately small market growth compared to what they really could have offered people.

Exactly. Which is why I can't really ever see myself supporting Intel. That's just one of the many examples of how they stifled progress and competition. AMD had to sell its Fab business to survive. This has probably pushed GloFlo further behind Intel on the Fab process which is probably one of the reasons why we don't have Bulldozer yet.

AMD was further ahead of Intel when Hammer was out than Intel has ever been ahead of AMD. With the first x86-64 CPU, I remember running 64-bit Linux with NUMA support while Intel had nothing that could even attempt to compete.

They had Itanium, I guess, lol (which emulated 32-bit x86 code, making it slower than a PIII in that mode). The sad thing is Intel today still makes more money on Itanium than AMD makes on the server side of things. That's how f-ed up things are.
 
They were also in some serious debt after buying ATI; let's not forget that. That's likely the reason why Phenom felt rushed and was a disaster upon release.

BTW, why would anyone call them "lazy?" That's ridiculous.
 
They were also in some serious debt after buying ATI; let's not forget that. That's likely the reason why Phenom felt rushed and was a disaster upon release.

AMD buying ATI was a twofold necessity for survival as well.

1.) Intel integrating GPU into their CPUs (AMD's need for Fusion).

2.) To prevent nVidia from buying AMD (hostile takeover). For the same reason as #1. As rumors would have it.

Keep in mind Intel had also invested over $2 billion into Larrabee at the time, which, luckily for AMD, turned out to be a disaster.

Basically, with the majority of computer sales today being laptops, AMD needed APUs to compete. And we're finally seeing this plan take shape, with Zacate and Llano and soon Trinity. pxc was the one, I think, who accused Hector Ruiz and the AMD board of being dumb, but given what they had to work with I think they did the best they could to keep the company relevant. And it looks like AMD still has some fight left in it; it definitely seems better positioned than nVidia, that's for sure.
 
Nvidia also looks very safe, considering that Windows 8 will include ARM support. Who knows, maybe Project Denver will be the next enthusiast rig.
 
Nvidia also looks very safe, considering that Windows 8 will include ARM support. Who knows, maybe Project Denver will be the next enthusiast rig.

It'll be a while before Windows tablets play a major role. Windows tablets first have to challenge Android and then the iPad to even become relevant.

As far as ARM on desktop/laptop goes, that's probably going to happen.. never.

But let's say somehow Windows tablets do become popular overnight. nVidia has companies like Qualcomm, Samsung, and various other ARM SoC manufacturers to compete with.

nVidia should have bought VIA for the x86 license when AMD bought ATI.
 
Exactly. Which is why I can't really ever see myself supporting Intel. That's just one of the many examples of how they stifled progress and competition. AMD had to sell its Fab business to survive. This has probably pushed GloFlo further behind Intel on the Fab process which is probably one of the reasons why we don't have Bulldozer yet.

AMD was further ahead of Intel when Hammer was out than Intel has ever been ahead of AMD. With the first x86-64 CPU, I remember running 64-bit Linux with NUMA support while Intel had nothing that could even attempt to compete.

They had Itanium, I guess, lol (which emulated 32-bit x86 code, making it slower than a PIII in that mode). The sad thing is Intel today still makes more money on Itanium than AMD makes on the server side of things. That's how f-ed up things are.

Honestly, if I was in Intel's position, being forced to share something I invented for the 'sake of competition', I would stifle my competitors too... But, since AMD has contributed a lot to x86 (and due to FTC investigations around the world), Intel realized that it is better to out-innovate AMD than to actively suppress them.

Think about it: if IBM hadn't been adamant about the success of the PS/2, PC/XT and PC/AT systems, we'd more than likely be using Motorola 68k derivatives or RISC/MIPS processors.
 
AMD's main problem with buying ATI is that they paid waaay too damn much. The idea in and of itself was sound; they just should've bargained for a better price. The amount of debt they've had to shoulder to get ATI nearly pushed them under, and it's what really forced them to spin off their fabs as a separate company, which I think is a bad idea long term.
 
Nvidia also looks very safe, considering that Windows 8 will include ARM support. Who knows, maybe Project Denver will be the next enthusiast rig.

Hahah, are you getting paid to say this or what? ARM has very low power usage for small chips, which is great for portable devices, but total performance is pretty bad vs. even mid-clocked P4s, much less a dual-core C2D or Core i3/i5. You have to realize that there are no magical ISA fairies, and that a scaled-up ARM chip, or whatever ISA of choice (look at Intel's EPIC Itanium or nV's VLIW/scalar GPGPUs if you like), will have issues similar to "bloated" x86. Whatever Project Denver turns out to be, what it won't be is the next enthusiast rig. Besides being too small a market to justify the development costs, the software isn't there: you need more than just an OS, you need the games, which won't work without a heck of a lot more than a simple recompile. If you think developers will support 3 consoles, the PC, and another new platform from nV, when they can barely support the PC as is and the PC will be a much larger market for a long time to come, then I've got a bridge to sell you.

Project Denver is far more likely targeted towards mobile devices, or perhaps standalone HPC cluster work, but even that is a very long shot.
 
AMD's main problem with buying ATI is that they paid waaay too damn much. The idea in and of itself was sound; they just should've bargained for a better price. The amount of debt they've had to shoulder to get ATI nearly pushed them under, and it's what really forced them to spin off their fabs as a separate company, which I think is a bad idea long term.

AMD needed ATI much more than the other way around, and ATI knew it. They wouldn't have made much of a compromise, since AMD hadn't been able to reach a deal with Nvidia originally and ATI was the only other company that fit the bill.
 
AMD needed ATI much more than the other way around, and ATI knew it. They wouldn't have made much of a compromise, since AMD hadn't been able to reach a deal with Nvidia originally and ATI was the only other company that fit the bill.

IIRC they could've bought PowerVR too, which has been for sale on and off for a very long time. Their TBDR-based GPUs are small and very power efficient, so it's kind of a shame they never took off in the PC space. Also, being a TBDR means they get good performance with ho-hum bandwidth, unlike "traditional" GPUs from nV or ATI, which is important for an APU targeting the low to mid range and using main system RAM as a frame buffer.

Also, given how long it's been between when they bought ATI and when they finally got a Fusion APU out, I think it's safe to say they could've held out for a better price.
 
Honestly, if I was in Intel's position, being forced to share something I invented for the 'sake of competition', I would stifle my competitors too... But, since AMD has contributed a lot to x86 (and due to FTC investigations around the world), Intel realized that it is better to out-innovate AMD than to actively suppress them.

Think about it: if IBM hadn't been adamant about the success of the PS/2, PC/XT and PC/AT systems, we'd more than likely be using Motorola 68k derivatives or RISC/MIPS processors.

If I put myself in Otellini's shoes I would think like that, but I put myself in my own shoes and I want more competition and cheaper, faster CPUs. And Motorola's 68k instruction set was much better than x86; I wouldn't have been terribly sad about that.
 
AMD should have bought 3dfx instead of Nvidia, and then we would have CPUs with inbuilt Voodoo XIs :D
 
If I put myself in Otellini's shoes I would think like that, but I put myself in my own shoes and I want more competition and cheaper, faster CPUs. And Motorola's 68k instruction set was much better than x86; I wouldn't have been terribly sad about that.

True. If Commodore hadn't fucked up we would probably be using our Amiga 15000 today :D
 
AMD's main problem with buying ATI is that they paid waaay too damn much. The idea in and of itself was sound; they just should've bargained for a better price. The amount of debt they've had to shoulder to get ATI nearly pushed them under, and it's what really forced them to spin off their fabs as a separate company, which I think is a bad idea long term.

logically, selling off their fabs does sound like a bad long-term move. but as a business they have to keep themselves afloat to fight another day.

buying assets that allow them to merge cpu & gpu lets them focus on selling to a much larger market. they may have paid a pretty penny then but it looks like it might work out for them. amd may well have plans later down the track to bring back internal fabrication once their 'warchest' is fully charged.
 
AMD's main problem with buying ATI is that they paid waaay too damn much. The idea in and of itself was sound; they just should've bargained for a better price. The amount of debt they've had to shoulder to get ATI nearly pushed them under, and it's what really forced them to spin off their fabs as a separate company, which I think is a bad idea long term.


it was always a long-term investment, so it was worth the gamble they made paying the price they did. in the end it's worked out great for them. but either way AMD had to spin off their fabs; even if they hadn't bought ATI, it was something they would have eventually had to do in the long term.


Honestly, if I was in Intel's position, being forced to share something I invented for the 'sake of competition', I would stifle my competitors too... But, since AMD has contributed a lot to x86 (and due to FTC investigations around the world), Intel realized that it is better to out-innovate AMD than to actively suppress them.

Think about it: if IBM hadn't been adamant about the success of the PS/2, PC/XT and PC/AT systems, we'd more than likely be using Motorola 68k derivatives or RISC/MIPS processors.

it was a mutual benefit to both companies since AMD had x64 and intel had x86. they could both screw each other over if they wanted.


AMD buying ATI was a twofold necessity for survival as well.

1.) Intel integrating GPU into their CPUs (AMD's need for Fusion).

2.) To prevent nVidia from buying AMD (hostile takeover). For the same reason as #1. As rumors would have it.

Keep in mind Intel had also invested over $2 billion into Larrabee at the time, which, luckily for AMD, turned out to be a disaster.

Basically, with the majority of computer sales today being laptops, AMD needed APUs to compete. And we're finally seeing this plan take shape, with Zacate and Llano and soon Trinity. pxc was the one, I think, who accused Hector Ruiz and the AMD board of being dumb, but given what they had to work with I think they did the best they could to keep the company relevant. And it looks like AMD still has some fight left in it; it definitely seems better positioned than nVidia, that's for sure.


AMD owned ATI long before Intel ever released their version of fusion processors, so that can't really be a reason for "needing" to buy ATI. if anything, AMD buying ATI is what pushed intel to release those processors, knowing that AMD would have the advantage over them in the fusion processor category if AMD could ever produce them (which we now know they can).

as far as larrabee goes, AMD and Nvidia both pretty much pointed out it was a failure from day one due to Intel's design of larrabee, and they were both right.
the biggest mistake Nvidia made in my opinion was staying in the chipset business for too long. after AMD started releasing their chipsets and intel started getting into the high performance chipset market (x38/48/58/p55 and so on), Nvidia should have taken the hint and just licensed out SLI to Intel and AMD instead of forcing their chipset on Intel and completely ditching AMD. it's pure profit on their end by licensing it out and they wouldn't have to spend a dime on R&D for new chipsets nor the cost to produce them. the profit margin may be a bit lower in the short term but it's the way to go in the long term. now all nvidia needs to do is fix the friggin pricing on their cards.
 
AMD owned ATI long before Intel ever released their version of fusion processors, so that can't really be a reason for "needing" to buy ATI. if anything, AMD buying ATI is what pushed intel to release those processors, knowing that AMD would have the advantage over them in the fusion processor category if AMD could ever produce them (which we now know they can).

Whether they did or didn't know, they still anticipated it pretty well. Intel was, however, selling a crazy amount of GMA GPUs to OEMs and integrators at the time. It was only to be expected that Intel was eventually going to integrate the two for the mobile market.

now all nvidia needs to do is fix the friggin pricing on their cards.

nVidia can do no such thing. AMD's GPUs are cheaper to produce, and AMD can afford to sell their GPUs for less no matter how low nVidia goes.
 
logically, selling off their fabs does sound like a bad long-term move. but as a business they have to keep themselves afloat to fight another day.
The cost of their short-term survival might be their long-term doom though. It's only going to get harder in the future to produce chips, and there are only a handful of companies, Intel included, who have the process tech necessary to produce a CPU competitive with Intel's hardware. It's pretty unlikely Intel would be willing to give AMD foundry service, and Intel has the best stuff by far, so AMD will almost always be even more handicapped than they are now in any competition with them if GF goes under and they're stuck with using IBM, Toshiba, or TSMC.
 
logically, selling off their fabs does sound like a bad long-term move. but as a business they have to keep themselves afloat to fight another day.

buying assets that allow them to merge cpu & gpu lets them focus on selling to a much larger market. they may have paid a pretty penny then but it looks like it might work out for them. amd may well have plans later down the track to bring back internal fabrication once their 'warchest' is fully charged.
Splitting off their fab division was a very good move. Fabs are very expensive to build and run. Having $5 billion+ fabs running at 1/2 capacity does not work. By spinning off Global Foundries, they have been able to move to a foundry model and fill capacity with other people's chips.
 
Splitting off their fab division was a very good move. Fabs are very expensive to build and run. Having $5 billion+ fabs running at 1/2 capacity does not work. By spinning off Global Foundries, they have been able to move to a foundry model and fill capacity with other people's chips.

The question is whether this will help with R&D costs. The move got the foundry an initial boost of money, but I believe GF will need other high-end customers (customers who need the latest process nodes) to make this a viable long-term solution; otherwise AMD will be stuck even further behind Intel in process shrinks.
 
Also it remains to be seen if any of their R&D pays off, and it will have to in a very big way if they're going to have any chance of competing with Intel's ever bigger process advantage. AMD has done some pretty good innovation over the years but they've also made lots of mistakes along the way, maybe too many. We're just going to have to wait and see really.

As far as BD goes, if they can get the IPC of Nehalem and 4-5 GHz clocks on air then they'll be just fine. We still don't really know if they can do that though, and AMD's continued silence is even more frustrating.
 
Also it remains to be seen if any of their R&D pays off, and it will have to in a very big way if they're going to have any chance of competing with Intel's ever bigger process advantage. AMD has done some pretty good innovation over the years but they've also made lots of mistakes along the way, maybe too many. We're just going to have to wait and see really.

As far as BD goes, if they can get the IPC of Nehalem and 4-5 GHz clocks on air then they'll be just fine. We still don't really know if they can do that though, and AMD's continued silence is even more frustrating.


It still confuses me as to what AMD needs to do in order to increase IPC in its processors without making it more costly.

Add more transistors? (Yes, laughable but worth a shot. lol)

Complete redesign of the actual CPU?

AMD has indeed come out with good ideas, but poor implementation perhaps?

Even AMD's memory controller doesn't have the same throughput as Intel's Sandy Bridge and Nehalem lines. And it makes me wonder what AMD is doing wrong to not have a CPU match Intel's, besides lack of R&D funding.

Everyone keeps saying "IPC this, IPC that," but no one has fully explained why and how AMD should go about improving it in their processors. And I highly doubt JF-AMD would be able to answer that.
 
Even AMD's memory controller doesn't have the same throughput as Intel's Sandy Bridge and Nehalem lines.

That was another thing I was curious about but haven't seen much info on: how efficient the memory controller on the new chips will be. I know I've seen rumors of BD supporting higher memory frequencies, which should translate to higher bandwidth, but is frequency the reason the SB platform seems to have much higher bandwidth than PII?

It got me curious after seeing some numbers and comparing it to the Nvidia/AMD gpu battles where the AMD gpus seem to have more memory bandwidth available, which can definitely impact performance potential.

Ahh so many questions that won't get answered for a while, so frustrating :(
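For what it's worth, on the frequency side of that question, peak theoretical DRAM bandwidth is just transfer rate x bus width x channels. A quick sketch (the DDR3 speeds here are only example figures, not claims about what BD or SB officially support):

```c
#include <stdio.h>

/* Peak theoretical bandwidth: million transfers/s x bytes per transfer x channels. */
static double peak_gb_per_s(double mt_per_s, int bus_bytes, int channels) {
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9;
}

int main(void) {
    /* Example speeds only. */
    printf("DDR3-1333 dual channel: %.1f GB/s peak\n", peak_gb_per_s(1333, 8, 2));
    printf("DDR3-1866 dual channel: %.1f GB/s peak\n", peak_gb_per_s(1866, 8, 2));
    return 0;
}
```

That's only the theoretical ceiling, though; how close a platform actually gets to it is exactly the memory controller efficiency question, which is where SB seems to do better than PII even at the same DDR3 speed.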
 
That was another thing I was curious about but haven't seen much info on: how efficient the memory controller on the new chips will be. I know I've seen rumors of BD supporting higher memory frequencies, which should translate to higher bandwidth, but is frequency the reason the SB platform seems to have much higher bandwidth than PII?

It got me curious after seeing some numbers and comparing it to the Nvidia/AMD gpu battles where the AMD gpus seem to have more memory bandwidth available, which can definitely impact performance potential.

Ahh so many questions that won't get answered for a while, so frustrating :(
I do recall JF saying in a video interview (back in December) that they've improved the performance of their memory controller by 50%. He also said the same about the CPU performance, but didn't say whether that was like-for-like or total throughput. :(

Release the damn things!
 
It still confuses me as to what AMD needs to do in order to increase IPC in its processors without making it more costly.

Add more transistors? (Yes, laughable but worth a shot. lol)

Complete redesign of the actual CPU?

AMD has indeed come out with good ideas, but poor implementation perhaps?

Even AMD's memory controller doesn't have the same throughput as Intel's Sandy Bridge and Nehalem lines. And it makes me wonder what AMD is doing wrong to not have a CPU match Intel's, besides lack of R&D funding.

Everyone keeps saying "IPC this, IPC that," but no one has fully explained why and how AMD should go about improving it in their processors. And I highly doubt JF-AMD would be able to answer that.

There are many things you can do in a processor design to increase the IPC. With X number of execution units the goal is to have X number of instructions being worked on at any given time. In reality this is very hard to do. Let's say we are talking about integer instructions, and we have 3 integer execution pipelines. If we can keep those pipelines full, we can execute 3 instructions per clock cycle.

However, the problem is that most programs do not exhibit a high degree of instruction level parallelism. Most programs use a lot of branching statements. Whenever the processor encounters a branch instruction, it has to guess as to whether or not the branch will be taken. If it guesses correctly all is well, but if it guesses wrong it has to discard any instructions that it started executing after the branch and start over with the correct sequence of instructions. So having a good branch predictor is critical to achieving high IPC.

Another way to improve IPC is to make sure that execution units have data when they need it. In order to do this you have to have a good caching scheme, and you also have to be able to get data out of memory quickly. The lower the latency on your cache the better, and the higher the throughput from memory the better. However, there are a lot of trade-offs involved in designing a cache (especially the associativity of the cache).

There are many things you can do to increase IPC, but they all involve trade-offs. Some require a lot of extra complexity to implement, so may or may not be worth it. I've just provided a few examples, but there are many more. I hope that helps.
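To make the branch-prediction point concrete, here's a toy sketch in C (nothing BD- or SB-specific): the exact same loop runs noticeably faster when its branch is predictable than when it's random, because mispredictions keep flushing the pipeline. Build it with light optimization (say -O1), since an aggressive compiler may replace the branch with a conditional move and hide the effect.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (1 << 22)
#define REPS 50

/* Sum the elements above 127; the if() is the branch whose predictability matters. */
static long long sum_over_threshold(const unsigned char *data, int n) {
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        if (data[i] > 127)
            sum += data[i];
    }
    return sum;
}

static int cmp_byte(const void *a, const void *b) {
    return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
}

static double time_sum(const unsigned char *data, int n) {
    clock_t t0 = clock();
    volatile long long sink = 0;
    for (int r = 0; r < REPS; r++)
        sink += sum_over_threshold(data, n);
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    unsigned char *data = malloc(N);
    if (!data)
        return 1;
    for (int i = 0; i < N; i++)
        data[i] = (unsigned char)(rand() & 0xFF);   /* random data: ~50% mispredict rate */

    double t_random = time_sum(data, N);

    /* Sorting turns the branch into a long run of "not taken" followed by a long
       run of "taken", which the predictor handles almost perfectly. */
    qsort(data, N, 1, cmp_byte);
    double t_sorted = time_sum(data, N);

    printf("random data: %.2fs   sorted data: %.2fs\n", t_random, t_sorted);
    free(data);
    return 0;
}
```

The work is identical in both runs; only the predictability of the branch changes, which is exactly why a good branch predictor matters so much for IPC.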
 
Everyone keeps saying "IPC-this, IPC-that," but hasn't explained fully why and how AMD should go about doing it that way in their processors. And, I highly doubt JF-AMD would be able to answer that.
No, it has been explained before. Single-thread performance still matters a whole lot, and to get single-thread performance you need good IPC.

So AMD could put out a chip tomorrow that had, say, 32 K7 Athlons on it or something, which on paper would have huge performance but in the real world would get stomped on by Nehalem and SB in most everything. Dual, quad, octo, etc. core chips have been out for years, but we're still not really living in a "many core" software environment for the desktop yet. Heck, even in the server world they aren't; they just do tons of virtualization to take advantage of multi-core.
 
Nice.. the core clock was high as well... Looking forward to seeing these in the wild.
 
No, it has been explained before. Single-thread performance still matters a whole lot, and to get single-thread performance you need good IPC.

So AMD could put out a chip tomorrow that had, say, 32 K7 Athlons on it or something, which on paper would have huge performance but in the real world would get stomped on by Nehalem and SB in most everything. Dual, quad, octo, etc. core chips have been out for years, but we're still not really living in a "many core" software environment for the desktop yet. Heck, even in the server world they aren't; they just do tons of virtualization to take advantage of multi-core.

To have good single-thread performance you need good single-thread performance. IPC is a catchphrase used to sell why SB is so good. Honestly, SB is quite wide, yet it still struggles to utilize that width; right now only a few standard apps will actually get SB above 1 IPC. Do a better job of keeping the pipelines going and you go from needing a core capable of 4-5 IPC to beat it to a 2-3 IPC core tearing it apart. Also, clock speeds aren't all similar: something capable of 10 IPC in standard applications isn't going to be useful if it only clocks to 1 GHz. The same thing applies the other way; you can have the fastest-clocked CPU ever designed and still have worse performance than a CPU running half as fast.

It's about finding the right compromises with the R&D money available.
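A toy illustration of that "keeping the pipelines going" point (again, nothing specific to SB or BD): a chain of dependent adds can only retire one add per add-latency no matter how wide the core is, while splitting the work across independent accumulators lets the execution units overlap and the effective IPC climbs. Results will vary by compiler and CPU, but the serial version is typically several times slower.

```c
#include <stdio.h>
#include <time.h>

#define N 200000000L

int main(void) {
    clock_t t0;

    /* One serial dependency chain: every add needs the previous result, so the
       core is limited by FP-add latency regardless of how many ALUs it has. */
    t0 = clock();
    double serial = 0.0;
    for (long i = 0; i < N; i++)
        serial += 1.000000001;
    double t_chain = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Four independent accumulators: the chains don't depend on each other, so
       their adds can overlap in the pipeline and more work retires per cycle. */
    t0 = clock();
    double a0 = 0, a1 = 0, a2 = 0, a3 = 0;
    for (long i = 0; i < N; i += 4) {
        a0 += 1.000000001;
        a1 += 1.000000001;
        a2 += 1.000000001;
        a3 += 1.000000001;
    }
    double split = a0 + a1 + a2 + a3;
    double t_split = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("dependent chain: %.2fs (%.0f)   independent accumulators: %.2fs (%.0f)\n",
           t_chain, serial, t_split, split);
    return 0;
}
```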
 