AMD to sell FAB38 to TSMC

I agree that 'today's' code is way behind, and that most of what's out there is likely not anywhere near as streamlined as it should be, no matter the market segment the software is aimed at in the end.

But the above bolded text is something I know Dan D and a few others would take issue with. ;) :p Overall, though, I have to say I agree with your original 'rant'.

I have no doubt any number of people would take issue with my statement. In context I was simply pointing out computer power against work done, or to be more exact, work not being done.

Worse, many programmers continue to use variations of existing legacy code such as Linux-driven FORTRAN, Perl, Python, etc., which they continually re-write and optimize in an effort to not break the x86 cycle. Con Kolivas, best known for giving many years to trying to fix the Linux kernel for proper performance, just recently quit. He not only quit but shares my views on the state of OSes in general as they stand today. As it turns out, while trying to enhance the kernel by cleaning up the scheduling processes, he released a version that people loved but that Linus Torvalds himself rejected.

http://linuxgeekboy.wordpress.com/2007/07/25/a-leading-linux-kernel-developer-quits-but-why/

I’ve exchanged an email or two regarding this subject, and his level of frustration was more than he could take.

Anywho, hope that explains why I put it the way I did.
 
Using an open-source environment is no guarantee that they actually modify this environment, let alone that they completely turn it around and are working with some kind of 'new methodology'.

Of course it’s no guarantee, but the fact is they have openly published what they are doing.

I find it funny that you keep claiming it's all easy to find with Google, but you fail to produce any significant links.
I can find none of these fairy tales that you tell.
All I turn up is that the choice for AMD predates Core2 altogether, and that they chose third-party software for their movies.

I will in no way apologize for your shortcomings or inability to do a simple Boolean search. You claim to have all the expertise, yet you want me to share my homework?

When the decision was made to go AMD, C2D had been leaked and a few reviews were floating around. It was tested and AMD was picked. I’m pretty sure if Intel even stood a chance they would have bought their way into those studios, much the way they buy their way into so many consumer-oriented computers. Like Microsoft, Intel has never had any sort of aversion to spending money.

Add to that, I am in no position to apologize to Intel for yet another late delivery to market of a product that to this day still hasn’t kept the promises Intel made. (Like where are all the mid- and upper-end 3.0 GHz CPUs?) That became a do-it-yourself project.

Or perhaps, like Pixar, they don't move at all?
Pixar is still on the renderfarm they set up in 2004, and DreamWorks set up their farm in 2005.

See above.

What do you expect, after the false accusations you made against me in your earlier post?
You basically accuse me of being behind in terms of technology, which I know I'm not (yes, SSE and multithreading are not new now, but they were when I first started using them... And CUDA not new? We're still on the first generation of hardware that it was introduced on). Yet you fail to specify what I would be behind on in particular.

My statement remains as is. I don’t care if you bought software written yesterday; IT’S OLD and is certainly based on one form of existing code or another.

I don't need to assume. You made some pretty uninformed statements in this thread, like saying you have a 14 GHz processor.

Does my CPU run at 3.5 GHz per core? Yes. (No thanks to Intel.) Do I have 4 cores? Yes. Does 4 × 3.5 = 14? I just double-checked it, and yes, it still does. Obviously I used my point for illustration, and since Manny pointed out my post was more of an op-ed piece, I guess I’ll call it “journalistic license”.

For your information, processing power is not measured in clockspeed, especially not in the supercomputer arena.
E.g., for the Top 500 supercomputer list, they use the LINPACK benchmark: http://www.top500.org/

And you wonder why I think you don't have a clue?
Heck, even Intel and AMD don't sell their quadcores as 4*clockspeed, despite the obvious marketing advantage. But it would just be misleading advertising, which is illegal in many countries.
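
To illustrate, here's a minimal sketch with made-up numbers (the FLOPs-per-cycle figure and the efficiency are assumptions, not vendor specs). Theoretical peak is cores * clock * FLOPs-per-cycle, while a benchmark like LINPACK reports the sustained rate, which is always some fraction of that peak:

Code:
# Rough illustration only; flops_per_cycle and efficiency are assumed values.
cores = 4
clock_hz = 3.5e9          # 3.5 GHz per core
flops_per_cycle = 4       # assumed SIMD width; varies by microarchitecture

peak = cores * clock_hz * flops_per_cycle
efficiency = 0.75         # assumed fraction of peak sustained on a LINPACK-style run
sustained = peak * efficiency

print(f"Theoretical peak: {peak / 1e9:.1f} GFLOPS")
print(f"Sustained at {efficiency:.0%} efficiency: {sustained / 1e9:.1f} GFLOPS")
# Note that "14 GHz" appears nowhere: the clock speeds of separate cores don't add up.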

"Argonne is already installing a new Blue Gene/P that is slower than the 445 teraflop model due for installation next year. When the two are combined, they will operate at 556 teraflops. The lab also operates an older Blue Gene/L model that will continue to run separately at 5.7 teraflops." Quoted from your site.

Would not 5.7 teraflops be the direct result of X CPUs @ X clockspeed producing X amount of work?

What does that have to do with all this 'new methodology' nonsense you have been spouting?

I believe it was you who first “spouted” methodology in a previous post.

What you mean to say is that I called BS, and you now have to try and save face. You haven't backed up any single claim you made. You simply can't. You're talking nonsense. You're just an AMD fanboy.

Again you start name calling.

Since I am typing this on a Quad Intel with two C2D machines next to me, I think that probably disqualifies me as an AMD fanboy. As I stated earlier, I have no particular love for either company, although I do have a few Athlons and X2s sitting around unplugged. They are unplugged, by the way, because Intel refused to grant Stanford full licensing to write optimized code for AMD. That said, it seems the two chips are not as identical in function as one would be led to believe. That is what led to my original rant at Stanford and the Folding@home project.

That brings me to another point. Although I stated DreamWorks hired programmers to write their own software to optimize the production of CGI, I never mentioned how they do that.

DreamWorks, like Stanford and a crapload of other companies, still continues to use a mix of Java, Python, Perl and FORTRAN. Because they write and use the code in-house, they don’t want to be bothered with stupid compiler licenses, so that is one reason they went with AMD. Because they can now write optimized code, even old code, at will, they get more production from their machines.

Just so you don’t have to look it up:

http://perl.linuxmovies.org/

Not everyone agrees:

http://aspn.activestate.com/ASPN/Mail/Message/perl5-porters/3170784

Some are hiring:

http://jobs.perl.org/job/7005

http://www.highendcareers.com/Jobs/Operations-Systems-Admin/4938

My whole point from the start was: when will everyone stop supporting legacy code? While I can’t help but admire the effort to resurrect old code, isn’t it about time we moved to something that makes today’s desktop run faster?
 
I thought this was about TSMC supposedly buying FAB38 and not an AMD/Intel/programming/legacy code pissing match?

this thread has derailed.jpg


It's official.
 
I know some people here aren't gonna like this, but I'm gonna say it anyway.... That is entirely 100% DX's fault. All the way. Beyond the shadow of the tiniest doubt.

And why would that be?
There aren't many OpenGL games out there, but the ones that are around don't strike me as being any more efficient. They just seem to take longer to develop, like Doom3.
 
Worse, many programmers continue to use variations of existing legacy code such as Linux-driven FORTRAN, Perl, Python, etc., which they continually re-write and optimize in an effort to not break the x86 cycle.

This is nonsense. Firstly, Fortran predates the entire x86 architecture by many years... Secondly, Fortran code is very portable: there are compilers for many architectures, and code can easily be recompiled for other architectures (as a lot of code was at some point recompiled to x86, since it didn't start there).
Thirdly, Perl and Python are scripting languages; they don't even have binaries. The actual source code is interpreted on the fly, so they have no ties to x86 at all. All you need is an interpreter for your architecture. Since these are open source and written with portability in mind, it's not hard to port an interpreter to your architecture, if that hasn't been done already.
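
A minimal sketch of that interpreter point, just to make it concrete: the same source file runs unmodified on x86, ARM, POWER and so on; only the interpreter that runs it is a native binary.

Code:
# Minimal sketch: this source file is architecture-neutral; only the interpreter
# that runs it is a native binary built for the local machine.
import platform
import sys

print("Interpreter version:", platform.python_version())
print("Running on:", platform.machine(), "/", platform.system())
print("Pointer width:", 64 if sys.maxsize > 2**32 else 32, "bit")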

Con Kolivas, best known for giving many years to trying to fix the Linux kernel for proper performance, just recently quit. He not only quit but shares my views on the state of OSes in general as they stand today. As it turns out, while trying to enhance the kernel by cleaning up the scheduling processes, he released a version that people loved but that Linus Torvalds himself rejected.

Yes, but these patches are trying to get the poor linux scheduler up-to-date.
It was linux being behind, not linux getting ahead.
See page 17 and beyond here, for a comparison of FreeBSD and linux schedulers:
http://people.freebsd.org/~kris/scaling/7.0 Preview.pdf
The old linux one scaled really badly, so CFS was an improvement on that. There are earlier benchmarks that show the old scheduler's behaviour, which is why new schedulers were required (CFS for linux, just as FreeBSD needed its own new scheduler).
See this for example:
http://people.freebsd.org/~kris/scaling/scaling.png

FreeBSD doesn't have this problem anymore, neither do various other OSes, such as Solaris and Windows. They scale quite well to many threads, cores, etc.
So it's a non-issue to people who don't use linux, at this point.
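
Just to be concrete about what 'scaling to many cores' means, this is roughly how such tests are structured (a minimal sketch in Python, not one of the benchmarks linked above): run the same CPU-bound work with 1, 2, 4, ... workers and compare throughput. Perfect scaling would double the throughput with every doubling of workers; real schedulers fall short of that, which is exactly what those graphs show.

Code:
# Minimal scaling sketch: measure jobs/second as the worker count grows.
import time
from multiprocessing import Pool

def burn(n):
    # CPU-bound busy work standing in for a real benchmark kernel
    total = 0
    for i in range(n):
        total += i * i
    return total

def throughput(workers, jobs=16, size=200000):
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, [size] * jobs)
    elapsed = time.perf_counter() - start
    return jobs / elapsed  # jobs completed per second

if __name__ == "__main__":
    results = {w: throughput(w) for w in (1, 2, 4, 8)}
    base = results[1]
    for w, t in results.items():
        print(f"{w} workers: {t:.1f} jobs/s, speedup {t / base:.2f}x")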

Anywho, hope that explains why I put it the way I did.

No, you draw completely wrong conclusions from the little facts you gather, because you don't seem to understand the technical background.
 
Of course it’s no guarantee, but the fact is they have openly published what they are doing.

Really, like where?
If you want to prove something, link it.
And yes, I do have all the expertise. You, on the other hand, babble on like a buffoon without any clue whatsoever, then fail to provide links. Nobody is fooled. So you might as well drop the arrogance and admit you don't know what you're talking about.

When the decision was made to go AMD, C2D had been leaked and a few reviews were floating around. It was tested and AMD was picked. I’m pretty sure if Intel even stood a chance they would have bought their way into those studios, much the way they buy their way into so many consumer-oriented computers. Like Microsoft, Intel has never had any sort of aversion to spending money.

Again, prove it: link me to a source that states that they did indeed try Core2, and where they explain their decision for AMD.

My statement remains as is. I don’t care if you bought software written yesterday; IT’S OLD and is certainly based on one form of existing code or another.

I write my own software, I don't buy it.

Would not 5.7 teraflops be the direct result of X CPUs @ X clockspeed producing X amount of work?

The thing you fail to understand is that there is no linear relation between the number of CPUs/cores and the amount of processing power.
If X CPUs deliver 5.7 teraflops, then 2*X CPUs will not deliver 11.4 teraflops.
In fact, the linux scheduler patches you refer to are trying to improve the scaling to more CPUs/cores.

As I say, it's like 4 cars driving 100 kph. They don't drive 400 kph together. It's a parallel thing, and that's where Amdahl's law comes in.
You might want to Google that one.
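
For those who don't feel like Googling it, a minimal sketch of Amdahl's law: if a fraction p of the work can be parallelised, the best possible speedup on n processors is 1 / ((1 - p) + p / n).

Code:
# Amdahl's law: the serial fraction caps the speedup no matter how many CPUs you add.
def amdahl_speedup(p, n):
    """Best-case speedup on n processors when a fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 1024):
    # even with 95% of the work parallel, the speedup stays well below n
    print(f"{n:>4} CPUs: speedup {amdahl_speedup(0.95, n):.2f}x")
# The serial 5% limits the speedup to at most 20x, however large the machine gets.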

I believe it was you who first “spouted” methodology in a previous post.

Nope, I specifically quoted that from you:
"I have no doubt your program ran as it should. I also have no doubt that the programming you used was based on existing coding methodology. Based on that we have no idea how things might have turned out, for better or worse had an entirely new methodology been used."

So you brought it up, but so far you have failed to establish what exactly this new methodology would be, and now you are even trying to deny that you brought it up in the first place?

Since I am typing this on a Quad Intel with two C2D machines next to me, I think that probably disqualifies me as an AMD fanboy.

No, I think what makes you very much an AMD fanboy is that you are talking out of your arse in an attempt to make AMD look better than Intel, while in reality it's just you talking out of your arse, not a shred of evidence to support your claims.

That brings me to another point. Although I stated DreamWorks hired programmers to write their own software to optimize the production of CGI, I never mentioned how they do that.

DreamWorks, like Stanford and a crapload of other companies, still continues to use a mix of Java, Python, Perl and FORTRAN. Because they write and use the code in-house, they don’t want to be bothered with stupid compiler licenses, so that is one reason they went with AMD. Because they can now write optimized code, even old code, at will, they get more production from their machines.

Just so you don’t have to look it up:

http://perl.linuxmovies.org/

Not everyone agrees:

http://aspn.activestate.com/ASPN/Mail/Message/perl5-porters/3170784

Some are hiring:

http://jobs.perl.org/job/7005

http://www.highendcareers.com/Jobs/Operations-Systems-Admin/4938

My whole point from the start was: when will everyone stop supporting legacy code? While I can’t help but admire the effort to resurrect old code, isn’t it about time we moved to something that makes today’s desktop run faster?

Well, excuse me, but they are using this Perl stuff for scripting animations and such. This is not the performance-critical code they're using on their renderfarms (if it was, it wouldn't be written in a scripting language, Mr. Expert :)).
Did you even read the links?
"Perl or Python are what's typically used as the glue code to tie these separate tools together."
Yes, it's being used as some kind of shell scripts. The tools are where the bottlenecks are, not the shell scripts.
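
To make the 'glue code' point concrete, here is a minimal sketch (tool and file names are hypothetical) of what such a script typically does: it just loops over frames and shells out to the compiled renderer, and that renderer is where virtually all of the CPU time goes.

Code:
# Hypothetical glue script: the heavy lifting happens inside 'render_tool',
# an optimized C/C++ binary; the Python part only strings the jobs together.
import subprocess

for frame in range(1, 101):
    scene = f"shot_042.f{frame:04d}.rib"   # hypothetical per-frame scene file
    output = f"frame_{frame:04d}.exr"
    subprocess.run(["render_tool", "--output", output, scene], check=True)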
See, just like with those linux patches, you post links that don't support your statements at all, and show a severe lack of understanding from your side.
You sir, are an idiot.

Also, what is this nonsense about compiler licenses? I don't know of any compiler license that has anything to do with the brand of CPU whatsoever.
At least the 'big three' (GCC, MSVC++ and Intel CC) don't have any restrictions on what brand of CPU to use.
 
Your comment about FORTRAN predating x86 simply makes my point. One could hardly call it a paradigm shift in modern programming.

I mentioned Linux since that has become the darling of the movie-producing industry. You shift the subject to FreeBSD. Most programmers I know who work in FreeBSD hardly acknowledge Linux as a viable OS, whereas the Linux programmers say the same about FreeBSD. Out of the few hundred ads I see all the time for Linux programmers in the movie industry, I don’t recall seeing any for FreeBSD geeks. So what exactly was your point?

Solaris, Windows and the Mac OS were the first things DW and Lucas tossed out the door, by the way.

“And yes, I do have all the expertise.” Your words. If so, why are you not there offering your services to show them the way? These are billion-dollar companies that are obviously in dire need of help.

In the for-what-it’s-worth department, we here at the [H] have a distributed computing team. Worldwide, our team is in first place. You may not have heard of it, but the program in its entirety, as of October this year, became the single biggest project of its type in the world, at over one petaflop.

http://ps3.advancedmn.com/article.php?artid=5690

Yes, this included the output of the newly added PS3s.

You say there are no licensing issues with Intel. I wish you would tell that to Intel. Stanford’s codebase remains optimized for Intel for the sole reason that Intel are being bastards about allowing the Gromacs cores to be properly compiled for AMD. This has been a much-discussed problem between Stanford and Intel, as well as in our forums. The sole reason I switched to Intel for now is to keep my contribution at a reasonable level.

Since we have well over one thousand folders at any one time here, I think many of us understand the supercomputer concept.

Now, as was mentioned above, we certainly did go off topic, and frankly I’m tired of your emotional outbursts and name calling.

You may have the last word and take your final bash, so enjoy.
 
Your comment about FORTRAN predating x86 simply makes my point. One could hardly call it a paradigm shift in modern programming.

But it isn't used much in modern programming.
Fortran is mainly used for maintaining ancient legacy code, and by non-programmers who learned how to use Fortran way back.
Most high-performance code is written in C/C++ these days, including your beloved linux and most of its application base. The percentage of Fortran applications in a modern linux distribution is practically zero.
Next to C/C++, there are many other languages far more popular than Fortran, such as C#, Java, VB and Delphi.

I mentioned Linux since that has become the darling of the movie-producing industry. You shift the subject to FreeBSD. Most programmers I know who work in FreeBSD hardly acknowledge Linux as a viable OS, whereas the Linux programmers say the same about FreeBSD. Out of the few hundred ads I see all the time for Linux programmers in the movie industry, I don’t recall seeing any for FreeBSD geeks. So what exactly was your point?

My point was that linux had a relatively inefficient scheduler compared to other OSes, which is why your Con Kolivas developed the CFS patches.
Which debunked your claim that this was some kind of revolutionary 'new methodology' in OS development.

“And yes, I do have all the expertise.” Your words. If so, why are you not there offering your services to show them the way? These are billion-dollar companies that are obviously in dire need of help.

Erm, I *am* offering my services to billion dollar companies on a daily basis. It's my job.

You say there are no licensing issues with Intel. I wish you would tell that to Intel. Stanford’s codebase remains optimized for Intel for the sole reason that Intel are being bastards about allowing the Gromacs cores to be properly compiled for AMD.

This doesn't sound like a licensing issue, but rather the simple fact that the Intel compiler is designed to optimize for the Intel architecture (gee, what a surprise... I guess nobody could foresee that when they decided to use that compiler).
There's nothing that's keeping you from compiling your source code with a different compiler... unless of course you are using Intel-specific compiler extensions. But this again is not a licensing issue, but rather a choice you made during development.
A license is a legally binding document.
Please tell me the exact wording in the legally binding document that comes with the Intel compiler which legally prohibits compiling your source code for AMD properly. Because in my copy, there's no mention of AMD whatsoever.

Since we have well over one thousand folders at any one time here, I think many of us understand the supercomputer concept.

You obviously don't, judging from your previous posts about clockspeed.

Now, as was mentioned above, we certainly did go off topic, and frankly I’m tired of your emotional outbursts and name calling.

I'm not emotional at all. I'm just correcting your misinformed posts. You then insist that you are right, without any proof whatsoever, and with a flurry of totally wrong conclusions based on all sorts of links that aren't even directly related to your original claims.
I find that both idiotic and reeking of fanboyism. You seem to stop at nothing to tell people that AMD is the better company, has the better products, has nicer people on shows, etc... Well, you stop at actually providing technical facts to back up your story.
Bye now.
 
Well, excuse me, but they are using this Perl stuff for scripting animations and such. This is not the performance-critical code they're using on their renderfarms (if it was, it wouldn't be written in a scripting language, Mr. Expert :)).

Actually it's quite common.
http://en.wikipedia.org/wiki/Maya_(software)#Scripting_.26_Plugins
http://usa.autodesk.com/adsk/servlet/item?siteID=123112&id=7635770#Scripting

http://en.wikipedia.org/wiki/Blender_(software)#Features
http://www.blender.org/features-gallery/features/

That last one is just in case you don't like to trust wiki. I'm not familiar with any other rendering products, but I'd be surprised if they lacked feature parity.
 
Actually it's quite common.

You don't understand.
Yes, it's common to script animations and shaders and all that (the scripts either start up various tools that are not themselves scripts, or run during certain parts of the rendering process, but not the actual rendering itself, which is mostly done with optimized C/C++ code, sometimes with a dash of assembly... at least when we're talking about RenderMan, Mental Ray, Maya, etc.).
But this is NOT related to the multithreading scaling issues in Fortran, or whatever 'new methodology' AMD enables, or whatever claims BillR has made. In fact, most of these scripting languages don't support threading at all.
So the argument is a non-sequitur, as most of his arguments were.
He just posts some technical links, hoping that people won't understand it, then goes on making random 'conclusions' that support his twisted view of reality, assuming people will think he knows what he is talking about.
Yes, the technical links are okay, but his arguments are still invalid.
 
You don't understand.
Yes, it's common to script animations and shaders and all that (the scripts either start up various tools that are not themselves scripts, or run during certain parts of the rendering process, but not the actual rendering itself, which is mostly done with optimized C/C++ code, sometimes with a dash of assembly... at least when we're talking about RenderMan, Mental Ray, Maya, etc.).
But this is NOT related to the multithreading scaling issues in Fortran, or whatever 'new methodology' AMD enables, or whatever claims BillR has made.
So the argument is a non-sequitur, as most of his arguments were.
He just posts some technical links, hoping that people won't understand it, then goes on making random 'conclusions' that support his twisted view of reality, assuming people will think he knows what he is talking about.
Yes, the technical links are okay, but his arguments are still invalid.

All I commented on was your claim that scripts are not used for performance-critical code. They are.
 
All I commented on was your claim that scripts are not used for performance-critical code. They are.

No they aren't.
Or what exactly in those links do you think would prove that it is the performance-critical part of the rendering process?
These scripts simply interface with the optimized rendering architecture, they aren't actually part of the optimized rendering architecture themselves.
 
Why don't we just put a stake in the heart of this thread? It's obvious that all that is being accomplished here is a lot of nothing. Let Scali have the last word so he thinks he wins, and let's move on...
 
Why don't we just put a stake in the heart of this thread? It's obvious that all that is being accomplished here is a lot of nothing. Let Scali have the last word so he thinks he wins, and let's move on...

The only 'win' here would be if people here would know the truth from fanboy babble.
Why aren't more people here interested in technology? Isn't this forum supposed to be for tech enthusiasts?
 
I've read through this entire thread. The only things I learned are that:
1. Scali2 thinks he knows everything and needs to validate himself by proving it
2. We can't seem to stay on topic
3. Intel can do no wrong, and any comparison of an AMD product to an INTEL product is not only wrong but punishable by death
4. A front-page editor for [H] knows nothing compared to a single user who posts 3 times in a row
5. Shrek 3 would have looked a lot better if the code were optimized and it was rendered on Core 2 Duos, and even better on Core 2 Quads!
6. AMD is going to die, and all of their fabs are going to fall into the earth


I gotta tell ya. This thread was both informative and enriching.
Not only is AMD not going to sell fab38, but AMD is given credit at the end of the latest DreamWorks movie, which leads me to believe that their "Netburst" farm WAS NOT USED for the movie, and that either the farm WAS updated to AMD Opterons, or the decision to use an alternate AMD farm was made because:
1. DreamWorks decided it was easier to render on AMD Opterons
2. DreamWorks decided that they could achieve better performance with an AMD farm
3. DreamWorks had a budget to work with when creating their latest farm, and AMD Opterons offered the biggest bang for the buck

I'm no expert, but this seems like common sense to me.
If I wasn't benchmarking SOLELY for the purpose of gaining more points on hwbot, I never would have bothered with owning an Intel, let alone an expensive quad-core Xeon.

AMD simply offered me the best bang for the buck, but for the points I wanted, I chose Intel.
 
I gotta tell ya. This thread was both informative and enriching.
Not only is AMD not going to sell fab38, but AMD is given credit at the end of the latest PIXAR movie, which leads me to believe that their "Netburst" farm WAS NOT USED for the movie, and that either the farm WAS updated to AMD Opterons, or the decision to use an alternate AMD farm was made because:
1. Pixar decided it was easier to render on AMD Opterons
2. Pixar decided that they could achieve better performance with an AMD farm
3. Pixar had a budget to work with when creating their latest farm, and AMD Opterons offered the biggest bang for the buck

In case you are referring to Shrek 3, that is NOT a Pixar movie, but a DreamWorks movie.
DreamWorks uses AMD, Pixar uses Intel.
The last Pixar movie was Ratatouille: http://www.planetx64.com/index.php?option=com_content&task=view&id=689&Itemid=21
Apparently they got Core-based Xeons now.
 
There, edited. It's STILL way off topic.

Who cares though?
At least it's an in-depth technical discussion, which some may find interesting. You don't get too many of those around here.
I mean, how many of you knew that DreamWorks uses an Opteron renderfarm, and how many of you knew that Pixar uses a Xeon renderfarm? We've all seen their movies, I guess (or at least the trailers).
Heck, even I only just found out that apparently Pixar did upgrade for Ratatouille. Didn't find that info before.
For me it's extra interesting because I am in the graphics business myself. I have worked with Pixar's RenderMan and with Maya, Mental Ray and all that. And I have developed my own renderers. I think this sort of stuff is quite exciting, it's at the cutting edge of high-performance parallel processing.
 
I gotta tell ya. This thread was both informative and enriching.
Not only is AMD not going to sell fab38, but AMD is given credit at the end of the latest DreamWorks movie, which leads me to believe that their "Netburst" farm WAS NOT USED for the movie, and that either the farm WAS updated to AMD Opterons, or the decision to use an alternate AMD farm was made because:
1. DreamWorks decided it was easier to render on AMD Opterons
2. DreamWorks decided that they could achieve better performance with an AMD farm
3. DreamWorks had a budget to work with when creating their latest farm, and AMD Opterons offered the biggest bang for the buck

Still makes no sense.
Firstly, we don't know what they used before they got their Opteron farm. Probably not Netburst (that was Pixar, remember?), but either Pentium III-based Xeons or Athlon MP... or perhaps something more esoteric from SGI or another classic 'big iron' supplier. Something with MIPS perhaps.

Secondly, DreamWorks upgraded somewhere in 2005. At this time Opteron was superior to the Netburst-based Xeons, especially on the large scale that DreamWorks uses.
Pixar would probably have chosen that platform at the time as well, but Pixar had already upgraded a year earlier, back when Opteron probably wasn't an option yet.
However, this in no way implies that DreamWorks believes that it is still the best platform today. They just need to ride out their investment first.
Pixar has apparently just upgraded for their latest movie, and they again chose Intel, this time Core.
Chances are that DreamWorks would choose Core if they were upgrading today. We don't know. Chances are that if DreamWorks upgrades in 2 years or so, AMD again makes more sense.
All we know is that they chose AMD in 2005, and who would argue against that choice?
Also, who would argue against Pixar's choice for Core in 2007?
I think it's safe to assume that these companies that make multi-billion-dollar, award-winning movies are careful to hire a competent staff that will make the proper investments in their hardware. Their life depends on it, in a way. Rendering times for these movies are huge. We're talking about hundreds of thousands of frames, a lot of which take hours to render. A faster renderfarm would mean that the movie can be finished months earlier. Trust them to pick the fastest solution for their budget.
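
As a back-of-the-envelope illustration (the numbers below are purely made-up assumptions, not studio figures): total CPU-hours divided by farm throughput gives the wall-clock rendering time, and even a modest per-node speedup translates into weeks of schedule.

Code:
# Illustrative assumptions only: frame count, hours per frame and node count are guesses.
frames = 130000            # roughly a 90-minute film at 24 fps
hours_per_frame = 8        # assumed average CPU-hours per frame
nodes = 500                # assumed number of render nodes working in parallel

total_cpu_hours = frames * hours_per_frame
days = total_cpu_hours / nodes / 24
faster_days = total_cpu_hours / (nodes * 1.3) / 24   # same farm, 30% faster nodes

print(f"{total_cpu_hours:,} CPU-hours -> about {days:.0f} days of rendering")
print(f"30% faster nodes -> about {days - faster_days:.0f} days (roughly 3 weeks) saved")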
 
No they aren't.
Or what exactly in those links do you think would prove that it is the performance-critical part of the rendering process?
These scripts simply interface with the optimized rendering architecture, they aren't actually part of the optimized rendering architecture themselves.

When you're scripting a 2-hour-long movie with a high degree of complexity (lots of movement), the efficiency of any code is important.
 
I am not going to bother making a new thread even if this one has gone a bit OT, but I thought I'd post this link for some relevance to the OP: http://www.theinquirer.net/gb/inquirer/news/2007/11/16/amd-raise-700-million-share

Very disturbing if true. I knew my 6th sense was telling me something wasn't right.



More dilution of company value, and they wait for a Friday at/after the closing bell to tell the world that buying ATI and greatly increasing record company debt was a bad thing in retrospect!?!?!? (AMD has a habit of reporting bad news on a Saturday or late Friday, after everyone has gone home....)


Golly gee, really? :D I still think AMD will sell their fab in the future. They need cash, and the sultans in the Middle East and this 'let's rob Paul again to pay Peter' crap won't work.

Some layoffs at all levels and heavy asset liquidation should be the order of the day.
 
More dilution of company value, and they wait for a Friday at/after the closing bell to tell the world that buying ATI and greatly increasing record company debt was a bad thing in retrospect!?!?!? (AMD has a habit of reporting bad news on a Saturday or late Friday, after everyone has gone home....)


Golly gee, really? :D I still think AMD will sell their fab in the future. They need cash, and the sultans in the Middle East and this 'let's rob Paul again to pay Peter' crap won't work.

Some layoffs at all levels and heavy asset liquidation should be the order of the day.

Sorry to burst your dream bubble, but that ain't gonna happen. The worst-case scenario that I can see is AMD offering large sections of the company as stock options. They are doing this now. The only place left to go is up.
 
When you're scripting a 2-hour-long movie with a high degree of complexity (lots of movement), the efficiency of any code is important.

You know the rule though?
90% of the time is spent in 10% of the code?
Well, the scripts will usually fall outside that 10%.
Of course you could theoretically conceive of scripts that take so much time to execute that they take more time than the actual rendering, but normally this would not be the case.

If this was a bottleneck, they'd be using compilation, like the shader programs used in Direct3D and OpenGL. They are also more or less 'scripts', but they are compiled once then executed many times, at high performance.

The amount of animation is negligible in the total frame render time. You can set up even the most complex animation in just a few seconds, while rendering every pixel of a frame in a high-res movie takes hours. There are simply always more pixels than there are animated objects in a scene.
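
A minimal sketch of that point, with illustrative numbers (both values are assumptions): even a generous per-frame scripting overhead disappears next to the per-frame render time, so optimising the glue scripts buys you almost nothing.

Code:
# Assumed values: a couple of seconds of script work versus hours of compiled rendering.
script_overhead_s = 2.0          # assumed Python/Perl glue work per frame
render_time_s = 2 * 60 * 60.0    # assumed 2 CPU-hours of renderer work per frame

fraction = script_overhead_s / (script_overhead_s + render_time_s)
print(f"Scripting is {fraction:.3%} of the per-frame time")
# Even an infinitely fast script would speed up the whole frame by well under 0.1%.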
 