Barcelona @ 1.6GHz benched by Dailytech!

Indeed, and why?

Having been shafted in video forums by bogus benchies for so long, I'm willing to take the wait-and-see stance. Corporates showing their own benches have always been a bit suspicious; I'll wait for the review sites' version of the truth.

That's good practice. Personally, after owning an XP 2500+, an Athlon 64 3500+, and an Athlon X2 4200+, my next build is looking like a Core 2 Quad, right around July 23rd.
 
Please don't put Scali on the same playing field as me. In Scali's mind, Intel is literally god. I have the good sense to understand that AMD is a company. Unlike Scali, I have the good sense to admit that I'm a fan.

Oh please, I think Intel is shit... I think the whole x86 thing is shit.
If there's any brand of CPUs I am/was a fan of, it's Motorola.
It's just that Intel makes the better x86... which still doesn't make me like it, but I'm realistic enough to accept that this is an x86 world, and we had best make the most of it.
 
Obviously you have not seen any of the preliminary Fusion benchmarks. If you had, you'd know it is in no way comparable to a dedicated solution. It IS, in fact, a cheap onboard graphics solution like Intel's; it's just slightly better as far as performance is concerned.

I was thinking further ahead.
AMD has announced three steps for Fusion.
First is a GPU-like device that can sit on a socket next to a regular CPU.
Second is both GPU and CPU integrated on a single socket.
Third is the GPU's pipelines actually fused (hence the name) with the execution core of the CPU itself.
Thing is, a GPU just offers a lot of parallel floating point performance (with relatively low precision), and is therefore a niche product.
Eg, it won't accelerate any kind of office/web tasks, and it won't do anything for database servers either.
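To make that concrete, here's a rough sketch (plain C++, nothing to do with any actual Fusion SDK; the function names are just made up for illustration) of the kind of loop a GPU-style grid of FP units is good at, versus the kind of work office/web/database code actually spends its time on:

[code]
// Rough sketch: the kind of loop a GPU-style FP grid is good at.
// Thousands of independent, identical floating point operations (SAXPY):
void saxpy(float a, const float* x, const float* y, float* out, int n)
{
    for (int i = 0; i < n; ++i)       // every iteration is independent,
        out[i] = a * x[i] + y[i];     // so it maps straight onto parallel FP units
}

// The kind of work office apps, browsers and database servers actually do:
// pointer chasing, string handling, unpredictable branches - barely any FP,
// and nothing for a grid of stream processors to chew on.
struct Node { int key; Node* next; };
Node* find(Node* head, int key)
{
    while (head && head->key != key)  // serial, branchy, integer-only
        head = head->next;
    return head;
}
[/code]

Fusion-type hardware helps with the first kind of loop; the second kind is what most software spends its time on, and it gains nothing.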
 
Oh please, I think Intel is shit... I think the whole x86 thing is shit.
If there's any brand of CPUs I am/was a fan of, it's Motorola.
It's just that Intel makes the better x86... which still doesn't make me like it, but I'm realistic enough to accept that this is an x86 world, and we had best make the most of it.

It's just that this, and it's just that that.......

Yep, we call them excuses...
 
Thing is, a GPU just offers a lot of parallel floating point performance (with relatively low precision), and is therefore a niche product.
Eg, it won't accelerate any kind of office/web tasks, and it won't do anything for database servers either.
You'll be able to fold like a madman, though (if folding on ATi video cards is any indication) :D
 
Thing is, a GPU just offers a lot of parallel floating point performance (with relatively low precision), and is therefore a niche product.
Eg, it won't accelerate any kind of office/web tasks, and it won't do anything for database servers either.

And this is why I see it as a gimmicky thing. That, and people are expecting programmers to code for it to use that idle processing. If there's anything I've learned from x64 and multi-core development, it's not to expect a whole lot of enthusiasm from the coders.

Once again I'll state that I think it will do AMD good in the low-budget and laptop markets. Anything else... there's little to no market for it. It's like buying a physics card, but worse, because you had no choice in the matter.
 
And this is why I see it as a gimmicky thing. That, and people are expecting programmers to code for it to use that idle processing. If there's anything I've learned from x64 and multi-core development, it's not to expect a whole lot of enthusiasm from the coders.

Once again I'll state that I think it will do AMD good in the low-budget and laptop markets. Anything else... there's little to no market for it. It's like buying a physics card, but worse, because you had no choice in the matter.

I think it'll do wonders for interpreted environments. The days of the fat client are almost over...

MS set the industry back 25 years. Thank goodness they are finally figuring out what everybody else figured out a quarter of a century ago.
 
Overclocking potential should not even be considered in server chips. If you are dumb enough to overclock a server, you might as well go shoot yourself now.

To my knowledge, Barcelona server chips are the same architecture as the desktop chips. If it's a low clocker on one end, it should be the same on the other.
 
Wow. One benchmark run on pre-production silicon, and it might as well be the final review. Seriously, are you people all 14? Arguing about speculated performance figures... Wow.
 
Wow. One benchmark run on pre-production silicon, and it might as well be the final review. Seriously, are you people all 14? Arguing about speculated performance figures... Wow.

Read my post above.


Anyway... am I the only one thinking that these new K10 chips will make excellent laptop chips? I mean, AMD is currently putting out great low-power chips, and with the new power-saving features coming our way with K10, these things might be really good for future laptops...
 
I really hope AMD pulls through, because this industry needs good competition; without it, innovation becomes quite lacking.
 
I really hope AMD pulls through, because this industry needs good competition; without it, innovation becomes quite lacking.

This is true; in the past, Intel's prices were higher, back when Cyrix and AMD were just producing knock-off processors that used the same sockets.
 
I think it'll do wonders for interpreted environments. The days of the fat client are almost over...

MS set the industry back 25 years. Thank goodness they are finally figuring out what everybody else figured out a quarter of a century ago.

Care to explain that?
These statements don't mean anything to me...
What do you mean by "interpreted environments", and what is that remark about MS and 25 years?

And how does it all tie to Fusion?
Because if you mean interpreted environments as in interpreted script languages etc... lol again, what do you want with tons of parallel floating point power when you're interpreting?
 
Care to explain that?
These statements don't mean anything to me...
What do you mean by "interpreted environments", and what is that remark about MS and 25 years?

And how does it all tie to Fusion?
Because if you mean interpreted environments as in interpreted script languages etc... lol again, what do you want with tons of parallel floating point power when you're interpreting?

If you're asking these questions, then you shouldn't be on a tech forum. This forum and others cater to a certain demographic. You should choose a forum that better suits your level of understanding.
 
AI? With floats?
Haha, get a clue.
There is no reason that FP cannot be used to do AI. In fact, if one were not so brainwashed from the start, one might be open-minded enough to realize that there are many times when using a mix of FP and INT is much faster for a given problem than straight INT because there are more available execution units and registers. There is a good handful of operations that could be done in INT or FP, and this number keeps growing as the FP ISA is being extended faster than the INT ISA.
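A toy example of the kind of mix I mean (just a sketch to illustrate the point, not real AI code; the function is made up):

[code]
// Toy illustration only: two independent accumulations over the same data,
// one integer and one floating point. On a superscalar CPU the INT and FP
// execution units (and register files) can work on these side by side,
// instead of everything queueing up behind the integer units alone.
#include <cstddef>

void score(const int* moves, std::size_t n, long& intScore, double& fpScore)
{
    long   s = 0;    // lives in integer registers / ALUs
    double w = 0.0;  // lives in FP/SSE registers / FP units
    for (std::size_t i = 0; i < n; ++i)
    {
        s += moves[i] & 0xFF;     // integer work
        w += moves[i] * 0.0625;   // independent FP work that can overlap with it
    }
    intScore = s;
    fpScore  = w;
}
[/code]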

Scali, you always say things like this. Just like your previous claim that MarchingCubes gets no benefit from running on a GPU. Get a clue. Any CS undergrad will tell you that any reasonable implementation of MarchingCubes can be made massively parallel. Hmm, massively parallel FP... perfect for the GPU. Oh wait, didn't you say something like this before: "Thing is, a GPU just offers a lot of parallel floating point performance." I love the way you paint yourself into a corner, over, and over, and over again.
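Here's the basic idea as a rough sketch (CPU-side with OpenMP over the cells; map the loops onto GPU threads and the structure is identical - the standard edge/triangle tables and the actual vertex emission are left out):

[code]
// Rough sketch only: why Marching Cubes parallelises so easily.
// Each cell classifies its own 8 corner samples against the iso-value and
// builds a cube index; no cell depends on any other cell.
#include <vector>
#include <cstddef>

void marchingCubesSketch(const float* volume, int nx, int ny, int nz,
                         float iso, std::vector<int>& cellConfigs)
{
    cellConfigs.assign(std::size_t(nx - 1) * (ny - 1) * (nz - 1), 0);

    #pragma omp parallel for      // cells are independent: trivially parallel
    for (int z = 0; z < nz - 1; ++z)
        for (int y = 0; y < ny - 1; ++y)
            for (int x = 0; x < nx - 1; ++x)
            {
                int cubeIndex = 0;
                for (int c = 0; c < 8; ++c)   // classify the 8 corners
                {
                    int cx = x + (c & 1);
                    int cy = y + ((c >> 1) & 1);
                    int cz = z + ((c >> 2) & 1);
                    if (volume[(std::size_t(cz) * ny + cy) * nx + cx] < iso)
                        cubeIndex |= (1 << c);
                }
                // cubeIndex selects one of 256 triangle configurations from the
                // standard published MC tables; interpolating the vertices and
                // emitting triangles is omitted in this sketch.
                cellConfigs[(std::size_t(z) * (ny - 1) + y) * (nx - 1) + x] = cubeIndex;
            }
}
[/code]

Every cell only reads its own corner samples, so there are no dependencies between cells; that's exactly the shape of problem a massively parallel FP machine wants.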

Just because you don't know how to do something doesn't mean it can't be done, or that it isn't easy/worthwhile to do. Grow up and stop trying to insult Duby every chance you get. IMHO, you look like a fool; perhaps you could at least wait until you have a complaint that actually makes sense.
 
Look at ATi's XPU... Then re-evaluate your bias. Besides, everything is a niche. If you can cover enough of them...

Not at all, a CPU is a *general purpose* processor.
Core2 is currently a better option than Athlon because it is better at *general purpose* tasks. In other words: it is faster in pretty much EVERY application.
Now, Fusion may give you better performance for folding@home or something like that, but it's a niche, because there are very few f@h users in general, and even among f@h users, it remains to be seen whether they care about performance to the point where they want to spend money on increasing it.
It won't affect the performance of most *general purpose* tasks (the Athlon CPU will still be slower than the Core2), just like how people who don't play games won't spend hundreds of dollars on video cards. Whatever's onboard is good enough, even though a GeForce 8800GTX may be a hundred times faster in games.
 
Wow. One benchmark run on pre-production silicon, and it might as well be the final review. Seriously, are you people all 14? Arguing about speculated performance figures... Wow.

But Obi, our grief is not so much the performance of this pre-production silicon as why AMD (in their darkest hour) would allow these crap numbers to be released in the first place. If you are going to show the press at a major event like Computex, you had better bring out your star pre-production chip and hammer home the real numbers. Allowing a vendor to show this off, coupled with rumors that the chip will be late ONCE AGAIN, frankly scares the living shit out of us AMD investors.

No PR department would ever allow just these numbers to hit the press without putting out some really favorable benches of their own, especially with all this doom and gloom about. I'm not sticking up for them anymore, and more importantly, as an investor I want to know what the hell is going on.

Hector Ruiz has got a shitload of explaining to do, and frankly I do not know why there is no pressure from the media to make him do it.
 
There is no reason that FP cannot be used to do AI. In fact, if one were not so brainwashed from the start, one might be open-minded enough to realize that there are many times when using a mix of FP and INT is much faster for a given problem than straight INT because there are more available execution units and registers. There is a good handful of operations that could be done in INT or FP, and this number keeps growing as the FP ISA is being extended faster than the INT ISA.

I never said it can't be used. I'm just saying that you don't need a massively parallel grid of stream processors for AI tasks.

Scali, you always say things like this. Just like your previous claim that MarchingCubes gets no benefit from running on a GPU. Get a clue. Any CS undergrad will tell you that any reasonable implementation of MarchingCubes can be made massively parallel. Hmm, massively parallel FP... perfect for the GPU.

Okay, explain to me how to do this in a way that it runs on a GPU, and actually faster than my parallel CPU implementation, which is one of the fastest in the world.
Prove your claim, or shut up.

Oh wait, didn't you say something like this before: "Thing is, a GPU just offers a lot of parallel floating point performance." I love the way you paint yourself into a corner, over, and over, and over again.

How's that painting yourself in the corner?
You don't even have enough knowledge and experience to understand what I meant by that remark, as you've proven over and over again.

Just because you don't know how to do something doesn't mean it can't be done, or that it isn't easy/worthwhile to do. Grow up and stop trying to insult Duby every chance you get. IMHO, you look like a fool; perhaps you could at least wait until you have a complaint that actually makes sense.

Pathetic. I'm the expert here. I *know* what can and cannot be done, because I've spent years researching, experimenting and prototyping.
You and duby are just people without a clue, giving me a big mouth without ever backing up any claims.
Now shut up and create a MarchingCubes implementation (with or without GPU) that beats mine, if you even want me to take you and your stupid pathetic insults seriously.
Because I think you're too dumb to even understand what it is that I have developed and optimized here, and how other implementations compare. You can't just sit here and insult me and put my work down without having some kind of credibility or proof yourself. So far, you've shown or backed up nothing; I, on the other hand, have shown a few well-performing implementations, optimized for various situations (and given in-depth technical explanations of how they work). If we are to compare anything, you at least have to have something we can compare to.
 
Pathetic. I'm the expert here. I *know* what can and cannot be done, because I've spent years researching, experimenting and prototyping.
You and duby are just people without a clue, giving me a big mouth without ever backing up any claims.
Now shut up and create a MarchingCubes implementation (with or without GPU) that beats mine, if you even want me to take you and your stupid pathetic insults seriously.
Because I think you're too dumb to even understand what it is that I have developed and optimized here, and how other implementations compare. You can't just sit here and insult me and put my work down without having some kind of credibility or proof yourself. So far, you've shown or backed up nothing; I, on the other hand, have shown a few well-performing implementations, optimized for various situations (and given in-depth technical explanations of how they work). If we are to compare anything, you at least have to have something we can compare to.

You've done no such thing....

You wrote some code and called it a benchmark, and then proceeded to say it's the end-all be-all. You refused to show the code, and refused to explain the implementation. The only thing we know is that you possibly wrote it, but again, without seeing the code even that isn't sure... We don't know how it performs because we have no baseline. We don't know how it scales, because we have no reference. We don't know how it is implemented because we can't see the code.

All we have is your word on it, and that is pretty much worthless.
 
You've done no such thing....

You wrote some code and called it a benchmark, and then proceeded to say it's the end-all be-all. You refused to show the code, and refused to explain the implementation. The only thing we know is that you possibly wrote it, but again, without seeing the code even that isn't sure... We don't know how it performs because we have no baseline. We don't know how it scales, because we have no reference. We don't know how it is implemented because we can't see the code.

All we have is your word on it, and that is pretty much worthless.

So first you call me a fanboy, and now you're calling me a liar as well?
I *did* explain the implementation; it's not my fault that people like you don't understand the first thing about it. I said I'm open to any questions and discussions, and I still am. But all I see is pathetic, baseless accusations.
Go ahead, ask questions if something about the implementation still isn't clear to you.
As for a baseline... Compare it to other implementations and you'll know.
I don't care what you think. A Ph.D. in data visualization and a well-paid job at one of the top companies in medical visualization mean a lot more to me than the opinion of some uneducated fanboy on some forum.

Other than that, I never called it a benchmark. It's just a test application for my algorithm, to study performance on various systems.
 
So first you call me a fanboy, and now you're calling me a liar as well?
I *did* explain the implementation; it's not my fault that people like you don't understand the first thing about it. I said I'm open to any questions and discussions, and I still am. But all I see is pathetic, baseless accusations.
Go ahead, ask questions if something about the implementation still isn't clear to you.
As for a baseline... Compare it to other implementations and you'll know.
I don't care what you think. A Ph.D. in data visualization and a well-paid job at one of the top companies in medical visualization mean a lot more to me than the opinion of some uneducated fanboy on some forum.

Other than that, I never called it a benchmark. It's just a test application for my algorithm, to study performance on various systems.

OK, so we have to ask you to explain the implementation? We have to trust your word? I don't think so. Show the code.

If you want to prove a point, that is the only way to prove it. Until then, it's just your worthless word.
 
Am I the only one who actually bought a lot of the Pentium Ds? I will never argue for their performance vs. X2s, but I'll be damned if they weren't cheap as hell when the X2s were still uber expensive.

I remember buying my first 805d for 120 bucks... I was thrilled. Tossed it in one of the Fry's ECS 945-P specials, and I had a dual core machine for under 200 bucks. With the lower clock speed, it wasn't even THAT much of a power hog.

I ended up building a lot of 820ds and 805ds for myself (video encoding work) and for some clients who needed render farms; the price was just too good.

This, of course, was back in the days when the 3800+ X2 was still selling for... what, 300? So now, of course, I could buy X2 3600s or e2160s for a fraction of what I even paid for the Pentium Ds... but that's just how it works.

That being said, I also had a pre-order on an X2 4400+ for my own machine the day it came out; paid 600 bucks for that thing. Amazing.

Ah well. The only thing I regret is that all the 945p boards I used couldn't take C2Ds later... live and learn :D
 
OK, so we have to ask you to explain the implementation? We have to trust your word? I don't think so. Show the code.

If you want to prove a point, that is the only way to prove it. Until then, it's just your worthless word.

How hard is it to understand that I don't own all rights to this code? My employer owns the rights. It's in my contract.
And why is my word not good enough?
If I talk nonsense, anyone with half a clue about the algorithm should see right through it.
It's technology, you know. Things have to be logical and make sense, else they don't work.
And my executables prove that it works.
 
Ah well. The only thing I regret is that all the 945p boards I used couldn't take C2Ds later... live and learn :D

Don't feel sad... My brother bought a high-end Asus 975X board with a Pentium D 950, and he couldn't use a C2D either... and that was introduced only a few months after he bought it :)
(I did tell him to wait for C2D, that it would be worth his while... but he, like many people here on the forum, was too sceptical and thought the rumours about C2D were too good to be true. He wants to upgrade again now... At least I've stalled him long enough for P35 to arrive... I'm now trying to get him to wait for the July 22 price cuts, since he wants to go quad-core.)
 
Why is this degenerating into another argument of FSB vs HT? Has Barcelona's poor performance been conceded already?
 
If you're asking these questions, then you shouldn't be on a tech forum. This forum and others cater to a certain demographic. You should choose a forum that better suits your level of understanding.

So I was right? You did mean interpreters?
Then *you* are the one who doesn't belong on this forum.
 
You two should get a room and get it over with. All of this sexual tension is ruining the forum.
 
Not at all, a CPU is a *general purpose* processor.
Core2 is currently a better option than Athlon because it is better at *general purpose* tasks. In other words: it is faster in pretty much EVERY application.
Now, Fusion may give you better performance for folding@home or something like that, but it's a niche, because there are very few f@h users in general, and even among f@h users, it remains to be seen whether they care about performance to the point where they want to spend money on increasing it.
It won't affect the performance of most *general purpose* tasks (the Athlon CPU will still be slower than the Core2), just like how people who don't play games won't spend hundreds of dollars on video cards. Whatever's onboard is good enough, even though a GeForce 8800GTX may be a hundred times faster in games.

Do you know why I haven't upgraded to C2D yet? My friend bought an E6600 system with similar specs to my system, but for most *general purpose* tasks that most people do with their computer, like internet browsing, music, movies, word processing, etc., the performance difference from my X2 is negligible; it's not even worth changing my mobo and RAM to support the C2D. For the same amount of money I would rather get a better GPU, like an 8800GTX, to improve the performance of a *specific purpose* task. Btw, my friend bought the E6600 because I recommended it to him.
 
So I was right? You did mean interpreters?
Then *you* are the one who doesn't belong on this forum.

Brahma and PeakStream... among others... These are just the earliest. SH has some potential.

Interpreted environments such as Python, Perl, and Mono, among others, will be able to utilize these systems. In the end I think CTM will do more for them than anything else.
 
Do you know why I haven't upgraded to C2D yet? My friend bought an E6600 system with similar specs to my system, but for most *general purpose* tasks that most people do with their computer, like internet browsing, music, movies, word processing, etc., the performance difference from my X2 is negligible; it's not even worth changing my mobo and RAM to support the C2D. For the same amount of money I would rather get a better GPU, like an 8800GTX, to improve the performance of a *specific purpose* task. Btw, my friend bought the E6600 because I recommended it to him.

Sure, but that's the same argument as with Fusion.
The C2D is faster, but not in areas that are important to you (however, if you had, say, an Athlon 1000 and you were upgrading, you'd probably pick the E6600 over the X2, as you recommended yourself as well, so the comparison is different. Obviously if you have a reasonably fast system that's not too old, you're not going to think of upgrading that fast, no matter what CPU comes around... Heck, my XP1800+ lasted me for years; I easily sat it out until the E6600 arrived. Oh yes, my previous system was an AMD Athlon. Fanboy? You wish).
However, it doesn't look like Fusion is going to be faster than an 8800GTX either.
 
Brahma and PeakStream... among others... These are just the earliest. SH has some potential.

Interpreted environments such as Python, Perl, and Mono, among others, will be able to utilize these systems. In the end I think CTM will do more for them than anything else.

The clue you're missing here is that 'utilizing' means nothing (like my OS can 'utilize' dualcore or quadcore processors... which does nothing for me unless I actually run software that uses it).
Obviously most programming languages/virtual machines etc can be modified to support all kinds of new processors.
This however does not translate to performance gains directly.
The interpreters themselves won't run any faster with a floating point grid. That was my point.
Only specific software would take advantage of it, which brings us back to what I said before: niche.
And it really doesn't matter what language that software was written in (interpreted or not), for this particular point.
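To spell it out once more, here's roughly what an interpreter's inner loop looks like (a toy sketch in C++, not any particular VM):

[code]
// Toy stack-based bytecode interpreter: fetch an opcode, branch on it,
// poke the stack, repeat. The loop is dominated by integer ops, loads/stores
// and unpredictable branches - there is no stream of floating point math
// here for a Fusion-style FP grid to accelerate.
#include <vector>
#include <cstdint>
#include <cstddef>

enum Op : std::uint8_t { PUSH, ADD, MUL, HALT };

double run(const std::vector<std::uint8_t>& code, const std::vector<double>& consts)
{
    std::vector<double> stack;
    std::size_t pc = 0;
    for (;;)
    {
        switch (code[pc++])   // indirect branch on every single instruction
        {
        case PUSH: stack.push_back(consts[code[pc++]]); break;
        case ADD:  { double b = stack.back(); stack.pop_back(); stack.back() += b; } break;
        case MUL:  { double b = stack.back(); stack.pop_back(); stack.back() *= b; } break;
        case HALT: return stack.empty() ? 0.0 : stack.back();
        }
    }
}
[/code]

The ADD and MUL cases each do one floating point operation; all the fetch/dispatch/stack bookkeeping around them is integer and branch work, and that is where the interpreted 'performance hit' lives. A grid of stream processors does nothing for that part.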

Which is why your remark is meaningless... or in fact even dumb.
 
The clue you're missing here is that 'utilizing' means nothing (like my OS can 'utilize' dualcore or quadcore processors... which does nothing for me unless I actually run software that uses it).
Obviously most programming languages/virtual machines etc can be modified to support all kinds of new processors.
This however does not translate to performance gains directly.
The interpreters themselves won't run any faster with a floating point grid. That was my point.
Only specific software would take advantage of it, which brings us back to what I said before: niche.
And it really doesn't matter what language that software was written in (interpreted or not), for this particular point.

Which is why your remark is meaningless... or in fact even dumb.

Excuses. Quite a few niches there... Seems to cover most everything. If I need to write an application that draws a window, I'll need to import the appropriate library. In the same vein, if I want to use the GPU for something, I'll need to import the appropriate library.

With this in mind, and remembering the performance hit an interpreted environment incurs, it will gain the largest benefit. This is the bottom line. The truth is that C and C++ usage is declining in a big way. They'll both stick around to provide low-level functions, but interpreted environments are being used far more often now than ever before, Mono and Python being the two biggest; Perl isn't too far off from there.

Call it stupid, but this is how it is whether you like it or not.
 
Where's Duby229 to explain to us that the test is fake and Dailytech is an Intel-paid pumper?

Seems to me that you were trolling to try and start an imbroglio, and from the look of things you've succeeded. Scali2 has jumped onto the bandwagon and picked up where you left off. Scali2, your high-minded self-superiority is quite irritating, to say the least. I have no doubt you know more than I do about all this technology, and I commend you for it; however, your interpersonal skills lag far behind your technical skills IMO. I know that may not matter to you, but it does to some of us other readers of this forum.
May I remind you all of a post that was placed on this forum some time ago?

The Bickering Will Stop Now
You know who you are and we know who you are. If you think that locking threads and leaving user notes is all we will do to enforce the rules, you are wrong.

Please, everyone take a moment to read over the RULES that you have agreed to by posting here.

We will get this forum back to normal, even if it means that some of you need to be sacrificed. Get the picture?


Why don't we all go back to our respective areas of interest and stop all the bickering, shall we? All we really have here is one benchmark, one(!), and everyone is either gloating or worried the sky is falling. Let's wait until production silicon is out, and then we can all see what is what. Until then this is just a useless waste of time and effort.
 
Excuses. Quite a few niches there... Seems to cover most everything. If I need to write an application that draws a window, I'll need to import the appropriate library. In the same vein, if I want to use the GPU for something, I'll need to import the appropriate library.


What you're not getting here is that:
1) Pretty much every modern application draws windows, so general purpose.
2) Fusion won't make things like drawing windows faster, because they simply don't consist of massive streams of floating point operations. So, not general purpose.

With this in mind, and remembering the performance hit an interpreted environment incurs, it will gain the largest benefit.

No, because the part of an interpreted environment that incurs the performance hit, is *general purpose*, and adding a Fusion chip won't make it go any faster. In fact, the extra overhead will probably make it slower to use Fusion for most simple floating point operations. Just like a GPU may score many more GFLOPS than a Core2 Duo, but if I want to just calc 1.1 + 2.2, I'm not going to use a GPU, because it'd be WAY faster on the Core2 Duo, since there is no overhead. I send the instruction and get the result immediately. GPUs have incredible amounts of overhead, and therefore you can't use them for general purpose work, even though in theory they pack much more processing power. Fusion is really no different.
Only if you have a large enough workload does Fusion start to pay off... But this is, as I said many times before: niche.
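Back-of-the-envelope, with made-up but plausible numbers (these are assumptions for illustration, not measured figures):

[code]
// Why offloading only pays off for large batches of work.
#include <cstdio>

int main()
{
    const double cpu_flops    = 10e9;   // assumed: ~10 GFLOPS sustained on the CPU
    const double gpu_flops    = 300e9;  // assumed: ~300 GFLOPS on the FP grid
    const double gpu_overhead = 20e-6;  // assumed: ~20 microseconds per offloaded call

    for (double n = 1e3; n <= 1e9; n *= 1000.0)
    {
        double t_cpu = n / cpu_flops;                  // no call overhead
        double t_gpu = gpu_overhead + n / gpu_flops;   // overhead dominates small jobs
        std::printf("%.0e flops: CPU %.2e s, offload %.2e s -> %s wins\n",
                    n, t_cpu, t_gpu, t_cpu < t_gpu ? "CPU" : "offload");
    }
}
[/code]

With those numbers the break-even point sits around a few hundred thousand operations per offloaded call; a single 1.1 + 2.2 never gets anywhere near it. That's all I mean by 'large enough workload'.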

Call it stupid, but this is how it is whether you like it or not.

The stupid part is that you obviously have no idea what you're talking about... how an interpreter works, and how Fusion fits into this equation, even though I've already spelled it out in quite a few posts.
 