Why did you choose Intel over AMD?

  • Better overclocking

    Votes: 86 46.5%
  • Better motherboard/platform

    Votes: 42 22.7%
  • Brand preference

    Votes: 12 6.5%
  • Other (please list!)

    Votes: 45 24.3%

  • Total voters
    185
Two QPacks: one for gaming, the other as an HTPC/file server. I've been thinking about an upgrade for a while now since S939 doesn't really cut it any more, but as I said, I haven't found an Intel mATX board yet that fits my needs and budget. There are some nice AM2+ boards out there that I like, but in my opinion there is no decent AMD CPU out there right now to upgrade to.
So for now I'm still waiting to see what comes out.

I'd just get a normal case for the gaming machine. It would open up a ton of options for you.
 

lol... yeah, I've heard that one quite often, but I love my modded QPacks with all the work I've put into them. I'm not desperate to upgrade either; DFI is coming out with some nice mATX boards, and others might show up with something new too. Intel and AMD are about to launch new CPUs before the end of the year... I can wait. :D
 

To each their own, I guess. I typically buy E-ATX cases, so all my systems are massive.

I've got four cases (not counting my HTPC) right now, only one of which is "normal" sized.

Silverstone TJ-09 (Gaming rig)
Coolermaster Cosmos (Girlfriend's gaming machine)
Antec P190 (Testing/misc. rig)
Antec Solo
 

Nice cases, and huge! I guess you could easily fit two computers in one of those.
 
I went Intel after I started working for a boutique PC builder. It was around the time Intel took off and my Opteron 170 at 2.8GHz was starting to feel dated. My first Intel CPU was a QX6850. With AMD's current performance there is simply no reason to buy AMD, even if you are a heavy PC user. Intel chips just perform better, and nowadays the price is just about even.

With the way i7 is looking, I doubt I'll ever use AMD again.
 
I've built three machines this year for friends who game heavily... all Intel. The last one I built was last week: an 8400 Wolfdale + 4870 video card. Sorry... but that 8400 just totally blows my 6000+ out of the water. I actually had this twinge of envy until I remembered I only paid $70 for my 6000+. But right now, in the $1000 box range, Intel procs + an ATI video card = great holy Mary, mother of God.
 
Back when the Athlon 64 was new I built what was to be my final AMD system. It was flaky and unstable. In the course of troubleshooting I eventually decided to try reseating the CPU. I removed the stock heatsink, following AMD's directions carefully, and found the CPU still stuck to it. Yes, the gooey cohesion between CPU and heatsink was stronger than the CPU socket's grip on the CPU's pins.

I decided then and there that I would never build a system again with a CPU that has pins. LGA775's retention frame around the CPU prevents any such forcible tear-outs.

And I never did figure out what the problem was with that Athlon 64 build. It wouldn't run Windows at all, but it ran BeOS perfectly well... so I used it as a BeOS-only system until I re-accumulated enough cash for a new LGA775 CPU and mobo.
 
Everyone, please ignore kassler's above post. He's a known AMD fanboy who posts false and misleading information. Contrary to his assertions, Deneb most likely will not take the performance crown from Intel, and will at BEST match the performance of the current 45nm Intel CPUs while being soundly beaten by Nehalem in every respect.
 

Thanks for reinforcing my statements :). That article clearly shows that Nehalem has double the memory bandwidth of Phenom and about two thirds of the latency, and is twice as fast in PiFast, among other things. In fact, the Phenom chip gets handily beaten by all three of the tested Intel chips in literally every single real-world test, along with most of the synthetic ones as well. I don't see how you could argue that the Phenom is in any way better than Intel's chips in terms of real-world performance considering you just provided hard evidence to directly contradict that claim.
 
In the case of my desktop, because Intel is faster. I work with digital video.

In the case of my server, which uses a 35W Conroe-L Celeron, because AMD's current line is 65W+, even the Semprons, and because the Conroe-L Celeron performs reasonably well. (This is a low-utilization home server.)
 
:eek:
I don't have the time to write about how a cache works in detail, and I don't think you have the time for that either...

Still trolling I see :rolleyes:
Why don't you take the time... I could use a good laugh, fangirl.
 
Until my latest systems, the last Intel system I built for myself was a Celeron 300. After that I went with a T-Bird 900 through an XP-M 2500+. After that I wanted an X2, but I couldn't afford a new system. I also refused to own a P4 because the NetBurst architecture was so inefficient and an obvious dead end. I refused to give Intel any of my money for that horrid abortion of an architecture.

By the time I had the money for a new system, the Core 2s had been out for a while and the whole platform had dropped in price. I put together a system based around the E6400 and overclocked it to 3.4GHz. I overclock the holy living shit out of my systems to squeeze every last bit of performance out of them.

Believe it or not, the next system I built was an X2 4000+. What can I say, I had always wanted an X2 system since they were first released. Besides, I was able to get the motherboard, CPU, 2GB of RAM, heatsink/fan, PSU and a keyboard for $200. The funny part was that I was only looking for a RAM upgrade for my E6400 system at the time, but I just couldn't pass up that deal. I have that system sitting here running at 3.0GHz. I still think it was a damn good deal and I don't regret the purchase. The weird thing is that no matter what OS I've put on the system, the X2 @ 3GHz always felt smoother with multitasking than my E6400 @ 3.4GHz, even though the E6400 was faster in every way, had a faster hard drive and twice the RAM.

Anyway, earlier this year I built a new system with a Q6600 and overclocked it to 3.6GHz. The price/performance just couldn't be beat. I also ended up replacing the E6400 with a Q6600, which runs at 3.51GHz, sold the E6400, and built yet another Q6600 system which also runs at 3.6GHz.

For my needs and demands AMD just does not stand a chance in the price/performance arena. If I ran my machines at stock speeds it might be a different story, although the Phenoms have been a letdown. If I don't sell the X2 system (as I need money), I may eventually drop a Phenom in it, as I have one of the AM2 boards that does support the Phenom.

 
This thread is "Why did you choose Intel over AMD?" and it is in the Intel forums, where it should be. Kassler, if you don't agree with the premise of the thread, you are free to post a "Why is AMD better than Intel" thread over in the AMD forums, but you are NOT free to troll and threadcrap this thread.
 
Until my latest systems, the last Intel system I built for myself was a Celeron 300. After that I went with a T-Bird 900 through an XP-M 2500+. After that I wanted an X2, but I couldn't afford a new system. I also refused to own a P4 because the NetBurst architecture was so inefficient and an obvious dead end. I refused to give Intel any of my money for that horrid abortion of an architecture.

By the time I had the money for a new system, the Core 2s had been out for a while and the whole platform had dropped in price. I put together a system based around the E6400 and overclocked it to 3.4GHz. I overclock the holy living shit out of my systems to squeeze every last bit of performance out of them.

Believe it or not, the next system I built was an X2 4000+. What can I say, I had always wanted an X2 system since they were first released. Besides, I was able to get the motherboard, CPU, 2GB of RAM, heatsink/fan, PSU and a keyboard for $200. The funny part was that I was only looking for a RAM upgrade for my E6400 system at the time, but I just couldn't pass up that deal. I have that system sitting here running at 3.0GHz. I still think it was a damn good deal and I don't regret the purchase. The weird thing is that no matter what OS I've put on the system, the X2 @ 3GHz always felt smoother with multitasking than my E6400 @ 3.4GHz, even though the E6400 was faster in every way, had a faster hard drive and twice the RAM.

Anyway, earlier this year I built a new system with a Q6600 and overclocked it to 3.6GHz. The price/performance just couldn't be beat. I also ended up replacing the E6400 with a Q6600, which runs at 3.51GHz, sold the E6400, and built yet another Q6600 system which also runs at 3.6GHz.

For my needs and demands AMD just does not stand a chance in the price/performance arena. If I ran my machines at stock speeds it might be a different story, although the Phenoms have been a letdown. If I don't sell the X2 system (as I need money), I may eventually drop a Phenom in it, as I have one of the AM2 boards that does support the Phenom.

You purchased 4 computers in 1 year?

BTW, I have an Athlon X2 6000+ and I'm happy with it; before that I had an Athlon XP 2500+ and it was cool too...
 
I switched simply because Intel had the best price/performance at stock speeds at the time I upgraded.
 
Well, I had an AMD system with a 5600+, but it died on me along with the mobo. Then I thought I'd go for an Intel system next (I had never owned an Intel system before).
I bought a C2D E6850, which at the time was probably the best dual-core CPU out. I was extremely satisfied with it. After six months of use, I sold my E6850 and bought a Q9300, which has been working great for me. My next system will most likely be Intel as well.
 
I had many problems with an AMD processor a while back... Haven't bought one since.
 
What problems can you get with a processor? A processor either works or it doesn't, nothing more...

I don't think the problems people have had with AMD-based machines have anything to do with the processors, but rather with sub-standard motherboards and terrible chipset drivers.
 
You mean you fold? :)

You may be correct in that assertion. Then again, that is rather GPU-intensive as well.

Besides that, the high-clocked quads make it a breeze ripping and encoding my DVDs. As I said earlier in the thread, an old Celeron 300 would take around a full day to encode a DVD into the DivX format. Thanks to these quads, I can do a two-pass encode with x264, which results in much better image quality, in about 40-50 minutes per movie.
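
For the curious, the two-pass flow is simple enough to script. A minimal sketch, assuming a reasonably recent ffmpeg built with libx264; the file names and bitrate are made-up placeholders, not my actual settings:

```python
# Minimal two-pass x264 encode driven from Python via ffmpeg.
# Assumes ffmpeg with libx264 is on the PATH; paths/bitrate are
# placeholders for illustration.
import subprocess

SRC = "movie.vob"      # hypothetical ripped DVD source
OUT = "movie.mkv"
BITRATE = "1500k"      # made-up target bitrate

# Pass 1: analyze the whole movie and write a stats file;
# the encoded output itself is thrown away.
subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-b:v", BITRATE,
     "-pass", "1", "-an", "-f", "null", "/dev/null"],
    check=True,
)

# Pass 2: re-encode using the pass-1 stats so bits go where the
# picture needs them -- this is where the quality win comes from.
subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-b:v", BITRATE,
     "-pass", "2", "-c:a", "copy", OUT],
    check=True,
)
```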

 
I picked other.

I have been using Intel since the 386 days....

Better chipsets and more stability in the platform.
 
You may be correct in that assertion. Then again, that is rather GPU-intensive as well.

Besides that, the high-clocked quads make it a breeze ripping and encoding my DVDs. As I said earlier in the thread, an old Celeron 300 would take around a full day to encode a DVD into the DivX format. Thanks to these quads, I can do a two-pass encode with x264, which results in much better image quality, in about 40-50 minutes per movie.

Yes, folding is now moving to GPGPU, with both AMD and nVidia supported in the latest client.
Video encoding seems to be going the same way, with software like Badaboom.
In both cases, even mid-range GPUs easily outperform the fastest CPUs.
So do you think you'll be moving from fast CPUs to fast GPUs in the future?
I think we are.
 
I choose Intel so I can run Intel chipsets with Intel integrated graphics. Whenever possible I try to use boards with Intel Ethernet and sound controllers as well.

This makes my life much easier since most of my systems run Linux 99.99% of the time. Intel chipsets and IGPs have top-notch Linux support. Production-quality open source drivers are very important to me -- ones written from chip specs provided by the manufacturer, not some hacky reverse-engineered stuff.

If AMD's current commitment to open source continues, I expect they'll catch up to Intel some day, but they're not there yet. I'll be very happy when they match Intel on this; then I'll be able to return to the one criterion I used to use when buying hardware:

PRICE/PERFORMANCE (Why was this not an option in the poll???)
 
So do you think you'll be moving from fast CPUs to fast GPUs in the future?
I think we are.

Nope. There are only certain types of operations and calculations GPUs are good for, because a GPU is a specialized processor, not a general-purpose one. When you use a GPU in areas where it's strong, it will scream past a general processor in performance, but those scenarios are actually quite rare.

 

I have two things you can think about:
1) How specialized are GPUs, really? They are getting more generic every day. In fact, so generic that Intel thinks it can convert its CPU architecture into a GPGPU (Larrabee).

2) Yes, scenarios where GPUs can be applied efficiently are still quite rare... However, the same goes for multicore CPUs, and adding more cores seems to be pretty much the only way forward for CPU performance.
There seems to be a large overlap between the scenarios where GPUs can be applied efficiently and those where multicore CPUs can (think physics, video encoding/decoding, image processing, 3D rendering, folding...).
In fact, you could almost argue that if you can use a multicore CPU to accelerate a certain scenario, it can also be adapted to an even more parallel architecture like a GPU.
Conversely, if it can't be run on multiple cores efficiently, then a single-core or dual-core CPU would be just as fast, and you won't need more cores anyway (which basically means you won't need a faster CPU, because, as said, more cores is the only way to make a CPU faster).

Have you thought about it that way yet? There might be exceptions where a multicore CPU would be good and a GPU wouldn't, but I can't think of one off the top of my head. And as GPUs evolve further, they may solve any remaining exceptions along the way.
So it could be that the CPU becomes a commodity in the future.
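
A minimal sketch of the reasoning in point 2, which is essentially Amdahl's law; the parallel fractions below are made-up illustrative numbers, not measurements:

```python
# Amdahl's law: speedup on n processing units when a fraction p of
# the work is parallelizable. The p values are made-up illustrations.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.95):
    for n in (2, 4, 256):
        print(f"p={p:.2f}  n={n:3d}  speedup={speedup(p, n):6.2f}x")

# With p=0.50 you never get past 2x no matter how many cores --
# the "a dual-core would be just as fast" case above. With p=0.95
# the workload keeps scaling, which is exactly the territory where
# a GPU's hundreds of simple units pay off.
```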
 

GPUs are very highly parallel, but the nature of the shaders, which are the backbone of current GPUs, is that they are very simple and unable to do many types of calculations efficiently. The reason they are so efficient at what they do is their simplicity and their great numbers, but that does not translate over to general processing.

To make a GPU which is effective for general processing would require much more logic to be added to each individual unit. This would in turn increase die space, decrease the number of units and take away the efficiency that comes from specialization, which basically defeats the purpose. Even if that route were taken, you would just end up with a general processor, which we already have.

The following is not the greatest example, but I believe it will be sufficient for the point I'm trying to make, even though my knowledge of it is limited. Take the Cell processor: a single, cut-down general processor which feeds other, smaller cores, the SPEs. While this works great and is very efficient for certain types of calculations, it's not the best choice for a general processor, and it wasn't meant to be.

You can take the example of the Cell and go another step. Using a current general multicore processor from AMD or Intel and integrating specialized logic from something like a video card, in the same way the Cell does, you could increase the overall performance and efficiency of your general processor. An architecture built from the ground up to use this type of hybrid chip would probably allow the general-processor side to be scaled back somewhat, since some of its logic would no longer be needed once the integrated specialized logic took care of that part. I believe this is the direction AMD is looking in with Fusion.

 
I buy whatever gives me more for my money, even more so when I'm on a tight budget. And as far as CPUs go, Intel can't be beat at this point. I'm not a particular Intel fan, for that matter, just like I wasn't an AMD fan when I bought an Athlon 64, since I was coming from a P3, which had served me quite well. It's all about price/performance, and right now Intel is there.
 
GPUs are very highly parallel, but the nature of the shaders, which are the backbone of current GPUs, is that they are very simple and unable to do many types of calculations efficiently. The reason they are so efficient at what they do is their simplicity and their great numbers, but that does not translate over to general processing.

But that's exactly my argument:
Are there cases where you need both super-flexible general processing AND lots of parallelism?
Shaders are already very advanced, more advanced than the SPEs in a Cell, for example.
The main differences from CPUs are that they aren't as efficient at handling branches, and they generally aren't as fast in the serial sense.
I think things like branching and flexible processing are generally mutually exclusive with highly parallelizable code.
So that comes down to multicore CPUs not being efficient either, because such code can only benefit from 1 or 2 cores, and doesn't scale much beyond that.
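
To make the branching point concrete, here is a minimal sketch of the same per-element decision written as a branchy loop versus in predicated, data-parallel form (NumPy standing in for a GPU; the data and threshold are arbitrary):

```python
# The same per-element decision, branchy vs. predicated.
# NumPy stands in for a wide SIMD/GPU architecture; data is arbitrary.
import numpy as np

x = np.random.rand(100_000)

# Branchy, serial formulation: fine on a CPU with a branch predictor.
out_loop = np.empty_like(x)
for i, v in enumerate(x):
    out_loop[i] = v * 2.0 if v > 0.5 else v * 0.5

# Predicated formulation: every lane computes both sides and a mask
# picks the result, so no lane ever takes a divergent branch.
out_simd = np.where(x > 0.5, x * 2.0, x * 0.5)

assert np.allclose(out_loop, out_simd)
```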

To make a GPU which is effective for general processing would require much more logic to be added to each individual unit. This would in turn increase die space, decrease the number of units and take away the efficiency that comes from specialization, which basically defeats the purpose. Even if that route were taken, you would just end up with a general processor, which we already have.

No, that's not the route that should be taken.
I say you have a combo of a CPU and a GPU, but the CPU doesn't need many cores; 2 or maybe 4 would be good, and the highly parallelizable code will run on a GPGPU. That's exactly the strength of the combo: they are NOT the same, they are specialized. The question is, do you need to turn a CPU, specialized in serial code, into a parallel architecture? So far that hasn't been very successful.

The following is not the greatest example, but I believe it will be sufficient for the point I'm trying to make, even though my knowledge of it is limited. Take the Cell processor: a single, cut-down general processor which feeds other, smaller cores, the SPEs. While this works great and is very efficient for certain types of calculations, it's not the best choice for a general processor, and it wasn't meant to be.

The Cell mainly lacks in usability. The SPEs are too limited; even CUDA is more flexible and efficient to program than the Cell. The Cell is basically already outdated. It was a nice idea at the time, but the technology was too limited to really make it work.
But the idea is the same: a combination of a general-purpose processing unit and a highly parallel architecture next to it.

You can take the example of the Cell and go another step. Using a current general multicore processor from AMD or Intel and integrating specialized logic from something like a video card, in the same way the Cell does, you could increase the overall performance and efficiency of your general processor. An architecture built from the ground up to use this type of hybrid chip would probably allow the general-processor side to be scaled back somewhat, since some of its logic would no longer be needed once the integrated specialized logic took care of that part. I believe this is the direction AMD is looking in with Fusion.

That's more or less what I'm saying, except that integrating both parts is not a smart thing to do, because then they have to share the memory interface. GPUs shine because they have high-bandwidth dedicated memory. You can't have a massively parallel architecture that is limited by system memory. So for now, video cards + CPUs will be the best combo, not an integrated chip.
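
The bandwidth gap is easy to put rough numbers on with the usual bus-width x data-rate arithmetic; the parts below are 2008-era examples using their published peak figures:

```python
# Peak memory bandwidth = (bus width in bytes) * (effective data rate).
# 2008-era example parts with their published peak figures.
def bandwidth_gb_s(bus_bits: int, effective_mt_s: float) -> float:
    return bus_bits / 8 * effective_mt_s * 1e6 / 1e9

# Dual-channel DDR2-800 system memory: 128-bit bus at 800 MT/s.
print(f"DDR2-800 dual channel: {bandwidth_gb_s(128, 800):6.1f} GB/s")   # ~12.8
# Radeon HD 4870 dedicated GDDR5: 256-bit bus at 3600 MT/s effective.
print(f"HD 4870 video memory:  {bandwidth_gb_s(256, 3600):6.1f} GB/s")  # ~115.2
```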
 
Like many in this thread, I went for the best bang for my buck at the time of purchase.
 
I have two things you can think about:
1) How specialized are GPUs, really? They are getting more generic every day. In fact, so generic that Intel thinks it can convert its CPU architecture into a GPGPU (Larrabee).

Larrabee is an x86 multicore "GPU", so it's a CPU core turned into a "GPU"; but try and make an NVIDIA/AMD GPU into a CPU...
 

They're a lot closer than you think. Do you even know what x86 entails?
I do, and it has little to do with the GPU portion of Larrabee. They just add special 16-wide SIMD units to it, which are remarkably similar to the multiprocessor units in nVidia's GPUs.
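
As a minimal illustration of what "16-wide SIMD" means: one instruction is applied to 16 data lanes in lockstep. This toy model is purely illustrative, not any vendor's actual ISA:

```python
# Toy model of a 16-wide SIMD unit: one "instruction" (a multiply-add)
# executes across 16 data lanes in lockstep. Purely illustrative.
LANES = 16

def simd_madd(a, b, c):
    # A single SIMD step: every lane computes a*b + c at once.
    return [a[i] * b[i] + c[i] for i in range(LANES)]

a = [float(i) for i in range(LANES)]
b = [2.0] * LANES
c = [1.0] * LANES
print(simd_madd(a, b, c))   # 16 results from one SIMD instruction
```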
 